Q:
Why was Mundungus banned from the Hog's Head?
In Order of the Phoenix, while the trio were in the Hog's Head for the first time plotting the start of Dumbledore's Army, it transpires that ol' Dung was lurking in the pub in disguise, having been banned 20 years previously, according to Sirius.
Firstly, why was he banned? This could possibly be the tight spot that Albus had helped Dung out of in the first place, the one that made him loyal to Albus.
And secondly, how is it that he is then speaking to Aberforth in Half-Blood Prince? (Even assuming the ban was for something rather unforgivable, 20 years is a long time.)
They both could have been in the Order by then, but that seems unlikely given Aberforth's attitude in Deathly Hallows once the trio arrive in Hogsmeade looking for the tiara. We learn later that a lot of trafficking goes on through the Hog's Head, so maybe Dung was trading with Aberforth (Sirius' mirror and various other Black artifacts); he just was not allowed in the pub.
Anyone with something in canon or more plausible?
A:
why was he banned?
I'm not able to find any canon data on that, either in book text searches or in interview transcripts.
how is it that he is then speaking to Aberforth in Halfblood Prince?
In HBP, he's speaking to Aberforth, NOT inside the Hog's Head. The topic was selling stuff he stole from Sirius' place:
Nikki: How did sirius twoway mirror end up with aberforth or is it another twoway mirror?
J.K. Rowling: You see Aberforth meeting Mundungus in Hogsmeade. That was the occasion on which Dung, who had taken Sirius’s mirror from Grimmauld Place, sold it to Aberforth.
(src: J.K. Rowling Interview / The Deathly Hallows Web Chat / July 2007)
As a note - this was important since one of the things sold was the 2-way mirror that Harry used to request help when they were imprisoned at Malfoy's in DH.
So, he was banned from the pub (probably to avoid causing Aberforth's establishment further trouble), but that doesn't mean Aberforth won't talk or do business with him elsewhere.
Q:
Using M-Test to show you can differentiate term by term.
I have the series $\sum_{n=1}^\infty \frac{\lambda^{n-1}n}{n!}=\sum_{n=1}^\infty \frac{d}{d\lambda}\big(\frac{\lambda^n}{n!} \big)$
and I would like it to be $\frac{d}{d\lambda}\big(\sum_{n=1}^\infty \frac{\lambda^n}{n!})$.
I'm trying to show that this sequence of functions converges uniformly on $(0,\infty)$ and so I'm trying the M-Test. So I need to find bounds $M_n$ for $\big|\frac{\lambda^n}{n!}\big|$, such that $\sum M_n$ converges.
Thanks. This is in order to show that I can actually do the differentiation term by term.
A:
You are dealing with a power series with radius of convergence $R=+\infty$, so you can differentiate it term by term at every point of $(0,\infty)$.
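For the M-test itself, here is a minimal sketch; the key point is that the bound only needs to hold on compact intervals, which is all that term-by-term differentiation requires. Fix any $A>0$ and work on $(0,A]$. For the differentiated series, $\big|\frac{d}{d\lambda}\frac{\lambda^n}{n!}\big| = \frac{\lambda^{n-1}}{(n-1)!} \le \frac{A^{n-1}}{(n-1)!} =: M_n$, and $\sum_{n=1}^\infty M_n = e^A < \infty$. So the series of derivatives converges uniformly on $(0,A]$, and since $A>0$ was arbitrary, term-by-term differentiation is justified at every $\lambda>0$; the sum is $\frac{d}{d\lambda}(e^\lambda - 1) = e^\lambda$. Note that no single choice of $M_n$ works on all of $(0,\infty)$, since $\sup_{\lambda>0}\frac{\lambda^{n-1}}{(n-1)!}=\infty$ for $n\ge 2$; uniform convergence on compact subsets is the statement you actually need.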
Q:
What's the simplest way to pass a file as an argument in a simple shell script?
The following works fine on Mac OS X:
#!/bin/bash
R CMD Sweave myfile.Rnw
pdflatex myfile.tex
open myfile.pdf
Now, I realize that these 3 lines of code are really helpful for my work, independently of any particular file. Thus I'd like to use the file as an argument. I know how to use an argument itself, but I have problems splitting the input string and concatenating the pieces afterwards. If I were able to split the filename argument like:
split($1,".") # return some array or list ("name","ext")
Or is there a simpler, completely different way than using Python within a shell script?
Thx in advance for any general advice and examples as well !
A:
I do all my shell scripting in Python.
It's easier to read, more powerful, and works on Windows as well.
A:
You could just take the base name as an argument and use $1.Rnw, $1.tex, and $1.pdf. Python is great for shell scripts, but I usually stick with bash for things less than 10 lines long.
If you really want to take a full file name, you can strip the extension with echo "$1" | cut -f 1 -d '.' (or, in pure bash, with ${1%.*}); see the sketch below.
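Putting that together, here is a minimal sketch of the parameterized script (the name build.sh is made up; it accepts either myfile or myfile.Rnw as its argument):
#!/bin/bash
# Usage: ./build.sh myfile    (or: ./build.sh myfile.Rnw)
base="${1%.*}"               # strip a trailing extension, if one was given
R CMD Sweave "$base.Rnw"
pdflatex "$base.tex"
open "$base.pdf"             # 'open' is macOS-specific; use xdg-open on Linux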
Q:
Why does this JavaScript loop print the variable starting from counter and not from counter-1?
In my quest to learn programming on my own, I've run into the topic of recursion and this simple piece of code... My question: since the variable counter starts at 10 and inside the while loop the counter is decremented by 1, why does the "printout" start from 10? I know that if I wanted it to start from 10 I would set the counter to 11... but obviously I'm curious and I don't understand.
var counter = 10;
while(counter > 0) {
console.log(counter--);
}
Result:
10
9
8
7
6
5
4
3
2
1
A:
The reason is simple. In recursion, what you usually do is pass a variable or an array, in most cases to modify it or simply print it. In your case you want to subtract one on each iteration of your while loop, but what you expect is for 9 to be printed first, based on the logic you see in your program, and although that reasoning isn't entirely wrong, it will never happen, for the following reason.
In your code you print the variable as counter--, and even though it does subtract 1 in that same iteration, the variable is printed first, before that operation is performed, because that is what JavaScript evaluates first. It is as if your code were split into two parts.
EXAMPLE
var counter = 10;
while(counter > 0) {
console.log(counter); // Reads the variable's value first
counter--; // Then performs the decrement
}
This happens because of how what you wrote works internally in JavaScript: even though it looks like a simple subtraction, it is internally composed of two steps. By the time JavaScript performs the operation, your value is already on screen.
VISUAL EXAMPLE
First iteration:
counter = 10 | counter-- | counter = 9
counter = 9 | counter-- | counter = 8
counter = 8 | counter-- | counter = 7
...
counter = 1 | counter-- | counter = 0
counter = 0 | counter-- | counter = -1 -> At this point the condition is no longer met, so this value is never printed.
To get the behaviour you want, where 9 is printed first, you should do the following:
var counter = 10;
while(counter > 0) {
counter--;
console.log(counter);
}
Q:
Python: My return variable is always None
So I found a strange thing that happens in Python whenever I try to return an optional parameter, or at least I think that is why it is happening.
Here is my code
def reverse(string, output = ""):
if string == "":
print "winner: ", output
return output
output = output + string[-1]
string = string[:-1]
reverse(string, output=output)
And here is what happens when I run it:
>>> output = reverse("hello")
winner: olleh
>>> print output
None
Anyone know why my return is always None?
A:
You have to return the return value of the recursive call. Without that return, the outer calls simply fall off the end of the function, and a Python function that ends without an explicit return returns None, which is exactly what you see assigned to output.
def reverse(string, output = ""):
if string == "":
print "winner: ", output
return output
output = output + string[-1]
string = string[:-1]
return reverse(string, output=output)
Q:
TextView Not centered in app but centered in match_constraint
I've created a simple activity design using ConstraintLayout.
Whenever I try to center a textView, it does it correctly in the blueprints but never does it in the actual app. Not sure if I am doing something wrong or I'm losing my mind.
Here is the image
Here is the XML code
<?xml version="1.0" encoding="utf-8"?>
<android.support.constraint.ConstraintLayout
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:background="@color/background_green"
tools:context="nz.co.listcosolutions.StartActivity">
<ImageView
android:id="@+id/imageView4"
android:layout_width="160dp"
android:layout_height="163dp"
android:layout_marginEnd="95dp"
android:layout_marginStart="95dp"
android:layout_marginTop="32dp"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toTopOf="parent"
app:srcCompat="@drawable/baby_plant" />
<Button
android:id="@+id/btnNext"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginEnd="32dp"
android:layout_marginStart="32dp"
android:layout_marginTop="64dp"
android:text="@string/next"
android:textColor="@color/background_green"
android:textSize="18sp"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toBottomOf="@+id/textView3" />
<TextView
android:id="@+id/textView3"
android:layout_width="0dp"
android:layout_height="wrap_content"
android:layout_marginEnd="8dp"
android:layout_marginStart="8dp"
android:layout_marginTop="20dp"
android:text="Welcome to My App"
android:textAlignment="center"
android:textColor="@android:color/white"
android:textSize="24sp"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toBottomOf="@+id/imageView4" />
</android.support.constraint.ConstraintLayout>
I'm also using the latest version of ConstraintLayout
compile 'com.android.support.constraint:constraint-layout:1.0.2'
A:
You need to add:
android:gravity="center"
to the TextView.
This is the only certain way to center the text inside a TextView object or one of its subclasses.
The android:textAlignment attribute does not work in all cases and, as reported by this answer, it has problems on lower API levels.
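Applied to the TextView from the question, that looks like this (every attribute other than the added android:gravity is copied unchanged from the question's layout):
<TextView
    android:id="@+id/textView3"
    android:layout_width="0dp"
    android:layout_height="wrap_content"
    android:layout_marginEnd="8dp"
    android:layout_marginStart="8dp"
    android:layout_marginTop="20dp"
    android:gravity="center"
    android:text="Welcome to My App"
    android:textAlignment="center"
    android:textColor="@android:color/white"
    android:textSize="24sp"
    app:layout_constraintEnd_toEndOf="parent"
    app:layout_constraintStart_toStartOf="parent"
    app:layout_constraintTop_toBottomOf="@+id/imageView4" />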
Q:
Python Segmentation Fault?
First off, I didn't even know a memory error / segfault was possible in Python. Kudos to learning something new!
I have this database I create
database = DBManager(dbEndpoint,dbUser,dbPass,dbSchema)
And then I try to use it in a thread
def stateTimeThreadStart():
database.getTable('CLIENTS')
threads = []
threads.append(threading.Thread(name='State Updater', target=stateTimeThreadStart, args=()))
threads[0].start()
The output is
Segmentation fault: 11
What on earth is going on here? It definitely has something to do with database.getTable('CLIENTS'), because when I comment it out the issue does not occur. In addition, I have also tried to pass the database to the thread, with no luck. Any ideas?
Thanks!
A:
Segmentation faults in Python can occur due to database connectors. The drivers used to connect to the database are usually written in C, so under memory pressure (or for other reasons) the native layer can crash with a segmentation fault instead of raising a Python exception.
This is further exacerbated by the fact that you are using multithreading. Most database drivers are known to throw segmentation faults if multithreading isn't handled very carefully, and most database driver protocols cannot handle multiple threads using the same connection at once.
The rule of thumb is to not share a single connection between threads.
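A minimal sketch of that rule of thumb, reusing the names from the question (it assumes DBManager is cheap enough to construct once per thread; nothing else about its API is assumed):
import threading

def state_time_thread_start():
    # Give this thread its own DBManager (and therefore its own connection)
    # instead of sharing the module-level `database` object across threads.
    local_db = DBManager(dbEndpoint, dbUser, dbPass, dbSchema)
    local_db.getTable('CLIENTS')

worker = threading.Thread(name='State Updater', target=state_time_thread_start)
worker.start()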
Q:
HP MSA70 / P800 Array Failure - Shows 2 drives in each slot, 13/25 drives "missing"
We have an HP MSA70 with 25 x 600GB HP SAS 10k DP drives, connected to an HP P800 controller. The drives are configured in RAID 6.
Yesterday, some kind of unknown "event" occurred and the array dropped offline. We rebooted the server (running CENTOS 6.2) and upon startup, the Array Controller reported that 13 of the drives are "missing". When we look at the volume in the Array management, there are two entries for each slot for slots 1-12. One shows a 600gb drive and one shows a 0gb drive. There are no more entries after 12.
We contacted HP support, who sent us to Tier 2 support, and after many hours they gave up. They said they have never seen this before (my favorite thing to hear from a vendor).
Has anybody seen this before, and have we lost all of the data?
Thank you.
A:
Old, old, old, old...
CentOS 6.2 is old (6.2, 6 December 2011 (kernel 2.6.32-220))
HP StorageWorks MSA70 is old. (End of Life - October 2010)
HP Smart Array P800 is old. (End of Life - 2010)
So this makes me think that firmware and drivers are also old. E.g. there's no reason to run CentOS 6.2 in 2015... And I'm assuming no effort was made to keep anything current.
This also makes me think that the systems are not being monitored. Assuming HP server hardware, what did the system IML logs say? Are you running HP management agents? If not, important messages about the server and storage health could have been missed.
Did you check information from the HP Array Configuration Utility (or HP SSA)?
But in the end, you've probably suffered a port failure or expander/backplane failure:
How many SAS cables are connected to the enclosure? If 1 cable is connected, then you likely have a backplane issue because of the SAS expander in the enclosure.
If two cables are connected, you may have a SAS cable, MSA70 controller or P800 port failure.
Your data is likely intact, but you need to isolate the issue and determine which one of the above issues is the culprit. Replacing a SAS cable is a lot easier than swapping the MSA70 controller or RAID controller card... but I guess you can get another MSA70 for $40 on eBay...
Q:
sql queries and inserts
I have a random question. If I were to run an SQL SELECT, and while the SQL server was processing my request someone else ran an INSERT statement... could the data from that INSERT statement also be returned by my SELECT statement?
A:
Queries are queued, so if the SELECT occurs before the INSERT there's no possibility of seeing the newly inserted data.
Using default isolation levels, SELECT is generally given higher privilege over others but still only reads COMMITTED data. So if the INSERT data has not been committed by the time the SELECT occurs--again, you wouldn't see the newly inserted data. If the INSERT has been committed, the subsequent SELECT will include the newly inserted data.
If the isolation level allowed reading UNCOMMITTED (AKA dirty) data, then yes--a SELECT occurring after the INSERT but before the INSERT data was committed would return that data. This is not recommended practice, because UNCOMMITTED data could be subject to a ROLLBACK.
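As a small illustration of the isolation-level point (T-SQL syntax is assumed here; the Orders table and its columns are made up):
-- Session 1: inserts a row but has not committed yet
BEGIN TRANSACTION;
INSERT INTO Orders (Id, Amount) VALUES (42, 100);
-- ... no COMMIT yet ...

-- Session 2, default READ COMMITTED: either blocks or simply does not see row 42
SELECT * FROM Orders WHERE Id = 42;

-- Session 2, opting into dirty reads: may return row 42 even though
-- Session 1 could still ROLLBACK, which is why this is rarely recommended
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT * FROM Orders WHERE Id = 42;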
Q:
How do I pass objects between controllers in MVC using OOP?
Basically, I need that, if the login is successful, the user name is saved in a variable so I can use it in another controller.
Model.php:
public function login($email, $password) {
session_start();
$sql = "SELECT * FROM users WHERE email = :email AND password= :password;";
$query = $this->db->prepare($sql);
$parameters = array(':email' => $email, ':password' => $password);
$query->execute($parameters);
$rows = $query->fetch(PDO::FETCH_NUM);
if($rows > 0) {
header ("Location: " . URL . "home");
} else {
exit ('Email or password incorrect');
}
}
Controller.php
public function login() {
if (isset($_POST['login_submit']) AND isset($_POST['email']) AND isset($_POST['password'])) {
$this->model->login($_POST['email'], $_POST['password']);
}
}
A:
It wasn't explicit, but it looks like you want it passed via the session. In that case you can simply set it in the session and read it back in the other controller.
<?php
// declaration of the Pessoa class
class Pessoa {
public $nome;
}
// In the controller that sends the parameters
session_start();
$joao = new Pessoa();
$joao->nome = "João";
$_SESSION['pessoa'] = $joao;
// In the controller that receives the data
session_start();
$joao = $_SESSION['pessoa'];
print_r($joao);
Or, if you want to standardize this and lean into the object-oriented paradigm:
<?php
// controller that sends
$joao = new Pessoa();
$joao->nome = "João";
SessionUtils::setPropriedade('pessoa', $joao);
// controller that receives
$joao = SessionUtils::getPropriedadeLimpar('pessoa');
print_r($joao);
// declaration of the Pessoa class
class Pessoa {
public $nome;
}
// utility class for the session
class SessionUtils {
private static $BASE_PROPRIEDADES = "props";
/**
* Gets a property from the session
* @return the property, or null if it does not exist
*/
public static function getPropriedade($nome){
self::configurarSessao();
$sessao = self::getSessao();
return @$sessao[$nome];
}
/**
* Gets a property from the session and then removes it from the session
* @return the property, or null if it does not exist
*/
public static function getPropriedadeLimpar($nome){
self::configurarSessao();
$sessao = self::getSessao();
$valor = @$sessao[$nome];
self::setPropriedade($nome, null);
return $valor;
}
/**
* Sets a property in the session
*/
public static function setPropriedade($nome, $valor){
self::configurarSessao();
$_SESSION[self::$BASE_PROPRIEDADES][$nome] = $valor;
}
/**
* Configures the session to store the items
*/
private static function configurarSessao(){
if(!isset($_SESSION)){
session_start();
}
if(!self::getSessao() || !is_array(self::getSessao())){
self::setSessao(array());
}
}
private static function getSessao(){
return $_SESSION[self::$BASE_PROPRIEDADES];
}
private static function setSessao($valor){
$_SESSION[self::$BASE_PROPRIEDADES] = $valor;
}
}
Q:
StAX and arraylist java
I'm trying to read an XML document with StAX, but I have a little problem and I don't know how to fix it. I've tried to look around on the internet (maybe I'm using the wrong keywords for my problem :/)
So I have this XML:
<questionReponses
xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
xmlns='http://polytis.fr/studentest'
xsi:schemaLocation='http://polytis.fr/studentest qanda.xsd'>
<questionReponse>
<categorie>Biologie</categorie>
<question>Question 1</question>
<reponse>reponse correcte 1</reponse>
<mauvaiseReponse>reponse fausse 1.1</mauvaiseReponse>
<mauvaiseReponse>reponse fausse 1.2</mauvaiseReponse>
<mauvaiseReponse>reponse fausse 1.3</mauvaiseReponse>
</questionReponse>
<questionReponse>
<categorie>Chimie</categorie>
<question>Question 2</question>
<reponse>reponse correcte 2</reponse>
<mauvaiseReponse>reponse fausse 2.1</mauvaiseReponse>
<mauvaiseReponse>reponse fausse 2.2</mauvaiseReponse>
<mauvaiseReponse>reponse fausse 2.3</mauvaiseReponse>
</questionReponse>
<questionReponse>
<categorie>CultureG</categorie>
<question>Question 3</question>
<reponse>reponse correcte 3</reponse>
<mauvaiseReponse>reponse fausse 3.1</mauvaiseReponse>
<mauvaiseReponse>reponse fausse 3.2</mauvaiseReponse>
<mauvaiseReponse>reponse fausse 3.3</mauvaiseReponse>
</questionReponse>
here is my parser:
try {
// instanciation du parser
InputStream in = new FileInputStream(nomFichier);
XMLInputFactory factory = XMLInputFactory.newInstance();
XMLStreamReader parser = factory.createXMLStreamReader(in);
// lecture des evenements
for (int event = parser.next(); event != XMLStreamConstants.END_DOCUMENT; event = parser.next()) {
// traitement selon l'evenement
switch (event) {
case XMLStreamConstants.START_ELEMENT:
break;
case XMLStreamConstants.END_ELEMENT:
if (parser.getLocalName().equals("questionReponse")) {
question = new Question(categorieCourante,questionCourante,bonneReponseCourante,mauvaisesReponses);
quizz.add(question);
}
if (parser.getLocalName().equals("categorie")) {
categorieCourante = donneesCourantes;
}
if (parser.getLocalName().equals("question")) {
questionCourante = donneesCourantes;
}
if (parser.getLocalName().equals("reponse")) {
bonneReponseCourante = donneesCourantes;
}
if (parser.getLocalName().equals("mauvaiseReponse")) {
mauvaisesReponses.add(donneesCourantes);
}
break;
case XMLStreamConstants.CHARACTERS:
donneesCourantes = parser.getText();
break;
} // end switch
} // end for
parser.close();
}
and the result is not the one expected:
question 1
[categorie =
Biologie
question =
Question 1
bonne reponse =
reponse correcte 1
mauvaises reponse =
reponse fausse 1.1
reponse fausse 1.2
reponse fausse 1.3
reponse fausse 2.1
reponse fausse 2.2
reponse fausse 2.3
reponse fausse 3.1
reponse fausse 3.2
reponse fausse 3.3
, categorie =
Chimie
question =
Question 2
bonne reponse =
reponse correcte 2
mauvaises reponse =
reponse fausse 1.1
reponse fausse 1.2
reponse fausse 1.3
reponse fausse 2.1
reponse fausse 2.2
reponse fausse 2.3
reponse fausse 3.1
reponse fausse 3.2
reponse fausse 3.3
, categorie =
CultureG
question =
Question 3
bonne reponse =
reponse correcte 3
mauvaises reponse =
reponse fausse 1.1
reponse fausse 1.2
reponse fausse 1.3
reponse fausse 2.1
reponse fausse 2.2
reponse fausse 2.3
reponse fausse 3.1
reponse fausse 3.2
reponse fausse 3.3
]
and it's the same for the 3 questions I have. When I parse "mauvaiseReponse", all the "mauvaiseReponse" tags that have been streamed so far get added to every question.
The result I'm looking for is something like this:
question 1
categorie =
Biologie
question =
Question 1
bonne reponse =
reponse correcte 1
mauvaises reponse =
reponse fausse 1.1
reponse fausse 1.2
reponse fausse 1.3
I'm sorry if my English isn't that good; I hope you will understand my problem and can help me with it.
A:
Explanation
Simply, you must renew your badAnswers (mauvaisesReponses) list on each completed Question instance.
I've written sample code for the provided input XML file. For simplicity, I've created the Question class in the same file as the solution;
// A - first instantiation of badAnswers list
List<String> badAnswers = new LinkedList<>();
for (int event = parser.next(); event != XMLStreamConstants.END_DOCUMENT; event = parser.next()) {
switch (event) {
case XMLStreamConstants.START_ELEMENT:
break;
case XMLStreamConstants.END_ELEMENT:
if (parser.getLocalName().equals("questionReponse")) {
Question question = new Question(currentCategory, currentQuestion, currentRightAnswer, badAnswers);
quiz.add(question);
// B - Renew badAnswers after each Question entry insert
badAnswers = new LinkedList<>();
}
Please also note that I've used LinkedList implementation here to demonstrate that your problem is not related to the List implementation, it is implementation-agnostic.
Solution Code
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.InputStream;
import java.util.LinkedList;
import java.util.List;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamReader;
public class Solution {
public static void main(String[] args) {
List<Question> quiz = getQuiz("inputFile.xml");
printQuiz(quiz);
}
public static List<Question> getQuiz(String fileName) {
List<Question> quiz = null;
try {
// parser instantiation
InputStream in = new FileInputStream(fileName);
XMLInputFactory factory = XMLInputFactory.newInstance();
XMLStreamReader parser = factory.createXMLStreamReader(in);
String currentData = null, currentCategory = null, currentQuestion = null, currentRightAnswer = null;
quiz = new LinkedList<>();
List<String> badAnswers = new LinkedList<>(); // first instantiation of badAnswers list
for (int event = parser.next(); event != XMLStreamConstants.END_DOCUMENT; event = parser.next()) {
switch (event) {
case XMLStreamConstants.START_ELEMENT:
break;
case XMLStreamConstants.END_ELEMENT:
if (parser.getLocalName().equals("questionReponse")) {
Question question = new Question(currentCategory, currentQuestion, currentRightAnswer, badAnswers);
quiz.add(question);
badAnswers = new LinkedList<>(); // Renew badAnswers after each Question entry insert
}
if (parser.getLocalName().equals("categorie")) {
currentCategory = currentData;
}
if (parser.getLocalName().equals("question")) {
currentQuestion = currentData;
}
if (parser.getLocalName().equals("reponse")) {
currentRightAnswer = currentData;
}
if (parser.getLocalName().equals("mauvaiseReponse")) {
badAnswers.add(currentData);
}
break;
case XMLStreamConstants.CHARACTERS:
currentData = parser.getText();
break;
}
} // end of for loop
parser.close();
} catch (FileNotFoundException | XMLStreamException e) {
e.printStackTrace();
}
return quiz;
}
public static void printQuiz(List<Question> quiz) {
int i = 1;
for(Question q : quiz) {
System.out.println("Question : " + i++);
System.out.printf("\tCategory : %s\n" , q.getCurrentCategory());
System.out.printf("\tQuestion : %s\n" , q.getCurrentQuestion());
System.out.printf("\tAnswer : %s\n" , q.getCurrentRightAnswer());
System.out.printf("\tBad Answers: %s\n" , q.getBadAnswers());
System.out.println("***********************\n");
}
}
}
class Question {
private String currentCategory;
private String currentQuestion;
private String currentRightAnswer;
private List<String> badAnswers;
public Question(String currentCategory, String currentQuestion, String currentRightAnswer, List<String> badAnswers) {
this.currentCategory = currentCategory;
this.currentQuestion = currentQuestion;
this.currentRightAnswer = currentRightAnswer;
this.badAnswers = badAnswers;
}
public String getCurrentCategory() {
return currentCategory;
}
public String getCurrentQuestion() {
return currentQuestion;
}
public String getCurrentRightAnswer() {
return currentRightAnswer;
}
public List<String> getBadAnswers() {
return badAnswers;
}
}
Demo Output
Question : 1
Category : Biologie
Question : Question 1
Answer : reponse correcte 1
Bad Answers: [reponse fausse 1.1, reponse fausse 1.2, reponse fausse 1.3]
***********************
Question : 2
Category : Chimie
Question : Question 2
Answer : reponse correcte 2
Bad Answers: [reponse fausse 2.1, reponse fausse 2.2, reponse fausse 2.3]
***********************
Question : 3
Category : CultureG
Question : Question 3
Answer : reponse correcte 3
Bad Answers: [reponse fausse 3.1, reponse fausse 3.2, reponse fausse 3.3]
***********************
Q:
Can a existing mapreduce program be made to run from a specified offset of input file
Is there any way to run an existing mapreduce program so that it processes only from a given offset of the input file?
Eg:
If given offset is 500, the mapreduce program should start processing input file from 500th byte.
A:
It is possible, but it will require Java coding and creating a custom InputFormat. For example you can subclass FileInputFormat and override the methods public List<InputSplit> getSplits(JobContext job) and protected FileSplit makeSplit(Path file, long start, long length, String[] hosts).
To pass the starting offset you can use Configuration parameters accessible via job.getConfiguration().getInt(YOUR_PARAM_NAME, 0)
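A rough sketch of that idea follows (it assumes the newer org.apache.hadoop.mapreduce API and a Hadoop version whose FileInputFormat exposes makeSplit; the configuration key "my.input.start.offset" is made up, and a real implementation would still need to re-align to record boundaries near the offset in the record reader):
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class OffsetTextInputFormat extends TextInputFormat {

    @Override
    public List<InputSplit> getSplits(JobContext job) throws IOException {
        long offset = job.getConfiguration().getLong("my.input.start.offset", 0L);
        List<InputSplit> adjusted = new ArrayList<InputSplit>();
        for (InputSplit split : super.getSplits(job)) {
            FileSplit fs = (FileSplit) split;
            long end = fs.getStart() + fs.getLength();
            if (end <= offset) {
                continue; // this split lies entirely before the requested offset, drop it
            }
            long newStart = Math.max(fs.getStart(), offset);
            // keep the split, trimmed so it starts no earlier than the offset
            adjusted.add(makeSplit(fs.getPath(), newStart, end - newStart, fs.getLocations()));
        }
        return adjusted;
    }
}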
Q:
A japanese saying "一をいうと十返ってくる"
I'm currently trying to read a japanese novel and I found this expression :
一をいうと十返ってくる
It was meant to describe a character, but I just don't get it. At first I thought it could mean "tell one and get back ten", so I thought it meant this character tends to do more than he was actually asked or intended to do...?
However, I tried searching on Japanese sites and it seems it's a saying used to describe a very proud person...? Still, I would like a more precise idea of what it could really mean and where it comes from, because I'm very interested in Japanese idioms.
Does anyone have a more precise idea ?
Thank you very much.
A:
「[一]{いち}をいうと[十返]{じゅうかえ}ってくる」
The meaning and nuance of this phrase can be quite different depending on the context or the speaker's intention.
Positive:
Someone is always willing to give a full explanation. You ask one simple question and he will not only answer that question but also give you so much more related information.
Negative:
Someone always talks back to you. Tell him one thing and he will give back a long session of objection, refutation, etc.
(Possibly) more important:
I explained the phrase in terms of "speaking words" above, but the phrase does not always have to be about "ten times as many words". It can also be about someone's tendency in taking non-verbal actions if he just is the type to do much more than the bare minimum.
Q:
Doctrine2 entity default value for ManyToOne relation property
I've got a Doctrine2 Entity called "Order", which has several status properties. The allowed status' are stored in a different Entity, so there is a ManyToOne relation defined for those entities.
/**
* @ORM\Entity()
*/
class Order extends AbstractEntity {
// ...
/**
* @ORM\ManyToOne(targetEntity="Status")
* @ORM\JoinColumn(onDelete="NO ACTION", nullable=false)
*/
protected $status;
/** @ORM\Column(nullable=true) */
protected $stringProperty = "default value";
}
I need to set this status property to a default value when creating a new instance of the order object.
For a "non-relation" property I can simply set it like the $stringProperty above. But how to do it for relations?
I cannot set the value to the id of the related record, as Doctrine2 will complain.
It's fine if the configured default is a "Reference" to the status entity. The available statuses are fixed and won't change (often).
How do I configure the entity to have a proper default relation configured.
Preferably not in a listener when persisting, as the status may be requested before that.
A:
There are several approaches but I would suggest using the OrderRepository as a factory for creating new orders.
class OrderRepository
{
public function create()
{
$order = new Order();
$status = $this->_em->find('Status','default'); // or getReference
$order->setStatus($status);
return $order;
}
}
// In a controller
$orderRepository = $this->container->get('order_repository');
$order = $orderRepository->create();
By going with a repository you can initialize complex entity graphs that will be ready for persisting.
==========================================================================
Plan B would be to do this sort of thing within the order object and then use listeners to "fix things up" before persisting or updating.
class Order
{
public function __construct()
{
$this->status = new Status('Default');
}
}
The problem of course is that a default status object already exists in the database so when you flush you will get a error. So you need to hang an onFlush(http://docs.doctrine-project.org/projects/doctrine-orm/en/latest/reference/events.html#onflush) listener on the entity manager, check to see if the status object is being managed by the entity manager and, if not, replace it with a managed object fetched via the entity manager.
This approach lets you deal with more "pure" domain models without worrying as much about the persistence layer. On the other hand, dealing with the flush can be tricky. On the gripping hand, once you get it working then it does open up some major possibilities.
========================================================
There is also the question of what exactly the status entity does. If all it contains is some sort of status state ('entered', 'processed', etc.), then you might consider just having it be a string. Sort of like the ROLE_USER objects.
Q:
React typescript ref return null in conditional rendering
I want to use React refs, it works fine in static rendering, e.g:
<footer ref="ftr"></footer>
But, not in conditional rendering, e.g:
{footer ?
<footer ref="ftr"></footer>
: null}
When I called ReactDOM.findDOMNode(this.refs.ftr);, the first way returned the element (fine) but the second returned me undefined.
How to do the right way in conditional rendering? Any answer would be appreciated.
A:
You should not use string refs as written in the docs:
We advise against it because string refs have some issues, are
considered legacy, and are likely to be removed in one of the future
releases
You can do this:
let footerElement: HTMLElement | null = null;
...
{footer ?
<footer ref={ el => footerElement = el }></footer>
: null}
...
if (footerElement != null) {
...
}
Q:
Not populating tableview with structure array
I need to populate my tableView with an array of a structure. The first property of the structure is the name. This is what I tried...
var menuArray:[Restaurant] = [Restaurant]()
override func viewDidLoad() {
super.viewDidLoad()
let shake = Item(name: "Shake", carbs: 20)
let fries = Item(name: "Fries", carbs: 30)
let beverages = Category(name: "Beverages", items: [shake])
let chips_fries = Category(name: "Chips & Fries", items: [fries])
let desserts = Category(name: "Desserts", items: [])
let other = Category(name: "Other Menu Items", items: [])
let sandwiches_burgers = Category(name: "Sandwiches & Burgers", items: [])
let sides = Category(name: "Sides", items: [])
a_w = Restaurant(name: "A&W", categories: [beverages, chips_fries, desserts, other, sandwiches_burgers, sides])
let menuArray = [a_w]
}
override func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
let currentCell = tableView.dequeueReusableCell(withIdentifier: "cell")
let currentRestaurant = menuArray[indexPath.row]
currentCell?.textLabel!.text = currentRestaurant.name
return currentCell!
}
override func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
return menuArray.count
}
Why won't it populate my tableView?
Here is my class also...
import Foundation
struct Item {
let name: String
let carbs: Int
}
struct Category {
let name: String
let items: [Item]
}
struct Restaurant {
let name: String
let categories: [Category]
}
A:
In this line
let menuArray = [a_w]
you are creating a local variable menuArray which is different from the property with the same name representing the data source array.
Omit let
menuArray = [a_w]
PS: Please use more descriptive variable names than a_w.
Q:
How to Compile and Debug C++ in Notepad++ using Turbo C++ Compiler
I have installed NppExecute plugin in notepad++. I am not able to figure out next step to compile and debug C,C++ programs in Notepad++.
System Details: (a) Turbo C directory C:\TC (b) OS Windows 7
Please provide complete details on how to set Environment Variable and Scripts for Compiling and Debugging.
A:
I'm wondering why someone wants to use Turbo C++. If you run Windows, just use Visual Studio Express or Dev-C++. If you still want to use Turbo C, you will run into several compatibility problems with this ancient software.
A:
Notepad++ has the run feature, but as far as I know it's unable to help you debugging (e.g. stepping through code, watching variables, etc.).
Your best bet would be using a simple batch file to compile the code and run your debug commands, but as far as I know you can't include everything into Notepad++ (i.e. it's no real C/C++ IDE).
The only option you've got is adding the created batch file as the program to be run by NppExecute.
Edit:
Overall, as rkosegi suggested, if possible, use a more up-to-date toolchain.
Microsoft's Visual C++ Express Edition can be downloaded for free and used for private or commercial projects.
If you target cross platform code, it might be easier to use MinGW to use GCC/G++ under Windows.
Q:
bootstrap.min.css sets transparency where not wanted
I have a small chatbox at the bottom of my page which seems to be inheriting CSS styles from bootstrap.min.css, and the chatbox is transparent. This is a nuisance because the underlying text on the page shows through and, what is worse, hyperlinks on the page override the clickable areas in the chatbox for opening, closing and submitting messages.
I have tried adding CSS style to the chatbox for opacity and rgba. Even tried adding a background image but to no effect.
I have since modified the chatbox to display an iFrame from a different site that does not use bootstrap.min.css.
But even the iFrame page is affected by transparency. I can remove the transparency setting in bootstrap.min.css but that will not solve my bigger problem... I am intending to use this chatbox on several sites and may not have control of the site's CSS.
So I need a way to override the parent site's CSS just for the chatbox.
If that is impossible, then I can weed out the transparency from bootstrap.min.css that is used on my own sites. However I do wonder what is the point of such transparency when it is useless here...
A:
It's a z-index (stacking) problem, which is common when integrating iframes. Apply z-index: 2000; (or whatever number, as long as the chatbox ends up on top) to your chatbox div so the chatbox stays in front.
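Something along these lines, where .chatbox is a hypothetical selector standing in for whatever class or id your chatbox wrapper actually has:
/* hypothetical selector for the chatbox wrapper */
.chatbox {
    position: relative;      /* z-index only takes effect on positioned elements */
    z-index: 2000;           /* lifts the chatbox above the page's links */
    background-color: #fff;  /* opaque background so the underlying text can't show through */
}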
Q:
Where to get flight dynamics for a flight sim model?
Once, a while ago, I tried to create a Flight Simulator X model for an aircraft that I wanted a model of, but was soon overwhelmed by having to guess so much of the flight dynamics. Is there somewhere where I could get detailed information about the flight dynamics of aircraft without contacting the manufacturer, a pilot, or having the plane itself to run tests on? I mean for things like drag at different mach, drag coefficient created by the landing gear, lift coefficient created by the flaps, detailed stuff about the engines, etc.
A:
Unfortunately I have no experience with how FSX models aircraft, but at a guess, it's model requires extensive experimental data from a real aircraft to truly get the right parameters.
And that's something no hobbyist is likely to be able to do. For that matter, it's pretty difficult for a pilot to do, since actually recording the relevant data is difficult, and some of what you need to know requires doing things with the aircraft you probably shouldn't do in most circumstances.
X-Plane's flight model and aircraft creation tool is far more forgiving. You still end up having to guess a lot of parameters, but they are generally less critical to basic handling.
All you really need to get a decent flight model out of X-Plane is a good set of reference pictures, an eye for detail (so your model matches the geometry properly), and ideally the correct airfoil profiles and engine specifications.
(Primarily the thrust and power)
For the most part, good reference models and diagrams, and the information you'd find in the Pilot's operating handbook is enough to create a decent flight model in X-Plane.
It certainly won't be perfect, and you'll probably have to tune it, but it's a much easier task than getting that data needed for an FSX model.
I have the good fortune of being a student pilot, and as such I decided to attempt a model in X-Plane, and found that while it was far from perfect (and needs a lot of improvement to be a 'good' model), its behaviour was much closer to the real aircraft that I fly regularly than I was ever expecting, given how much I had to guess.
I had to guess everything from the aileron deflection angles to the propeller geometry, wing airfoil choices and more, and still the resulting model was only slightly off from the real thing insofar as I know how the real aircraft flies.
I guess that's not an overly helpful answer in a direct sense for an FSX flight model, but I fear it just isn't going to be at all easy to find the information you need to make a flight model that isn't fundamentally broken in FSX, let alone accurate.
X-Plane is just far simpler to work with when you don't have a lot of information...
Whether that's worth the downsides of X-Plane (especially the consequences of switching if you already have a heavy investment in FSX), I don't know.
But it's worth keeping in mind if you are particularly fond of amateur aircraft design.
(It's even plausible to create fictional designs in x-plane for which no real-world data could ever exist in theory, and still get a good idea of how such a design likely would fly if it did exist.)
As for FSX potentially having better flight models in some cases? Maybe. But this is likely going to be the flight models of expensive add-on aircraft models that were made with the help of the manufacturer and pilots qualified to fly the real aircraft.
That's not going to help you any if you don't have access to those kinds of resources.
A hand-tuned model matched to exact real-world data may well work better than a physics based model if you have good source data.
But if your source data is lousy (as it is for most of us unfortunately), then the physics based model will be much more reliable most of the time...
A:
Decent aerodynamic (wind tunnel) data is available courtesy of NASA / NTRS.
Windtunnel derived aerodynamic data sources is where I have collected together detailed data for the B747, F-14 and F-15.
B747 Aerodynamic data
NASA CR-1756 The Simulation of a Large Jet Transport Aircraft Volume I: Mathematical Model, C. Rodney Hanke
March 1971
D6-30643 THE SIMULATION OF A JUMBO JET TRANSPORT AIRCRAFT - VOLUME 11: MODELING DATA, C. Rodney Hanke and Donald R. Nordwall September 1970
F-14 Aerodynamic data
These are the data sources for my F-14 for FlightGear
F-14A Aerodata plots F-14A Aerodata plots from AFWAL-TR-80-3141. These are in the TR; and don't reflect the JSBSim model as that has more data; this is just what I made for reference whilst modelling.
Richard Harrison
AFWAL-TR-80-3141, Part I Investigation of High-Angle-of-Attack Maneuver-limiting factors, Part I: Anaylsis and simulation
Donald E. Johnston, David G. Mitchell, Thomas T. Myers
1980
AFWAL-TR-80-3141, Part III: Investigation of High-Angle-of-Attack Maneuver-limiting factors, Part III: Appendices aerodynamic models
Donald E. Johnston, David G. Mitchell, Thomas T. Myers
1980
NASA TN D-6909 DYNAMIC STABILITY DERIVATIVES AT ANGLES OF ATTACK FROM -5deg TO 90deg FOR A VARIABLE-SWEEP FIGHTER CONFIGURATION WITH TWIN VERTICAL TAILS
Sue B. Grafton and Ernie L. Anglin
1972
NASA-TM-101717 Flutter clearance to the F-14A Variable-Sweep Transition Flight Expirement Airplane - Phase 2
Lawrence C. Freudinger and Michael W. Kehoe
July 1990
N89 - 20931 APPLIED TRANSONICS AT GRUMMAN
W. H. Davis
F-15 Aerodynamic data sources
These are the data sources / references for F-15 for FlightGear. The FDM is based on the windtunnel derived aerodynamic data found in (AFIT/GAE/ENY/90D-16).
Richard Harrison, [email protected]: F-15 Aerodynamic data from (AFIT/GAE/ENY/90D-16); CG 25.65%, ZDAT/AED/2014/12-2, December, 2014: F-15 Aerodynamic data extracted from AFIT/GAE/ENY/90D-16
Robert J. McDonnell, B.S., Captain, USAF: INVESTIGATION OF THE HIGH ANGLE OF ATTACK DYNAMICS OF THE F-15B USING BIFURCATION ANALYSIS, AFIT/GAE/ENY/90D-16, December 1990: ADA230462.pdf
Richard L. Bennet, Major, USAF: ANALYSIS OF THE EFFECTS OF REMOVING NOSE BALLAST FROM THE F-15 EAGLE, AFIT/GA/ENY/91D-1, December 1991: ADA244044.pdf
DR. J. R. LUMMUS, G. T. JOYCE, O C. D. O MALLEY: ANALYSIS OF WIND TUNNEL TEST RESULTS FOR A 9.39-PER CENT SCALE MODEL OF A VSTOL FIGHTER/ATTACK AIRCRAFT : VOLUME I - STUDY OVERVIEW, NASA CR-152391-VOL-1 Figure 3-2 p54, October 1980: 19810014497.pdf
Frank W. Burcham, Jr., Trindel A. Maine, C. Gordon Fullerton, and Lannie Dean Webb: Development and Flight Evaluation of an Emergency Digital Flight Control System Using Only Engine Thrust on an F-15 Airplane, NASA TP-3627, September 1996: 88414main_H-2048.pdf
Thomas R. Sisk and Neil W. Matheny: Precision Controllability of the F-15 Airplane, NASA-TM-72861, May 1979 87906main_H-1073.pdf
Aircraft handling data
NT-a3A, F-104A, F-4C, X-15, HL-10, Jetstar, CV-880M, B-747, C-5A, and XB-70A.
Robert K. Heffley and Wayne F. Jewell, NASA CR-2144 AIRCRAFT HANDLING QUALITIES DATA,
December 1972
JSBSim implementations of the aerodynamics models can be viewed in my GitHub repository F-14 and F-15. These are both useful references in how to implement an aerodynamic model using JSBSim.
Where no such data is available OpenVSP using VSPAero is a useful tool for generating coefficients from geometry.
Any computational method (including OpenVSP and X-Plane) will not be able to attain the accuracy gained from windtunnel measurements, especially as you reach the edge of the flight envelope. All FAA Level D simulators use wind tunnel derived aerodyanmic data packages for this reason.
Q:
Is it ok to ask questions on Stack Overflow to improve my coding skills?
I have some questions I want to ask to other (experienced) programmers on Stack Overflow.
The goal of those questions is gaining knowledge to become a better programmer.
I think it's a great idea to ask an experienced programmer I know to take a look at my code. But mostly experienced programmers don't have time for this.
So can I ask such questions on Stack Overflow?
A:
So can I ask such questions on Stack Overflow?
No.
This is
opinion based
not about a specific programming problem
too broad
Regarding improvement of working code you may ask at Code Review, instead.
For questions about "creating, delivering, and maintaining software responsibly", you can ask them at Software Engineering Stack Exchange (previously named "Programmers Stack Exchange").
A:
Such questions are not strictly disallowed here (I think); they are asked and answered from time to time, if they ask about a very specific part of some code. When it's just a huge code dump asking how to improve it, your question will quickly gather downvotes and close votes.
There is a site specifically created for this, however: Code Review Stack Exchange
Take a look at What topics can I ask about here? for details on the kind of questions you can ask on Code Review. Below is a summary, taken from that page:
I'm confused! What questions are on-topic for this site?
Simply ask yourself the following questions. To be on-topic the answer
must be "yes" to all questions:
Is code included directly in my question? (See Make sure you include your code in your question below.)
Am I an owner or maintainer of the code?
Is it actual code from a project rather than pseudo-code or example code?
Do I want the code to be good code? (i.e. not code-golfing, obfuscation, or similar)
To the best of my knowledge, does the code work as intended?
Do I want feedback about any or all facets of the code?
If you answered "yes" to all the above questions, your question is
on-topic for Code Review.
A:
Although you shouldn't just ask on Stack Overflow to have your code looked at, you can use Stack Overflow to improve your coding skills. I do it all the time, by answering questions (or just by trying to), about things that I don't quite know how to do but would like to. It's a great way to find out about language features, techniques and technologies you didn't know about.
A surprising number of questions (or perhaps it's not at all surprising) can be answered with a bit of googling, persistence and experimentation. And if I get it wrong, a swift handful of downvotes will set me straight. :-)
Q:
Is the sum of separating vectors always separating?
If $\mathcal{R}$ is a von Neumann algebra acting on Hilbert space $H$ and $v, w \in H$ are separating vectors for $\mathcal{R}$, must $v+w$ be (either zero or) separating for $\mathcal{R}$?
[I have edited to remove the restriction to type III factors and am moving my proposed partial solution to an answer below.]
A:
No, there must be a counterexample, under the mild assumption that there exists a nontrivial unitary $U \in \mathcal{R}$ whose restriction to the range of some nonzero projection $P \in \mathcal{R}$ is trivial (i.e. the identity).
Fix such a $U$ and $P$. Let $v$ be any separating vector for $\mathcal{R}$ and let $w = -Uv$. This $w$ is separating for $\mathcal{R}$ since any nonzero $T \in \mathcal{R}$ that annihilated $w$ would make $-TU$ a nonzero operator in $\mathcal{R}$ than annihilates $v$.
But we can show, using the fact that $UP = P$ and $U(1-P) = (1-P)U$, that $v + w$ is not separating for $\mathcal{R}$:
$v + w = v - Uv = (Pv + (1-P)v) - (UPv + U(1-P)v)$
$= (1-P)v - U(1-P)v = (1-P)v - (1-P)Uv = (1-P)(1-U)v$;
and $(1-P)(1-U)v$ is annihilated by $P$.
Q:
CMake link directory passing when compiling shared library
Say I have C project with the following structure (simplified):
|- CMakeLists.txt <- This is root CMake
|- lib
|- <some source files>
|- CMakeLists.txt <- CMake file for building the library
|- demo
|- <some source files>
|- CMakeLists.txt <- CMake for building demo apps
|- extra_lib
|- <some source files>
|- CMakeLists.txt <- CMake for building supplementary library
Now, I want to build my library (living in lib) as a shared library to be used by demo apps from demo directory.
Additional library, that can not be a part of my library (it is essentially a wrapper for some C++ external library) is also to be compiled as a shared library and then linked to my library.
I have a problem with including dependencies for the additional library. In its CMakeLists.txt I've defined link_directories to point to the location where the .so libs are stored, and then target_link_libraries to say which ones should be linked. At the end I export the target.
include_directories(${EXTERNAL_DIR}/include)
link_directories(${EXTERNAL_DIR}/lib)
add_library(extra_lib SHARED extra_lib.cpp)
target_link_libraries(extra_lib
some_lib
)
export(TARGETS extra_lib FILE extra_lib.cmake)
The point is that when I try to compile lib and link it against extra_lib, I get an error that some_lib is not found, which I guess means that link_directories is local to extra_lib.
Now, the question is: how can I make it propagate together with its dependencies? I'd like it to work so that adding extra_lib as a subdirectory and as a dependency of my lib automatically adds the link directories from extra_lib to the lib linking process.
The linking process would look like:
(some external library) --> extra_lib --> lib --> demo app
A:
First off, the CMake docs state that commands like include_directories and link_directories are rarely necessary. In fact, it is almost always better to use target_include_directories and target_link_libraries instead.
Secondly, the reason your approach fails is because you need to let CMake know about the existence of some_lib. You can do this like so:
add_library(some_lib SHARED IMPORTED)
set_target_properties(some_lib
PROPERTIES
IMPORTED_LOCATION ${EXTERNAL_DIR}/lib/libsome_lib.so)
Then, afterwards:
target_link_libraries(extra_lib some_lib)
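As a side note (a sketch, not required for the fix above), the external headers can be attached to the imported target in the same spirit, so that targets linking some_lib pick up the include path automatically instead of relying on a directory-wide include_directories call:
set_target_properties(some_lib
    PROPERTIES
    INTERFACE_INCLUDE_DIRECTORIES ${EXTERNAL_DIR}/include)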
Q:
What are the challenges for recognising the handwritten characters?
This 2014 article says that a Chinese team of physicists has trained a quantum computer to recognise handwritten characters.
Why did they have to use a quantum computer to do that?
Is it just for fun and demonstration, or is it that recognising handwritten characters is so difficult that standard (non-quantum) computers or algorithms cannot do it?
If standard computers can achieve the same thing, what are the benefits of using quantum computers to do that then over standard methods?
A:
Handwritten digit recognition is a standard benchmark in Machine Learning in the form of the MNIST dataset. For example, scikit-learn, a python package for Machine Learning uses it as a tutorial example.
The paper you cite uses this standard task as a proof of concept, to show that their system works.
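For reference, the classical version of this task is routine; here is a minimal scikit-learn sketch of handwritten digit classification (the standard tutorial-style setup on the small built-in digits dataset, not the quantum experiment from the paper):
from sklearn import datasets, svm
from sklearn.model_selection import train_test_split

digits = datasets.load_digits()                    # 1,797 8x8 grayscale digit images
X = digits.images.reshape(len(digits.images), -1)  # flatten each image to a 64-dim vector
X_train, X_test, y_train, y_test = train_test_split(
    X, digits.target, test_size=0.5, random_state=0)

clf = svm.SVC(gamma=0.001)    # a plain support vector classifier
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))  # typically well above 0.95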
Q:
Class AB amplifier
What role does Rv play in this class AB amplifier?
A:
This is a class B amplifier: -
Your circuit is a class AB amplifier: -
Rv adjusts the bias point of the two transistors so that T1 and T2 are always conducting a little bit of current - this avoids excessive cross over distortion: -
See also this article, Crossover Distortion in Amplifiers, for more information.
Rv modifies the volt drop across the two series diodes. Remember that diodes are not just fixed 0.7 V devices. The forward volt drop can be adjusted so that the base-emitter junctions of each output transistor conduct 1 mA or so, placing the transistors in a much more linear region of their characteristic, at the expense of sending a DC current through the transistors (an increase in power dissipation).
Q:
How to create a navbar with 2 collapse menu?
My code works fine in mobile view, but on desktop the right-side links overflow the navbar. I am trying to create a navbar with the brand in the center, the search bar on the left and the links on the right. Also, how do I move the hamburger button to the left and the search toggle to the right in mobile view? (Sorry for my English.)
.navbar-brand {
position: absolute;
width: 100%;
left: 0;
top: 0;
text-align: center;
margin: auto;
}
.navbar-toggle {
z-index:3;
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js"></script>
<link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" rel="stylesheet"/>
<nav class="navbar navbar-default navbar-fixed-top" role="navigation">
<div class="navbar-header">
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#navbar-collapse-2">
<span class="sr-only">Toggle navigation</span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
<a class="navbar-brand" href="#">Brand</a>
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#navbar-collapse-1"><span class="glyphicon glyphicon-search" aria-hidden="true"> </span>
</button>
</div>
<div class="collapse navbar-collapse navbar-left" id="navbar-collapse-1">
<form class="navbar-form navbar-left" role="search">
<div class="form-group">
<input type="text" class="form-control" placeholder="Search">
</div>
<button type="submit" class="btn btn-default">Submit</button>
</form>
</div>
<div class="collapse navbar-collapse" id="navbar-collapse-2">
<ul class="nav navbar-nav navbar-right">
<li><a href="#">Link</a></li>
<li><a href="#">Link</a></li>
<li><a href="#">Link</a></li>
</ul>
</div>
</nav>
A:
Add a menu-1 class (or call it whatever you want) to the button:
<button type="button" class="navbar-toggle menu-1" data-toggle="collapse" data-target="#navbar-collapse-2">
then add this to your css:
.menu-1 {
float: left;
margin-left: 10px;
}
Update: to handle the search issue, remove width: 100% and increase left for .navbar-brand like this:
.navbar-brand {
position: absolute;
/*width: 100%;*/
left: 50%;
top: 0;
text-align: center;
margin: auto;
}
and add this css also:
@media (max-width:767px){
a.navbar-brand {
left: 45%;
}
}
check the updated Jsfiddle
Q:
Issue with jquery remove method on IE7?
<table class="myCustomers">
<tbody>
<tr>
<td>
<ul id="salesCustomers">
<li title="cust 1"><a id="cust_1" >customer 1</a></li>
<li title="cust 2"></li>
</ul>
</td>
</tr>
</tbody>
When I do the below on IE 7, the DOM element corresponding to "customer 1" gets removed from the "salesCustomers" container, but
the "salesCustomers" container does not get adjusted (I mean IE 7 displays empty space in its place) after removal of the element:
$('#cust_1').remove();
It works fine on IE8, IE9, Firefox and Chrome, but not on IE 7. Why?
Updated:-
CSS part is
table.myCustomers li {
margin: 8px;
}
table.myCustomers li a {
text-decoration: none;
}
a {
color: #000000;
margin: 3px;
}
A:
The empty space may be because the li is still there (as pointed out by Jayraj).
If you want to remove the li corresponding to the #cust_1 as well,
You have a couple of ways to do it,
$("[title='cust 1']").remove();
$("#cust_1").parents("li").remove(); // this will remove the child element as well
Test link
Q:
Django ModelForm not showing up in template
I've been using django for a couple of days now and I'm trying to create a small app to learn the whole stuff.
I've read about the ModelForms and I wanted to use it in my app, but I can't get it to render in my template and I can't find the problem, so I was hoping you guys could help me out.
Here's my code:
models.py
from django.db import models
class Kiss(models.Model):
latitude = models.FloatField()
longitude = models.FloatField()
person1 = models.CharField(max_length = 255)
person2 = models.CharField(max_length = 255)
cdate = models.DateTimeField(auto_now_add=True)
def __unicode__(self):
return self.person1
views.py
from django.views.generic.list import ListView
from project.models import Kiss
from project.forms import KissForm
from django.http import HttpResponseRedirect
class KissListView(ListView):
template_name = 'project/home.html'
model = Kiss
form = KissForm
urls.py (only the relevant part)
urlpatterns += patterns('',
url(r'^$', KissListView.as_view(), name='home'),
)
forms.py
from django import forms
from project.models import Kiss
class KissForm(forms.ModelForm):
class Meta:
model = Kiss
and the template
<form action="" method="POST">
{% csrf_token %}
{{form.as_p}}
<button>Send</button>
</form>
Thanks in advance for your help.
J
A:
class KissListView(ListView):
...
You are using ListView, which does not use a form and will not put a form into the template context.
You may want to use CreateView instead.
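A minimal sketch of that switch, reusing the names from the question (the success URL name 'home' comes from the question's urls.py; the reverse_lazy import path matches Django versions of that era and lives in django.urls in newer releases):
from django.core.urlresolvers import reverse_lazy  # django.urls in newer Django
from django.views.generic.edit import CreateView

from project.forms import KissForm
from project.models import Kiss


class KissCreateView(CreateView):
    template_name = 'project/home.html'
    model = Kiss
    form_class = KissForm
    success_url = reverse_lazy('home')

    def get_context_data(self, **kwargs):
        # Keep passing the existing objects so the template can still list them.
        context = super(KissCreateView, self).get_context_data(**kwargs)
        context['kiss_list'] = Kiss.objects.all()
        return context

The URL pattern would then point at KissCreateView.as_view() instead of KissListView.as_view(), and the template's form will render and save on POST.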
Q:
Assembly in Visual Studio 2013 not building even after enabling Microsoft Macro Assembler
I'm trying to run a pretty basic assembly file to do a little math and print the output, nothing challenging. I've followed the steps given from places such as here but my build still fails and there are errors on every single line about syntax. Errors such as:
1>c:\users\damian\documents\visual studio 2013\projects\test345\test345\source.asm(22): error C2061: syntax error : identifier 'dword'
1>c:\users\damian\documents\visual studio 2013\projects\test345\test345\source.asm(24): error C2061: syntax error : identifier 'add'
1>c:\users\damian\documents\visual studio 2013\projects\test345\test345\source.asm(27): error C2061: syntax error : identifier 'pop'
1>c:\users\damian\documents\visual studio 2013\projects\test345\test345\source.asm(12): error C2061: syntax error : identifier 'main'
The code I'm trying to run is here. I've tried changing from cpp to c compiling, I've tried setting an entry point in the linker, and I've tried right clicking on project->Build Dependencies->Build Customizations and checking masm but none of those made any difference at all. Is there something else I'm missing?
A:
The code you tried to assemble uses NASM syntax. You need to configure Visual Studio to use NASM instead.
1) Install NASM and add its path to the PATH environment variable.
2) Right click on your asm file and then choose Properties->General and then choose Custom Build Tool for the Item Type field.
3) Click on Apply.
4) On the Custom Build Tool page set nasm -f win32 -o "$(ProjectDir)$(IntDir)%(Filename).obj" "%(FullPath)" for the Command Line field.
5) Set the Outputs field to $(IntermediateOutputPath)%(Filename).obj
This will make NASM assemble your assembly source file into visual studio compatible object file.
We are not done yet though, you need to make some changes to the assembly file before you can link it using MSVC's linker.
1) MSVC's linker requires your functions to start with an underscore so main becomes _main.
2) The naming convention when declaring imported APIs is different too. So extern printf becomes extern __imp__printf
3) Call instructions to imported APIs are different too. call printf becomes call [__imp__printf]. The address of printf will be stored in an import table entry and our instruction dereferences it to find the address of printf and calls it.
Trying to link this will also result in an error (error LNK2001: unresolved external symbol _mainCRTStartup). The way I beat this is by including a C file with a dummy function that does nothing; that way, the CRT startup stub gets linked. (If there is a better method, suggest it in the comments.)
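Putting those changes together, a minimal 32-bit hello-world in NASM syntax that links with the MSVC linker might look roughly like this (an illustrative sketch, not the asker's original file):
global _main
extern __imp__printf
section .data
msg db "hello from nasm", 10, 0
section .text
_main:
    push msg              ; cdecl: argument pushed on the stack
    call [__imp__printf]  ; call through the import table entry
    add esp, 4            ; caller cleans up the argument
    xor eax, eax
    ret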
| {
"pile_set_name": "StackExchange"
} |
Q:
Send user to edit page without inserting the record
I currently have an Apex class that pulls the data from a record of one custom object 'property__c' and uses it to create a record of another object 'proposal__c' and then redirects the user to this new record's edit view.
However, there's a problem with this. On the edit page, if the user hits "cancel," the record is still inserted. This is because in order to direct the user to the edit view of the new record, we have to first insert that record so that we can pull its id.
Outside of making a custom edit view for this object, is it possible to send the user to the edit view without inserting the record first? This would ideally behave the same as if the user simply hit the "new" button and then "cancel." The only difference is, we want some of the fields of the record to be pre-populated.
Here is the code. You can see where it redirects the user at the bottom of the convert method.
public class ControllerCreateProposalView {
public Id propertyId;
public ControllerCreateProposalView(ApexPages.StandardController stdController){
propertyId = ApexPages.CurrentPage().getParameters().get('id');
}
public PageReference convert(){
PageReference pref;
Property__c property = [
select
Id,
Name,
OwnerId,
Primary_Contact__c
from Property__c
where Id = :propertyId limit 1
];
Proposal__c proposal = new Proposal__c(
Name = property.Name,
OwnerId = property.ownerid,
Property__c = property.Id,
Client__c = property.Primary_Contact__c
);
insert proposal;
String sServerName = ApexPages.currentPage().getHeaders().get('Host');
sServerName = 'https://'+sServerName+'/';
String editName='/e?retURL=%2F'+proposal.Id;
pref = new PageReference(sServerName + proposal.Id+editName);
pref.setRedirect(true);
return pref;
}
public PageReference back(){
PageReference pref = new PageReference('/' + propertyId);
pref.setRedirect(true);
return pref;
}
}
EDIT:
I can send the user to the default edit page, which is /xxxx.salesforce.com/a0r/e?retURL=%2Fa0r%2Fo
and I can even pass in values for standard fields, such as "name"
/xxxx.salesforce.com/a0r/e?name=TEST&retURL=%2Fa0r%2Fo
But I cannot pass custom fields. If I could pass custom fields in using this method, I believe I could achieve what I want to do.
A:
As you've mentioned in your edit, you can pass fields in the URL
string, but only if you use the Field ID, a concept known as
"Salesforce URL hacking". You can read more about that if wanted on
this other Salesforce Stack:
How do I prepopulate fields on a Standard layout?
Keep in mind that, while it is possible to pre-populate standard and custom fields using this method, the implementation is difficult since there is no guarantee that your field IDs will be identical across different environments (sandboxes, dev orgs, production). If for example my Custom_Field__c has an ID of 00NJ00000022gx0 in my sandbox, it is not guaranteed to have that same ID in production. This could result in your custom links/buttons/logic not inserting the correct data if you accidentally overwrote a 'working' button with a button from another org with different hard coded IDs.
You could probably create a custom setting that holds all of the correct IDs for the fields in each org, and then code your button to pull from that custom setting, but depending on how many fields you're pulling it could become a burden to maintain/update. Because of this, I wouldn't recommend this approach.
Beyond that hack, I think the answer to your question is ultimately
no. You can't edit a record until it has been committed, and you can't
roll back the commit on cancel since it has already occurred and no
triggers or workflows will fire on the cancel action.
As a roundabout alternative, you could consider something like this:
1. Create a Boolean field (default TRUE) on the object being inserted that you want to 'roll back'.
2. Create a WFR that sets that Boolean to FALSE (if it is currently TRUE) on successful edit of the record.
3. Using a relatively simple scheduled Apex job, you could query every hour for all object records that still have the Boolean value set to TRUE and were created more than an hour ago, then delete all of those records.
This approach wouldn't result in an instant delete when pressing cancel, but would provide for a way for the platform to 'clean' itself hour by hour to get rid of the unwanted records.
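A rough sketch of what that hourly cleanup job could look like (Is_Pending__c is the hypothetical Boolean field described above, defaulted to TRUE and set to FALSE by the workflow rule):
global class ProposalCleanupJob implements Schedulable {
    global void execute(SchedulableContext sc) {
        Datetime cutoff = System.now().addHours(-1);
        // Delete proposals whose flag was never cleared (i.e. the edit was cancelled)
        // and that are older than one hour
        delete [SELECT Id FROM Proposal__c
                WHERE Is_Pending__c = true AND CreatedDate < :cutoff];
    }
}
It would then be scheduled once, e.g. with System.schedule('Proposal cleanup', '0 0 * * * ?', new ProposalCleanupJob()); to run at the top of every hour.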
| {
"pile_set_name": "StackExchange"
} |
Q:
Query returns wrong result
Possible Duplicate:
Why are my SEDE results inaccurate/obsolete/incorrect/outdated?
I've run the following query, just to see if it actually works, but I'm getting no results even though I have dozens of posts:
select
q.Id
from
Posts q
where
q.OwnerUserId = 1525840
What's going wrong? I'm executing the query here.
A:
SEDE data isn't live, it comes from periodic data dumps. You can see on the homepage that the most recent data is from June 26th; your first post wasn't made until July 14th
| {
"pile_set_name": "StackExchange"
} |
Q:
Listbox returns System.Data.DataRowView instead of values
I am doing a project for my school, where I have to make a C# Windows Forms application that lets me interact with my PostgreSQL database. I have made a listbox which is supposed to get the names of the tables from my database, and when I select one of these names, data from that table is shown in the datagridview object on the form. The problem, however, is that all my listbox values are System.Data.DataRowView, and the datagridview only displays values from the first table in the list.
The code:
DataTable tabulusaraksts = new DataTable();
DataTable tabula = new DataTable();
NpgsqlDataAdapter adapter = new NpgsqlDataAdapter();
NpgsqlDataAdapter adapter2 = new NpgsqlDataAdapter();
string tab;
public datubaze()
{
InitializeComponent();
string connectionstring = "Server=localhost;Port=5432;UserId=postgres;Password=students;Database=retrospeles;";
//string connectionstring = String.Format("Server={0};Port={1};" +
// "User Id={2};Password={3};Database={4};",
// serveris.ToString(), port.ToString(), user.ToString(),
// password.ToString(), database.ToString());
NpgsqlConnection ncon = new NpgsqlConnection(connectionstring);
NpgsqlCommand listfill = new NpgsqlCommand("select table_name from INFORMATION_SCHEMA.tables WHERE table_schema = ANY (current_schemas(false));", ncon);
adapter.SelectCommand = listfill;
adapter.Fill(tabulusaraksts);
listBox1.DataSource = tabulusaraksts;
listBox1.DisplayMember = "table_name";
NpgsqlCommand showtable = new NpgsqlCommand("select * from " + tab +";" , ncon);
adapter2.SelectCommand = showtable;
}
public void listBox1_SelectedIndexChanged(object sender, EventArgs e)
{
tab = listBox1.GetItemText(listBox1.SelectedItem);
adapter2.Fill(tabula);
dataGridView1.DataSource = tabula;
}
A:
That code should work. I tried it with some test data and ListBox was filled with correct values.
To be sure, try to also set ValueMember alongside DisplayMember:
listBox1.ValueMember = "table_name";
I think the best approach is to add the DataTable rows to your ListBox yourself, using a loop or LINQ. After filling tabulusaraksts, iterate through its DataRows and add them as items to the ListBox, without setting DataSource. Something like this (LINQ):
adapter.SelectCommand = listfill;
adapter.Fill(tabulusaraksts);
listBox1.Items.AddRange(tabulusaraksts.AsEnumerable().Select(row => row[0].ToString()).ToArray());
NpgsqlCommand showtable = new NpgsqlCommand("select * from " + tab +";" , ncon);
adapter2.SelectCommand = showtable;
or, using foreach loop
adapter.SelectCommand = listfill;
adapter.Fill(tabulusaraksts);
listBox1.Items.Clear();
foreach (DataRow row in tabulusaraksts.Rows)
{
listBox1.Items.Add(row[0].ToString());
}
NpgsqlCommand showtable = new NpgsqlCommand("select * from " + tab +";" , ncon);
adapter2.SelectCommand = showtable;
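With the Items approach, the selection handler can then work with the plain string and re-query on every change; something along these lines (illustrative only, reusing the fields from the question):
public void listBox1_SelectedIndexChanged(object sender, EventArgs e)
{
    tab = listBox1.SelectedItem as string;   // plain table name now, not a DataRowView
    if (string.IsNullOrEmpty(tab)) return;
    // Table names cannot be parameterized, so only use values from your own list
    adapter2.SelectCommand.CommandText = "select * from " + tab + ";";
    tabula = new DataTable();                // fresh table so columns of the old one don't linger
    adapter2.Fill(tabula);
    dataGridView1.DataSource = tabula;
}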
| {
"pile_set_name": "StackExchange"
} |
Q:
ssh authentication failure with public/private keys
I'm setting up a Continuous Deployment pipeline on gitlab.
Unfortunately, when trying to ssh from the pipeline to the target server, the authentication fails.
I am asking the question here because I am fairly sure the problem is unix related and not gitlab.
Here is the setup:
Using ssh-keygen I created a key pair.
I added the public key in ~/.ssh/authorized_keys on the server.
The private key is exported as an env var 'SSH_PRIVATE_KEY' on the client server.
permissions on the server: ~/.ssh 700, ~/.ssh/authorized_keys 600
sshd configs on the server are all defaults.
On commit, gitlab spins up a docker executor (docker image node:11.2).
Then, those commands are executed inside the container:
'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
eval $(ssh-agent -s)
##
## Add the SSH key stored in SSH_PRIVATE_KEY variable to the agent store
## We're using tr to fix line endings which makes ed25519 keys work
## without extra base64 encoding.
## https://gitlab.com/gitlab-examples/ssh-private-key/issues/1#note_48526556
##
echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add - > /dev/null
mkdir -p ~/.ssh
chmod 700 ~/.ssh
echo "$SSH_PRIVATE_KEY" > ~/.ssh/known_hosts
chmod 644 ~/.ssh/known_hosts
ssh -vvv user@server
I followed the instructions here: https://docs.gitlab.com/ce/ci/ssh_keys/
Here is the output of my execution:
Running with gitlab-runner 11.5.0 (3afdaba6)
on Runner2 7eb17b67
Using Docker executor with image node:11.2 ...
Pulling docker image node:11.2 ...
Using docker image sha256:e9737a5f718d8364a4bde8d82751bf0d2bace3d1b6492f6c16f1526b6e73cfa4 for node:11.2 ...
Running on runner-7eb17b67-project-40-concurrent-0 via server...
Fetching changes...
Removing node_modules/
HEAD is now at aa4a605 removing bugged command line
Checking out aa4a6054 as integrate_cd...
Skipping Git submodules setup
Checking cache for default...
No URL provided, cache will be not downloaded from shared cache server. Instead a local version of cache will be extracted.
Successfully extracted cache
$ which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )
/usr/bin/ssh-agent
$ eval $(ssh-agent -s)
Agent pid 14
$ echo "$SSH_PRIV_KEY" | ssh-add - > /dev/null
Identity added: (stdin) ((stdin))
$ mkdir -p ~/.ssh
$ chmod 700 ~/.ssh
$ echo "$SSH_KNOWNHOST_KEY" > ~/.ssh/known_hosts
$ chmod 644 ~/.ssh/known_hosts
$ ssh -p 5555 -vvv user@server
OpenSSH_7.4p1 Debian-10+deb9u4, OpenSSL 1.0.2l 25 May 2017
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
Pseudo-terminal will not be allocated because stdin is not a terminal.
debug2: resolving "server" port 22
debug2: ssh_connect_direct: needpriv 0
debug1: Connecting to servee[x.x.x.x] port 22
debug1: Connection established.
debug1: permanently_set_uid: 0/0
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_rsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_rsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_dsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_dsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_ecdsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_ecdsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_ed25519 type -1
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_ed25519-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_7.4p1 Debian-10+deb9u4
debug1: Remote protocol version 2.0, remote software version OpenSSH_7.2
debug1: match: OpenSSH_7.2 pat OpenSSH* compat 0x04000000
debug2: fd 3 setting O_NONBLOCK
debug1: Authenticating to server:22 as 'user'
debug3: put_host_port: [server]:22
debug3: hostkeys_foreach: reading file "/root/.ssh/known_hosts"
debug3: record_hostkey: found key type RSA in file /root/.ssh/known_hosts:2
debug3: record_hostkey: found key type ECDSA in file /root/.ssh/known_hosts:4
debug3: record_hostkey: found key type ED25519 in file /root/.ssh/known_hosts:6
debug3: load_hostkeys: loaded 3 keys from [server]:22
debug3: order_hostkeyalgs: prefer hostkeyalgs: [email protected],[email protected],[email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-ed25519,rsa-sha2-512,rsa-sha2-256,ssh-rsa
debug3: send packet: type 20
debug1: SSH2_MSG_KEXINIT sent
debug3: receive packet: type 20
debug1: SSH2_MSG_KEXINIT received
debug2: local client KEXINIT proposal
debug2: KEX algorithms: curve25519-sha256,[email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha256,diffie-hellman-group14-sha1,ext-info-c
debug2: host key algorithms: [email protected],[email protected],[email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-ed25519,rsa-sha2-512,rsa-sha2-256,ssh-rsa
debug2: ciphers ctos: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected],aes128-cbc,aes192-cbc,aes256-cbc
debug2: ciphers stoc: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected],aes128-cbc,aes192-cbc,aes256-cbc
debug2: MACs ctos: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1
debug2: MACs stoc: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1
debug2: compression ctos: none,[email protected],zlib
debug2: compression stoc: none,[email protected],zlib
debug2: languages ctos:
debug2: languages stoc:
debug2: first_kex_follows 0
debug2: reserved 0
debug2: peer server KEXINIT proposal
debug2: KEX algorithms: [email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1
debug2: host key algorithms: ssh-rsa,rsa-sha2-512,rsa-sha2-256,ssh-dss,ecdsa-sha2-nistp256,ssh-ed25519
debug2: ciphers ctos: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected]
debug2: ciphers stoc: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected]
debug2: MACs ctos: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1
debug2: MACs stoc: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1
debug2: compression ctos: none,[email protected]
debug2: compression stoc: none,[email protected]
debug2: languages ctos:
debug2: languages stoc:
debug2: first_kex_follows 0
debug2: reserved 0
debug1: kex: algorithm: [email protected]
debug1: kex: host key algorithm: ecdsa-sha2-nistp256
debug1: kex: server->client cipher: [email protected] MAC: <implicit> compression: none
debug1: kex: client->server cipher: [email protected] MAC: <implicit> compression: none
debug3: send packet: type 30
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug3: receive packet: type 31
debug1: Server host key: ecdsa-sha2-nistp256 SHA256:1MfReEPXf/ResuMnmG/nEgimB5TxF1AcA2j4LBHBbTU
debug3: put_host_port: [x.x.x.x]:22
debug3: put_host_port: [server]:22
debug3: hostkeys_foreach: reading file "/root/.ssh/known_hosts"
debug3: record_hostkey: found key type RSA in file /root/.ssh/known_hosts:2
debug3: record_hostkey: found key type ECDSA in file /root/.ssh/known_hosts:4
debug3: record_hostkey: found key type ED25519 in file /root/.ssh/known_hosts:6
debug3: load_hostkeys: loaded 3 keys from [server]:22
debug3: hostkeys_foreach: reading file "/root/.ssh/known_hosts"
debug1: Host '[server]:22' is known and matches the ECDSA host key.
debug1: Found key in /root/.ssh/known_hosts:4
Warning: Permanently added the ECDSA host key for IP address '[x.x.x.x]:22' to the list of known hosts.
debug3: send packet: type 21
debug2: set_newkeys: mode 1
debug1: rekey after 134217728 blocks
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug3: receive packet: type 21
debug1: SSH2_MSG_NEWKEYS received
debug2: set_newkeys: mode 0
debug1: rekey after 134217728 blocks
debug2: key: (stdin) (0x55ff2d56d630), agent
debug2: key: /root/.ssh/id_rsa ((nil))
debug2: key: /root/.ssh/id_dsa ((nil))
debug2: key: /root/.ssh/id_ecdsa ((nil))
debug2: key: /root/.ssh/id_ed25519 ((nil))
debug3: send packet: type 5
debug3: receive packet: type 7
debug1: SSH2_MSG_EXT_INFO received
debug1: kex_input_ext_info: server-sig-algs=<rsa-sha2-256,rsa-sha2-512>
debug3: receive packet: type 6
debug2: service_accept: ssh-userauth
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug3: send packet: type 50
debug3: receive packet: type 51
debug1: Authentications that can continue: publickey,keyboard-interactive
debug3: start over, passed a different list publickey,keyboard-interactive
debug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,password
debug3: authmethod_lookup publickey
debug3: remaining preferred: keyboard-interactive,password
debug3: authmethod_is_enabled publickey
debug1: Next authentication method: publickey
debug1: Offering RSA public key: (stdin)
debug3: send_pubkey_test
debug3: send packet: type 50
debug2: we sent a publickey packet, wait for reply
debug3: receive packet: type 51
debug1: Authentications that can continue: publickey,keyboard-interactive
debug1: Trying private key: /root/.ssh/id_rsa
debug3: no such identity: /root/.ssh/id_rsa: No such file or directory
debug1: Trying private key: /root/.ssh/id_dsa
debug3: no such identity: /root/.ssh/id_dsa: No such file or directory
debug1: Trying private key: /root/.ssh/id_ecdsa
debug3: no such identity: /root/.ssh/id_ecdsa: No such file or directory
debug1: Trying private key: /root/.ssh/id_ed25519
debug3: no such identity: /root/.ssh/id_ed25519: No such file or directory
debug2: we did not send a packet, disable method
debug3: authmethod_lookup keyboard-interactive
debug3: remaining preferred: password
debug3: authmethod_is_enabled keyboard-interactive
debug1: Next authentication method: keyboard-interactive
debug2: userauth_kbdint
debug3: send packet: type 50
debug2: we sent a keyboard-interactive packet, wait for reply
debug3: receive packet: type 60
debug2: input_userauth_info_req
debug2: input_userauth_info_req: num_prompts 1
debug1: read_passphrase: can't open /dev/tty: No such device or address
debug3: send packet: type 61
debug3: receive packet: type 51
debug1: Authentications that can continue: publickey,keyboard-interactive
debug2: userauth_kbdint
debug3: send packet: type 50
debug2: we sent a keyboard-interactive packet, wait for reply
debug3: receive packet: type 60
debug2: input_userauth_info_req
debug2: input_userauth_info_req: num_prompts 1
debug1: read_passphrase: can't open /dev/tty: No such device or address
debug3: send packet: type 61
debug3: receive packet: type 51
debug1: Authentications that can continue: publickey,keyboard-interactive
debug2: userauth_kbdint
debug3: send packet: type 50
debug2: we sent a keyboard-interactive packet, wait for reply
debug3: receive packet: type 60
debug2: input_userauth_info_req
debug2: input_userauth_info_req: num_prompts 1
debug1: read_passphrase: can't open /dev/tty: No such device or address
debug3: send packet: type 61
debug3: receive packet: type 51
debug1: Authentications that can continue: publickey,keyboard-interactive
debug2: we did not send a packet, disable method
debug1: No more authentication methods to try.
Permission denied (publickey,keyboard-interactive).
ERROR: Job failed: exit code 1
I think the interesting part of the ssh output is this:
debug3: authmethod_lookup publickey
debug3: remaining preferred: keyboard-interactive,password
debug3: authmethod_is_enabled publickey
debug1: Next authentication method: publickey
debug1: Offering RSA public key: (stdin)
debug3: send_pubkey_test
debug3: send packet: type 50
debug2: we sent a publickey packet, wait for reply
debug3: receive packet: type 51
debug1: Authentications that can continue: publickey,keyboard-interactive
debug1: Trying private key: /root/.ssh/id_rsa
debug3: no such identity: /root/.ssh/id_rsa: No such file or directory
debug1: Trying private key: /root/.ssh/id_dsa
debug3: no such identity: /root/.ssh/id_dsa: No such file or directory
debug1: Trying private key: /root/.ssh/id_ecdsa
debug3: no such identity: /root/.ssh/id_ecdsa: No such file or directory
debug1: Trying private key: /root/.ssh/id_ed25519
debug3: no such identity: /root/.ssh/id_ed25519: No such file or directory
debug2: we did not send a packet, disable method
debug3: authmethod_lookup keyboard-interactive
debug3: remaining preferred: password
It tries to authenticate
debug2: we sent a publickey packet, wait for reply
But failed with SSH_MSG_USERAUTH_FAILURE right after
debug3: receive packet: type 51
Then it tries a couple public keys that do not exists on the runner.
What is happening? What can cause SSH_MSG_USERAUTH_FAILURE?
Thank you.
A:
Found the answer.
I got my hands on the logs from the server I was trying to connect to:
sshd[40354]: Authentication refused: bad ownership or modes for
directory /web
Turns out the user's home directory had more permissive rights than sshd allows. Fixing it to drwxr-xr-x resolved the issue.
So it seems that ssh validates the modes for
.ssh/
.ssh/authorized_keys
the user's HOME directory
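In practice the fix usually boils down to something like this (shown for the /web home directory from the log; the user name is a placeholder):
chmod 755 /web                        # home directory must not be group/world writable
chmod 700 /web/.ssh
chmod 600 /web/.ssh/authorized_keys
chown -R user:user /web/.ssh          # everything must be owned by the connecting user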
| {
"pile_set_name": "StackExchange"
} |
Q:
Where is the Registry running
I can create a container running a registry: docker run -d -p 5000:5000 --restart=always --name registry registry:2
But Docker has a default registry; I can see that the registry is at Registry: https://index.docker.io/v1/ and it must be local, but where is it - do you know?
It is correct that if you use a browser and go to https://index.docker.io/v1/ it will take you to Docker Hub: https://index.docker.io/v1/
But all my local images are on my machine, so there must be somewhere that the registry is running.
You can see the registry if you do:
docker system info
Containers: 32
Running: 29
Paused: 0
Stopped: 3
Images: 205
Server Version: 18.06.0-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: d64c661f1d51c48782c9cec8fda7604785f93587
runc version: 69663f0bd4b60df09991c08812a60108003fa340
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 4.9.93-linuxkit-aufs
Operating System: Docker for Mac
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 2.934GiB
Name: linuxkit-025000000001
ID: Q6IO:V5CP:OHJL:4KJP:ZG2X:GV5W:YHMM:2WCK:4V4O:O6T3:A4E4:BJHM
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
File Descriptors: 206
Goroutines: 223
System Time: 2018-08-29T11:56:34.8224409Z
EventsListeners: 2
HTTP Proxy: gateway.docker.internal:3128
HTTPS Proxy: gateway.docker.internal:3129
Registry: https://index.docker.io/v1/
Labels:
Experimental: true
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
A:
That is the default registry which is dockerhub:
https://hub.docker.com/
Also see:
https://github.com/moby/moby/issues/7203
You cannot change the default registry (which is dockerhub). What you can do is push and pull using your registry as a prefix.
For example:
docker push localhost:5000/yourimage
docker pull localhost:5000/yourimage
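Note that an existing local image normally has to be tagged with the registry prefix before it can be pushed, for example:
docker tag yourimage localhost:5000/yourimage
docker push localhost:5000/yourimage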
As per my comment below - this registry runs locally and with
docker ps | grep registry:2
you can see it running. You can then use its id to get the logs, where you will see the activity.
You can also make use of the api by doing a call to:
curl -X GET http://localhost:5000/v2/_catalog
This will list all the images you have pushed to your local registry.
| {
"pile_set_name": "StackExchange"
} |
Q:
Bootstrap tree-view: Tree doesn't show up
I'm new to tree-view, I'm trying to show a basic tree but it doesn't work and I don't know where I made the mistake.
I made test.html, which follows the same structure as my basic.html (I'm sparing you the code for the navbar, alerts, etc.). I have other js functions in functions.js that work fine.
test.html:
{% csrf_token %}
{% load groupfilter %}
{% load staticfiles %}
<!DOCTYPE html>
<html lang="en">
<head>
<meta name="viewport" content="width=device-width, initial-scale=1" charset="utf-8">
<link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css" integrity="sha384-ggOyR0iXCbMQv3Xipma34MD+dH/1fQ784/j6cY/iJTQUOhcWr7x9JvoRxT2MZw1T" crossorigin="anonymous">
<link rel="stylesheet" href="{% static 'bootstrap-treeview.min.css' %}">
<script src="{% static 'bootstrap-treeview.min.js' %}"></script>
<script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.7/umd/popper.min.js" integrity="sha384-UO2eT0CpHqdSJQ6hJty5KVphtPhzWj9WO1clHTMGa3JDZwrnQq4sF86dIHNDz0W1" crossorigin="anonymous"></script>
<script src="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/js/bootstrap.min.js" integrity="sha384-JjSmVgyd0p3pXB1rRibZUAYoIIy6OrQ6VrjIEaFf/nJGzIxFDsf4x0xIM+B07jRM" crossorigin="anonymous"></script>
<title>Test</title>
</head>
<body>
<div class="container">
<div id="tree"></div>
</div>
<!-- JavaScript functions -->
<script src="{% static 'functions.js' %}"></script>
</body>
</html>
extract of functions.js:
$(function(){
var mytree = [
{
text: "Parent 1",
nodes: [
{
text: "Child 1",
nodes: [
{
text: "Grandchild 1"
},
{
text: "Grandchild 2"
}
]
},
{
text: "Child 2"
}
]
},
{
text: "Parent 2"
}
];
$('#tree').treeview({data: mytree});
});
A:
Can you share a sample link that you are following?
It seems like you are importing Twitter Bootstrap 4.3.1.
But as far as I know, official Bootstrap still does not provide a TreeView component in its documentation.
| {
"pile_set_name": "StackExchange"
} |
Q:
XML Schema for analysis in C#
Is it possible to use an XML Schema to check against the contents of an XML file?
For instance, for an ASP.NET web.config, can I create a schema to check that <customErrors mode = "On">? This will ultimately be used in a C# app: the app should take in the XML document and the XML Schema and check whether the XML document violates any of the "rules" listed in the XML Schema, e.g. <customErrors mode = "Off">.
Is it possible to do the checking without any constraint on the structure of the XML file? I.e. the <customErrors> element can be within any part of the XML document and the schema will still work.
A:
Possible: Yes, in XML Schema 1.1 using assertions.
Practical or recommended: No.
XML Schema is intended to be used to validate the "structure of the XML file," as you anticipate in your question. You can skip much of that via xsd:any and then use assertions to express the sort of spot-checks that you describe via XPath expressions. However, it'd be more natural to just apply XPath expressions directly to your XML from within C#, or using Schematron, which is a standard for applying XPath expressions to do validation.
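For illustration, the spot-check from your question could be done from C# with a single XPath query; a minimal sketch (file name and message texts are just placeholders):
using System;
using System.Xml.Linq;
using System.Xml.XPath;   // provides the XPathSelectElement extension method
class ConfigCheck
{
    static void Main()
    {
        XDocument doc = XDocument.Load("web.config");
        // Finds <customErrors> anywhere in the document, regardless of nesting
        XElement customErrors = doc.XPathSelectElement("//customErrors");
        string mode = customErrors == null ? null : (string)customErrors.Attribute("mode");
        Console.WriteLine(mode == "On"
            ? "OK: customErrors mode is On"
            : "Violation: customErrors mode is " + (mode ?? "missing"));
    }
}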
| {
"pile_set_name": "StackExchange"
} |
Q:
Cell should contain example input value for user to see
I want to create automated cells in Excel which will show the type of data to be entered in those cells. I want to create cells which will show "Enter Username here", "Enter DOB here", the same as what is shown on the Facebook and Gmail login pages. I don't want to save any credentials.
I had created multiple dropdown lists and people do not understand that there is a dropdown until they click on that cell. So I want to create automated cells which will show the type of data to be entered into them. The hint should disappear when I click on that cell and should reappear if I erase the contents that anyone had entered in that cell.
A:
Look into the worksheet SelectionChange event:
Private Sub Worksheet_SelectionChange(ByVal Target As Range)
if target.address = "$A$1" then
target.value = ""
else
Dim value as string
value = range("$A$1").value
if value="" then 'Note: It'd be better to check here if the user input is correct!
range("$A$1").value = "Enter DOB here"
end if
end if
End Sub
Edit to user's comments:
Private Sub Worksheet_SelectionChange(ByVal Target As Range)
if target.address = "$A$1" then
if target.value = "Enter DOB here"
target.value = ""
end if
else
Dim value as string
value = range("$A$1").value
if value="" then 'Note: It'd be better to check here if the user input is correct!
range("$A$1").value = "Enter DOB here"
end if
end if
End Sub
| {
"pile_set_name": "StackExchange"
} |
Q:
Is a low number of members in a class considered a code smell?
I am currently making a simple to-do list program using the MVC pattern, and thus have a model class for the Notebook. However, something feels "off" as it has a very low number of members.
The Notebook is composed of categories, which are composed of To-do lists, which are composed of Items.
What I cannot place is whether this is a case of poor analysis (e.g. there are more members and responsibilities and I am just missing them) or perhaps a code smell indicating that the class is not needed (in that case I'm not sure what to do, as I could just have a list of categories in that controller, but then I don't have a Notebook entity modelled, which seems wrong as well).
Below is the very simple class I have:
class Notebook
{
private String title;
private List<Category> categories;
public Notebook(String title, List<Category> categories)
{
}
public void setCategories(List<Category> categories)
{
}
public List<Category> getCategories()
{
}
}
I often have this issue where it feels like I am making classes for the sake of it and they have a very low number of members/responsibilities, so it would be nice to know whether I am stressing for no reason or not.
A:
Not necessarily; in Domain Driven Design there is the concept of what is called a "Standard Type", which is really a basic primitive wrapped in an object class. The idea is that the primitive carries no information about what it contains; it's just a string/int/whatever. So by having an object that surrounds the primitive and ensures that it is always valid, the object gains a meaning far beyond just the primitive it contains, e.g. a Name is not just a string, it's a Name.
Here's an example taken from the comments of Velocity
public class Velocity
{
private readonly decimal _velocityInKPH;
public static Velocity VelocityFromMPH(decimal mph)
{
return new Velocity(ToKph(mph));
}
private Velocity(decimal kph)
{
this._velocityInKPH = kph;
}
public decimal Kph
{
get{ return this._velocityInKPH; }
}
public decimal Mph
{
get{ return ToMph(this._velocityInKPH); }
}
// equals addition subtraction operators etc.
private static decimal ToMph(decimal kph){ return kph * 0.621371m; } // conversion code
private static decimal ToKph(decimal mph){ return mph * 1.609344m; } // conversion code
}
| {
"pile_set_name": "StackExchange"
} |
Q:
Is it focus or depth of field?
I was trying to shoot a group of people standing clumped but at different distances, quite close to me (think disorganized portrait). I have a Nikon D750 and was not able to get everyone into focus. If I brought the people closer to me into focus, then the background was blurred, and vice versa. I pushed the aperture all the way to f/18 or so and it didn't bring the whole scene into focus. Was I shooting from too close? Or does this come down to AF-S vs AF-A instead of aperture, with the camera choosing one point to focus upon instead of the area? How would you compose a group shot like this so everyone is in focus? Thanks!
A:
It sounds like depth of field. With an APS crop sensor, 30 mm lens, f/4, if you focus at say 6 feet you might have about 2 feet of DOF span, like from 5 feet to 7 feet (coarse approximations). If your subject is distributed at say 6 to 8 feet, this 5-7 foot DOF zone does not include the far ones. If you focus far, you miss the near ones. Which is your description.
If you focus on the near ones, or on the far ones, you have wasted half of your DOF range in empty space where there is no one. There are DOF calculators which compute these numbers.
Normal procedure would be to focus more near the middle depth of the group (or slightly in front of the middle), to put the zone more centered on your group. So yes, you do chose your point of focus too.
And of course, stopping down the f/stop, like from f/4 to f/8 or f/11, could greatly increase the span of DOF, so that the zone size is double or more.
DOF is rather vague, and is NOT a critically precise number. If the calculator says DOF is 5 to 7 feet, then 7.02 feet is no different from 6.98 feet; both are at the limit of acceptability. These 5 and 7 feet numbers are considered the extremes of acceptability, and the actual focused distance will of course always be the sharpest point.
| {
"pile_set_name": "StackExchange"
} |
Q:
What is the primitive value of [] according to ECMAScript 2016 (version 7)?
While writing an answer to ¿Cómo funciona el condicional if (!+[]+!+[] == 2) en JavaScript? I ventured to use https://www.ecma-international.org/ecma-262/7.0/index.html for the references.
ToPrimitive explains the procedure for converting a value into a primitive value, but I have not managed to work it through for the case of [].
I know that [] is an object and that it is equivalent to new Array().
I also know that an array is an exotic object, so one or more of its essential internal methods do not have default behaviour.
Notes:
Comment by Paul Vargas (chat)
Review Array objects (ECMAScript 2016 section)
Other related questions:
¿Por qué _=$=+[],++_+''+$ es igual a 10?
A:
Short answer
The primitive value of [] is '' (an empty string).
Explanation
I finally decided to google it and found this answer to Why does ++[[]][+[]]+[+[]] return the string "10"?, which is similar to my answer to ¿Cómo funciona el condicional if (!+[]+!+[] == 2) en JavaScript? in that it cites an ECMAScript specification, except that it does not say which version the quotes refer to; nevertheless, it was useful for filling the "gap" that led to this question.
Below I include a couple of excerpts, which can be summarized as
document.write([].join() === '') // Result: true
Excerpts from ECMAScript 2016 (version 7)
12.2.5 Array Initializer
NOTE
An ArrayLiteral is an expression describing the initialization of an Array > object, using a list, of zero or more expressions each of which represents an array element, enclosed in square brackets. The elements need not be literals; they are evaluated each time the array initializer is evaluated.
Array elements may be elided at the beginning, middle or end of the element list. Whenever a comma in the element list is not preceded by an AssignmentExpression (i.e., a comma at the beginning or after another comma), the missing array element contributes to the length of the Array and increases the index of subsequent elements. Elided array elements are not defined. If an element is elided at the end of an array, that element does not contribute to the length of the Array.
7.1.1 ToPrimitive ( input [ , PreferredType ] )
The abstract operation ToPrimitive takes an input argument and an
optional argument PreferredType. The abstract operation ToPrimitive
converts its input argument to a non-Object type. If an object is
capable of converting to more than one primitive type, it may use the
optional hint PreferredType to favour that type. Conversion occurs
according to Table 9:
Table 9: ToPrimitive Conversions
Input Type Result
Undefined Return input.
Null Return input.
Boolean Return input.
Number Return input.
String Return input.
Symbol Return input.
Object Perform the steps following this table.
When Type(input) is Object, the following steps are taken:
If PreferredType was not passed, let hint be "default".
Else if PreferredType is hint String, let hint be "string".
Else PreferredType is hint Number, let hint be "number".
Let exoticToPrim be ? GetMethod(input, @@toPrimitive).
If exoticToPrim is not undefined, then
Let result be ? Call(exoticToPrim, input, « hint »).
If Type(result) is not Object, return result.
Throw a TypeError exception.
If hint is "default", let hint be "number".
Return ? OrdinaryToPrimitive(input, hint).
When the abstract operation OrdinaryToPrimitive is called with
arguments O and hint, the following steps are taken:
Assert: Type(O) is Object.
Assert: Type(hint) is String and its value is either "string" or "number".
If hint is "string", then
Let methodNames be « "toString", "valueOf" ».
Else,
Let methodNames be « "valueOf", "toString" ».
For each name in methodNames in List order, do
Let method be ? Get(O, name).
If IsCallable(method) is true, then
Let result be ? Call(method, O).
If Type(result) is not Object, return result.
Throw a TypeError exception.
NOTE
When ToPrimitive is called with no hint, then it generally behaves as
if the hint were Number. However, objects may over-ride this behaviour
by defining a @@toPrimitive method. Of the objects defined in this
specification only Date objects (see 20.3.4.45) and Symbol objects
(see 19.4.3.4) over-ride the default ToPrimitive behaviour. Date
objects treat no hint as if the hint were String.
In the case of an Array object, the method that ends up producing the primitive value is join(), per the following:
22.1.3.28 Array.prototype.toString ( )
When the toString method is called, the following steps are taken:
Let array be ? ToObject(this value).
Let func be ? Get(array, "join").
If IsCallable(func) is false, let func be the intrinsic function %ObjProto_toString%.
Return ? Call(func, array).
NOTE
The toString function is intentionally generic; it does not require
that its this value be an Array object. Therefore it can be
transferred to other kinds of objects for use as a method.
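Putting the two excerpts together, the chain can be observed directly in a console (illustrative):
var arr = [];
console.log(typeof arr.valueOf()); // "object" -> rejected by OrdinaryToPrimitive
console.log(arr.toString());       // ""       -> Array.prototype.toString delegates to join()
console.log(arr + 1);              // "1"      -> ToPrimitive([]) is "", then string concatenation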
| {
"pile_set_name": "StackExchange"
} |
Q:
Modoboa 1.1.1 Deployment Errors
I tried to install modoboa follow this steps: http://modoboa.readthedocs.org/en/latest/getting_started/install.html
I installed modoboa with pip install modoboa:
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 453, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 392, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 272, in fetch_command
klass = load_command_class(app_name, subcommand)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 77, in load_command_class
module = import_module('%s.management.commands.%s' % (app_name, name))
File "/usr/local/lib/python2.7/dist-packages/django/utils/importlib.py", line 35, in import_module
__import__(name)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/commands/syncdb.py", line 8, in <module>
from django.core.management.sql import custom_sql_for_model, emit_post_sync_signal
File "/usr/local/lib/python2.7/dist-packages/django/core/management/sql.py", line 9, in <module>
from django.db import models
File "/usr/local/lib/python2.7/dist-packages/django/db/__init__.py", line 40, in <module>
backend = load_backend(connection.settings_dict['ENGINE'])
File "/usr/local/lib/python2.7/dist-packages/django/db/__init__.py", line 34, in __getattr__
return getattr(connections[DEFAULT_DB_ALIAS], item)
File "/usr/local/lib/python2.7/dist-packages/django/db/utils.py", line 93, in __getitem__
backend = load_backend(db['ENGINE'])
File "/usr/local/lib/python2.7/dist-packages/django/db/utils.py", line 27, in load_backend
return import_module('.base', backend_name)
File "/usr/local/lib/python2.7/dist-packages/django/utils/importlib.py", line 35, in import_module
__import__(name)
File "/usr/local/lib/python2.7/dist-packages/django/db/backends/mysql/base.py", line 17, in <module>
raise ImproperlyConfigured("Error loading MySQLdb module: %s" % e)
django.core.exceptions.ImproperlyConfigured: Error loading MySQLdb module: No module named MySQLdb
python manage.py syncdb --noinput failed, check your configuration
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 453, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 392, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 272, in fetch_command
klass = load_command_class(app_name, subcommand)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 77, in load_command_class
module = import_module('%s.management.commands.%s' % (app_name, name))
File "/usr/local/lib/python2.7/dist-packages/django/utils/importlib.py", line 35, in import_module
__import__(name)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/commands/syncdb.py", line 8, in <module>
from django.core.management.sql import custom_sql_for_model, emit_post_sync_signal
File "/usr/local/lib/python2.7/dist-packages/django/core/management/sql.py", line 9, in <module>
from django.db import models
File "/usr/local/lib/python2.7/dist-packages/django/db/__init__.py", line 40, in <module>
backend = load_backend(connection.settings_dict['ENGINE'])
File "/usr/local/lib/python2.7/dist-packages/django/db/__init__.py", line 34, in __getattr__
return getattr(connections[DEFAULT_DB_ALIAS], item)
File "/usr/local/lib/python2.7/dist-packages/django/db/utils.py", line 93, in __getitem__
backend = load_backend(db['ENGINE'])
File "/usr/local/lib/python2.7/dist-packages/django/db/utils.py", line 27, in load_backend
return import_module('.base', backend_name)
File "/usr/local/lib/python2.7/dist-packages/django/utils/importlib.py", line 35, in import_module
__import__(name)
File "/usr/local/lib/python2.7/dist-packages/django/db/backends/mysql/base.py", line 17, in <module>
raise ImproperlyConfigured("Error loading MySQLdb module: %s" % e)
django.core.exceptions.ImproperlyConfigured: Error loading MySQLdb module: No module named MySQLdb
python manage.py syncdb failed, check your configuration
Unknown command: 'migrate'
Type 'manage.py help' for usage.
python manage.py migrate --fake failed, check your configuration
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 453, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 392, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 272, in fetch_command
klass = load_command_class(app_name, subcommand)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 77, in load_command_class
module = import_module('%s.management.commands.%s' % (app_name, name))
File "/usr/local/lib/python2.7/dist-packages/django/utils/importlib.py", line 35, in import_module
__import__(name)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/commands/loaddata.py", line 11, in <module>
from django.core import serializers
File "/usr/local/lib/python2.7/dist-packages/django/core/serializers/__init__.py", line 22, in <module>
from django.core.serializers.base import SerializerDoesNotExist
File "/usr/local/lib/python2.7/dist-packages/django/core/serializers/base.py", line 5, in <module>
from django.db import models
File "/usr/local/lib/python2.7/dist-packages/django/db/__init__.py", line 40, in <module>
backend = load_backend(connection.settings_dict['ENGINE'])
File "/usr/local/lib/python2.7/dist-packages/django/db/__init__.py", line 34, in __getattr__
return getattr(connections[DEFAULT_DB_ALIAS], item)
File "/usr/local/lib/python2.7/dist-packages/django/db/utils.py", line 93, in __getitem__
backend = load_backend(db['ENGINE'])
File "/usr/local/lib/python2.7/dist-packages/django/db/utils.py", line 27, in load_backend
return import_module('.base', backend_name)
File "/usr/local/lib/python2.7/dist-packages/django/utils/importlib.py", line 35, in import_module
__import__(name)
File "/usr/local/lib/python2.7/dist-packages/django/db/backends/mysql/base.py", line 17, in <module>
raise ImproperlyConfigured("Error loading MySQLdb module: %s" % e)
django.core.exceptions.ImproperlyConfigured: Error loading MySQLdb module: No module named MySQLdb
python manage.py loaddata initial_users.json failed, check your configuration
Unknown command: 'collectstatic'
Type 'manage.py help' for usage.
python manage.py collectstatic --noinput failed, check your configuration
I tried to install the module with pip install MySQL-python but I received this error:
Downloading/unpacking MySQL-python
Downloading MySQL-python-1.2.5.zip (108kB): 108kB downloaded
Running setup.py egg_info for package MySQL-python
sh: mysql_config: orden no encontrada
Traceback (most recent call last):
File "<string>", line 16, in <module>
File "/tmp/pip_build_root/MySQL-python/setup.py", line 17, in <module>
metadata, options = get_config()
File "setup_posix.py", line 43, in get_config
libs = mysql_config("libs_r")
File "setup_posix.py", line 25, in mysql_config
raise EnvironmentError("%s not found" % (mysql_config.path,))
EnvironmentError: mysql_config not found
Complete output from command python setup.py egg_info:
sh: mysql_config: orden no encontrada
Traceback (most recent call last):
File "<string>", line 16, in <module>
File "/tmp/pip_build_root/MySQL-python/setup.py", line 17, in <module>
metadata, options = get_config()
File "setup_posix.py", line 43, in get_config
libs = mysql_config("libs_r")
File "setup_posix.py", line 25, in mysql_config
raise EnvironmentError("%s not found" % (mysql_config.path,))
EnvironmentError: mysql_config not found
----------------------------------------
Cleaning up...
Command python setup.py egg_info failed with error code 1 in /tmp/pip_build_root/MySQL-python
Storing complete log in /root/.pip/pip.log
It seems that the error is caused by the MySQL module, but I don't know how to resolve it.
A:
You should install the python mysqldb package provided with your distribution.
On a debian/ubuntu one:
$ apt-get install python-mysqldb
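Alternatively, if you prefer to stay with pip, the "mysql_config not found" error usually means the MySQL client development headers are missing; on Debian/Ubuntu they are typically provided by (exact package names can vary by release):
$ apt-get install libmysqlclient-dev python-dev
$ pip install MySQL-python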
| {
"pile_set_name": "StackExchange"
} |
Q:
Do we want an xkcd tag?
xkcd is referred to often on PPCG, with at least 47 questions which are based on concepts or directly related to the xkcd webcomic.
Therefore, is it worthwhile introducing an xkcd tag to group all of these challenges together?
A:
No
Tags are meant to classify questions according to some distinctive quality that they share. Simply referencing an xkcd comic is not a distinctive quality that would create a meaningful classification.
| {
"pile_set_name": "StackExchange"
} |
Q:
Binding value to select in angular js across 2 controllers
Working with angularJS I am trying to figure out a way to bind the value of a select element under the scope of controller A to use it as an argument for an ng-click call [getQuizByCampID() Function] under the scope of controller B.
My first idea was to use jquery, but I have read in the link below that using jquery is not recommended when starting with angularJS.
"Thinking in AngularJS" if I have a jQuery background?
I also read in the link below that this is done using ng-model; the only problem is that the example provided is all under the same controller.
and Binding value to input in Angular JS
What is the angularJS way to get the value of the select element under controller A into the function call in the select under controller B?
Price.html view
<div class="col-sm-3" ng-controller="campCtrl"> **Controller A**
<select id="selCampID" class="form-control" ng-model="campInput" >
<option ng-repeat="camp in campaigns" value="{{camp.camp_id}}">{{camp.camp_name}}</option>
</select>
</div>
<div class="col-sm-3" ng-controller="quizCtrl"> **Controller B**
<select ng-click="getQuizByCampID($('#selCampID').val())" class="form-control" ng-model="quizInput">
<option ng-controller="quizCtrl" ng-repeat="quiz in quizzesById" value="{{quiz.quiz_id}}">{{quiz.quiz_name}}</option>
</select>
</div>
App.js
var app= angular.module('myApp', ['ngRoute']);
app.config(['$routeProvider', function($routeProvider) {
$routeProvider.when('/price', {templateUrl: 'partials/price.html', controller: 'priceCtrl'});
}]);
$routeProvider.when('/price', {templateUrl: 'partials/price.html', controller: 'priceCtrl'});
Quiz Controller
'use strict';
app.controller('quizCtrl', ['$scope','$http','loginService', function($scope,$http,loginService){
$scope.txt='Quiz';
$scope.logout=function(){
loginService.logout();
}
getQuiz(); // Load all available campaigns
function getQuiz(campID){
$http.post("js/ajax/getQuiz.php").success(function(data){
$scope.quizzes = data;
//console.log(data);
});
};
$scope.getQuizByCampID = function (campid) {
alert(campid);
$http.post("js/ajax/getQuiz.php?campid="+campid).success(function(data){
$scope.quizzesById = data;
$scope.QuizInput = "";
});
};
$scope.addQuiz = function (quizid, quizname, campid) {
console.log(quizid + quizname + campid);
$http.post("js/ajax/addQuiz.php?quizid="+quizid+"&quizname="+quizname+"&campid="+campid).success(function(data){
getQuiz();
$scope.QuizInput = "";
});
};
}])
A:
You should store the value in a service.
example:
app.factory('SharedService', function() {
this.inputValue = null;
this.setInputValue = function(value) {
this.inputValue = value;
}
this.getInputValue = function() {
return this.inputValue;
}
return this;
});
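Both controllers then just inject the service; roughly like this (names reused from the question, the ng-change wiring is illustrative):
app.controller('campCtrl', function($scope, SharedService) {
  // call this from the first select, e.g. ng-change="campSelected()"
  $scope.campSelected = function() {
    SharedService.setInputValue($scope.campInput);
  };
});
app.controller('quizCtrl', function($scope, $http, SharedService) {
  $scope.getQuizByCampID = function() {
    var campid = SharedService.getInputValue();
    // ...same $http.post("js/ajax/getQuiz.php?campid=" + campid) call as before
  };
});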
Example on Plunkr
Read: AngularJS Docs on services
or check this Egghead.io video
| {
"pile_set_name": "StackExchange"
} |
Q:
Pass values to IN operator in a Worklight SQL adapter
I have started to work with SQL adapters in Worklight, but I do not understand how I can pass values to an IN condition when invoking my adapter procedure.
A:
You will need to edit your question with your adapter's XML as well as implementation JavaScript...
Also, make sure to read the SQL adapters training module.
What you need to do is have your function get the values:
function myFunction (value1, value2) { ... }
And your SQL query will use them, like so (just an example of how to pass variables to any SQL query; it doesn't matter whether it contains an IN condition or not):
SELECT * FROM person where name='$[value1]' or id=$[value2];
Note the quotation marks for value1 (for text) and lack of for value2 (for numbers).
| {
"pile_set_name": "StackExchange"
} |
Q:
How to change XML from dataset into HTML UL
I'm working on a C# webforms application and have a datalayer which gathers information about the menu a customer can see, based on their customer number and order type.
I was using the ASP.NET menu control for this until the qa department asked to change the menu to expand on click instead of hover. At that point, I decided to try and do the menu with a simpler css/html/jquery approach but I've hit a jam.
I have the following method in my data layer that gets information for the menu and returns it as XML. What I'm stuck on is how to take the XML that was being gathered when I was using the menu control and reformat it into a UL for use in the html/css approach I'd like to take.
public static string BuildMenu(string cprcstnm, string docType)
{
DataSet ds = new DataSet();
string connStr = ConfigurationManager.ConnectionStrings["DynamicsConnectionString"].ConnectionString;
using (SqlConnection conn = new SqlConnection(connStr))
{
string sql = "usp_SelectItemMenuByCustomer";
SqlDataAdapter da = new SqlDataAdapter(sql, conn);
da.SelectCommand.CommandType = CommandType.StoredProcedure;
da.SelectCommand.Parameters.Add("@CPRCSTNM", SqlDbType.VarChar).Value = cprcstnm;
da.SelectCommand.Parameters.Add("@DOCID", SqlDbType.VarChar).Value = docType;
da.Fill(ds);
da.Dispose();
}
ds.DataSetName = "Menus";
ds.Tables[0].TableName = "Menu";
DataRelation relation = new DataRelation("ParentChild",
ds.Tables["Menu"].Columns["MenuID"],
ds.Tables["Menu"].Columns["ParentID"],
false);
relation.Nested = true;
ds.Relations.Add(relation);
return ds.GetXml();
}
A sample of the XML output is as follows:
<Menus>
<Menu>
<MenuID>23</MenuID>
<ITEMNMBR>0</ITEMNMBR>
<Text>ACC</Text>
<Description>ACC</Description>
<ParentID>0</ParentID>
<Menu>
<MenuID>34</MenuID>
<ITEMNMBR>1</ITEMNMBR>
<Text>BASE</Text>
<Description>BASE</Description>
<ParentID>23</ParentID>
<Menu>
<MenuID>516</MenuID>
<ITEMNMBR>2</ITEMNMBR>
<Text>HYP</Text>
<Description>HYP</Description>
<ParentID>34</ParentID>
I would need to convert this to something such as :
<ul class="dropdown">
<li><a href="#">ACC</a>
<ul class="sub_menu">
<li>
<a href="#">BASE</a>
<ul>
<li>
<a href="#">HYP</a>
<ul>
<li><a href="#">Terminal 1</a></li>
<li><a href="#">Terminal 1</a></li>
</ul>
</li>
</ul>
</li>
A:
You will get some ideas from the following MSDN link, which illustrates writing HTML from a DataSet using XSLT:
http://msdn.microsoft.com/en-us/library/8fd7xytc(v=vs.80).aspx
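To give an idea of the approach, a small recursive XSLT over your nested Menus XML could look like this (a sketch only; class names and href values are placeholders):
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="html" indent="yes"/>
  <!-- the Menus root becomes the outer UL -->
  <xsl:template match="/Menus">
    <ul class="dropdown">
      <xsl:apply-templates select="Menu"/>
    </ul>
  </xsl:template>
  <!-- each Menu becomes an LI; nested Menu elements become a nested UL -->
  <xsl:template match="Menu">
    <li>
      <a href="#"><xsl:value-of select="Text"/></a>
      <xsl:if test="Menu">
        <ul class="sub_menu">
          <xsl:apply-templates select="Menu"/>
        </ul>
      </xsl:if>
    </li>
  </xsl:template>
</xsl:stylesheet>
Applied from C# with XslCompiledTransform, the result can then be written into the page.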
| {
"pile_set_name": "StackExchange"
} |
Q:
how to get result with cursor and paging using ZSCAN command with stackexchange.redis library?
I am using stackexchange.redis.
With it, ZSCAN is giving me all matched values.
I want to get exactly a page-size worth of results and the next cursor for the remaining values.
I have debugged the library source code and found that it keeps scanning the source sorted set until the cursor becomes zero and then provides all matched values.
So, can we get results per cursor, the same as the Redis ZSCAN command?
here is my code snap
using (ConnectionMultiplexer conn = ConnectionMultiplexer.Connect(conf))
{
var dbs = conn.GetDatabase();
int currentpage = 0,pagesize=20;
var scanresult = dbs.SortedSetScan("key", "an*", pagesize, 0, 0, CommandFlags.None);
}
here I am getting all values of matching criteria instead of page size and next cursor.
so help out if any one has done it before
A:
This is because of the StackExchange.Redis library code: it scans via an enumerable, so it does not behave the same as the Redis command line.
To solve this issue we have used another Redis client library called csredis:
using (var redis = new RedisClient("yourhost"))
{
string ping = redis.Ping();
var scanresult=redis.ZScan(key, cursor, pattern, pagesize);
}
As shown in the above code, we will get all the data into "scanresult".
| {
"pile_set_name": "StackExchange"
} |
Q:
How to create a django User using DRF's ModelSerializer
In django, creating a User has a different and unique flow from the usual Model instance creation. You need to call create_user() which is a method of BaseUserManager.
Since django REST framework's flow is to do restore_object() and then save_object(), it's not possible to simply create Users using a ModelSerializer in a generic create API endpoint, without hacking your way through.
What would be a clean way to solve this? or at least get it working using django's built-in piping?
Edit:
Important to note that what's specifically not working is that once you try to authenticate the created user instance using django.contrib.auth.authenticate it fails if the instance was simply created using User.objects.create() and not .create_user().
A:
Eventually I've overridden the serializer's restore_object method and made sure that the password being sent is then processed using instance.set_password(password), like so:
def restore_object(self, attrs, instance=None):
if not instance:
instance = super(RegisterationSerializer, self).restore_object(attrs, instance)
instance.set_password(attrs.get('password'))
return instance
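For what it's worth, on newer DRF versions (3.x), where restore_object no longer exists, a roughly equivalent sketch overrides create() instead; the field names below are assumptions, not part of the question:
from django.contrib.auth.models import User
from rest_framework import serializers

class RegistrationSerializer(serializers.ModelSerializer):
    class Meta:
        model = User
        fields = ('username', 'email', 'password')
        extra_kwargs = {'password': {'write_only': True}}

    def create(self, validated_data):
        # create_user() hashes the password, so authenticate() works afterwards
        return User.objects.create_user(**validated_data)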
Thanks everyone for help!
| {
"pile_set_name": "StackExchange"
} |
Q:
Reading a text file and moving files from it to a directory
I have a directory of images. Some of these images must be stored in a text file like 'pic1.jpg'
I need to extract this filename, pick up the matching file from the current working directory and move it to a separate folder (under the cwd).
This is the code I have so far, but I cant get the shutil operations to work. What am I doing wrong?
Current directory C:\BE
I have to move file(s) 1.jpg, 2.jpg, etc., listed in a text file called "filelist.txt", to C:\BE\2014-03-25_02-49-11
import os, datetime
import shutil
src = os.getcwd()
global mydir
def filecreation(content, filename):
mydir = os.path.join(os.getcwd(), datetime.datetime.now().strftime('%Y-%m-%d_%H-%M-%S'))
try:
os.makedirs(mydir)
except OSError, e:
if e.errno != 17:
raise # This was not a "directory exist" error..
with open(os.path.join(mydir, filename), 'w') as d:
d.writelines(content)
#shutil.copyfile(src,mydir)
def main():
filelist = "filelist.txt"
with open(filelist) as f:
content = f.read().splitlines()
#content = shutil.copyfile(src, mydir)
print content
print "Here we are"
#list=['1.jpg','2.jpg']
filecreation(content,"filelist.txt")
print "lets try another method"
with open('filelist.txt','w+') as list_f:
for filename in list_f:
with open(filename) as f:
content = f.read()
#content = shutil.move(src,mydir)
#content = shutil.copyfile(src,mydir)
#for line in f
print "method 2 is working so far"
if __name__ == '__main__':
main()
A:
This is what finally worked -
from shutil import copy

# read the list of filenames (one per line) and copy each listed file to the target folder
f = open(r'C:\Users\B\Desktop\BE Project\K\filelist.txt', 'r')
for i in f.readlines():
    print i
    copy(i.strip(), r"E:\Images")  # strip() removes the trailing newline from the name
f.close()
| {
"pile_set_name": "StackExchange"
} |
Q:
Are all I2C sensors interoperable?
I have a quadcopter flight controller (RTFQ Flip MWC) that supports I2C sensors for adding thing like a barometer, magnetometer, and GPS system. The officially supported sensor block (BMP180, HMC5883L on one board) is discontinued, as far as I can tell.
I have found other I2C barometer and magnetometer sensors, (BMP280, LSM303) but I am not even sure if all I2C devices of the same type are interoperable. Do they all look the same (at least interface-wise) to the flight controller?
I'm also new to I2C in general; the sensors I need come on two separate boards. Do I just stack the boards, directly connecting the I2C bus between each?
Thanks in advance,
Neil
EDIT:
I was able to find the datasheets for the discontinued and proposed sensors:
BMP180
HMC5883L
BMP280
LSM303
All are compatible with the 3.3v output of the Flip MWC, which is good.
I was quickly able to find what I believe to be the register map for the BMP180 and HMC5883L, but the table I found for the LSM303 was very confusing and I wasn't able to find one in the BMP280 datasheet.
A:
The only way to know if two IIC devices are compatible in this context is to compare their IIC interface in the two datasheets very carefully. IIC may be largely standard, but it says nothing about the payload data carried over IIC.
If a particular product becomes popular, competitors will often make theirs compatible. However, there is no guarantee that any two devices are compatible. Each could use a different format for sending the data, require different settings in different registers that are accessed differently to select features, etc.
Unless you know they are compatible, assume they are not.
| {
"pile_set_name": "StackExchange"
} |
Q:
Identify slow solr queries
There are some queries that run very slow on my setup. Is there an easy way to identify and collect them (maybe through logs, or the admin console), so that I can do some performance analysis later on?
A:
Yes, very easy in the logs. Look at a sample line:
INFO: [core0] webapp=/solr path=/select/ params={indent=on&start=0&q=*:*&version=2.2&rows=10} hits=1074 status=0 QTime=1
You need to look at QTime, which is the query execution time in milliseconds.
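If you then want to pull the slow queries out of the log for later analysis, a small script along these lines works; the 100 ms threshold and the log file name are just assumptions:
import re

THRESHOLD_MS = 100

with open('solr.log') as log:
    for line in log:
        match = re.search(r'QTime=(\d+)', line)
        if match and int(match.group(1)) > THRESHOLD_MS:
            print(line.strip())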
| {
"pile_set_name": "StackExchange"
} |
Q:
Assign values to dynamic number of sub-classes before serializing to JSON
I am integrating with a courier that requires me to pass box dimensions for each box in my consignment to their API in JSON format. I am able to set individual properties like RecipientName, but am not sure how to pass the box details for the varying number of boxes for each consignment.
The JSON needs to look like this (example is for a 2 box consignment):
{
"RecipientName": "Joe Bloggs",
"Packages" : [{
"boxNumber": "1",
"boxHeight": 1.55,
"boxLength": 1.55,
"boxWidth": 1.55
},
{
"boxNumber": "2",
"boxHeight": 2.55,
"boxLength": 2.55,
"boxWidth": 2.55
}]
}
I have built 2 classes, one that describes the structure of the JSON, and another that contains the method to serialize the JSON.
My JSON structure class looks like this (I have used a List because I have read that arrays are a fixed length, and because the number of boxes with vary I cannot use arrays):
public class API_JSON
{
public class Rootobject
{
public string RecipientName { get; set; }
public List<Package> Packages { get; set; }
}
public class Package
{
public string boxNumber { get; set; }
public double boxHeight { get; set; }
public double boxLength { get; set; }
public double boxWidth { get; set; }
}
}
And my API methods class looks like this:
public class API_Methods
{
public string recipientName;
public List<string> boxnumber;
public List<double> boxHeight;
public List<double> boxLength;
public List<double> boxWidth;
public Boolean SubmitConsignment(out string JSONData)
{
var NewRequestObject = new API_JSON.RootObject
{
Recipient = recipientName,
Packages = new API_JSON.Package
{
foreach (string item in ContainerNumber)
{
boxNumber=???,
boxHeight=???,
boxLength???=,
boxWidth=???
}
}
}
string JSONData = JsonConvert.SerializeObject(NewRequestObject);
return true;
}
}
I am then instantiating the object, setting its public variables, then running the method list this:
API_Methods myObject = new API_Methods();
myObject.recipientName;
myObject.boxnumber.Add(1);
myObject.boxnumber.Add(2);
myObject.boxHeight.Add(1.55);
myObject.boxHeight.Add(2.55);
myObject.boxLength.Add(1.55);
myObject.boxLength.Add(2.55);
myObject.boxWidth.Add(1.55);
myObject.boxWidth.Add(2.55);
bool test = API_Methods.SubmitConsignment(out JSON);
My problem is with the foreach loop - I know the code is incomplete - but I was hoping to iterate through the lists, but even with an empty foreach loop it appears to be the wrong place to put the loop as I start getting syntax errors about an expected "}"
A:
You're actually overcomplicating this for yourself - create complete package objects, and add them to the List Packages, and then pass the rootobject to the serializer.
The error you are getting is because you are not correctly initializing / filling your Packages List. Your object is invalid, hence the serializer is throwing exceptions.
This will be a lot easier for you if you create some constructors for your objects, something like this:
public Package(string number, double height, double length, double width)
{
boxNumber = number;
boxHeight = height;
//rest of your properties here in same format
}
You can then also make your setters private in the class, if you wish.
You can then easily create your package objects:
var package1 = new Package("1", 10, 10, 10);
This should make it a lot easier to create your list of boxes to put in your rootObject.
You can add each package to the packages list (individually or within a foreach loop):
Packages.Add(package1)
Or you could even start getting more concise:
Packages.Add(new Package("1", 10, 10, 10));
You want to separate your concerns more to help keep this clear - so I'd recommend you fully construct your rootObject, add the packages to the list in one class (your 3rd code snippet), and then serialize it another (your 2nd code snippet).
Edit:
I think you'd find it easier to refactor your code somewhat:
1) Have a public rootobject in your Json_Api class, with get; set;. Get rid of the box collections. Get rid of your foreach loop from here too.
public class API_Methods
{
public API_JSON.Rootobject RootObject { get; set; }
public Boolean SubmitConsignment(out string JSONData)
{
JSONData = JsonConvert.SerializeObject(RootObject);
return true;
}
}
2) Set the properties of this rootobject outside this class (where you currently initialize your objects). Add the New Package()s to Packages list here too.
API_Methods myObject = new API_Methods();
myObject.RootObject.RecipientName = "NAME";
myObject.RootObject.Packages.Add(new Package("1", 10, 10, 10));
myObject.RootObject.Packages.Add(new Package("2", 20, 20, 20));
bool test = API_Methods.SubmitConsignment(out JSON);
3) Call the API method next, it should return a serialized version of the wholerootobject, including your packages.
Just a side note, it would be more conventional to send the RootObject as a parameter to the API, and return the Json string object back.
| {
"pile_set_name": "StackExchange"
} |
Q:
AndroidQuery ajax doesn't call php page when page contains IFRAME
I tried the following code to access my PHP page:
String url = "http://mypage.example.com/test.php?name="+data;
aq.ajax(url, String.class, new AjaxCallback<String>() {
@Override
public void callback(String url, String html, AjaxStatus status) {
Log.w(Tags.DEBUG,String.valueOf(status.getCode()));
}
});
My PHP page writes a file if the call was made. When I use the URL inside a browser, the file is created. When I use my Android app, nothing happens, the status code is 200. What else should I set?
UPDATE:
The source of my page:
<html>
<head>
<title>MY WEBSITE</title>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1"></head>
<frameset rows="92, *" frameborder=NO border=1 framespacing=0 cols=*>
<frame name="topFrame" scrolling=NO noresize src="http://example.com/ads.php?ad=user12&cat=16" frameborder=NO>
<frame name="user" src="http://example.com/user12/test.php" scrolling=auto>
</frameset>
<noframes>
<body bgcolor=#FFFFFF text=#000000>
<a href="http://example.com/user12/test.php">http://example.com/user12/test.php</a>
</body>
</noframes>
</html>
A:
It seems that the web page was the problem: it sent back its source code, and there was an IFRAME in it.
| {
"pile_set_name": "StackExchange"
} |
Q:
Bold Font in Linux
Re-Edit of the question for clarification: Blender only has one font and it doesn't turn bold. Any way to easily make it bold?
So, I am using the RC1 release of 2.8 right now to make a logo for a possible client, but it looks like Blender only comes with a single font and it doesn't turn to bold. The logo includes bold font so I am wondering if there is any way to achieve this easily. Maybe another downloadeable font from somewhere else? Perhaps access to the fonts included in Linux?
A:
Different fonts can be loaded in the font tab which is shown below. Once you have selected the desired font files you can format the text.
You can change the formatting by switching to edit mode. Then you can select the text by moving the cursor with the left and right arrow keys to a desired position, now press and hold the shift key and move to the end position of the selection using the arrow keys. Once you have a selection the font menu allows to change the formatting.
A:
While blender includes one font with its installation, you can use almost any font you can find. The most common font formats you find will be postscript, truetype or opentype. Look at the list of formats supported by Freetype.
Almost every website offering downloadable fonts offers a format that you can use with blender regardless of which system you are running. You can save these fonts anywhere and open them in blender.
Linux has several packages of different fonts available to install that can easily be found by most programs, unfortunately blender doesn't look in these standard font directories, we need to open specific font files. To find the location of installed fonts, open a terminal and type fc-list to get a list of paths, font name and available styles.
| {
"pile_set_name": "StackExchange"
} |
Q:
Page transitions, depending on URL
This code fades each page out, before going to the URL's destination. However, there are some instances where the user doesn't go to a new page, but goes to a PDF in the browser, or it opens the default mail application. On Safari it seems, if you go to an external site (www.twitter.com) and press the back button, the .wrapper is still faded out. (Perhaps a cache thing?)
function fadeAndGo(x) {
$(x).click(function (e) {
e.preventDefault();
var href = this.href;
$('.wrapper').fadeOut(function(){
window.location = href;
});
// $('.wrapper').delay()fadeIn();
});
}
fadeAndGo('a');
Is it possible to either:
Fade out only if the URL does not contain 'PDF' or 'mailto' and is not an external link?
Fade in after a certain amount of time (it faded out, but faded back in after a couple of seconds, in case it was a PDF/mailto).
A:
Try this:
function fadeAndGo(x) {
$(x).click(function (e) {
e.preventDefault();
var href = $(this).attr("href");
if (!/PDF|mailto/gi.test(href)) {
$('.wrapper').fadeOut(function () {
window.location = href;
}).delay(2000).fadeIn();
} else {
window.location = href;
}
});
}
fadeAndGo('a');
| {
"pile_set_name": "StackExchange"
} |
Q:
Classic vs universal Google analytics and loss of historical data
I'm keen to use some of the new features in Google Universal Analytics.
I have an old site though that I don't want to lose the historical data for. The comparisons with historical data are interesting for example.
However Google doesn't appear to allow you to change a property from the classic code to the new code.
Am I missing something?
I'm surprised this isn't a bigger issue for many other users.
A:
Edit: Google just announced the upgrade path to universal analytics:
We just launched the Google Analytics Upgrade Center, an easy, two-step process to upgrade your classic Analytics accounts to Universal Analytics.
From their upgrade instructions:
Step 1: Transfer your property from Classic to Universal Analytics.
We’ve developed a new tool to transfer your properties to Universal Analytics that we will be slowly enabling in the admin section of all accounts. In the coming weeks, look for it in your property settings.
Step 2: Re-tag with a version of the Universal Analytics tracking code.
After completing Step 1, you’ll be able to upgrade your tracking code, too. Use the analytics.js JavaScript library on your websites, and Android or iOS SDK v2.x or higher for your mobile apps.
Our goal is to enable Universal Analytics for all Google Analytics properties. Soon all Google Analytics updates and new features will be built on top of the Universal Analytics infrastructure. To make sure all properties upgrade, Classic Analytics properties that don’t initiate a transfer will be auto-transferred to Universal Analytics in the coming months.
Google will support upgrading and migrating data to the universal analytics, but that upgrade process is not ready yet. From their help document:
In the coming months, look for documentation to help you upgrade your existing Google Analytics web properties and data to UA.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to make a field in a table reference to another table in MySQL/MariaDB?
Say I'm setting up an small database with just 2 tables: feeds and feeditems.
In one table I'd store the feedname and url, with an ID as unique key.
In the second table I'd like to store some info coming from feed items (in example: date, title, url of the item and feedname). But instead of storing the feed name, I'd like to reference this feed field to the ID of that feed in the first table.
Thanks
A:
This is a quick example of how to achieve your requirement...
CREATE TABLE IF NOT EXISTS `feeds` (
`Feed_ID` int(11) NOT NULL,
`Feed_Name` varchar(32) NOT NULL,
`Feed_Url` varchar(255) NOT NULL,
PRIMARY KEY (`Feed_ID`)
)
CREATE TABLE IF NOT EXISTS `feeditems` (
`FeedItem_ID` int(11) NOT NULL,
`Feed_ID` int(11) NOT NULL,
`FeedItem_Date` datetime NOT NULL,
`FeedItem_Title` varchar(255) NOT NULL,
`FeedItem_Url` varchar(255) NOT NULL,
`FeedItem_Name` varchar(255) NOT NULL,
PRIMARY KEY (`FeedItem_ID`),
FOREIGN KEY (`Feed_ID`) REFERENCES `feeds`(`Feed_ID`)
ON DELETE CASCADE
)
| {
"pile_set_name": "StackExchange"
} |
Q:
extjs data reader reading data from nested object of an xml reponse
I have xml file with following structure
<root>
<o1>
<p1></p1>
<p2></p2>
<p3></p3>
<o2>
<p1></p1>
<o3>
<p2></p2>
<p3></p3>
</o3>
<o2>
</o1>
</root>
I want the model to be loaded only with the p1 p2 and p3 for o1. But the model gets populated with the values inside o2 and o3 instead.
In the reader that I have configured, the root is 'root' and record is o1.
I even tried setting the implicitIncludes property of the reader to false.
Please help.
A:
I think this is because of the Ext.DomQuery.selectNode() method that is used in the conversion function of the model, which probably starts parsing from the innermost node and returns the first occurrence...
I solved this by overriding the getResponseData() method of the reader. In the overridden method, I removed the inner node of the xml response document i.e. the o2 node and then passed on the document to the readRecords() method as the natural flow is.
Though it is kind of a workaround, it is fine for me as the inner node is not needed in my case.
| {
"pile_set_name": "StackExchange"
} |
Q:
Download OS X App Store updates to update multiple Macs
I have two MacBook Airs, but I have very limited bandwidth. I would prefer to download updates once and then copy them onto all the other MacBook Airs. How can I download App Store updates once to update multiple Macs?
A:
There are two types of update.
OS X software updates are updates for the OS and OS components (e.g. iTunes). These used to be delivered through a separate software update app, but since the introduction of the Mac App Store, the OS X updates have been combined with Mac App Store updates in the Updates tab of the Mac App Store. However, the CLI tool remains, giving you more flexibility in Terminal and allow the downloading of updates without installing them, perfect for copying to other machines before the installation takes place.
You can download OS X updates without installing them (which would automatically remove them) so you can copy them, using the following command:
softwareupdate -dav
The 10.9.4 update is distributed externally, outside of the Mac App Store; the Mac App Store just provides the UI for the installation process.
Conversely, for Mac App Store apps, you need OS X Server's Caching service, as the apps are 'non-transferrable' and the app receipt must match the Apple ID that downloaded the app for the app to be updated in the future. However, if you're using the same Apple ID, or don't care about updating the app from the second machine, update the app normally then copy the .app bundle from /Applications to the other Macs as necessary.
| {
"pile_set_name": "StackExchange"
} |
Q:
MySQL replication: "Houston, We've Got a Problem"
I ran into a problem with our replication server. Essentially, we have 2 databases (database1 and database2). Master server has both. Slave has only database1. There is a
Replicate_Do_DB: database1
set in CHANGE MASTER TO configuration.
Now what happened is - we are using CodeIgniter, and one of the programmers created database2 and started inserting info into it. CodeIgniter sets the default database to database1. Now the result is that for every query he produced - I get an error on SHOW SLAVE STATUS\G:
Error 'Table 'database2.tbl40' doesn't exist' on query. Default database: 'database1'. Query: 'INSERT INTO `database2`.`tbl40` (`date`, `day`) VALUES ('2011-04-26', '2011-04-26')'
So essentially, he fixed the problem afterwards, but the replication doesn't work, as there are around 1000 queries that will produce that error for the replication server.
My question is - is there some way to clear queries like that from the binlog?
Or I need to write a script that will do a
SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
for every query that produces an error?
A:
If you really don't care about that table, you can use pt-slave-restart on the slave and have it skip those problems. I would be conservative about running it and make sure that you are only skipping queries for the table/database that you don't care about or at least for only a specific error.
You didn't post what the error code was in the output from SHOW SLAVE STATUS, but I suspect it is error 1146.
For example, this will skip all errors for 1146:
pt-slave-restart -u root -p pass --error-numbers 1146
Or, you could try skipping all errors that reference that table
pt-slave-restart -u root -p pass --error-text 'database2'
Another way to do this would be to set replicate-ignore-db=database2 and restart MySQL on the slave, but there are some caveats to how that works that you should read about in the documentation
A:
I think the bigger problem here is that your default database context was database1. That's why your slave tried to execute the update on database2, since it was specified in database2.table format.
Basically it's not safe to use db.table syntax with wildcards or you find yourself in the situation you did. If you want to use the wildcard do or ignore options, it's generally safer to always specify your default db using "use" and execute the query in that context.
| {
"pile_set_name": "StackExchange"
} |
Q:
Social buttons and changing attributes using EmberJS
I'm trying to have two social buttons (facebook & twitter) on my website using EmberJS. I'm binding the URL of those buttons to an attribute url (for example).
The problem is that the attribute url is changing, and the buttons are not reloading.
I did a spin-off of this article on the EmberJS: http://emberjs.com/guides/cookbook/helpers_and_components/creating_reusable_social_share_buttons/
Updated to the last EmberJS version (1.3.1), and added a "change text" button. Try changing the text for the text button, and you'll see that the button is not reloading.
Link to the jsbin: http://emberjs.jsbin.com/izOtIYi/1/edit (watch the console too)
I think it's because Twitter is messing with the Metamorph system. How can I bypass this? I'm sure someone faced this before.
The strangest thing is that it's working well with facebook like button.
Thanks !
A:
The issue is that when you load the twitter widget it parses the <a> and then replaces it with an <iframe>. So even when you update the text property it doesnt reload the button.
One way to work around it would be to rerender the view when the text changes this would cause the iframe to be removed and a new a tag to be added.
I fixed up the jsbin to update the button when the text changes http://emberjs.jsbin.com/izOtIYi/8/edit
I put the logic which rerenders the button into the component to make it more reusable.
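The core of it is just an observer on the text property that forces a rerender; a minimal sketch (the component name is an assumption, not the exact jsbin code) looks like this:
App.ShareTwitterComponent = Ember.Component.extend({
  // when the bound text changes, throw away the iframe the Twitter widget
  // injected and render a fresh <a> tag for the widget script to parse again
  textChanged: function () {
    this.rerender();
  }.observes('text')
});
Depending on how the widget script is loaded, you may also need to ask it to re-parse the new tag (e.g. twttr.widgets.load()) after the rerender.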
The button will flash whenever the text is changed because it's actually removing the existing button and creating a new button each time the text changes.
| {
"pile_set_name": "StackExchange"
} |
Q:
Is there a way to get a list of the currently addedtostage events in JavaScript without jQuery?
I am wondering if there is a way to get a list of the currently addedtostage events in JavaScript without jQuery?
I want to know this because I want to remove these events later.
I looked around on stackoverflow but I couldn't find an answer without jQuery.
I tried:
Event.observers.each(function(item) {
if(item[0] == element) {
console.log(item[2])
}
});
I also looked at List all javascript events wired up on a page using jquery
Thanks
A:
As far as I know, since eventListenerList hasn't been included in DOM 3, there is still no way to do it natively in JS.
If it's just for debugging, you can use a tool such as Visual Event (http://www.sprymedia.co.uk/article/Visual+Event ), which knows how the major libs subscribe events and how to read into them.
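If you control the page from the start, one common workaround is to wrap addEventListener yourself and keep your own registry so you can enumerate and remove listeners later; a rough sketch (it assumes a browser that exposes EventTarget.prototype) looks like this:
(function () {
  var original = EventTarget.prototype.addEventListener;
  EventTarget.prototype.addEventListener = function (type, listener, options) {
    // remember what was registered so it can be listed and removed later
    this._listeners = this._listeners || [];
    this._listeners.push({ type: type, listener: listener, options: options });
    return original.call(this, type, listener, options);
  };
})();

// later: loop over element._listeners and call
// element.removeEventListener(entry.type, entry.listener, entry.options)
Note that this only tracks listeners added after the wrapper is installed, so it has to run before any other script registers events.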
| {
"pile_set_name": "StackExchange"
} |
Q:
Is it necessary to install Yoast for a website which is installed inside an existing WordPress installation folder?
I am setting up a new website inside an already installed WordPress website folder (e.g., www.example.com/newsite/). I am using the Yoast SEO plugin for my old website (www.example.com). Is it necessary to install the Yoast SEO plugin for /newsite again, and go through Google Authorization Code and Search Console too?
A:
If you are setting up a separate WordPress site (meaning no multisite) and you want to use Yoast SEO then, yes, you will have to install the plugin again. The new site has no way of using the existing copy in your old site. You also have to register it as a separate entity for Google.
I am not sure what your plan is but I would not recommend hosting a new site in a folder inside an existing domain. If you want it to rank properly, it should have its own domain. Aside from that, I would also place both sites in separate folders alongside one another instead of nesting one inside the other.
| {
"pile_set_name": "StackExchange"
} |
Q:
Corona sdk Get Sim Card Details
I'm creating an Application using Corona sdk with Lua Language, can anyone help me with getting Sim Card details such as, mobile Number, pin no .. etc
Thank you
A:
Corona provides system.getInfo() for getting device-specific information. I don't think you can get the mobile number, but there is some info you can get.
You can get more details in the docs
You probably will find deviceID useful:
On iOS, "deviceID" returns a "unique" id for the device. Per Apple's policies, on iOS 6 and later, "deviceID" returns a MD5 hash of the device's "identifierForVendor" (see below); on iOS 5 it returns a MD5 of a GUID (Globally Unique Identifier) that is unique for each app install.
On Android, if your app uses the "android.permission.READ_PHONE_STATE" permission, the following will be returned:
IMEI for GSM phones.
MEID or ESN for CDMA phones.
The operating system's unique ID for devices that are not phones.
If your Android app does not use the "android.permission.READ_PHONE_STATE" permission, then the operating system's unique ID will be returned for all devices. Note that the operating system's unique ID may change after re-installing the operating system on the device.
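Reading it from Lua is a one-liner, for example (a minimal sketch):
local deviceId = system.getInfo("deviceID")
print("device id: " .. deviceId)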
| {
"pile_set_name": "StackExchange"
} |
Q:
Scale down numbers with known max min to new max min in PHP
Let's say I have this array of numbers:
$arr = [100, 60, 30, 22, 1]
and it's based off a 1 - 100 range. And I want it based of a 1 - 5 range.
I have found this answer here: How to scale down a range of numbers with a known min and max value which details:
Array.prototype.scaleBetween = function(scaledMin, scaledMax) {
var max = Math.max.apply(Math, this);
var min = Math.min.apply(Math, this);
return this.map(num => (scaledMax-scaledMin)*(num-min)/(max-min)+scaledMin);
}
But I am not sure how I would translate that to PHP/Laravel.
A:
Not sure if these are the correct results, but it does what you described:
$arr = [100, 60, 30, 22, 1];
function scale(array $array, float $min, float $max): array
{
$lowest = min($array);
$highest = max($array);
return array_map(function ($elem) use ($lowest, $highest, $min, $max) {
return ($max - $min) * ($elem - $lowest) / ($highest - $lowest) + $min;
}, $array);
}
echo json_encode(scale($arr, 1, 10));
// [10,6.363636363636363,3.6363636363636362,2.909090909090909,1]
| {
"pile_set_name": "StackExchange"
} |
Q:
Disciplining: what to do when kid starts hiding his mischief?
We try to set well-defined/predictable and not-too-harsh consequences for mischief of our 4.5-year-old: timeouts, taking away toys, refusing to play, skipping story-time, etc., but no physical punishment, long solitary timeouts or excessive shouting. Afterwards we usually talk about why the mischief was followed with a consequence.
Sometimes, our child will freak out at the threat of such a discipline measure so much that they beg one parent not to tell the other parent about the mischief, in hopes of skipping or lessening the measure of discipline he's threatened by. Think:
"Ok, that's it, there's no story-time, you'll just go to sleep by yourself"
"Please don't tell mommy, please, please "
It seems that such a response is a direct result of our discipline measures. The child is starting to hide the mischief even when it could be hazardous or too minor to have consequences.
We fear raising a child who will be afraid to tell their parents about any problems/mistakes/issues they are faced with, and would like to build a trusting relationship with them.
Are there any well-known/established recommendations on how to approach disciplining a child, so that they do not develop this fear-of-consequence attitude that is beginning to appear in our child?
A:
I think this seems normal at this point. You're avoiding the major problem areas here by not having long lasting punishments.
More than likely your child is simply embarrassed. She recognizes that she misbehaved, and doesn't want mommy to know she misbehaved, because it's embarrassing.
A good way to approach this when it happens is to simply point out that it's not something with long term consequences. Get her to focus on improving her "next time" if she wants mommy's approval. If she says "don't tell mommy", you can redirect with "Well, if you want this not to happen when mommy does bedtime, how can we work on making better choices next time?", for example. Move her quietly off of 'embarrassment' to 'solution-oriented'.
Realistically, every child will hide something, sometimes, whether from embarrassment or from punishment avoidance. Giving her a loving environment where you help her make better choices rather than having significant punishments is the best approach, and being understanding when she does hide things is also appropriate.
Rather than punishing the 'not telling', as some do, I suggest when you do discover something that wasn't told, you talk to her about why she didn't tell you, and talk about the potential consequences of not telling you - not punishment, but what bad things could happen (she or someone else could get hurt, the house could be damaged, etc.), and lightly talk about things like trustworthiness (though if she really is embarrassed by this, it's something to tread lightly around, as that risks more problems I worry with a child who's perhaps not high in the self confidence area).
| {
"pile_set_name": "StackExchange"
} |
Q:
Multiple lock owners in SQL Server 2014
I've encountered a deadlock on SQL Server 2014, and created a deadlock report using extended events. Below is an excerpt from the report.
What does it mean for objectlock(objid="554979041") to have multiple owners (namely, process14f3f2108 and processf32b5848 (twice))?
<deadlock>
... content deleted ...
<resource-list>
<keylock hobtid="72057633537982464" dbid="15" objectname="BasketHeader" indexname="UX_BasketHeader_BasketID_Account" id="lock2d9b59f80" mode="X" associatedObjectId="72057633537982464">
<owner-list>
<owner id="process203dad848" mode="X" />
</owner-list>
<waiter-list>
<waiter id="process14f3f2108" mode="RangeS-U" requestType="wait" />
</waiter-list>
</keylock>
<objectlock lockPartition="0" objid="554979041" subresource="FULL" dbid="15" objectname="BasketItem" id="lock1e64cbc80" mode="IX" associatedObjectId="554979041">
<owner-list>
<owner id="process14f3f2108" mode="IX" /> !!! OWNER 1
</owner-list>
<waiter-list>
<waiter id="processf32b5848" mode="X" requestType="convert" />
</waiter-list>
</objectlock>
<objectlock lockPartition="0" objid="554979041" subresource="FULL" dbid="15" objectname="BasketItem" id="lock1e64cbc80" mode="IX" associatedObjectId="554979041">
<owner-list>
<owner id="processf32b5848" mode="IX" /> !!! OWNER 2
<owner id="processf32b5848" mode="X" requestType="convert" /> !!! OWNER 3
</owner-list>
<waiter-list>
<waiter id="process203dad848" mode="IX" requestType="wait" />
</waiter-list>
</objectlock>
</resource-list>
</deadlock>
A:
The processes use and request different types of locks on the tables
Exclusive (X)
Shared (S)
Intent exclusive (IX)
Intent shared (IS)
Shared with intent exclusive (SIX)
And the compatability matrix looks like this:
(X) (S) (IX) (IS) (SIX)
(X) ✗ ✗ ✗ ✗ ✗
(S) ✗ ✓ ✗ ✓ ✗
(IX) ✗ ✗ ✓ ✓ ✗
(IS) ✗ ✓ ✓ ✓ ✓
(SIX) ✗ ✗ ✗ ✓ ✗
process203dad848 (A) has an X (exclusive) lock on the BasketHeader table
and is requesting an IX (Intent Exclusive) lock on the BasketItem table
process14f3f2108 (B) has an IX (Intent Exclusive) lock on BasketItem and is waiting to get a RangeS-U on BasketHeader.
processf32b5848 (C) has an IX lock on BasketItem and is waiting for it to be converted into an X lock
As you can see on the table above IX locks are compatible so seeing two of those on the BasketItem table is perfectly normal.
The RangeS-U makes this interesting, as range locks only happen when you are running transactions under the serializable isolation level.
What's happening is that (A) holds an exclusive lock on BasketHeader and is waiting for an Intent Exclusive lock on BasketItem. (B) is running in the serializable isolation level, holds an Intent Exclusive lock on BasketItem and is waiting to get a RangeS-U lock on BasketHeader, while (C) is converting its IX lock to an X lock.
(B), which is running in serializable, will not be able to continue unless it gets its RangeS-U on BasketHeader and will not yield for (C). (A) cannot continue until it can get its lock on BasketItem, and there you have your deadlock.
| {
"pile_set_name": "StackExchange"
} |
Q:
Silverlight resource constructor always return to internal
When I modify my resource file (.resx), adding or changing text, the constructor of my resource always goes back to internal, and after that, when I run my Silverlight app I get an error in my XAML binding.
Is there a way to avoid this scenario? I need to go into the designer of my resource and set the constructor to public to solve the problem.
I use my resource like this in my xaml file
<UserControl.Resources>
<resources:LibraryItemDetailsView x:Key="LibraryItemDetailsViewResources"></resources:LibraryItemDetailsView>
</UserControl.Resources>
<TextBlock FontSize="12" FontWeight="Bold" Text="{Binding Path=FileSelectedText3, Source={StaticResource LibraryItemDetailsViewResources}}"></TextBlock>
A:
Another way to do this without code changes is as below. Worked well for me.
http://guysmithferrier.com/post/2010/09/PublicResourceCodeGenerator-now-works-with-Visual-Studio-2010.aspx
A:
You can create a public class that exposes the resources through a property:
public class StringsWrapper
{
private static LibraryItemDetailsView _view = null;
public LibraryItemDetailsView View
{
get
{
if (_view == null)
{
_view = new LibraryItemDetailsView();
}
return _view;
}
}
}
Then in your XAML you can access your resource:
<UserControl.Resources>
<StringsWrapper x:Key="LibraryItemDetailsViewResources"></StringsWrapper>
</UserControl.Resources>
<TextBlock FontSize="12" FontWeight="Bold" Text="{Binding Path=View.FileSelectedText3, Source={StaticResource LibraryItemDetailsViewResources}}"></TextBlock>
This way the resx constructor can be internal!
| {
"pile_set_name": "StackExchange"
} |
Q:
How to check if file exist when downloading from FTP
I'm downloading from an FTP server and I don't know exactly how to check if a file already exists. What I want to do is retrieve a filename from the FTP server and then compare it with all files in the folder. If the file already exists, then it compares the next FTP filename with all the files in the folder, and so on.
I already did the comparison and it works if all the files in the folder have the same names as the files on the FTP server, but if I add some older file then it downloads all the files once again, and I don't want that.
Here is my scratch code:
String[] names = client.listNames();
File folder = new File("c:\\test\\RTR_ZIP\\");
String[] filename = folder.list();
for (;i<names.length;i++) {
name = names[i];
exists=false;
if (name.contains(".zip")) {
if (filename.length == 0) {
new_file = new FileOutputStream("C:\\test\\RTR_ZIP\\" + name);
client.retrieveFile(name, new_file);
j++;
exists=true;
} else {
for (;k<filename.length;k++) {
name = names[i];
i++;
name1=filename[k];
// CHECK IF FILE EXISTS
if (!name.equals(name1)) {
new_file = new FileOutputStream("C:\\test\\RTR_ZIP\\" + name);
client.retrieveFile(name, new_file);
j++;
exists=true;
}
}
}//else
}//if contains .zip
}//for
Thanks in advance.
A:
If your FTP server supports the XCRC command, it is possible to compare the checksum (CRC32) of the local and remote files.
You could iterate over all the files in the folder and compare each file's CRC with the local one.
import java.io.File;
import java.io.IOException;
import java.net.SocketException;
import java.util.Scanner;
import org.apache.commons.io.FileUtils;
import org.apache.commons.net.ftp.FTPClient;
public class DownloadFile {
private FTPClient client = new FTPClient();
public void connect() throws SocketException, IOException {
client.connect("127.0.0.1");
client.login("user", "password");
}
public boolean hasXCRCSupport() throws IOException {
client.sendCommand("feat");
String response = client.getReplyString();
Scanner scanner = new Scanner(response);
while(scanner.hasNextLine()) {
String line = scanner.nextLine();
if(line.contains("XCRC")) {
return true;
}
}
return false;
}
public boolean isSameFile() throws IOException {
if(hasXCRCSupport()) {
File file = new File("D:/test.txt");
String localCRC = Integer.toHexString((int) FileUtils.checksumCRC32(file)).toUpperCase();
client.sendCommand("XCRC /test.txt");
String response = client.getReplyString().trim();
System.out.println(response);
if(response.endsWith(localCRC)) {
return true;
}
}
return false;
}
public void logout() throws IOException {
client.logout();
}
public static void main(String[] args) throws SocketException, IOException {
DownloadFile downloadFile = new DownloadFile();
downloadFile.connect();
if(downloadFile.isSameFile()) {
System.out.println("remote file is same as local");
}
downloadFile.logout();
}
}
| {
"pile_set_name": "StackExchange"
} |
Q:
Conditional formatting: making cells colorful
is it possible to do the following:
loc1 <- c("Aa", "Aa", "aa", "Aa")
loc2 <- c("aa", "aa", "aa", "AA")
loc3 <- c("aa", "Aa", "aa", "aa")
gen <- data.frame(loc1, loc2, loc3)
loc1g <- c(0.01, 0.5, 1, 0.75)
loc2g <- c(0.2, 0.1, 0.2, 0.6)
loc3g <- c(0.8, 0.8, 0.55, 1)
pval <- data.frame(loc1g, loc2g, loc3g)
I want to print the gen data frame to a file in such a way that it is conditionally formatted by the pval data frame, meaning that the color of gen (row1, col1) depends on pval (row1, col1). The color coding is as follows:
0 to 0.3 is "red" text color
0.31 to 0.7 is "yellow"
> 0.7 is "red"
gen[1,1] will be "Aa" printed in red text color and so on....
appreciated your help.
EDITS:
I am more interested in printing, not plotting in a graph. If I can save the output as an MS Excel file and open it in Excel, it would be great. Other text formats that can hold color-coded text would also work. My original data matrix will have dimensions of 1000 x 1000 or even more, and I would like to quickly know the underlying p-value for each gen category.
A:
Sounds like you want to mimic Excel. Here are a couple examples:
x = 1:ncol(pval)
y = 1:nrow(pval)
# Colored backgrounds
dev.new(width=4, height=4)
image(x, y, t(as.matrix(pval)),
col = c('red', 'yellow', 'red'),
breaks = c(0, 0.3, 0.7, 1),
xaxt='n',
yaxt='n',
ylim=c(max(y)+0.5, min(y)-0.5),
xlab='',
ylab='')
centers = expand.grid(y, x)
text(centers[,2], centers[,1], unlist(gen))
# Colored text
dev.new(width=4, height=4)
image(x,y, matrix(0, length(x), length(y)),
col='white',
xaxt='n',
yaxt='n',
ylim=c(max(y)+0.5, min(y)-0.5),
xlab='',
ylab='')
pvals = unlist(pval)
cols = rep('red', length(pvals))
cols[pvals>0.3 & pvals<=0.7] = 'yellow'
text(centers[,2], centers[,1], unlist(gen), col=cols)
grid(length(x),length(y))
A:
Giving a POC-like answer which uses an ugly loop and is not the most beautiful design:
Loading eg. the xlxs package to be able to write to Excel 2007 format:
library(xlsx)
Let us create a workbook and a sheet (see the manual!):
wb <- createWorkbook()
sheet <- createSheet(wb, "demo")
Define some styles to use in the spreadsheet:
red <- createCellStyle(wb, fillBackgroundColor="tomato", fillForegroundColor="yellow", fillPattern="BIG_SPOTS")
yellow <- createCellStyle(wb, fillBackgroundColor="yellow", fillForegroundColor="tomato", fillPattern="BRICKS1")
And the ugly loop which is pasting each cell to the spreadsheet with appropriate format:
for (i in 1:nrow(pval)) {
rows <- createRow(sheet, rowIndex=i)
for (j in 1:ncol(pval)) {
cell.1 <- createCell(rows, colIndex=j)[[1,1]]
setCellValue(cell.1, gen[i,j])
if ((pval[i,j] < 0.3) | (pval[i,j] > 0.7)) {
setCellStyle(cell.1, red)
} else {
setCellStyle(cell.1, yellow)
}
}
}
Saving the Excel file:
saveWorkbook(wb, '/tmp/demo.xls')
Result: demo.xls
Alternative solution with package ascii:
ascii.data.frame() can export data frames to a bunch of formats with the ability to add some formatting. E.g. for exporting to pandoc, first define the style of each cell in an array with the same dimensions as pval:
style <- matrix('d', dim(pval)[1], dim(pval)[2])
style[pval < 0.3 | pval > 0.7] <- 's'
Set the desired output:
options(asciiType = "pandoc")
And export the data frame:
> ascii(gen, style=cbind('h', style))
**loc1** **loc2** **loc3**
--- ---------- ---------- ----------
1 Aa **aa** **aa**
2 **Aa** **aa** Aa
3 **aa** aa **aa**
4 **Aa** **AA** **aa**
--- ---------- ---------- ----------
With ascii::Report you could easily convert it to pdf, odt or html. Just try it :) Small demo with HTML output: result
r <- Report$new()
r$add(section("Demo"))
r$add(ascii(gen, style=cbind('h', style)))
options(asciiType = "pandoc")
r$backend <- "pandoc"
r$format <- "html"
r$create()
And odt output: result
r$format <- "odt"
r$create()
| {
"pile_set_name": "StackExchange"
} |
Q:
unable to display array result
I have the following array. I can fetch single records, but when there are array records it doesn't work. I tried with an OR condition but that doesn't work either.
$this->db->get_where('genre',array('genre_id'=>$row['genre_id']))->row()->name;
//I get Follwoing Records
Array(
[0] => Array
(
[movie_id] => 7
[title] => Raaz
[genre_id] => 8 // it display the name
[actors] => []
[trailer_url] => https://drive.google.com/
)
[1] => Array
(
[movie_id] => 8
[title] => Tribute
[genre_id] => ["2","5","20"] // it doesn't display the name
[actors] => []
[trailer_url] => https://drive.google.com/
)
I tried the following code
$this->db->get_where('genre',array('genre_id'=>$row['genre_id']))->row()->name;
The above code works for index 0, but it doesn't work for the index 1 array.
A:
You can use where_in, but you can't use it with get_where; you need to use an alternative to get_where here:
Example:
You can alternate here like:
$this->db->select('name');
$this->db->from('genre');
if(is_array($row['genre_id'])){ // if result is in array
$this->db->where_in('genre_id',$row['genre_id']);
}
else{ // for single record.
$this->db->where('genre_id',$row['genre_id']);
}
$query = $this->db->get();
print_r($query->result_array()); // will generate result in an array
Edit:
After debugging, you are getting this value ["2","5","20"] as a string, so you can modify this code:
$genreID = intval($row['genre_id']); //
if($genreID > 0){
$this->db->where('genre_id',$row['genre_id']);
}
else{
$genreID = json_decode($row['genre_id'],true);
$this->db->where_in('genre_id',$genreID);
}
CI Query Builder
| {
"pile_set_name": "StackExchange"
} |
Q:
What is the advantage of using a digital signature over simple asymmetric encryption?
If you're sending me a message, you can:
a) Encrypt the message using your private key, and I can decrypt is using your public key.
b) You can create a digital signature of your message, and then send the signature along with the un-encrypted message.
My two questions are:
1) I read somewhere that in the (a) scenario, if your encrypted message is tampered with en route, I won't be able to decrypt it using your public key. Is this the case? I thought I'd be able to apply your public key to any message, tampered-with or not, and if it's been tampered with, the message might just be gibberish or something.
2) What is the advantage of (b) over (a)? Given that the encrypted message in (a) and the digital signature in (b) are both encrypted using the same private key, in what way is the security provided by (b) better?
A:
These misconceptions come from people trying to explain digital signatures to the layperson. Once someone understands the concept of asymmetric encryption, a common way to explain signatures is "encryption with private key", but in reality there is no such thing (for a very technical explanation, see here). You're far better off thinking of asymmetric encryption and digital signatures as two entirely separate things.
You've come across some of the many problems with this explanation. If someone did try to send you a message "encrypted" with their private key and it was tampered with, you are correct that you would be able to "decrypt" it, but it would be gibberish.
In practice though, messages are too long to be encrypted or signed directly with asymmetric cryptography. When encrypting, a symmetric key is usually generated and used to encrypt the data, then that key is encrypted asymmetrically with the recipient's public key.
Likewise, when signing, the message is first passed through a digest algorithm (cryptographic hash) to remove any structure in the data and to output a small digest that is then signed with the private key. Even if you only have a very short message to sign though, you must still pass it through a hash, otherwise an attacker may be able to forge signatures on random messages algebraically related to yours.
Since correct signing requires some sort of hashing to be used, the signature obviously can't be reversed to the original message, so the message also has to be sent separately to the recipient (consequently your (a) scenario isn't even possible). Often, messages are signed with the sender's private key, then encrypted with a random symmetric key, which itself is then encrypted with the recipient's public key.
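To make the hash-then-sign flow concrete, here is a small sketch using Python's cryptography package (RSA with PKCS#1 v1.5 padding is chosen here only for brevity; the library hashes the message internally before signing):
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"the message travels separately, in the clear or encrypted"

# sign: the message is digested with SHA-256 and the digest is signed
signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

# verify: anyone holding the public key checks the signature against the message
try:
    private_key.public_key().verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
    print("signature valid")
except InvalidSignature:
    print("message or signature was tampered with")
Note that the signature is useless without the message itself, which is exactly why the message is sent alongside it rather than being recovered from it.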
| {
"pile_set_name": "StackExchange"
} |
Q:
Free Software for Partition Manager
Free Software for the Partition Manager on Windows XP/vista?
A:
Try GParted Live. You can create a Boot CD and use that to work with partitions.
A:
Windows Vista now has a built in partition manager. You can access it like this:
Go to Control Panel / Administrative Tools / Computer Management.
Then go down to Storage / Disk Management. That brings up your drives.
Now you can just select a partition within a drive.
Right click it and you'll have options to Shrink, Extend or Delete it. The former two show a popup detailing what size you'd like.
More info here.
A:
Easeus Partition Master is an excellent tool, and the Home version is free! It has a bunch of useful features. I've been using it for a little while, and still cannot believe they give it away for free. You should definitely try it out.
| {
"pile_set_name": "StackExchange"
} |
Q:
Pod spec lint fail validation: no known class method for selector
I'm trying to create a pod, my framework is building fine and I have no problem using it projects, but when I am trying to convert it into a pod and run pod spec lint to validate it it fails, and gives me the following error:
- ERROR | [iOS] xcodebuild: SimpleCameraFramework/SimpleCameraFramework/AVCaptureSession+Safe.m:28:67: error: no known class method for selector 'safeCastFromObject:'
In this file I have no compiler error, I have exposed the category in the umbrella header, so I really don't see where the problem is... Any idea?
A:
I found out the problem: for some reason the pod doesn't work with the precompiled header. If I remove it and import the .h file directly in AVCaptureSession+Safe, it works...
| {
"pile_set_name": "StackExchange"
} |
Q:
How to measure g using a metre stick and a ball
Can I measure the value of g using only a metre stick and a ball? I am not supposed to use a stopwatch and that has been the problem.
NOTE: I do not know if a solution exists or not.
A:
No you can't. You can see this because you are only given things that can define a units of length and mass (the meterstick and the ball), so you need something that can define the unit of time. If there was another process, nongravitational, with which you could define a unit of time, then you can find g relative to this unit of time, but absent such a thing, you can only define the unit of time by dropping something or measuring something oscillate in gravity, and then you are stuck.
| {
"pile_set_name": "StackExchange"
} |
Q:
Integration between circle and ellipse
I need to evaluate an integral over the region $D=\{x^2+y^2 >1; \frac{x^2}{a^2}+\frac{y^2}{b^2}<1\}$, but I can't find the limits of integration simply by changing to polar coordinates.
Thanks
A:
In polar coordinates, the region can be represented by a whole $2\pi$ turn of $\theta$.
For the radius, it is a function of instant $\theta$ values.
$\int_0^{2\pi} \int_1^{U(\theta)} F(r,\theta)\, r\, dr\, d\theta$
where $r = U(\theta)$ can be found as follows:
Suppose that a point on the ellipse has angle $\theta$, then we have $\frac{r^2cos^2(\theta)}{a^2} + \frac{r^2sin^2(\theta)}{b^2} = 1$
$\implies r^2(\frac{cos^2(\theta)}{a^2} + \frac{sin^2(\theta)}{b^2}) = 1$
$\implies r^2 = \frac{a^2b^2}{b^2cos^2(\theta) + a^2sin^2(\theta)}$
$\implies r = U(\theta) = \sqrt{\frac{a^2b^2}{b^2cos^2(\theta) + a^2sin^2(\theta)}}$
| {
"pile_set_name": "StackExchange"
} |
Q:
Execute HTTP Post automatically
I have a free script and I would like to ask if it's possible to replace or automate the search function. For example every hour. Right now I have to press the search button to find new proxies but I want to search automatically and update them in my database, maybe using a cron job.
if(isset($_POST['search'])) { // hit search button
$script_start = $pb->microtime_float();
ob_flush();
flush();
$proxylisttype = $pb->returnProxyList($_REQUEST['listtype']); // make sure request vars are clean
$sitestoscour = $pb->returnSitesScour($_REQUEST); // make sure request vars are clean
$finallist = $pb->returnFinalList($sitestoscour);
$finallist = $pb->arrayUnique($finallist); // eliminate the dupes before moving on
if(AUTO_BAN == 1) { // remove banned proxies
$finallist = $pb->autoBan($finallist);
}
$script_end = $pb->microtime_float(); // stop the timer
}
A:
You can either do it with curl from a php script or command line (or wget). That way you can set the $_POST:
$ch = curl_init();
curl_setopt($ch,CURLOPT_URL, "http://yoururl.com");
curl_setopt($ch,CURLOPT_POST, true);
curl_setopt($ch,CURLOPT_POSTFIELDS, "search=your_query");
$result = curl_exec($ch);
curl_close($ch);
Then make that script run every hour by setting up a cron job.
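For example, a crontab entry that runs it at the top of every hour could look like this (the paths are assumptions):
0 * * * * /usr/bin/php /path/to/run_search.php
or the curl call itself can be put in cron:
0 * * * * curl -s -d "search=1" http://yoururl.com/ > /dev/null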
You could also do it with wget:
wget --post-data="search=query" http://yoururl.com
| {
"pile_set_name": "StackExchange"
} |
Q:
Default context menu for RichTextEdit?
There doesn't seem to be a default context menu (with copy, paste, etc.) for the RichTextEdit control in WinForms? I try right-clicking inside the RichTextEdit and nothing happens?
A:
Correct. The RTE control doesn't have a default context menu. But you can assign it your own.
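Assuming this is the standard WinForms RichTextBox (the name in the question suggests it could also be a third-party control, in which case the API differs), a minimal cut/copy/paste menu can be wired up like this, where richTextBox1 is the control on your form:
var menu = new ContextMenuStrip();
menu.Items.Add("Cut", null, (s, e) => richTextBox1.Cut());
menu.Items.Add("Copy", null, (s, e) => richTextBox1.Copy());
menu.Items.Add("Paste", null, (s, e) => richTextBox1.Paste());
richTextBox1.ContextMenuStrip = menu;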
| {
"pile_set_name": "StackExchange"
} |
Q:
ASP.NET Response.Redirect( ) Error
Here is my code:
try
{
Session["CuponeNO"] = txtCode.Text;
txtCode.Text = string.Empty;
Response.Redirect("~/Membership/UserRegistration.aspx");
}
catch(Exception ex)
{
string s = ex.ToString();
lblMessage1.Text = "Error Occured!";
}
I am getting an error, even though it redirects after catch.
Here is the error:
"System.Threading.ThreadAbortException:
Thread was being aborted.\r\n at
System.Threading.Thread.AbortInternal()\r\n
at
System.Threading.Thread.Abort(Object
stateInfo)\r\n at
System.Web.HttpResponse.End()\r\n at
System.Web.HttpResponse.Redirect(String
url, Boolean endResponse)\r\n at
System.Web.HttpResponse.Redirect(String
url)\r\n
Can anyone tell me why this error is occurring?
A:
You could simply move ....
Response.Redirect("~/Membership/UserRegistration.aspx");
... outside of the Try / Catch block or you can try John S. Reid's newer solution below :
Response.Redirect(url) ThreadAbortException Solution
by John S. ReidMarch 31, 2004(edited October 28, 2006 to include greater detail and fix some inaccuracies in my analysis, though the solution at it's core remains the same)
... skipping down ...
The ThreadAbortException is thrown when you make a call to Response.Redirect(url) because the system aborts processing of the current web page thread after it sends the redirect to the response stream. Response.Redirect(url) actually makes a call to Response.End() internally, and it's Response.End() that calls Thread.Abort() which bubbles up the stack to end the thread. Under rare circumstances the call to Response.End() actually doesn't call Thread.Abort(), but instead calls HttpApplication.CompleteRequest(). (See this Microsoft Support article for details and a hint at the solution.)
... skipping down ...
PostBack and Render Solutions? Overrides.
The idea is to create a class level variable that flags if the Page should terminate and then check the variable prior to processing your events or rendering your page. This flag should be set after the call to HttpApplication.CompleteRequest(). You can place the check for this value in every PostBack event or rendering block but that can be tedious and prone to errors, so I would recommend just overriding the RaisePostBackEvent and Render methods as in the code sample1 below:
private bool m_bIsTerminating = false;
protected void Page_Load(object sender, EventArgs e)
{
if (WeNeedToRedirect == true)
{
Response.Redirect(url, false);
HttpContext.Current.ApplicationInstance.CompleteRequest();
m_bIsTerminating = true;
// Remember to end the method here if there is more code in it.
return;
}
}
protected override void RaisePostBackEvent
(
IPostBackEventHandler sourceControl,
string eventArgument
)
{
if (m_bIsTerminating == false)
base.RaisePostBackEvent(sourceControl, eventArgument);
}
protected override void Render(HtmlTextWriter writer)
{
if (m_bIsTerminating == false)
base.Render(writer);
}
The Final Analysis
Initially I had recommended that you should simply replace all of your calls to Response.Redirect(url) with the Response.Redirect(url, false) and CompleteRequest() calls, but if you want to avoid postback processing and html rendering you'll need to add the overrides as well. From my recent in depth analysis of the code I can see that the most efficient way to redirect and end processing is to use the Response.Redirect(url) method and let the thread be aborted all the way up the stack, but if this exception is causing you grief as it does in many circumstances then the solution here is the next best thing.
It should also be noted that the Server.Transfer() method suffers from the same issue since it calls Response.End() internally. The good news is that it can be solved in the same way by using the solution above and replacing the call to Response.Redirect() with Server.Execute().
1 - I modified the code formatting to make it fit inside SO boundaries so it wouldn't scroll.
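For the Server.Transfer() case mentioned above, a hedged sketch of the substitution (it reuses the flag and page from the earlier sample and only illustrates the quoted advice):
if (WeNeedToRedirect == true)
{
    Server.Execute("~/Membership/UserRegistration.aspx");
    HttpContext.Current.ApplicationInstance.CompleteRequest();
    m_bIsTerminating = true;
    return;
}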
| {
"pile_set_name": "StackExchange"
} |
Q:
Removing the default pages when adding a domain via Plesk
Whenever I add a new domain into my new Plesk control panel on my dedicated server, it creates a whole bunch of test files in the cgi-bin, httpdocs and httpsdocs.
There must be some setting somewhere where I can tell Plesk not to do this?
I've done a good Google search but must now turn to the StackOverflow masses :)
Yours,
Chris
A:
Ok I've found it (and feel a bit stupid!)
/var/www/vhosts/.skel/0/
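If you just want the skeleton emptied so new domains start clean, something along these lines should do it (the paths assume a default Plesk layout - back the directory up first):
rm -rf /var/www/vhosts/.skel/0/httpdocs/*
rm -rf /var/www/vhosts/.skel/0/httpsdocs/*
rm -rf /var/www/vhosts/.skel/0/cgi-bin/*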
Hope that helps someone :)
| {
"pile_set_name": "StackExchange"
} |
Q:
Pattern Lab with second package.json
I have a Pattern Lab edition-node-gulp set up and would like to use NPM to manage UI dependencies, like jQuery, D3 and others. Pattern Lab is set up so that development happens in a 'Source' folder, which is complied and moved to a 'Public' folder. The root of the Public folder becomes the root of the application when served.
Currently, I include assets like jQuery and others manually. I think it would be great to manage dependencies like jQuery right in the package.json file used to run all of Pattern Lab Node, but the node_modules folder exists outside of Public, so I can not reference it in the live application.
So far, it seems that I have two options:
Continue as is, and forget package management for these assets.
Create a second package.json inside Public with jQuery and others, which seems sloppy.
Is creating a second package.json so bad?
Am I failing to consider some other option?
A:
Creating a second package.json is not that bad (when you know what you are doing, of course). However, in your particular case it is not the best option because there are better alternatives.
What is the problem? Adding the assets to the build output. So, what you can do:
install the assets via npm install and save them in the original package.json
adapt gulpfile.js to copy the files into the output directory (see the sketch after this list).
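A hedged sketch of such a copy task, assuming jQuery as the dependency and public/ as the Pattern Lab output directory:
var gulp = require('gulp');

// copy the browser build of jQuery from node_modules into the served output
gulp.task('copy-vendor', function () {
  return gulp.src('node_modules/jquery/dist/jquery.min.js')
    .pipe(gulp.dest('public/js/vendor'));
});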
If the second step is too hacky / problematic, it could also be replaced with a simple package.json scripts change (add a build script):
"scripts": {
"gulp": "gulp -- ",
"build": "npm run gulp && cp -R node_modules/jquery/dist/blablabla.js mypublicdir/blablabla.js"
},
and then run it as npm run build. If you need to support Windows you can use https://www.npmjs.com/package/cp-cli instead of cp.
| {
"pile_set_name": "StackExchange"
} |
Q:
Get all the records that are duplicates not just the list of them Mysql
I can do
Select FieldA,FieldB,FieldC,Count(*) from TableA Group By FieldA,FieldB having count(*)>1
Which will give me a list of all the FieldA,FieldB duplicates with a count for each. What I need is all the records in that subset. If a specific FieldA,FieldB combo has a count of 3 I need to see all 3 of those records. I've tried various joins to no avail.
A:
select a1.*
from TableA a1
join
(
Select FieldA, FieldB
from TableA
Group By FieldA, FieldB
having count(*) > 1
) a2 on a1.FieldA = a2.FieldA
and a1.FieldB = a2.FieldB
Join the same table on the result of the grouped one.
| {
"pile_set_name": "StackExchange"
} |
Q:
Is a one-yield-per-await restricted pipe possible?
I'm working with pipes-4.0.0. In that library, the number of yields to downstream a pipe makes is in general unrelated to the number of awaits from upstream.
But suppose I wanted to build a restricted pipe that enforced that one and only one yield is performed for each await, while still being able to sequence these kinds of pipes using monadic (>>=).
I have observed that, in the bidirectional case, each value requested from upstream by a Proxy is matched with a value sent back. So maybe what I'm searching for is a function of type Proxy a' a () b m r -> Pipe a (Either b a') m r that "reflects" the values going upstream, turning them into additional yields to downstream. Or, less generally, Client a' a -> Pipe a a'. Is such a function possible?
A:
You definitely do not want to use pipes for this. But, what you can do is define a restricted type that does this, do all your connections and logic within that restricted type, then promote it to a Pipe when you are done.
The type in question that you want is this, which is similar to the netwire Wire:
{-# LANGUAGE DeriveFunctor #-}
import Control.Monad.Trans.Free -- from the 'free' package
data WireF a b x = Pass (a -> (b, x)) deriving (Functor)
type Wire a b = FreeT (WireF a b)
That's automatically a monad and a monad transformer since it is implemented in terms of FreeT. Then you can implement this convenient operation:
pass :: (Monad m) => (a -> b) -> Wire a b m ()
pass f = liftF $ Pass (\a -> (f a, ()))
... and assemble custom wires using monadic syntax:
example :: Wire Int Int IO ()
example = do
pass (+ 1)
lift $ putStrLn "Hi!"
pass (* 2)
Then when you're done connecting things with this restricted Wire type you can promote it to a Pipe:
promote :: (Monad m) => Wire a b m r -> Pipe a b m r
promote w = do
x <- lift $ runFreeT w
case x of
Pure r -> return r
Free (Pass f) -> do
a <- await
let (b, w') = f a
yield b
promote w'
Note that you can define an identity wire and wire composition:
idWire :: (Monad m) => Wire a a m r
idWire = forever $ pass id
(>+>) :: (Monad m) => Wire a b m r -> Wire b c m r -> Wire a c m r
w1 >+> w2 = FreeT $ do
x <- runFreeT w2
case x of
Pure r -> return (Pure r)
Free (Pass f2) -> do
y <- runFreeT w1
case y of
Pure r -> return (Pure r)
Free (Pass f1) -> return $ Free $ Pass $ \a ->
let (b, w1') = f1 a
(c, w2') = f2 b
in (c, w1' >+> w2')
I'm pretty sure those form a Category:
idWire >+> w = w
w >+> idWire = w
(w1 >+> w2) >+> w3 = w1 >+> (w2 >+> w3)
Also, I'm pretty sure that promote obeys the following functor laws:
promote idWire = cat
promote (w1 >+> w2) = promote w1 >-> promote w2
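For completeness, a hedged usage sketch - it assumes the definitions above plus the pipes Prelude, and read is specialised to Int by the surrounding pipeline:
import Pipes
import qualified Pipes.Prelude as P

main :: IO ()
main = runEffect $
  P.stdinLn >-> P.map read >-> promote example >-> P.map show >-> P.stdoutLn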
| {
"pile_set_name": "StackExchange"
} |
Q:
Multithreaded filestream
I'm creating a client application that connects to a server and, besides other features, logs server chat into a file, so that rule violations may be punished later.
Unfortunately, it became necessary to have (at least) two threads in the application: a receiver loop and a sender loop - timing precision is required by the server. This means I need a thread-safe solution for logging data to a file.
This is what may happen if two threads send log information at (nearly) the same time:
logfile.txt:
chat: Hello my name is Duke
chat: Hell[WARNING]: Hacker detectedo my name is John
You can see one sentence injected into another. I don't want this to ever happen. Same happens to my std::cout output, but that is not so critical.
Questions:
How do I make std::ofstream log only one message at a time (holding the other back and appending it AFTER, not in the middle)?
How do I check whether the file was closed by the other thread (or by an error)?
Bonus question:
Could you solve same problem for std::cout?
A:
Standard streams have no thread safety guarantees. The only way to safely accomplish what you're trying to do is to serialize access to the stream.
The easiest solution is to wrap any statement involving the stream in a mutex.
{
std::lock_guard<std::mutex> locked(gCoutMutex);
std::cout << "This is a message" << std::endl;
}
A more correct approach, especially for a logger, is to have a logging thread with a message queue. When another thread wants to log, it just posts a message to the queue, and the logging thread will go through the queue and write everything to disk.
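A minimal sketch of that queue approach (C++11, all names assumed); the key point is that each whole line is enqueued under a mutex, so messages can never interleave mid-sentence:
#include <condition_variable>
#include <fstream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

class Logger {
public:
    explicit Logger(const std::string& path)
        : out_(path, std::ios::app), done_(false), worker_(&Logger::run, this) {}

    ~Logger() {
        { std::lock_guard<std::mutex> lock(m_); done_ = true; }
        cv_.notify_one();
        worker_.join();   // drains whatever is still queued
    }

    // Callable from any thread: the whole line is queued atomically.
    void log(std::string line) {
        { std::lock_guard<std::mutex> lock(m_); queue_.push(std::move(line)); }
        cv_.notify_one();
    }

private:
    void run() {
        std::unique_lock<std::mutex> lock(m_);
        for (;;) {
            cv_.wait(lock, [this] { return done_ || !queue_.empty(); });
            while (!queue_.empty()) {
                out_ << queue_.front() << '\n';
                queue_.pop();
            }
            out_.flush();
            if (done_) break;
        }
    }

    std::ofstream out_;
    bool done_;
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::string> queue_;
    std::thread worker_;   // declared last so it starts after the members above
};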
| {
"pile_set_name": "StackExchange"
} |
Q:
apache rewrite rules, non-www, https
I have two applications on the same server and use apache rewrite rules to redirect:
www requests to non www
http reuests to https
Everything works ok, except one case:
a request to www.test2.test.eu is redirected to https://www.test1.com content
How can I configure it properly?
Rewrites in domain test1.com config file:
ServerName test1.com
RewriteCond %{HTTP_HOST} ^www.test1.com$ [NC]
RewriteRule ^(.*)$ https://www.test1.com/$1 [R=301]
RewriteCond %{SERVER_PORT} !^443$
RewriteRule ^.*$ https://%{SERVER_NAME}%{REQUEST_URI}
Rewrites in domain test2.test.eu config file:
ServerName test2.test.eu
RewriteCond %{HTTP_HOST} ^www.test2.test.eu$ [NC]
RewriteRule ^(.*)$ https://www.test2.test.eu/$1 [R=301]
RewriteCond %{SERVER_PORT} !^443$
RewriteRule ^.*$ https://%{SERVER_NAME}%{REQUEST_URI}
Any suggestions very appreciated.
Kind regards.
A:
I think your first virtualhost config is acting as the default for any request on hosts not matching 'test1.com' and 'test2.test.eu'. Try adding this ServerAlias line to see if it gets the request going to the proper config file.
Rewrites in domain test2.test.eu config file:
ServerName test2.test.eu
ServerAlias www.test2.test.eu *.test2.test.eu
RewriteCond %{HTTP_HOST} ^www.test2.test.eu$ [NC]
RewriteRule ^(.*)$ https://www.test2.test.eu/$1 [R=301]
RewriteCond %{SERVER_PORT} !^443$
RewriteRule ^.*$ https://%{SERVER_NAME}%{REQUEST_URI}
This explicitly tells apache that requests to 'www.test2.test.eu' should be handled by this configuration. The second entry on the ServerAlias with asterisk provides a wildcard so that even if the request comes for 'wwww.test2.test.eu' or 'xxx.test2.test.eu', the proper apache config will handle it. With using the wildcard, you could actually leave off the first entry, like this:
ServerName test2.test.eu
ServerAlias *.test2.test.eu
RewriteCond %{HTTP_HOST} ^www.test2.test.eu$ [NC]
RewriteRule ^(.*)$ https://www.test2.test.eu/$1 [R=301]
RewriteCond %{SERVER_PORT} !^443$
RewriteRule ^.*$ https://%{SERVER_NAME}%{REQUEST_URI}
and it should work the same, although your first rewrite won't catch non-'www' hostnames either way.
| {
"pile_set_name": "StackExchange"
} |
Q:
Node JS Express Boilerplate and rendering
I am trying out node and it's Express framework via the Express boilerplate installation. It took me a while to figure out I need Redis installed (btw, if you're making a boilerplate either include all required software with it or warn about the requirement for certain software - Redis was never mentioned as required) and to get my way around the server.js file.
Right now I'm still a stranger to how I could build a site in this..
There is one problem that bugs me specifically - when I run the server.js file, it says it's all good. When I try to access it in the browser, it says 'transferring data from localhost' and never ends - it's like render doesn't finish sending and never sends the headers. No errors, no logs, no nothing - res.render('index') just hangs. The file exists, and the script finds it, but nothing ever happens. I don't have a callback in the render defined, so headers should get sent as usual.
If on the other hand I replace the render command with a simple write('Hello world'); and then do a res.end();, it works like a charm.
What am I doing wrong with rendering? I haven't changed a thing from the original installation btw. The file in question is index.ejs, it's in views/, and I even called app.register('.ejs', require('ejs')); just in case before the render itself. EJS is installed.
Also worth noting - if I do a res.render('index'); and then res.write('Hello'); immediately afterwards, followed by res.end();, I do get "Hello" on the screen, but the render never happens - it just hangs and says "Transferring data from localhost". So the application doesn't really die or hang, it just never finishes the render.
Edit: Interesting turn of events: if I define a callback in the render, the response does end. There is no more "Transferring data...", but the view is never rendered, neither is the layout. The source is completely empty upon inspection. There are no errors whatsoever, and no exceptions.
A:
Problem fixed. It turns out render() has to be the absolute last command in a routing chain. Putting res.write('Hello'); and res.end(); after it was exactly what broke it.
I deleted everything and wrote simply res.render('index') and it worked like a charm. Learn from my fail, newbies - no outputting anything after rendering!
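In other words, a minimal sketch of the working route:
app.get('/', function (req, res) {
  res.render('index'); // nothing written to res after this
});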
| {
"pile_set_name": "StackExchange"
} |
Q:
Pseudocolumn in Spark JDBC
I am using a query to fetch data from MYSQL as follows:
var df = spark.read.format("jdbc")
.option("url", "jdbc:mysql://10.0.0.192:3306/retail_db")
.option("driver" ,"com.mysql.jdbc.Driver")
.option("user", "retail_dba")
.option("password", "cloudera")
.option("dbtable", "orders")
.option("partitionColumn", "order_id")
.option("lowerBound", "1")
.option("upperBound", "68883")
.option("numPartitions", "4")
.load()
The question is: can I use a pseudocolumn (like ROWNUM in Oracle or RRN(employeeno) in DB2) in the option where I specify the partitionColumn?
If not, can we specify a partition column which is not a primary key?
A:
can I use a pseudo column (like ROWNUM in Oracle or RRN(employeeno) in DB2)
TL;DR Probably no.
While Spark doesn't consider constraints like PRIMARY KEY or UNIQUE, there is a very important requirement for partitionColumn, which is not explicitly stated in the documentation - it has to be deterministic.
Each executor fetches its own piece of data using a separate transaction. If the numeric column is not deterministic (stable, preserved between transactions), the state of the data seen by Spark might be inconsistent and records might be duplicated or skipped.
Because ROWNUM implementations are usually volatile (they depend on non-stable ordering and can be affected by features like indexing), they are not a safe choice for partitionColumn. For the same reason you cannot use random numbers.
Also, some vendors might further limit allowed operations on pseudocolumns, making them unsuitable for usage as a partitioning column. For example Oracle ROWNUM
Conditions testing for ROWNUM values greater than a positive integer are always false.
might fail silently leading to incorrect results.
can we specify a partition column which is not a primary key
Yes, as long as it satisfies the criteria described above.
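For example, reusing the question's setup with an ordinary (non-primary-key) but stable numeric column - the column name here is only an assumption about the schema:
val df = spark.read.format("jdbc")
  .option("url", "jdbc:mysql://10.0.0.192:3306/retail_db")
  .option("driver", "com.mysql.jdbc.Driver")
  .option("user", "retail_dba")
  .option("password", "cloudera")
  .option("dbtable", "orders")
  .option("partitionColumn", "order_customer_id") // deterministic, but not a primary key
  .option("lowerBound", "1")
  .option("upperBound", "12435")
  .option("numPartitions", "4")
  .load()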
| {
"pile_set_name": "StackExchange"
} |
Q:
How can I find out which apps I have already downloaded on my iPhone?
When you attempt to download an app that costs money, you will not be charged if you already bought the app. After you press buy and enter your iTunes password, it will say "You have already purchased this item. To download it again for free, select OK."
The problem is that I deleted hundreds of apps from my phone, some of which were paid for. Later on I might find an app I like but notice that it costs money. It might be an app I had already paid for, or an app which I got when it was free.
How can I know if I tap the buy button if I will be charged for the app or not?
In other words, how can I know if I owned a previous version of an app before?
A:
To see a listing of all the apps you have purchased or downloaded, do this :
Go to Store > View My Account
Login
Click on "Purchase History"
You should now see all the apps you have downloaded before.
Stolen from caliban's answer here.
A:
Update: The easiest way to do this is to open up the App Store on the iPad, and search for the app. If you see it say "INSTALL," rather than showing a price, it means that you have purchased the app before, and can download it again for free.
One way to do this is to archive all your iTunes receipt notifications in your email account. This way you can search your email account for the application's name and see if it exists in any of your receipts.
The downside to this is that if the application's name changed it won't find it since your receipt will contain the application's old name. You could try searching for the seller's name, but that might change as well.
Another way to do this is to look at your iTunes purchase history. While this will have the application's current name (unlike in email archiving), there is no easy way to search through it.
Another difference between the two methods is that the Purchase History will contain app updates you downloaded, while the receipt emails will not.
Update: It seems that as of May, I no longer get receipts for free items via email.
A:
Apps that you delete on your iPhone are still in your iTunes library. There you should see which apps you already paid for.
| {
"pile_set_name": "StackExchange"
} |
Q:
union all in SQL (Postgres) mess the order
I have a query which is ordered by date. Here is the query - I have simplified it a bit, but basically it is:
select * from
(select start_date, to_char(end_date,'YYYY-mm-dd') as end_date from date_table
order by start_date ,end_date )
where start_date is null or end_date is null
It shows perfect order,
but I add
union all
select start_date, 'single missing day' as end_date from
calendar_dates
where db_date>'2017-12-12' and db_date<'2018-05-13'
Then the whole order gets messed up. Why does that happen? UNION or UNION ALL should just append the dataset of the second query to the first, right? It should not mess up the order of the first query, right?
I know this query doesn't make much sense, but I have simplified it to show the syntax.
A:
You can't predict what the output order will be just by assuming that UNION ALL appends the queries in the order you write them.
The query planner will execute your queries in whatever order it sees fit. That's why you have the ORDER BY clause. Use it!
For example, if you want to force the order of the first query, then the second, do :
select * from
(select 1, start_date, to_char(end_date,'YYYY-mm-dd') as end_date from date_table
order by start_date, end_date) AS sub
where start_date is null or end_date is null
union all
select 2, start_date, 'single missing day' as end_date from
calendar_dates
where db_date>'2017-12-12' and db_date<'2018-05-13'
ORDER BY 1
| {
"pile_set_name": "StackExchange"
} |
Q:
javafx: How to bind the Enter key to a button and fire off an event when it is clicked?
Basically, I have an okayButton that sits in a stage, and when it is clicked it performs a list of tasks. Now I want to bind the Enter key to this button such that when it is clicked OR the ENTER key is pressed, it performs that list of tasks.
okayButton.setOnAction(e -> {
.........
}
});
How can I do that? I have read the following post already. However, it did not help me achieve what I want to do.
A:
First, set a handler on your button:
okayButton.setOnAction(e -> {
......
});
If the button has the focus, pressing Enter will automatically call this handler. Otherwise, you can do this in your start method:
@Override
public void start(Stage primaryStage) {
// ...
Node root = ...;
setGlobalEventHandler(root);
Scene scene = new Scene(root, 0, 0);
primaryStage.setScene(scene);
primaryStage.show();
}
private void setGlobalEventHandler(Node root) {
root.addEventHandler(KeyEvent.KEY_PRESSED, ev -> {
if (ev.getCode() == KeyCode.ENTER) {
okayButton.fire();
ev.consume();
}
});
}
If you have only one button of this kind, you can use the following method instead.
okayButton.setDefaultButton(true);
| {
"pile_set_name": "StackExchange"
} |
Q:
How to stop redirect when clicking Order Place button in checkout/onepage in Magento 1?
I'm new to Magento. I want to stop the redirect that happens when you click Place Order. Instead of redirecting, I want to show a block under the Place Order button. How can I do this?
Thanks in advance
A:
In the onepage checkout, when you click Place Order it calls the Review.prototype.save() function in the skin/frontend/base/default/js/opcheckout.js file. That looks like:
var Review = Class.create();
Review.prototype = {
initialize: function(saveUrl, successUrl, agreementsForm){
this.saveUrl = saveUrl;
this.successUrl = successUrl;
this.agreementsForm = agreementsForm;
this.onSave = this.nextStep.bindAsEventListener(this);
this.onComplete = this.resetLoadWaiting.bindAsEventListener(this);
},
save: function(){
if (checkout.loadWaiting!=false) return;
checkout.setLoadWaiting('review');
var params = Form.serialize(payment.form);
if (this.agreementsForm) {
params += '&'+Form.serialize(this.agreementsForm);
}
params.save = true;
var request = new Ajax.Request(
this.saveUrl,
{
method:'post',
parameters:params,
onComplete: this.onComplete,
onSuccess: this.onSave,
onFailure: checkout.ajaxFailure.bind(checkout)
}
);
},
That function makes an ajax call to saveOrderAction(), which you find in the file app/code/core/Mage/Checkout/controllers/OnepageController.php. If you go through that function you will find that Magento sets a redirect variable in the response body of the ajax call.
if (isset($redirectUrl)) {
$result['redirect'] = $redirectUrl;
}
$this->getResponse()->setBody(Mage::helper('core')->jsonEncode($result));
So now you know that you can stop the redirect by removing that redirect variable from the response body. And if you want to add any other block, you can do it in the Review.prototype.save() function.
**Note: you should always rewrite core files in the local code pool; modifying core files directly is not recommended.**
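A hedged sketch of what that could look like inside a local-pool copy of the controller, right before the response is sent (only an illustration, not the full rewrite):
// in your local override of saveOrderAction(), just before setBody():
unset($result['redirect']); // drop the redirect so the frontend stays on the checkout page
$this->getResponse()->setBody(Mage::helper('core')->jsonEncode($result));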
| {
"pile_set_name": "StackExchange"
} |
Q:
How to select an element using jquery?
I am trying to select my radio button value within multiple divs, but I don't know how.
Also, I append text within the table but it doesn't work. Maybe my syntax is wrong?
I've also tried appendTo(), but it's the same: nothing appears on the screen.
The radio input is located here:
<div id="wrap">
<div id="section1">
<table class="question">
<tr><td><input type="radio" value="yes" name="tv"/></td><td><p id="position"></p></td></tr>
</table>
</div>
</div>
And below is my jquery source in script.js
$('#section1.input:radio["tv"]').change(function(){
if ($(this).val() == 'yes') {
$('#position').append("test appending");
});
A:
Your selector is totally wrong.
input shouldn't be prefixed by a dot, it isn't a CSS class ;
You need a space between #section1 and input, since the input is a child of your div ;
You forgot name= in the brackets.
Try something like:
$('#section1 input:radio[name="tv"]').change(function(){
if ($(this).val() == 'yes') {
$('#position').append("test appending");
}
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div id="wrap">
<div id="section1">
<table class="question">
<tr>
<td>
<input type="radio" value="yes" name="tv"/></td><td><p id="position"></p>
</td>
</tr>
</table>
</div>
</div>
| {
"pile_set_name": "StackExchange"
} |
Q:
How to scroll to bottom of a JSF component after update?
I have a log that needs to be tailed in real-time:
<h:panelGroup layout="block" style="width:100%; max-height: 400px; overflow: auto;" id="log" styleClass="logArea">
<h:outputText value="#{myBean.log}" style="white-space:pre;"></h:outputText>
</h:panelGroup>
<h:outputScript>
function scrollLog() {var log=jQuery('.logArea');log.scrollTop(log.scrollHeight-log.height);};</h:outputScript>
<p:remoteCommand name="getLog" process="@this" update="log" onsuccess="scrollLog();">
</p:remoteCommand>
While the log output updates just fine after the remoteCommand has run, it does not scroll to the bottom. I suspect my scrollLog() is called before the partial update is applied, and the update resets the scrollbar to the top.
I also tried the following jQuery code:
jQuery( function() { var log=jQuery('.logArea');log.animate({ scrollTop: log.scrollHeight}, 1000); });
but nothing seems to work.
How can I work around this and scroll to the bottom of the log after every update?
A:
The onsuccess handler is invoked directly after the ajax response is successfully retrieved, but before the DOM is updated based on the ajax response.
You want to use oncomplete handler instead.
<p:remoteCommand ... oncomplete="someFunctionWhichNeedsToWorkWithUpdatedDOM()" />
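As a side note, scrollHeight and scrollTop are DOM properties rather than jQuery methods, so a scroll helper along these lines (an assumption about your markup, paired with oncomplete) should actually reach the bottom:
function scrollLog() {
    var el = jQuery('.logArea')[0]; // raw DOM element
    el.scrollTop = el.scrollHeight;
}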
| {
"pile_set_name": "StackExchange"
} |
Q:
JavaScript Library needed for Showing/Hiding a div panel using JQuery
I have a simple fieldset and div panel, which I want to show initially. If you then click on a button/image or text, I want to hide the div panel. Let's call this "myPanel". Clicking on the button/image or text once more will then show it again. Now I have a solution in JavaScript below, but my question is how I can create a library for this and re-use it instead of writing out the methods for multiple panels. Something similar to this:
var panel = new library.panel("myPanel");
Then all events will be handled and variables defined in the JavaScript library.
Consider the following code:
<fieldset>
<legend>My Panel<a id="MyPanelExpandCollapseButton" class="pull-right" href="javascript:void(0);">[-]</a></legend>
<div id="MyPanel">
Panel Contents goes here
</div>
</fieldset>
<script type="text/javascript">
//This should be inside the JavaScript Library
var myPanelShown = true;
$(document).ready(function () {
$('#MyPanelExpandCollapseButton').click(showHideMyPanel);
if (myPanelShown) {
$('#MyPanel').show();
} else {
$('#MyPanel').hide();
}
});
function showHideMyPanel() {
if (myPanelShown) {
$('#MyPanelExpandCollapseButton').text("[+]");
$('#MyPanel').slideUp();
myPanelShown = false;
} else {
$('#MyPanelExpandCollapseButton').text("[-]");
$('#MyPanel').slideDown();
myPanelShown = true;
}
}
</script>
A:
If you want to make it reusable, it is simple: make a function in a separate js file:
function showHideBlock(panelId, buttonId){
    if($(panelId).css('display') == 'none'){
        // panel is hidden: expand it and show the "collapse" label
        $(panelId).slideDown('normal');
        $(buttonId).text("[-]");
    }
    else {
        // panel is visible: collapse it and show the "expand" label
        $(panelId).slideUp('normal');
        $(buttonId).text("[+]");
    }
}
Now pass the id of the panel or block you want to show/hide and the id of the button that triggers it:
onclick="showHideBlock('#MyPanel', '#MyPanelExpandCollapseButton');"
Try this
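And a hedged sketch of the object-style wrapper the question asks for (the library name and API are assumptions, built on the same toggle logic):
var library = library || {};

library.panel = function (panelId, buttonId) {
    var self = this;
    this.$panel = $('#' + panelId);
    this.$button = $('#' + buttonId);
    this.$button.click(function () { self.toggle(); });
};

library.panel.prototype.toggle = function () {
    var opening = this.$panel.is(':hidden');
    this.$button.text(opening ? '[-]' : '[+]');
    this.$panel[opening ? 'slideDown' : 'slideUp']('normal');
};

// usage: var panel = new library.panel('MyPanel', 'MyPanelExpandCollapseButton');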
| {
"pile_set_name": "StackExchange"
} |
Q:
How do you call a javascript method on a htmlwidget (jsoneditor) in shiny?
I'm trying to use jsonedit from the listviewer package in a shiny app and want to display the tree fully expanded by default. There isn't an option to do this in the jsonedit() function, but the underlying javascript object has an .expandAll() method which should do it. How do I call this method from R shiny? My attempt below doesn't work either in a shiny app or directly in R.
library(shiny)
library(listviewer)
library(magrittr)
library(htmlwidgets)
x <- list(a=1,b=2,c=list(d=4,e='penguin'))
jsonedit(x, mode = 'view') %>% onRender("function(el,x,data) {this.expandAll();}")
shinyApp(
ui = shinyUI(
fluidPage(
jsoneditOutput( "jsed" )
)
),
server = function(input, output){
output$jsed <- renderJsonedit({
jsonedit(x, mode = 'view') %>% onRender("function(el,x,data) {this.expandAll();}")
})
}
)
A:
jsonedit(x, mode = 'view') %>%
onRender("function(el,x,data) {this.editor.expandAll();}")
| {
"pile_set_name": "StackExchange"
} |
Q:
Script to switch players (jQuery tic-tac-toe)
I'm trying to make the tic-tac-toe game switch whose turn it is. I managed to change it from 'X' to 'O', but when I try to go back to 'X' and so on, it doesn't work.
Here is the code I've written so far:
$(document).ready(function() {
$(".botao").click(function() {
$(this).text("X");
$("#jogador").text("É a vez do jogador 2");
mudarSimbolo();
});
function mudarSimbolo() {
if ($("#jogador").text() == "É a vez do jogador 2") {
$(".botao").click(function() {
$(this).text("O");
$("#jogador").text("É a vez do jogador 1");
});
} else if ($("#jogador").text() == "É a vez do jogador 1") {
$(".botao").click(function() {
$(this).text("X");
$("#jogador").text("É a vez do jogador 2");
});
}
}
});
.btn-default {
padding: 40px;
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.12.4/jquery.min.js"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js"></script>
<link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" rel="stylesheet" />
<div class="container" style="border:1px solid red; width:320px; height:320px;">
<button class="btn btn-default botao">1</button>
<button class="btn btn-default botao">2</button>
<button class="btn btn-default botao">1</button>
<button class="btn btn-default botao">2</button>
<button class="btn btn-default botao">3</button>
<button class="btn btn-default botao">4</button>
<button class="btn btn-default botao">5</button>
<button class="btn btn-default botao">6</button>
<button class="btn btn-default botao">7</button>
</div>
<div class="container">
<label id="jogador">É a vez do jogador 1</label>
</div>
A:
I would simplify the logic: just create a global variable and check it:
var elem = "O";
$(document).ready(function() {
$(".botao").click(function() {
$(this).text(elem);
if (elem == "X") {
elem = "O";
$("#jogador").text("É a vez do jogador 1");
} else if (elem == "O") {
elem = "X";
$("#jogador").text("É a vez do jogador 2");
}
});
});
.btn-default {
padding: 40px;
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.12.4/jquery.min.js"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js"></script>
<link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" rel="stylesheet" />
<div class="container" style="border:1px solid red; width:320px; height:320px;">
<button class="btn btn-default botao">1</button>
<button class="btn btn-default botao">2</button>
<button class="btn btn-default botao">1</button>
<button class="btn btn-default botao">2</button>
<button class="btn btn-default botao">3</button>
<button class="btn btn-default botao">4</button>
<button class="btn btn-default botao">5</button>
<button class="btn btn-default botao">6</button>
<button class="btn btn-default botao">7</button>
</div>
<div class="container">
<label id="jogador">É a vez do jogador 1</label>
</div>
A:
Here is the code with the fix:
$(document).ready(function() {
var player = 1;
$(".botao").click(function() {
if(player == 1) {
$(this).text("X");
player = 2;
} else {
$(this).text("O");
player = 1;
}
$("#jogador").text("É a vez do jogador " + player);
});
});
.btn-default {
padding: 40px;
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.12.4/jquery.min.js"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js"></script>
<link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" rel="stylesheet" />
<div class="container" style="border:1px solid red; width:320px; height:320px;">
<p>
<button class="btn btn-default botao">1</button>
<button class="btn btn-default botao">2</button>
<button class="btn btn-default botao">3</button>
</p>
<p>
<button class="btn btn-default botao">4</button>
<button class="btn btn-default botao">5</button>
<button class="btn btn-default botao">6</button>
</p>
<p>
<button class="btn btn-default botao">7</button>
<button class="btn btn-default botao">8</button>
<button class="btn btn-default botao">9</button>
</p>
</div>
<div class="container">
<label id="jogador">É a vez do jogador 1</label>
</div>
What I did was create a global variable in the document called player. When a button is clicked, if player is 1 I place an X and assign 2 to the variable to indicate that player 2 plays next; if it is player 2's turn it does the opposite: it places an O and player goes back to 1.
| {
"pile_set_name": "StackExchange"
} |
Q:
LinkedList put into Intent extra gets recast to ArrayList when retrieving in next activity
A behaviour i'm observing w.r.t passing serializable data as intent extra is quite strange, and I just wanted to clarify whether there's something I'm not missing out on.
So the thing I was trying to do is that in ActivityA I put a LinkedList instance into the intent I created for starting the next activity - ActivityB.
LinkedList<Item> items = (some operation);
Intent intent = new Intent(this, ActivityB.class);
intent.putExtra(AppConstants.KEY_ITEMS, items);
In the onCreate of ActivityB, I tried to retrieve the LinkedList extra as follows -
LinkedList<Item> items = (LinkedList<Item>) getIntent()
.getSerializableExtra(AppConstants.KEY_ITEMS);
On running this, I repeatedly got a ClassCastException in ActivityB, at the line above. Basically, the exception said that I was receiving an ArrayList. Once I changed the code above to receive an ArrayList instead, everything worked just fine.
Now I just can't figure out from the existing documentation whether this is the expected behaviour on Android when passing serializable List implementations, or whether there's something fundamentally wrong with what I'm doing.
Thanks.
A:
I can tell you why this is happening, but you aren't going to like it ;-)
First a bit of background information:
Extras in an Intent are basically an Android Bundle which is basically a HashMap of key/value pairs. So when you do something like
intent.putExtra(AppConstants.KEY_ITEMS, items);
Android creates a new Bundle for the extras and adds a map entry to the Bundle where the key is AppConstants.KEY_ITEMS and the value is items (which is your LinkedList object).
This is all fine and good, and if you were to look at the extras bundle after your code executes you will find that it contains a LinkedList. Now comes the interesting part...
When you call startActivity() with the extras-containing Intent, Android needs to convert the extras from a map of key/value pairs into a byte stream. Basically it needs to serialize the Bundle. It needs to do that because it may start the activity in another process and in order to do that it needs to serialize/deserialize the objects in the Bundle so that it can recreate them in the new process. It also needs to do this because Android saves the contents of the Intent in some system tables so that it can regenerate the Intent if it needs to later.
In order to serialize the Bundle into a byte stream, it goes through the map in the bundle and gets each key/value pair. Then it takes each "value" (which is some kind of object) and tries to determine what kind of object it is so that it can serialize it in the most efficient way. To do this, it checks the object type against a list of known object types. The list of "known object types" contains things like Integer, Long, String, Map, Bundle and unfortunately also List. So if the object is a List (of which there are many different kinds, including LinkedList) it serializes it and marks it as an object of type List.
When the Bundle is deserialized, ie: when you do this:
LinkedList<Item> items = (LinkedList<Item>)
getIntent().getSerializableExtra(AppConstants.KEY_ITEMS);
it produces an ArrayList for all objects in the Bundle of type List.
There isn't really anything you can do to change this behaviour of Android. At least now you know why it does this.
Just so that you know: I actually wrote a small test program to verify this behaviour and I have looked at the source code for Parcel.writeValue(Object v) which is the method that gets called from Bundle when it converts the map into a byte stream.
Important Note: Since List is an interface this means that any class that implements List that you put into a Bundle will come out as an ArrayList.
It is also interesting that Map is also in the list of "known object types" which means that no matter what kind of Map object you put into a Bundle (for example TreeMap, SortedMap, or any class that implements the Map interface), you will always get a HashMap out of it.
A:
The answer by @David Wasser is right on in terms of diagnosing the problem. This post is to share how I handled it.
The problem with any List object coming out as an ArrayList isn't horrible, because you can always do something like
LinkedList<String> items = new LinkedList<>(
(List<String>) intent.getSerializableExtra(KEY));
which will add all the elements of the deserialized list to a new LinkedList.
The problem is much worse when it comes to Map, because you may have tried to serialize a LinkedHashMap and have now lost the element ordering.
Fortunately, there's a (relatively) painless way around this: define your own serializable wrapper class. You can do it for specific types or do it generically:
public class Wrapper <T extends Serializable> implements Serializable {
private T wrapped;
public Wrapper(T wrapped) {
this.wrapped = wrapped;
}
public T get() {
return wrapped;
}
}
Then you can use this to hide your List, Map, or other data type from Android's type checking:
intent.putExtra(KEY, new Wrapper<>(items));
and later:
items = ((Wrapper<LinkedList<String>>) intent.getSerializableExtra(KEY)).get();
| {
"pile_set_name": "StackExchange"
} |
Q:
Send Cognos Burst Reporting to multiple emails
I have a few reports that need to be sent form hourly certain intervals during the day.
I know how to schedule burst jobs and they send out fine, but i am being tasked with sending that same exact thing to "CC" that persons mananagerl1 and managerl2
Lets say part of the email table looks like
Name | mgr1 | mgr2 | email | mgr1eml | mgr2email
normally i burst to email and group by name
how would i burst to all three emails without having to create 3 different reports?
A:
It's been a while, but I think you can do it this way:
Use a comma delimited string for your email addresses:
[email protected],[email protected],...
Set your burst property to email addresses (Report Studio > File menu > Burst Options > Burst Recipient > Type).
| {
"pile_set_name": "StackExchange"
} |
Q:
Understanding difference in unix epoch time via Python vs. InfluxDB
I've been trying to figure out how to generate the same Unix epoch time that I see within InfluxDB next to measurement entries.
Let me start by saying I am trying to use the same date and time in all tests:
April 01, 2017 at 2:00AM CDT
If I view a measurement in InfluxDB, I see time stamps such as:
1491030000000000000
If I view that measurement in InfluxDB using the -precision rfc3339 it appears as:
2017-04-01T07:00:00Z
So I can see that InfluxDB used UTC
I cannot seem to generate that same timestamp through Python, however.
For instance, I've tried a few different ways:
>>> calendar.timegm(time.strptime('04/01/2017 02:00:00', '%m/%d/%Y %H:%M:%S'))
1491012000
>>> calendar.timegm(time.strptime('04/01/2017 07:00:00', '%m/%d/%Y %H:%M:%S'))
1491030000
>>> t = datetime.datetime(2017,04,01,02,00,00)
>>> print "Epoch Seconds:", time.mktime(t.timetuple())
Epoch Seconds: 1491030000.0
The last two samples above at least appear to give me the same number, but it's much shorter than what InfluxDB has. I am assuming that is related to the precision, InfluxDB does things down to nanosecond I think?
Python Result: 1491030000
Influx Result: 1491030000000000000
If I try to enter a measurement into InfluxDB using the result Python gives me it ends up showing as:
1491030000 = 1970-01-01T00:00:01.49103Z
So I have to add on the extra nine 0's.
I suppose there are a few ways to do this programmatically within Python if it's as simple as adding on nine 0's to the result. But I would like to know why I can't seem to generate the same precision level in just one conversion.
I have a CSV file with tons of old timestamps that are simply, "4/1/17 2:00". Every day at 2 am there is a measurement.
I need to be able to convert that to the proper format that InfluxDB needs "1491030000000000000" to insert all these old measurements.
A better understanding of what is going on and why is more important than how to programmatically solve this in Python. Although I would be grateful to responses that can do both; explain the issue and what I am seeing and why as well as ideas on how to take a CSV with one column that contains time stamps that appear as "4/1/17 2:00" and convert them to timestamps that appear as "1491030000000000000" either in a separate file or in a second column.
A:
Something like this should work to solve your current problem. I didn't have a test csv to try this on, but it will likely work for you. It will take whatever csv file you put where "old.csv" is and create a second csv with the timestamp in nanoseconds.
import time
import datetime
import csv
def convertToNano(date):
    s = date
    # the question's CSV uses month/day/year ("4/1/17 2:00"), hence %m/%d/%y
    secondsTimestamp = time.mktime(datetime.datetime.strptime(s, "%m/%d/%y %H:%M").timetuple())
    nanoTimestamp = str(secondsTimestamp).replace(".0", "000000000")
    return nanoTimestamp
with open('old.csv', 'rb') as old_csv:
csv_reader = csv.reader(old_csv)
with open('new.csv', 'wb') as new_csv:
csv_writer = csv.writer(new_csv)
for i, row in enumerate(csv_reader):
if i != 0:
# Put whatever rows the data appears in and the row you want the data to go in here
row.append(convertToNano(row[<location of date in the row>]))
csv_writer.writerow(row)
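If the string replace ever feels fragile, the same conversion can be done numerically - a small sketch:
# seconds since the epoch -> integer nanoseconds, kept as a string for the output CSV
nanoTimestamp = str(int(secondsTimestamp) * 10**9)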
As to why this is happening: after reading this, it seems like you aren't the only one getting frustrated by this issue. It seems as though InfluxDB just happens to use a different precision than most Python modules. I didn't really see any way to get around it other than doing the string manipulation in the date conversion, unfortunately.
| {
"pile_set_name": "StackExchange"
} |
Q:
LINQ to Entities does not recognize the method '<>f__AnonymousType4`1[System.String] get_Item(Int32)'
please I'm working on a ASP.NET MVC project with Entity Framework, I try to use this Query but I got an error .
Query :
var R = (from A in SCHOOL_DB_Context.Con.ABS where A.STG_ABS == STG && (A.DT_ABS.Month + "/" + A.DT_ABS.Year) == MONTHS[i].MONTH && A.DT_ABS.Hour == Hour select A).ToList();
Error :
LINQ to Entities does not recognize the method
'<>f__AnonymousType4`1[System.String] get_Item(Int32)' method, and
this method cannot be translated into a store expression.
The full code is :
var MONTHS = (from A in SCHOOL_DB_Context.Con.ABS where A.STG_ABS == STG && A.DT_ABS.Hour == Hour group A by A.DT_ABS.Month + "/" + A.DT_ABS.Year into G select new { MONTH = G.Key }).ToList();
List<DataPoint> DATA = new List<DataPoint>();
List<DataPoint> DTP = new List<DataPoint>();
if (MONTHS.Count == 0)
{
DTP.Add(new DataPoint(null, null));
}
else
{
for (int i = 0; i < MONTHS.Count; i++)
{
var R = (from A in SCHOOL_DB_Context.Con.ABS where A.STG_ABS == STG && (A.DT_ABS.Month + "/" + A.DT_ABS.Year) == MONTHS[i].MONTH && A.DT_ABS.Hour == Hour select A).ToList();
int Count = 0;
Count = R.Count;
//DATA.Add(new DataPoint(MONTHS[i].MONTH, Count));
DTP.Add(new DataPoint(MONTHS[i].MONTH, Count));
}
}
Any help fixing this issue would be appreciated.
A:
Try pulling the indexer out of the secondary query:
var month = MONTHS[i].MONTH;
var R = (from A in SCHOOL_DB_Context.Con.ABS
where A.STG_ABS == STG
&& (A.DT_ABS.Month + "/" + A.DT_ABS.Year) == month
&& A.DT_ABS.Hour == Hour
select A).ToList();
You might still get an error concatenating the month and year as a string - if you do, my next suggestion would be to pull the month and year separately in the MONTHS query and compare the two values independently.
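A hedged sketch of that fallback, reusing the question's entity names:
// group by month and year separately instead of a concatenated string
var months = (from a in SCHOOL_DB_Context.Con.ABS
              where a.STG_ABS == STG && a.DT_ABS.Hour == Hour
              group a by new { a.DT_ABS.Month, a.DT_ABS.Year } into g
              select new { g.Key.Month, g.Key.Year }).ToList();

var month = months[i].Month;
var year = months[i].Year;
var r = (from a in SCHOOL_DB_Context.Con.ABS
         where a.STG_ABS == STG
               && a.DT_ABS.Month == month
               && a.DT_ABS.Year == year
               && a.DT_ABS.Hour == Hour
         select a).ToList();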
| {
"pile_set_name": "StackExchange"
} |
Q:
How do I access an instance of a class that is inside an arrayList
I am doing a MOOC and am supposed to return a number that is associated with a name. The number name combo is held in an object called Phonebook that is an ArrayList. The arrayList holds information of Person, a class that I created. I need to perform getNumber() on the Phonebook object but I can't since getNumber() only works on objects of type Person.
package problem94_phonebook;
import java.util.ArrayList;
public class Phonebook {
private ArrayList<Person> phonebook;
public Phonebook(){
this.phonebook = new ArrayList<Person>();
}
public String searchNumber(String name){
if (this.phonebook.contains(name)){
return this.phonebook.Person.getNumber(); // here is the problem
}
}
}
package problem94_phonebook;
import java.util.ArrayList;
public class Person {
private String Name;
private String Numb;
private ArrayList<String> Phonebook;
public Person(String name, String numb){
this.Name = name;
this.Numb = numb;
this.Phonebook = new ArrayList<String>();
}
public String getName() {
return Name;
}
public String getNumber() {
return Numb;
}
public String toString(){
return this.Name +" " +"nummber: " + this.Numb;
}
public void changeNumber(String newNumber){
this.Numb = newNumber;
}
public void add(String name, String number){
this.Phonebook.add(name);
this.Phonebook.add(number);
}
public void printAll(){
for(String i : this.Phonebook){
System.out.println(i);
}
}
}
A:
EDIT
.contains won't work since .contains
Returns true if and only if this list contains at least one element e such that (o==null ? e==null : o.equals(e)).
(More info on how .contains works here)
which means that unless your class overrides the .equals method to compare the names, this part -> o.equals(e) will always return false, since .equals is comparing two different kinds of objects (more info on how .equals works here).
Thus if you really want to use .contains you need to override the .equals method in your Person class. But it still would not work, since your second problem is that you are accessing an element of the ArrayList in the wrong way.
But since you are a beginner, I suggest that you just try changing your method to this:
public ArrayList<String> searchNumber(String name){
ArrayList<String> result = new ArrayList<>(); // list incase of persons with the same name
for (Person p : phonebook){ // iterate through the array
if (p.getName().equals(name)){ // check if the current person's name is equal to anme
result.add(p.getNumber()); // add the person's phone number
}
}
return result ;
}
Also I noticed that you have an attribute Phonebook in your Person class. It is just a list of Strings, but I think it is better if you change it to a list of Person objects, so that each contact keeps its name and number together (the way your Phonebook class does). This will be your new Person class.
package problem94_phonebook;
import java.util.ArrayList;
public class Person {
private String Name;
private String Numb;
private ArrayList<Person> phonebook; // updated: the contact list is a list of Person
public Person(String name, String numb){
this.Name = name;
this.Numb = numb;
this.phonebook = new ArrayList<Person>(); // updated
}
public String getName() {
return Name;
}
public String getNumber() {
return Numb;
}
public String toString(){
return this.Name +" " +"nummber: " + this.Numb;
}
public void changeNumber(String newNumber){
this.Numb = newNumber;
}
public void add(Person personToBeAdded){ // changed
boolean isPresent = false;
// check if person to be added is already in the phonebook to avoid duplicates
for (Person person:this.phonebook){
if (person.getName().equals(personToBeAdded.getName()) && person.getNumber().equals(personToBeAdded.getNumber())){
isPresent = true;
break;
}
}
if (!isPresent){
this.phonebook.add(personToBeAdded);
}
}
public void printAll(){
for(Person person : this.phonebook){
System.out.println(person.toString());
}
}
}
| {
"pile_set_name": "StackExchange"
} |
Q:
How to approach animations and OpenGL
There are tons of tools and instructions for making 3D models and animations in various software products. My question is: in video-game engines, when would you use a pre-rendered animation, and when would you use armature data in the model to manipulate your model into the desired action?
Secondary questions:
Are there any games that even use the model's rigging, in-game, or is everything pre-rendered?
Are there any human-readable file formats that contain armature data?
Lastly, from a OpenGL-level and up perspective, how would you implement a system for animating something like walking?
I am building an OpenGL graphics engine from scratch as a personal project, so if answers can cater to that context, it would be fantastic.
A:
Yeah, most games use a model's rigging and apply animation tracks to the bones in real time based on things happening in the game or player input. Animations can also be blended together to make new animations or to transition from one animation to another. Animations can also be combined such that the lower half of a body is playing one animation and the upper half is playing a different animation. There is also something called parametric animation, where a lot more of the animations are derived from a smaller set of animated bone data. There are also various levels of physics-based animation, such as ragdoll and inverse kinematics. I've specialized as an animation programmer at previous employers; check out this more detailed write-up based on my experiences and observations: http://blog.demofox.org/2012/09/21/anatomy-of-a-skeletal-animation-system-part-1/
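To make the blending idea concrete, a tiny hedged sketch (real engines blend quaternion rotations with slerp; this only lerps positions so it stays self-contained):
#include <cstdio>

struct Vec3 { float x, y, z; };

// linear interpolation between two sampled bone positions
static Vec3 Lerp(const Vec3& a, const Vec3& b, float t)
{
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}

int main()
{
    Vec3 walkHip = { 0.0f, 0.9f, 0.0f };          // hypothetical "walk" sample
    Vec3 runHip  = { 0.0f, 1.0f, 0.2f };          // hypothetical "run" sample
    Vec3 blended = Lerp(walkHip, runHip, 0.25f);  // 25% of the way toward "run"
    std::printf("%.3f %.3f %.3f\n", blended.x, blended.y, blended.z);
    return 0;
}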
| {
"pile_set_name": "StackExchange"
} |