conversations (list lengths 2–20)
|
---|
[
{
"from": "human",
"value": "\n Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.\n\nI don't understand the part marked in bold.\nCongress shall make no law prohibiting the free exercise of religion. So congress should not make a law which prohibits the freedom of religion. I get it.\nBut Congress shall make a law which respects an establishment of religion. Doesn't \"Congress shall make no law respecting an establishment of religion\" mean congress should not make a law that respects religion because there's \"no\" in it?? "
},
{
"from": "gpt",
"value": "The confusing term seems to be \"respecting\". This is a somewhat different meaning of \"respect\", that is still in common use:\n\n respecting (prep): \n \n * in view of : considering\n * with respect to : concerning \n \n\nThe first amendment states that Congress shall pass no law related to (or with regard to) the establishment of a (state) religion. This has been interpreted by the courts to have many consequent effects, mainly that no governing body in the country can write law supporting any one religion over another religion. More information\nThe term \"respecting\" is mostly limited to legal documents, or formal essays/articles. Examples:\n\n Respecting the salaries of the governors of states, the constitution made no provision.\n The speculations of the fathers respecting the origin and course of the world seek to combine Christian ideas of the Deity with doctrines of Greek philosophy.\n Authentic information respecting the great valley of the Ganges was supplied by Megasthenes, an ambassador sent by Seleucus, who reached the remote city of Patali-putra, the modern Patna.\n\n\"Respecting\" can be used to mean \"having/showing respect for\", but most commonly in the related term \"self-respecting\"\n\n No self-respecting gentleman would go outside on a sunny English day without an umbrella.\n\nIn most cases, if you want to say \"having respect for\" use that, the verb \"to respect\", or the adjective \"respectful (of)\".\n\n He was respectful of her concerns, and did not press her for an immediate decision.\n Nor did the clergy much respect the official standards of the Church.\n They had little respect for tradition, flaunting contemporary standards to produce art intended to shock the viewer.\n\nThere is also the related term \"respective\":\n\n respective (adj): belonging or relating to each one of the people or things that have been mentioned, not the same or shared : separate\n\nThis is commonly used to talk about things that are similar but separately related to some other similar set of things:\n\n As the sun set, each of the children hurried to their respective homes.\n The chronic controversies between the courts of common law and the Admiralty Court as to the limits of their respective jurisdictions reached an acute stage.\n\nAs Flater's comment mentions, the term \"irrespective (of)\" implies the opposite of \"respecting\": not taking (something) into account; regardless of. \n\n Irrespective of their political differences, the candidates agreed that the economy was the most important issue facing the country."
}
] |
[
{
"from": "human",
"value": "Why are people unwilling to share information about their salary?"
},
{
"from": "gpt",
"value": "People are unwilling to talk openly about their salaries because there is a social taboo surrounding the practice. \nLike all taboos, there isn't necessarily a good rationale behind its existence. And the taboo can, in fact, be detrimental, as it is here.\n\nAround the time of the industrial revolution, you were not allowed to talk about your wages. You could be fired if you did. Employers did this because discussing what you make is an obvious and required step towards collective bargaining.\n\nIf you discuss your wages, someone will find out they make less money than they should. If they make less money than they should, they will want more money. The people making money from underpaying people don't want to pay more. Your ignorance is a very useful tool in depressing your wages or the wages of others -- especially minorities or women -- around you.\n\nSo people learned from the early industrialists of old that they would be fired if it was found out they talked money. Those people told others of what had happened to them, that they had been fired for talking money. It snowballed from there. They spread that lesson on to others. So now, even though discussing your salary is a legally protected right, people are hesitant to exercise that right.\n\nNot discussing salary is an abusive practice. It developed as a historical artefact from the abuse of the rich and powerful in the beginning of the industrial revolution that has wormed its way deep into society and is still pushed by the same abusers today."
}
] |
[
{
"from": "human",
"value": "In D&D, how many attacks can a fighter with two-weapon fighting make at 5th level?"
},
{
"from": "gpt",
"value": "Answer: 3 attacks on your turn (if you don't use Action Surge). (It could go up to five with Action Surge, though.)\n\nThe Two-Weapon Fighting rule states (PHB, page 195):\n\n>When you take the Attack action and attack with a light melee weapon that you’re holding in one hand, you can use a bonus action to attack with a different light melee weapon that you’re holding in the other hand. You don’t add your ability modifier to the damage of the bonus attack, unless that modifier is negative. If either weapon has the thrown property, you can throw the weapon, instead of making a melee attack with it.\n\nThe bonus action used as part of Two-Weapon Fighting is an attack, but it is not the Attack action.\nWhen you are a 5th-level fighter and you take the Attack action, you can make two attacks. You can also use your bonus action to make an additional attack. Notably, you can only take one bonus action per turn, since the rules for bonus actions state:\n\nYou can take only one bonus action on your turn, so you must choose which bonus action to use when you have more than one available.\n\nYou only add your ability modifier to the damage of that bonus-action attack if you have the Two-Weapon Fighting fighting style (PHB, page 72):\n\nWhen you engage in two-weapon fighting, you can add your ability\nmodifier to the damage of the second attack.\n\nTherefore, you are taking the Attack action (2 attacks) and then making an additional attack with your bonus action (1 attack), for a total of 3 attacks if you're using your action plus your bonus action. There's a clear difference between them.\nAction Surge lets a 5th-level fighter attack up to 5 times in one turn, once per short rest.\n\nThe description of the Action Surge feature says (PHB, page 72):\n\nStarting at 2nd level, you can push yourself beyond your normal limits for a moment. On your turn, you can take one additional action. Once you use this feature, you must finish a short or long rest before you can use it again. Starting at 17th level, you can use it twice before a rest, but only once on the same turn.\n\nWhich means that if you use Action Surge, you would have 2 actions and 1 bonus action on your turn.\nSo with this, you can take your Attack action (2 attacks), then use Action Surge and take another Attack action (2 more attacks), and finally your Two-Weapon Fighting bonus action (one more attack), for a total of 5 attacks during your turn."
}
] |
[
{
"from": "human",
"value": "Consider the following code:\n```avgDists = np.array([1, 8, 6, 9, 4])\nids = avgDists.argsort()[:n]\n```\nThis gives me indices of the ```n``` smallest elements. Is it possible to use this same ```argsort``` in descending order to get the indices of ```n``` highest elements?"
},
{
"from": "gpt",
"value": "If you negate an array, the lowest elements become the highest elements and vice-versa. Therefore, the indices of the `n` highest elements are:\n```\n(-avgDists).argsort()[:n]\n```\nAnother way to reason about this, as mentioned in the comments, is to observe that the big elements are coming last in the argsort. So, you can read from the tail of the argsort to find the `n` highest elements:\n```\navgDists.argsort()[::-1][:n]\n```\nBoth methods are O(n log n) in time complexity, because the argsort call is the dominant term here. But the second approach has a nice advantage: it replaces an O(n) negation of the array with an O(1) slice. If you're working with small arrays inside loops then you may get some performance gains from avoiding that negation, and if you're working with huge arrays then you can save on memory usage because the negation creates a copy of the entire array.\n\nNote that these methods do not always give equivalent results: if a stable sort implementation is requested to argsort, e.g. by passing the keyword argument kind='mergesort', then the first strategy will preserve the sorting stability, but the second strategy will break stability (i.e. the positions of equal items will get reversed)."
}
] |
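A minimal runnable sketch of the two approaches described in the answer above, assuming NumPy is installed; the array comes from the question and `n = 3` is an arbitrary choice for illustration:

```
import numpy as np

avgDists = np.array([1, 8, 6, 9, 4])
n = 3

# Approach 1: negate the array, then take the first n indices of the ascending argsort.
ids_negate = (-avgDists).argsort()[:n]

# Approach 2: argsort ascending, read from the tail by reversing, then take the first n.
ids_reverse = avgDists.argsort()[::-1][:n]

print(ids_negate)   # [3 1 2] -> values 9, 8, 6
print(ids_reverse)  # [3 1 2] -> values 9, 8, 6
```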
[
{
"from": "human",
"value": "Can a woman give birth to twins with different fathers?"
},
{
"from": "gpt",
"value": "Yes, this is possible through something called heteropaternal superfecundation (see below for further explanation).\nOf all twin births, 30% are identical and 70% are non-identical (fraternal) twins.\n\nIdentical twins result when a zygote (one egg, or ovum, fertilized by one sperm) splits at an early stage to become twins. Because the genetic material is essentially the same, they resemble each other closely.\n\nTypically during ovulation only one ovum is released to be fertilized by one sperm. However, sometimes a woman's ovaries release two ova. Each must be fertilized by a separate sperm cell. If she has intercourse with two different men, the two ova can be fertilized by sperm from different sexual partners. The term for this event is heteropaternal superfecundation (HS): twins who have the same mother, but two different fathers.\n\nThis has been proven in paternity suits (in which there will be a bias selecting for possible infidelity) involving fraternal twins, where genetic testing must be done on each child. The frequency of heteropaternal superfecundation in this group was found (in one study) to be 2.4%. As the study's authors state, \"Inferences about the frequency of HS in other populations should be drawn with caution.\""
}
] |
[
{
"from": "human",
"value": "Relationship between SVD and PCA. How to use SVD to perform PCA?"
},
{
"from": "gpt",
"value": "Let the real values data matrix $\\mathbf X$ be of $n \\times p$ size, where $n$ is the number of samples and $p$ is the number of variables. Let us assume that it is centered, i.e. column means have been subtracted and are now equal to zero.\n\nThen the $p \\times p$ covariance matrix $\\mathbf C$ is given by $\\mathbf C = \\mathbf X^\\top \\mathbf X/(n-1)$. It is a symmetric matrix and so it can be diagonalized: $$\\mathbf C = \\mathbf V \\mathbf L \\mathbf V^\\top,$$ where $\\mathbf V$ is a matrix of eigenvectors (each column is an eigenvector) and $\\mathbf L$ is a diagonal matrix with eigenvalues $\\lambda_i$ in the decreasing order on the diagonal. The eigenvectors are called principal axes or principal directions of the data. Projections of the data on the principal axes are called principal components, also known as PC scores; these can be seen as new, transformed, variables. The $j$-th principal component is given by $j$-th column of $\\mathbf {XV}$. The coordinates of the $i$-th data point in the new PC space are given by the $i$-th row of $\\mathbf{XV}$.\n\nIf we now perform singular value decomposition of $\\mathbf X$, we obtain a decomposition $$\\mathbf X = \\mathbf U \\mathbf S \\mathbf V^\\top,$$ where $\\mathbf U$ is a unitary matrix (with columns called left singular vectors), $\\mathbf S$ is the diagonal matrix of singular values $s_i$ and $\\mathbf V$ columns are called right singular vectors. From here one can easily see that $$\\mathbf C = \\mathbf V \\mathbf S \\mathbf U^\\top \\mathbf U \\mathbf S \\mathbf V^\\top /(n-1) = \\mathbf V \\frac{\\mathbf S^2}{n-1}\\mathbf V^\\top,$$ meaning that right singular vectors $\\mathbf V$ are principal directions (eigenvectors) and that singular values are related to the eigenvalues of covariance matrix via $\\lambda_i = s_i^2/(n-1)$. Principal components are given by $\\mathbf X \\mathbf V = \\mathbf U \\mathbf S \\mathbf V^\\top \\mathbf V = \\mathbf U \\mathbf S$.\n\nTo summarize:\n\n1. If $\\mathbf X = \\mathbf U \\mathbf S \\mathbf V^\\top$, then the columns of $\\mathbf V$ are principal directions/axes (eigenvectors).\n2. Columns of $\\mathbf {US}$ are principal components ("scores").\n3. Singular values are related to the eigenvalues of covariance matrix via $\\lambda_i = s_i^2/(n-1)$. Eigenvalues $\\lambda_i$ show variances of the respective PCs.\n4. Standardized scores are given by columns of $\\sqrt{n-1}\\mathbf U$ and loadings are given by columns of $\\mathbf V \\mathbf S/\\sqrt{n-1}$. See e.g. here and here for why "loadings" should not be confused with principal directions.\n5. The above is correct only if $\\mathbf X$ is centered. Only then is covariance matrix equal to $\\mathbf X^\\top \\mathbf X/(n-1)$.\n6. The above is correct only for $\\mathbf X$ having samples in rows and variables in columns. If variables are in rows and samples in columns, then $\\mathbf U$ and $\\mathbf V$ exchange interpretations.\n7. If one wants to perform PCA on a correlation matrix (instead of a covariance matrix), then columns of $\\mathbf X$ should not only be centered, but standardized as well, i.e. divided by their standard deviations.\n8. To reduce the dimensionality of the data from $p$ to $k<p$, select $k$ first columns of $\\mathbf U$, and $k\\times k$ upper-left part of $\\mathbf S$. Their product $\\mathbf U_k \\mathbf S_k$ is the required $n \\times k$ matrix containing first $k$ PCs.\n9. 
Further multiplying the first $k$ PCs by the corresponding principal axes $\\mathbf V_k^\\top$ yields $\\mathbf X_k = \\mathbf U_k^\\vphantom \\top \\mathbf S_k^\\vphantom \\top \\mathbf V_k^\\top$ matrix that has the original $n \\times p$ size but is of lower rank (of rank $k$). This matrix $\\mathbf X_k$ provides a reconstruction of the original data from the first $k$ PCs. It has the lowest possible reconstruction error, see my answer here.\n10. Strictly speaking, $\\mathbf U$ is of $n\\times n$ size and $\\mathbf V$ is of $p \\times p$ size. However, if $n>p$ then the last $n-p$ columns of $\\mathbf U$ are arbitrary (and corresponding rows of $\\mathbf S$ are constant zero); one should therefore use an economy size (or thin) SVD that returns $\\mathbf U$ of $n\\times p$ size, dropping the useless columns. For large $n\\gg p$ the matrix $\\mathbf U$ would otherwise be unnecessarily huge. The same applies for an opposite situation of $n\\ll p$."
}
] |
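A small numerical check of the relations in the answer above; this is only a sketch assuming NumPy, with a random centered matrix standing in for $\mathbf X$:

```
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 5
X = rng.normal(size=(n, p))
X -= X.mean(axis=0)                              # center the columns

# PCA via the covariance matrix
C = X.T @ X / (n - 1)
eigvals = np.linalg.eigvalsh(C)[::-1]            # eigenvalues, descending

# PCA via the (economy-size) SVD
U, s, Vt = np.linalg.svd(X, full_matrices=False)
scores = U * s                                   # principal components U S (n x p)
lambdas = s**2 / (n - 1)                         # eigenvalues recovered from singular values

print(np.allclose(lambdas, eigvals))             # True: lambda_i = s_i^2 / (n - 1)
print(np.allclose(X @ Vt.T, scores))             # True: X V = U S
```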
[
{
"from": "human",
"value": "What is the English term for when someone thinks they are doing something nice for you but it ends up making things worse. EX: Someone buys you an elephant -- nice gesture and cool! But now you have to take care of it, and it becomes a burden on you."
},
{
"from": "gpt",
"value": "In fact, the English expression for a burdensome gift is literally white elephant:\n\n> a thing that is useless and no longer needed, although it may have cost a lot of money [OALD]\n\nSo-called white elephants, or albino elephants, are found in many parts of South and Southeast Asia. In Buddhist countries they may be venerated as Queen Maya, mother of the Buddha, was said to have been visited in a dream by a white elephant holding a white lotus flower, and Siddharth Gautama entered his mother's womb in the form a white elephant. The white elephant is also associated with traits like mental strength and purity.\nIt became a royal symbol in Siam (Thailand); the king continues to keep white elephants. The story emerged that if a courtier displeased him, the king would make him a gift of a white elephant. The courtier could hardly decline a royal gift, and could hardly afford not to maintain a sacred animal, and could not put it to productive use, and so would be ruined by the cost of upkeep.\nThe earliest example of its use is from a 1721 essay in London Journal:\n\n In short, Honour and Victory are generally no more than white Elephants; and for white Elephants the most destructive Wars have been often made.\n\nA 2011 paper by Ross Bullen entitled “This Alarming Generosity”: White Elephants and the Logic of the Gift, in American Literature, covers the popularization of the term in the mid-19th century, presents an alternative account, that the story is a piece of orientalism and the white elephant rose as a literary trope."
}
] |
[
{
"from": "human",
"value": "Did Hillary Clinton propose to punish flag burners in 2005?"
},
{
"from": "gpt",
"value": "The proposed law co-sponsored by Clinton in 2005 prohibits flag burning under specific circumstances. It does not attempt to make flag burning illegal in general (i.e. outside of those specific circumstances). Clinton has voted against a proposed amendment to the Constitution that would allow Congress to ban flag burning in general.\n\nIn 2005, Clinton co-sponsored the Flag Protection Act of 2005, whose summary is as follows:\n\n> Flag Protection Act of 2005 - Amends the federal criminal code to revise provisions regarding desecration of the flag to prohibit: (1) destroying or damaging a U.S. flag with the primary purpose and intent to incite or produce imminent violence or a breach of the peace; (2) intentionally threatening or intimidating any person, or group of persons, by burning a U.S. flag; or (3) stealing or knowingly converting the use of a U.S. flag belonging to the United States, or belonging to another person on U.S. lands, and intentionally destroying or damaging that flag.\n\nIt seems like a reaction to ongoing efforts to pass a constitutional amendment allowing Congress to ban flag burning, a move that the co-sponsors of the 2005 act oppose and voted against. Its text notes:\n\n> the Bill of Rights is a guarantee of those freedoms and should not be amended in a manner that could be interpreted to restrict freedom, a course that is regularly resorted to by authoritarian governments which fear freedom and not by free and democratic nations\n\nbut suggests that flag burning with intent to incite violence is not protected by the Constitution (unlike flag burning as political expression, which is protected):\n\n> destruction of the flag of the United States can be intended to incite a violent response rather than make a political statement and such conduct is outside the protections afforded by the first amendment to the Constitution.\n\nNote that the distinction between \n\n* banning flag burning for being offensive, and \n* banning flag burning when it incites violence or disturbs the peace\n\nis an important one. The Flag Protection Act proposed in 2005 was essentially a replacement for a previous Flag Protection Act that banned flag burning in general:\n\n> Whoever knowingly mutilates, defaces, physically defiles, burns, maintains on the floor or ground, or tramples upon any flag of the United States shall be fined under this title or imprisoned for not more than one year, or both.\n\nbut this was deemed unconstitutional in United States v. Eichman, 496 U.S. 310 in 1990, on the basis of their earlier (1989) decision in Texas v. Johnson, 491 U.S. 397. There, the Supreme Court noted that\n\n> an important governmental interest in regulating nonspeech can justify incidental limitations on First Amendment freedoms when speech and nonspeech elements are combined in the same course of conduct\n\nbut that in the case under consideration, this did not apply:\n\n> An interest in preventing breaches of the peace is not implicated on this record. 
\n\nand further noted that \n\n> Expression may not be prohibited [p398] on the basis that an audience that takes serious offense to the expression may disturb the peace, since the Government cannot assume that every expression of a provocative idea will incite a riot, but must look to the actual circumstances surrounding the expression.\n\nClinton was not one of the 59 co-sponsors of a proposed amendment to the constitution that would allow Congress to ban flag burning in general:\n\n> Proposing an amendment to the Constitution of the United States authorizing Congress to prohibit the physical desecration of the flag of the United States.\n\nand she also voted against it. The proposed amendment failed to pass by one vote."
}
] |
[
{
"from": "human",
"value": "There are several questions here about the definition of depth of field, about focal length, and about subject distance. And of course there's the basic how does aperture affect my photographs. And plenty of how do I get super-shallow d.o.f questions. There's related questions like this one. But there's no be-all-end-all question asking:\nWhat exactly determines depth of field in a photograph? \nIs it just a property of the lens? Can lenses be designed to give more depth of field for the same aperture and focal length? Does it change with camera sensor size? Does it change with print size? How do those last two relate?"
},
{
"from": "gpt",
"value": "I'm going to dispense with the formulas, photos of rulers and definitions of \"magnification\" and go with what you actually experience in practice. The major factors that actually matter to shooting are:\n\n* Aperture. Wide aperture lenses give you a shallower depth of field. This is probably the least controversial factor! This is important as some lenses have much larger apertures e.g. 18-55 f/3.5-5.6 vs. 50 f/1.8 \n* Subject distance. This is a really important consideration. Depth of field gets drastically shallower when you start to get really close. This is important as at macro focussing distances DoF is a major problem. It also means you can get shallow DoF regardless of aperture if you get close enough, and that if you want deep DoF in low light compose to focus further away.\n* Focal length. This does affect depth of field, but only in certain ranges, when maintaining subject size. Wide lenses have very deep depth of field at most subject distances. Once you get past a certain point, DoF changes very little with focal length. This is important again because if you want to increase / decrease DoF you can use focal length to do this whilst still filling the frame with your subject.\n* Sensor size. This affects DoF when you maintain the same subject distance and field of view between sensor sizes. The bigger the sensor the shallower the depth of field. DSLRs have much bigger sensors than compacts, and so for the same FoV and f-ratio they have shallower DoF. This is important because by the same token cropping images increases DoF when maintaining the same final output size, as it's akin to using a smaller sensor.\n"
}
] |
[
{
"from": "human",
"value": "Why did the NES not allow rotated sprites?"
},
{
"from": "gpt",
"value": "For each sprite displayed on a scanline, the hardware fetches two bytes from memory, and then clocks the pixels out one by one. The sprite is eight pixels wide, and each pixel is two bits, which is why it's two memory accesses per sprite per scanline. You can imagine that this arrangement just needs a couple of shift registers to clock the pixels out.\n\nNow, flipping the sprite about its vertical axis is easy: you just clock the pixels out in reverse order! Similarly, flipping the sprite about its horizontal axis is also easy: You just fetch the bitmap from memory in reverse order. Rotating by 180° of course is the same as flipping both horizontally and vertically.\n\nBut if you wanted to rotate the sprite 90° or 270°, that's much harder. That's because the hardware would need to get one pixel from each pair of bytes, which means fetching 16 bytes from memory instead of just two. There is not enough time on this slow hardware to do that. Incidentally, this is also where the maximum number of sprites per scanline limitation comes from.\n\nIt's a similar story on the Commodore 64, the Atari 2600, and many others: These platforms can do some simple 2D manipulations on sprites like stretching and sometimes flipping like you've seen, but never rotations."
}
] |
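A toy Python sketch (not NES hardware code) of the point made above: one sprite scanline is two bytes holding two bit-planes, each pixel is 2 bits, and a horizontal flip is just reading the bits out in reverse order. The example bytes are arbitrary:

```
def decode_scanline(plane0: int, plane1: int, hflip: bool = False) -> list:
    """Return the eight 2-bit pixel values of one 8-pixel sprite scanline."""
    pixels = []
    for x in range(8):
        bit = x if hflip else 7 - x      # reversed bit order = horizontal flip
        lo = (plane0 >> bit) & 1
        hi = (plane1 >> bit) & 1
        pixels.append((hi << 1) | lo)
    return pixels

print(decode_scanline(0b11110000, 0b10101010))              # [3, 1, 3, 1, 2, 0, 2, 0]
print(decode_scanline(0b11110000, 0b10101010, hflip=True))  # same row, mirrored
```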
[
{
"from": "human",
"value": "I want to merge two dictionaries into a new dictionary.\n```\nx = {'a': 1, 'b': 2}\ny = {'b': 3, 'c': 4}\nz = merge(x, y)\n\n>>> z\n{'a': 1, 'b': 3, 'c': 4}\n```"
},
{
"from": "gpt",
"value": "Here is what you can do for the case you described:\n\n```\nz = dict(list(x.items()) + list(y.items()))\n```\nThis will, as you want it, put the final dict in ```z```, and make the value for key ```b``` be properly overridden by the second (```y```) dict's value:\n```>>> x = {'a': 1, 'b': 2}\n>>> y = {'b': 10, 'c': 11}\n>>> z = dict(list(x.items()) + list(y.items()))\n>>> z\n{'a': 1, 'c': 11, 'b': 10}\n\n```\nIf you use Python 2, you can even remove the ```list()``` calls. To create z:\n```>>> z = dict(x.items() + y.items())\n>>> z\n{'a': 1, 'c': 11, 'b': 10}\n```\nIf you use Python version 3.9.0a4 or greater, you can directly use:\n```>>> x = {'a': 1, 'b': 2}\n>>> y = {'b': 10, 'c': 11}\n>>> z = x | y\n>>> z\n{'a': 1, 'c': 11, 'b': 10}\n```"
}
] |
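A short runnable recap of the merge options mentioned above, plus dictionary unpacking (available since Python 3.5, not covered in the answer):

```
x = {'a': 1, 'b': 2}
y = {'b': 10, 'c': 11}

z1 = dict(list(x.items()) + list(y.items()))  # works on Python 2 and 3
z2 = {**x, **y}                               # Python 3.5+ unpacking
z3 = x | y                                    # Python 3.9+ merge operator

print(z1 == z2 == z3)  # True
print(z3)              # {'a': 1, 'b': 10, 'c': 11}
```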
[
{
"from": "human",
"value": "There are many \"standards\" for the JSON content type:\napplication/json\napplication/x-javascript\ntext/javascript\ntext/x-javascript\ntext/x-json\nWhich one do I use, and where? I assume security and browser support issues are a factor."
},
{
"from": "gpt",
"value": "According to RFC 4627:\nThe MIME media type for JSON text is application/json. The default encoding is UTF-8.\n\nFor JSON text, you should use:\n```\napplication/json\n```\n\nFor JSONP (runnable JavaScript) with callback, you should use:\n```\napplication/javascript\n```"
}
] |
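A minimal sketch of serving JSON with the recommended media type, using only Python's standard library; the handler name, port, and payload are made up for illustration:

```
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class JSONHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"status": "ok"}).encode("utf-8")   # UTF-8, the default encoding for JSON
        self.send_response(200)
        self.send_header("Content-Type", "application/json")  # JSON text -> application/json
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), JSONHandler).serve_forever()
```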
[
{
"from": "human",
"value": "What's the goal of Minecraft and what can users achieve in this world?"
},
{
"from": "gpt",
"value": "The thing about Minecraft is, unlike most games you set your own goals. Here's a list of common (and uncommon!) things people have done with Minecraft:\n\n* Building a massive structure\n* Recreating real-world objects\n* Survival (Deleting map upon dying)\n* Making amusing videos\n* Creating an adventure map for others to play\n* Getting all the achievements\n* Proposing to your girlfriend\n* Achieving difficult things in the game through ingenuity\n* Computation\n* Blowing stuff up!\n\nThe mechanism that keeps the game going is your own desire to complete the goals you've set yourself, and pure, red-hot addiction.\n\nSome additional stuff people have done:\n\n* Walking and walking and walking (thanks SevenSidedDie)\n* Build an epic railway\n* Make music\n* Find the Far Lands\n\nMinecraft 1.9+\nIn Minecraft 1.9 and beyond, there are villages, strongholds, abandoned mineshafts and nether ruins to explore. In particular, strongholds will contain portals to The End, in which players can fight The Ender Dragon, a 'boss fights'. The second 'boss fight' has also been added, The Wither. This makes exploration a key part of the game, as well as combat. 'Hardcore Mode' (world deletion upon death) is also an option if you're after a challenge.\nA creative mode has also been added that allows players complete freedom in constructing large structures, compartmentalizing that aspect of gameplay."
}
] |
[
{
"from": "human",
"value": "If I have sampled a signal using proper sampling methods (Nyquist, filtering, etc) how do I relate the length of my FFT to the resulting frequency resolution I can obtain?\nLike if I have a 2,000 Hz and 1,999 Hz sine wave, how would I determine the length of FFT needed to accurately tell the difference between those two waves?"
},
{
"from": "gpt",
"value": "The frequency resolution is dependent on the relationship between the FFT length and the sampling rate of the input signal.\n\nIf we collect 8192 samples for the FFT then we will have:\n$$\\frac{8192\\ \\text{samples}}{2} = 4096\\ \\,\\text{FFT bins}$$\n\nIf our sampling rate is 10 kHz, then the Nyquist-Shannon sampling theorem says that our signal can contain frequency content up to 5 kHz. Then, our frequency bin resolution is:\n$$\\frac{5\\ \\text{kHz}}{4096\\ \\,\\text{FFT bins}} \\simeq \\frac{1.22\\ \\text{Hz}}{\\text{bin}}$$\n\nThis is may be the easier way to explain it conceptually but simplified: your bin resolution is just \\$\\frac{f_{samp}}{N}\\$, where \\$f_{samp}\\$ is the input signal's sampling rate and N is the number of FFT points used (sample length).\n\nWe can see from the above that to get smaller FFT bins we can either run a longer FFT (that is, take more samples at the same rate before running the FFT) or decrease our sampling rate.\n\n## The Catch:\nThere is always a trade-off between temporal resolution and frequency resolution.\n\nIn the example above, we need to collect 8192 samples before we can run the FFT, which when sampling at 10 kHz takes 0.82 seconds. \n\nIf we tried to get smaller FFT bins by running a longer FFT it would take even longer to collect the needed samples.\nThat may be OK, it may not be. The important point is that at a fixed sampling rate, increasing frequency resolution decreases temporal resolution. That is the more accurate your measurement in the frequency domain, the less accurate you can be in the time domain. You effectively lose all time information inside the FFT length.\n\nIn this example, if a 1999 Hz tone starts and stops in the first half of the 8192 sample FFT and a 2002 Hz tone plays in the second half of the window, we would see both, but they would appear to have occurred at the same time.\n\nYou also have to consider processing time. A 8192 point FFT takes some decent processing power. A way to reduce this need is to reduce the sampling rate, which is the second way to increase frequency resolution.\n\nIn your example, if you drop your sampling rate to something like 4096 Hz, then you only need a 4096 point FFT to achieve 1 Hz bins *4096 Hz, then you only need a 4096 point FFT to achieve 1hz bins and can still resolve a 2khz signal. This reduces the FFT bin size, but also reduces the bandwidth of the signal.\n\nUltimately with an FFT there will always be a trade off between frequency resolution and time resolution. You have to perform a bit of a balancing act to reach all goals."
}
] |
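A small numeric sketch, assuming NumPy, of the bin-width relation above applied to the 1999 Hz vs. 2000 Hz question: the bin width is f_samp / N, so separating tones 1 Hz apart needs roughly N >= f_samp / 1 Hz samples (windowing effects, which widen the main lobe, are ignored here):

```
import numpy as np

f_samp = 10_000                    # sampling rate in Hz (as in the answer's example)
delta_f = 2000 - 1999              # separation we need to resolve: 1 Hz

N_required = int(np.ceil(f_samp / delta_f))
print(N_required)                  # 10000 samples, i.e. 1 second of data at 10 kHz

# Bin width for a few FFT lengths at this sampling rate
for N in (4096, 8192, 16384):
    print(N, f_samp / N)           # 2.44 Hz/bin, 1.22 Hz/bin, 0.61 Hz/bin
```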
[
{
"from": "human",
"value": "I wanted to produce a 1 GB random file, so I used following command.\n```dd if=/dev/urandom of=output bs=1G count=1\n```\nBut instead every time I launch this command I get a 32 MB file:\n```$ dd if=/dev/urandom of=output bs=1G count=1\n0+1 records in\n0+1 records out\n33554431 bytes (34 MB, 32 MiB) copied, 0,288321 s, 116 MB/s\n```\nWhat is wrong?"
},
{
"from": "gpt",
"value": "```bs```, the buffer size, means the size of a single read() call done by dd. For example, both ```bs=1M count=1``` and ```bs=1k count=1k``` will result in a 1 MiB file, but the first version will do it in a single step, while the second will do it in 1024 small chunks.\n\nRegular files can be read at nearly any buffer size (as long as that buffer fits in RAM), but devices and \"virtual\" files often work very close to the individual calls and have some arbitrary restriction of how much data they'll produce per read() call.\n\nFor ```/dev/urandom```, this limit is defined in urandom_read() in drivers/char/random.c:\n```#define ENTROPY_SHIFT 3\nstatic ssize_t\nurandom_read(struct file *file, char __user *buf, size_t nbytes, loff_t *ppos)\n{\n nbytes = min_t(size_t, nbytes, INT_MAX >> (ENTROPY_SHIFT + 3));\n ...\n}\n```\nThis means that every time the function is called, it will clamp the requested size to 33554431 bytes.\nBy default, unlike most other tools, dd will not retry after receiving less data than requested – you get the 32 MiB and that's it. (To make it retry automatically, as in Kamil's answer, you'll need to specify ```iflag=fullblock```.)\n\nNote also that \"the size of a single read()\" means that the whole buffer must fit in memory at once, so massive block sizes also correspond to massive memory usage by dd.\n\nAnd it's all pointless because you usually won't gain any performance when going above ~16–32 MiB blocks – syscalls aren't the slow part here, the random number generator is.\n\nSo for simplicity, just use ```head -c 1G /dev/urandom > output```."
}
] |
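A minimal Python sketch of the same idea (assuming a Linux-like system with /dev/urandom): read the device in bounded chunks and keep writing until 1 GiB has been produced, rather than relying on a single huge read() returning everything:

```
CHUNK = 16 * 1024 * 1024           # 16 MiB per read(), safely under the ~32 MiB limit
TOTAL = 1024 ** 3                  # 1 GiB

with open("/dev/urandom", "rb") as src, open("output", "wb") as dst:
    written = 0
    while written < TOTAL:
        data = src.read(min(CHUNK, TOTAL - written))
        if not data:               # defensive: urandom should never hit EOF
            break
        dst.write(data)
        written += len(data)

print(written)                     # 1073741824
```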
[
{
"from": "human",
"value": "The treads on my road bike's 28C tires are almost completely gone—they only persist along the shoulders of the tire. Do the treads matter? What is a good indicator of when the tire as a whole needs to be replaced?"
},
{
"from": "gpt",
"value": "The tread on a road bike's tires is really quite unimportant and purely cosmetic. Road bike tires have tread patterns because they sell better, not because they perform better.\nHere's some questions to ask to decide about replacing a tire:\n\n* Can you see any of the fabric?\n* Are you getting flats more easily than you used to?\n* Is any part of the tire bulging out or lumpy?\n* Are there visible holes/gashes in the tire?\n* Does the wear indicator indicate you should replace? (Some tires have dimples or grooves that are there to indicate when the tire should be replaced. Some have an alternate color of rubber underneath, and when that shows through the tire should be replaced)\n\nSmooth tread is actually ideal for road conditions, but not commonly found on tires. If you're on hard surfaces such as pavement/asphalt, the tread doesn't help you and actually slightly reduces contact area and increases rolling resistance. On most road tires the minimal tread pattern is just cosmetic to make their customers feel better (since a smooth tire looks slippery).\n\nSee also: Sheldon Brown's comments on tire replacement and tread patterns\nIf you plan to ride through mud, sand or other soft surfaces, you need tread. So when you no longer have tread on your mountain bike or cyclocross bike's tires, it's time to replace them (unless you plan to only use them on the road)."
}
] |
[
{
"from": "human",
"value": "Is there a way to create an index on a property/column using fluent configuration, instead of using the new ```IndexAttribute``` ?"
},
{
"from": "gpt",
"value": "Currently there is no \"first class support\" for creating a index via the fluent API, but what you can do is via the fluent API you can mark properties as having attributes from the Annotation API. This will allow you to add the ```Index``` attribute via a fluent interface.\n\nHere are some examples from the work item from Issues site for EF.\nCreate a index on a single column:\n```modelBuilder.Entity()\n .Property(e => e.MyProperty)\n .HasColumnAnnotation(\n IndexAnnotation.AnnotationName,\n new IndexAnnotation(new IndexAttribute()));\n```\nMultiple indexes on a single column: \n```modelBuilder.Entity()\n .Property(e => e.MyProperty)\n .HasColumnAnnotation(\n IndexAnnotation.AnnotationName,\n new IndexAnnotation(new[]\n {\n new IndexAttribute(\"Index1\"),\n new IndexAttribute(\"Index2\") { IsUnique = true }\n }));\n```\nMulti-Column indexes:\n```modelBuilder.Entity()\n .Property(e => e.MyProperty1)\n .HasColumnAnnotation(\n IndexAnnotation.AnnotationName,\n new IndexAnnotation(new IndexAttribute(\"MyIndex\", 1)));\nmodelBuilder.Entity()\n .Property(e => e.MyProperty2)\n .HasColumnAnnotation(\n IndexAnnotation.AnnotationName,\n new IndexAnnotation(new IndexAttribute(\"MyIndex\", 2)));\n```\nUsing the above techniques will cause ```.CreateIndex()``` calls to be automatically created for you in your ```Up()``` function when you scaffold your next migration (or be automatically created in the database if you are not using migrations)."
}
] |
[
{
"from": "human",
"value": "Both races have the same limits on their strength, and athletics, but it's hard to imagine why. A Goliath could feasibly lift and throw a gnome, yet the Gnome, following rules as written, can pin down a Goliath, fairly easily, if statted correctly. \nIs there an in-universe explanation as to why such dramatically different sized creatures can wrestle on an even playing field?\nHow might a DM explain a scenario in which a gnome beats a goliath in any kind of test of strength?"
},
{
"from": "gpt",
"value": "D&D is a terrible reality simulator.\nThe problem that you're having here is that, in real life, most people who are 7 feet tall are going to be faster, stronger, and all-around more physically powerful than a person who is 3 feet tall. If you compare two people of similar athleticism, the person who is 4 feet taller is going to be able to physically dominate the other.\n\nHowever, D&D is not a game built to simulate reality. It's a game built to let you play through stories about heroes who fight against a large variety of evil things. It's built to let players build a wide array of kinds of hero, and let them generally be effective at their role, even if the character options they picked wouldn't work in real life.\n\nThis means that the game will let you build a gnome strongman, if that's the kind of character you want to play. The designers wouldn't want your gnome strongman to be much worse at being a strongman than a character of a different race, so the only racial 'penalty' to being a strongman that you get is that you don't get a racial bonus to Strength. All of this is based on intentional design choices to make as many player archetypes as possible viable.\n\nIn-universe, they'd explain this just like we would in real life. If a really buff but physically small person wins, say, an arm-wrestling contest with someone pushing 7 feet, we'd be impressed. It would be odd, for sure, but it wouldn't be something impossible. In a world where divine blessings and magic are commonplace, the idea of a strong gnome beating a goliath in a contest of strength becomes less unbelievable, and more possible."
}
] |
[
{
"from": "human",
"value": "So I'm pretty far into writing my dystopian novel and I was reading over what I had. Something that helps me when I first start a novel is to get a clear picture of my characters in my head and put a face to a name, so I usually sculpt a personality and find a Google image of someone who I think matches that, and I put all of those into documents for my personal reference. I looked over my main five characters--Analise, Poet, Shove, Star, and Nova--and then suddenly something jumped out at me. Analise is Hispanic, Shove is Japanese, and Poet, Star, and Nova are all black.\nI had forgotten about their races because it wasn't important to me and I had not noticed while I was writing, because the story isn't about their racial backgrounds. But is it, I don't know, somehow alienating or offensive to white readers that the characters aren't white, and that no main characters are white? "
},
{
"from": "gpt",
"value": "The answer I'll give you here is the same as the ones I've already given you and others: write what works for you. If these are who the characters are, then that's who they are. If you're forcing diversity, then it will come off as forced. That includes making some characters white just to be diverse.\n\nWill you alienate or even offend some white readers? Yes.\n\nBut this is not the type of offense to worry about. Some people are so used to being in the mainstream everywhere and for everything, that they loudly protest when suddenly they're not. If they don't like your story, they can go literally anyplace else to find beautiful, stirring, authentic depictions of all sorts of white people. Even within works about people of color.\n\nSome people will argue that this is exactly the same as novels only including white people. But, no. It isn't. Because representation isn't just about a single work. It's about the entirety of our culture. Americans (and most Westerners) find white people so central to their understanding of the universe that they insert them in places they might not otherwise be and tell entire stories set in nonwhite worlds from the white character's point of view. (I just watched The Last King of Scotland which does exactly this...they invented a white character for this very purpose...in a movie about real events in Uganda.)\n\nWrite the story that matters to you."
}
] |
[
{
"from": "human",
"value": "When I do ```\\footnote{}``` for a value in a table, the footnote doesn't show up. How do I get it to show up? Also, is it possible to get it to show up at the bottom of the table rather than the bottom of the page?"
},
{
"from": "gpt",
"value": "This is a classic difficulty in LaTeX.\n\nThe problem is how to do layout with floats (figures and tables, an similar objects) and footnotes. In particular, it is hard to pick a place for a float with certainty that making room for the associated footnotes won't cause trouble. So the standard ```tabular``` and ```figure``` environments don't even try.\n\nWhat can you do:\n\n* Fake it. Just put a hardcoded vertical skip at the bottom of the caption and then write the footnote yourself (use ```\\footnotesize``` for the size). You also have to manage the symbols or number yourself with ```\\footnotemark```. Simple, but not very attractive, and the footnote does not appear at the bottom of the page.\n* Use the ```tabularx```, ```longtable```, ```threeparttable[x]``` (kudos to Joseph) or ```ctable``` which support this behavior.\n* Manage it by hand. Use ```[h!]``` (or ```[H]``` with the float package) to control where the float will appear, and ```\\footnotetext``` on the same page to put the footnote where you want it. Again, use ```\\footnotemark``` to install the symbol. Fragile and requires hand-tooling every instance.\n* The ```footnote``` package provides the ```savenote``` environment, which can be used to do this.\n* Minipage it (code stolen outright, and read the disclaimer about long caption texts in that case):\n\n\n \\begin{figure}\n \\begin{minipage}{\\textwidth}\n ...\n \\caption[Caption for LOF]%\n {Real caption\\footnote{blah}}\n \\end{minipage}\n \\end{figure}\n\nAdditional reference: TeX FAQ item Footnotes in tables."
}
] |
[
{
"from": "human",
"value": "Why is kVA not the same as kW?"
},
{
"from": "gpt",
"value": "Both watts and volt-amps come from the same equation, $P=IV$, but the difference is how they're measured.\nTo get volt-amps, you multiply root mean square (RMS) voltage ($V$) with RMS current ($I$) with no regard for the timing/phasing between them. This is what the wiring and pretty much all electrical/electronic components have to deal with.\n\nTo get watts, you multiply instantaneous voltage ($V$) with instantaneous current ($I$) for every sample, then average those results. This is the energy that is actually transferred.\n\nNow to compare the two measurements:\nIf voltage and current are both sinewaves, then $\\text{watts} = \\text{volt-amps} \\times \\cos(\\phi)$, where $\\phi$ is the phase angle between voltage and current. It's pretty easy to see from this that if they're both sine waves and if they're in phase ($\\phi = 0$), then $\\text{watts} = \\text{volt-amps}$.\n\nHowever, if you're NOT dealing with sine waves, the $\\cos(\\phi)$ relationship no longer applies! So you have to go the long way around and actually do the measurements as described here.\n\nHow might that happen? Easy. DC power supplies. They're everywhere, including battery chargers, and the vast majority of them only draw current at the peak of the AC voltage waveform because that's the only time that their filter capacitors are otherwise less than the input voltage. So they draw a big spike of current to recharge the caps, starting just before the voltage peak and ending right at the voltage peak, and then they draw nothing until the next peak.\n\nAnd of course there's an exception to this rule also, and that is Power Factor Correction (PFC). DC power supplies with PFC are specialized switching power supplies that end up producing more DC voltage than the highest AC peak, and they do it in such a way that their input current follows the input voltage almost exactly. Of course, this is only an approximation, but the goal is to get a close enough match that the $\\cos(\\phi)$ shortcut becomes acceptably close to accurate, with $\\phi \\approx 0$. Then, given this high voltage DC, a secondary switching supply produces what is actually required by the circuit being powered."
}
] |
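A numeric sketch of the distinction above, assuming NumPy; the amplitudes and phase angle are arbitrary. Volt-amps multiply the RMS values, watts average the instantaneous product, and for sine waves the ratio comes out as cos(φ):

```
import numpy as np

t = np.linspace(0, 1, 100_000, endpoint=False)   # one cycle, normalized time
phi = np.pi / 3                                  # 60 degrees between voltage and current

v = 325 * np.sin(2 * np.pi * t)                  # ~230 V RMS sine wave
i = 10 * np.sin(2 * np.pi * t - phi)             # current lagging the voltage by phi

v_rms = np.sqrt(np.mean(v ** 2))
i_rms = np.sqrt(np.mean(i ** 2))

volt_amps = v_rms * i_rms                        # apparent power (VA)
watts = np.mean(v * i)                           # real power (W)

print(watts / volt_amps, np.cos(phi))            # both ~0.5: watts = volt-amps * cos(phi)
```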
[
{
"from": "human",
"value": "Elon Musk and his partner want to name their child X Æ A-12.\nIs that name allowed in California, US?"
},
{
"from": "gpt",
"value": "We could start with what the statutes say (HSC 102425)\n\n (a) The certificate of live birth for a live birth occurring on or after January 1, 2016, shall contain those items necessary to establish the fact of the birth and shall contain only the following information; (1) Full name and sex of the child.\n\nIt says nothing about the form of that name. Therefore, any prohibition of the letter Æ (or æ) etc. would have to come from the administrative interpretation of California Department of Public Health, Vital Records. There is a long handbook, which on p. 112 states the rule regarding child names. The rules note that\n\n The form must be completed using the 26 alphabetical characters of the English language with appropriate punctuation, if necessary. No pictographs (), ideograms (), diacritical marks (è, ñ, ē, ç), or extraneous entries are allowed.\n\nSo the short answer is, unless you feel like making a court case out of the matter and you have a lot of money, this name will not be allowed. The rule might be challenged in court as exceeding statutory authority, and might well be deemed to discriminate w.r.t. race and national origin. The rule could be defended on grounds of necessity, if we presume that the department is incapable of recording information other than the 26 letters and \"appropriate punctuation\" (undefined, presumably only apostrophe). It's not that in principle Unicode doesn't exist, it's that their system may not be capable of dealing with it (numerous problems would arise from the non-unique mapping from visual representation to Unicode number). There does not seem to be a court ruling on the matter."
}
] |
[
{
"from": "human",
"value": "In this Creation magazine reprint of a 1994 article titled Exploding stars point to a young universe, Young-Earth Creationist, Jonathan Sarfati argues that the scarcity of Supernova remnants (SNRs) in the sky suggests the Milky Way galaxy is less than billions of years old.\n\nOn average, a galaxy like our own, the Milky Way, should produce one supernova every 25 years.\n[...]\nAs can be readily seen above, a young universe model fits the data of the low number of observed SNRs. If the universe was really billions of years old, there are 7000 missing SNRs in our galaxy.\n\nDoes astronomy predict a Milky Way supernova every 25 years? Are there missing SNRs that undermine these predictions?"
},
{
"from": "gpt",
"value": "There are many reasons why this is wrong. The first one is the assumption of 1 supernova per 25 years. That was the value arrived at in Gustav Tammann's article published in 1970. Others got values up to 100 years (Tammann himself changed his value later). All these values are really only good guesses.\n\nWorse errors are made in the percentage of remnants that should be visible. To start with, only a small percentage of them are visible; the others are obscured by dust in the galaxy. This does not happen to the same extent to the supernovas themselves, as they are intrinsically very much brighter. \n\nSarfati also uses an old paper to come up with the number of remnants in the Large Magellanic Cloud (a satellite galaxy to the Milky Way). However, that figure was considered to be wrong (and much smaller) in later studies. Older remnants are very difficult to distinguish against a background of dust in the galaxy. Many of them disappear completely after about 10,000 years or so.\n\nIt should be noted that, using Sarfati's own reasoning, we should not be able to see any remnants older than 6000 years. Instead, we know of remnants (e.g. G166.2 + 2.5) that are over 100,000 years old. How did those originate in his universe? The same goes for the distance to supernovas. The distance to SN 1987A has been trigonometrically measured at 167,000 light years (the paper says 51,200 parsecs). In other words, its light took more than 30 times longer to reach us than the creationist universe has supposedly existed. \n\nRemember also that a supernova typically becomes either a pulsar or a black hole, both of which are very hard to observe. A black hole can only be \"seen\" through its effects on other matter, and a pulsar is mainly visible if the solar system happens to be in line with its sweeping radio beam (with some pulsars the star itself has also been seen). Hence, all we can see of them long after the event is an expanding cloud of gas that gets dimmer over the millennia.\n\nThere are several other reasons which are covered in a very good article on TalkOrigins. That article has links to all the original scientific papers referred to, so you can check the validity yourself."
}
] |
[
{
"from": "human",
"value": "Why is there so much technical detail of whaling included in Moby-Dick?"
},
{
"from": "gpt",
"value": "There are multiple purposes behind the novel structure of Moby-Dick.\n\nIn the first instance the author was himself a whaleman, having spent 18 months as an ordinary seaman aboard the whaler Acushnet in 1841–42. In his experience the literature of the day, such as Miriam \n\nCoffin or The Whale-Fisherman by Joseph Hart, did not portray whaling accurately but rather sensationalised it. He wished to provide his readers with a more realistic account.\n\nIn doing so, he also gave the reader the information necessary to better understand and appreciate the passages of whaling described in the novel. With the reader forewarned, he is able to keep the narrative passages tense and exciting. It is interesting that, early in the novel, the Pequod meets virutally no whales and that the chases become longer and more involved as the story progresses. This may be because Melville felt he needed to give the reader more and more technical information before they could fully appreicate these encounters, but understood the science would need to be spaced out among the narrative.\n\nIn describing it to the reader, the science also serves another useful narrative function. It indicates that on whaling voyages, long periods of time pass when nothing much happens at all. By padding out of the story with scientific digressions serves as a useful narrative metaphor for waiting.\n\nThe level of detail provided also mirrors one of the novel's central themes: obsession. Although Ishmael is the narrator and Ahab the obvious obsessive, the amount of science indicates that Ishmael is also an obsessive. His obsession is the business of whaling itself.\n\nFinally, Melville made good use of these passages in serving as metaphors for other themes of the novel where he felt it appropriate. For example in the chapter The Line, which describes how the whale line is stored and used in the boat, it ends by using the line as a metaphor.\n\n \"All men live enveloped in whale-lines. All are born with halters round their necks; but it is only when caught in the swift, sudden turn of death, that mortals realize the silent, subtle, ever-present perils of life.\"\n"
}
] |
[
{
"from": "human",
"value": "Why are we building larger land-based telescopes instead of launching larger ones into space?"
},
{
"from": "gpt",
"value": "Because it's cheaper.\n(1) With adaptive optics you can get 0.1 arc second resolution on the ground (admittedly only on a mountain top with particularly good air flow, but still!). This eliminates one of the major advantages of space until you get above several meters mirror diameter.\n(2) Rocket fairings are the shrouds which protect payloads during the supersonic atmospherics speeds reached during launch. A 5 meter fairing is about the largest that can be flown, which limits the size of the one-piece mirrors which can be launched. (The Dreaded Webb Telescope's mirror is in pieces which will assemble themselves in space -- a very scary and very expensive piece of design.)\n(3) Servicing a telescope on the top of Mauna Kea or in the high Chilean Andes is a difficult and expensive process. Servicing a telescope in orbit makes that look like small change. (Cost comparable to the cost of building a new giant scope on Earth.) And in-orbit servicing can't even be done with current technology except in low earth orbit.\n(4) While high resolution is one frontier in astronomy, going deep is another, and going deep requires big mirrors. A 30 meter mirror on Earth gathers much more light than a 5 meter mirror in space. The giant terrestrial telescopes simply do a better job of being light buckets for spectroscopy than anything we can yet put into space.\n\nThe bottom line is that with the development of adaptive optics, space-based telescopes of currently buildable and launchable size lost their main advantage over ground-based telescopes. And since they're 10x to 100x the cost, they are simply not worth building for many purposes.\n\nSpace based telescopes still hold a significant edge in parts of the spectrum blocked by the atmosphere such as UV and IR (Webb), and for certain tasks involving long-term high accuracy photometry (Kepler) and astrometry (Gaia). But for general purpose use, the balance seems firmly on the ground's side for large telescopes.\n\nThis will change if space flight becomes cheaper -- the SpaceX BFR, for example, with its 9 meter fairing and dramatically lower launch costs, offers great hope for space telescopes."
}
] |
[
{
"from": "human",
"value": "Why can we see the dust particles in a narrow beam of light (and not in an all lighted area)?"
},
{
"from": "gpt",
"value": "Your inability to see the dust until you narrow the slit has nothing to do with the narrowness of the beam but instead the dynamic range of light that your eye can see at one time.\nA bit of searching turns up reports of a contrast ratio for you eye at one time as between 100:1 and 1000:1. This means if you're in a room with a range of brightness greater than about 100 to 1 the brightest things will all be washed out as white and the darkest things will all be essentially black. This is obvious in photos that are \"backlit\" like this one:\n\nThese horses aren't black but because the ratio of the bright light to the dark horses exceeds the dynamic range of the camera the sky is washed out white and the horses are in silhouette.\nYour eye can adjust over time to a huge range but it can't utilize the whole range all at once.\nIn the case of dust reflecting light, if you allow a lot of light into the room the relative brightness between the small amount of light the dust is reflecting and the rest of the illuminated room prevent you from seeing the dust.\n\nThis is fundamental to signal processing. Why can't you hear a whisper in a noisy room? The noise of the crowd obscures the whisper. The difference between the signal you're trying to pick up and the background noise is called the signal-to-noise ratio. In the case of dust, the light let into the room is scattered and reflected in the room and causes the room to be illuminated. This is the noise that obscures the signal from light reflected off of the dust."
}
] |
[
{
"from": "human",
"value": "I can not initialize a List as in the following code:\n```List supplierNames = new List();\nsupplierNames.add(\"sup1\");\nsupplierNames.add(\"sup2\");\nsupplierNames.add(\"sup3\");\nSystem.out.println(supplierNames.get(1));\n```\nI face the following error: \n\n Cannot instantiate the type ```List```\n\nHow can I instantiate ```List```?"
},
{
"from": "gpt",
"value": "If you check the API for ```List``` you'll notice it says:\n```Interface List\n```\nBeing an ```interface``` means it cannot be instantiated (no ```new List()``` is possible).\nIf you check that link, you'll find some ```class```es that implement ```List```:\n\nAll Known Implementing Classes:\n```AbstractList```, ```AbstractSequentialList```, ```ArrayList```, ```AttributeList```, ```CopyOnWriteArrayList```, ```LinkedList```, ```RoleList```, ```RoleUnresolvedList```, ```Stack```, ```Vector```\n\nSome of those can be instantiated (the ones that are not defined as ```abstract class```). Use their links to know more about them, I.E: to know which fits better your needs.\nThe 3 most commonly used ones probably are:\n``` List supplierNames1 = new ArrayList();\n List supplierNames2 = new LinkedList();\n List supplierNames3 = new Vector();\n```\n\nBonus:\nYou can also instantiate it with values, in an easier way, using the ```Arrays``` ```class```, as follows:\n```List supplierNames = Arrays.asList("sup1", "sup2", "sup3");\nSystem.out.println(supplierNames.get(1));\n```\nBut note you are not allowed to add more elements to that list, as it's ```fixed-size```."
}
] |
[
{
"from": "human",
"value": "What is the difference between ```warnings.warn()``` and ```logging.warn()``` in terms of what they do and how they should be used?"
},
{
"from": "gpt",
"value": "Here are the differences:\n\n* ```logging.warning``` just logs something at the ```WARNING``` level, in the same way that ```logging.info``` logs at the ```INFO``` level and ```logging.error``` logs at the ```ERROR``` level. It has no special behaviour.\n\n* ```warnings.warn``` emits a ```Warning```, which may be printed to ```stderr```, ignored completely, or thrown like a normal ```Exception``` (potentially crashing your application) depending upon the precise ```Warning``` subclass emitted and how you've configured your Warnings Filter. By default, warnings will be printed to ```stderr``` or ignored.\n\n* Warnings emitted by ```warnings.warn``` are often useful to know about, but easy to miss (especially if you're running a Python program in a background process and not capturing ```stderr```). For that reason, it can be helpful to have them logged.\n\nTo use them, Python provides a built-in integration between the ```logging``` module and the ```warnings``` module to let you do this; just call ```logging.captureWarnings(True)``` at the start of your script and all warnings emitted by the ```warnings``` module will automatically be logged at level ```WARNING```."
}
] |
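A minimal, runnable Python sketch of the ```logging.captureWarnings``` integration described in the answer above. It uses only the standard library; the message strings are invented for illustration.

```
import logging
import warnings

logging.basicConfig(level=logging.INFO)   # make WARNING-level records visible
logging.captureWarnings(True)             # route warnings.warn() into the 'py.warnings' logger

logging.warning("plain logging.warning() call")        # an ordinary log record at WARNING level
warnings.warn("something looks off", RuntimeWarning)   # now logged instead of written raw to stderr
```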
[
{
"from": "human",
"value": "In Greek mythology, the words \"Titan\" and \"God\" seem to be used interchangeably. For example, Zeus is a God, but Cronus (his father) was a Titan. So what is the difference between a Titan and a God in Greek mythology? "
},
{
"from": "gpt",
"value": "A 'god' is synonymous to a 'deity'; the Titans and Titanesses were gods, they were members of the second order of divine beings - after Gaia and Uranus, and the other primordial deities.\n\nCronus and his fellow Titans comprised the second Ancient Greek pantheon (the first comprising of deities such as Ananke, Gaea, and Ouranos), which the younger generation of gods, including Zeus, would later usurp, as the Titans usurped the primordial gods before them. (For more information on this concept of change in dominant deities, read up on the theory of The Golden Age, particularly that of Hesiod, who also wrote the only surviving account of the Titans in his Theogony.)\n\nAs for why the Titans have their own order name as opposed to Zeus et al who are simply 'gods', there is debate as to the exact etymology of the word 'titan' but this quote from Hesiod's Theogony states:\n\n"But these sons whom he begot himself great Heaven used to call Titans (Strainers) in reproach, for he said that they strained (τιταίνοντας) and did presumptuous a fearful deed, and that vengeance (τίσιν) for it would come afterwards."\nTrans. Evelyn-White, here with side-by-side Greek.\n\nHowever this is not to say Hesiod was correct; he himself was writing long after the supposed Golden Age ended."
}
] |
[
{
"from": "human",
"value": "How do weather models work?"
},
{
"from": "gpt",
"value": "All numerical atmospheric models are built around calculations derived from primitive equations that describe atmospheric flow. Vilhelm Bjerknes discovered the relationships and thereby became the father of numerical weather prediction. Conceptually, the equations can be thought of as describing how a parcel of air would move in relationship to its surroundings. For instance, we learn at a young age that hot air rises. The hydrostatic vertical momentum equation explains why and quantifies under what condictions hot air would stop rising. (As the air rises it expands and cools until it reaches hydrostatic equilibrium.) The other equations consider other types of motion and heat transfer.\n\nUnfortunately, the equations are nonlinear, which means that you can't simply plug in a few numbers and get useful results. Instead, weather models are simulations which divide the atmosphere into three-dimensional grids and calculate how matter and energy will flow from one cube of space into another during discrete time increments. Actual atmospheric flow is continuous, not discrete, so by necessity the models are approximations. Different models make different approximations appropriate to their specific purpose.\n\nNumerical models have been improving over time for several reasons:\n\n* More and better input data,\n* Tighter grids, and\n* Better approximations.\n\nIncreasing computational power has allowed models to use smaller grid boxes. However, the number of computations increases exponentially with the number of boxes and the process suffers diminishing returns. On the input end of things, more and better sensors improve the accuracy of the initial conditions of the model. Synoptic scale and mesoscale models take input from General Circulation Models, which helps set reasonable intial conditions. On the output end, Model Output Statistics do a remarkable job of estimating local weather by comparing the current model state with historical data of times when the model showed similar results. Finally, ensemble models take the output of several models as input and produce a range of possibly outcomes."
}
] |
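The grid-and-time-step idea in the answer above can be illustrated with a toy that has nothing like the physics of a real primitive-equation model: one-dimensional diffusion of a warm air parcel. All numbers (grid size, diffusivity, step count) are arbitrary assumptions chosen only to show discrete boxes exchanging heat over discrete time increments.

```
import numpy as np

n_boxes, alpha, n_steps = 50, 0.2, 200   # arbitrary toy parameters
temp = np.zeros(n_boxes)
temp[20:30] = 30.0                       # a warm "parcel" in the middle of the grid

for _ in range(n_steps):
    # explicit finite-difference update: each box exchanges heat with its neighbours
    temp += alpha * (np.roll(temp, -1) - 2 * temp + np.roll(temp, 1))

print(round(float(temp.max()), 2))       # the warm anomaly has spread out and flattened
```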
[
{
"from": "human",
"value": "I am currently trying to decipher Mazur's Eisenstein ideal paper (not a comment about his clarity, rather about my current abilities). One of the reasons I am doing that is that many people told me that the paper was somehow revolutionary and introduced a new method into number theory. \nCould you explain exactly what subsequent developments did the paper bring, what ideas in the paper were considered more-or-less original (at the time it was published), and exactly what difficulties did these ideas resolve that people failed to resolve before the paper was published (if any)?"
},
{
"from": "gpt",
"value": "First, Mazur's paper is arguably the first paper where the new ideas (and language) of the Grothendieck revolution in algebraic geometry were fully embraced and crucially used in pure number theory. Here are several notable examples: Mazur makes crucial use of the theory of finite flat group schemes to understand the behavior of the $p$-adic Tate modules of Jacobians at the prime $p$. He studies modular forms of level one over finite rings (which need not lift to characteristic zero when the residue characteristic is $2$ or $3$). He proves theorems about mod-$p$ modular forms using what are essentially comparison theorems between etale cohomology and de Rham cohomology, and many more examples. The proof of the main theorem ($\\S5$, starting at page 156) is itself a very modern proof which fundamentally uses the viewpoint of $X_0(N)$ as a scheme.\n\nSecond, there are many beautiful ideas which have their original in this paper: it contains many of the first innovative ideas for studying $2$-dimensional (and beyond) Galois representations, including the link between geometric properties (multiplicity one) and arithmetic properties, geometric conceptions for studying congruences between Galois representations, understanding the importance of the finite-flat property of group schemes, and the identification of the Gorenstein property. There is a theoretical $p$-descent on the Eisenstein quotient when previously descents were almost all explicit $2$-descents with specific equations. It introduces the winding quotient, and so on.\n\nThird, while it is a dense paper, it is dense in the best possible way: many of the small diversions could have made interesting papers on their own. Indeed, even close readers of the paper today can find connections between Mazur's asides and cutting edge mathematics. When Mazur raises a question in the text, it is almost invariably very interesting. One particular (great) habit that Mazur has is thinking about various isomorphisms and by pinning down various canonical choices identifies refined invariants. To take a random example, consider his exploration of the Shimura subgroup at the end of section 11. He finishes with a question which to a casual reader may as well be a throw-away remark. But this question was first solved by Merel, and more recently generalized in some very nice work of Emmanuel Lecouturier. Lecouturier's ideas then played an important role in the work of Michael Harris and Akshay Venkatesh. Again, one could give many more such examples of this. Very few papers have the richness of footnotes and asides that this paper does. Never forget that one of the hardest things in mathematics is coming up with interesting questions and observations, and this paper contains many great ones - it is bursting with the ideas of a truly creative mathematician.\n\nFinally, the result itself is amazing, and (pretty much) remains the only method available for proving the main theorem (the second proof due to Mazur is very related to this one). To give a sense of how great the theorem is, note that if $E$ is a semistable elliptic curve, then either $E$ is isogenous to a curve with a $p$-torsion point, or $E[p]$ is absolutely irreducible. This result (added for clarity: explicitly, Mazur's Theorem that $E/\\mathbf{Q}$ doesn't have a $p$-torsion point for $p > 7$) is crucially used in Wiles' proof of Fermat. 
One could certainly argue that without this paper (and how it transformed algebraic number theory) we would not have had Wiles' proof of Fermat, but it's even literally true that Mazur's theorem was (and remains so today, over 40 years later) an essential step in any proof of Fermat."
}
] |
[
{
"from": "human",
"value": "Tracing XML request/responses with JAX-WS"
},
{
"from": "gpt",
"value": "Here is the solution in raw code:\n```Endpoint ep = Endpoint.create(new WebserviceImpl());\nList handlerChain = ep.getBinding().getHandlerChain();\nhandlerChain.add(new SOAPLoggingHandler());\nep.getBinding().setHandlerChain(handlerChain);\nep.publish(publishURL);\n```\nWhere SOAPLoggingHandler is (ripped from linked examples):\n```package com.myfirm.util.logging.ws;\nimport java.io.PrintStream;\nimport java.util.Map;\nimport java.util.Set;\nimport javax.xml.namespace.QName;\nimport javax.xml.soap.SOAPMessage;\nimport javax.xml.ws.handler.MessageContext;\nimport javax.xml.ws.handler.soap.SOAPHandler;\nimport javax.xml.ws.handler.soap.SOAPMessageContext;\n/*\n * This simple SOAPHandler will output the contents of incoming\n * and outgoing messages.\n */\npublic class SOAPLoggingHandler implements SOAPHandler {\n // change this to redirect output if desired\n private static PrintStream out = System.out;\n public Set getHeaders() {\n return null;\n }\n public boolean handleMessage(SOAPMessageContext smc) {\n logToSystemOut(smc);\n return true;\n }\n public boolean handleFault(SOAPMessageContext smc) {\n logToSystemOut(smc);\n return true;\n }\n // nothing to clean up\n public void close(MessageContext messageContext) {\n }\n /*\n * Check the MESSAGE_OUTBOUND_PROPERTY in the context\n * to see if this is an outgoing or incoming message.\n * Write a brief message to the print stream and\n * output the message. The writeTo() method can throw\n * SOAPException or IOException\n */\n private void logToSystemOut(SOAPMessageContext smc) {\n Boolean outboundProperty = (Boolean)\n smc.get (MessageContext.MESSAGE_OUTBOUND_PROPERTY);\n if (outboundProperty.booleanValue()) {\n out.println(\"\\nOutbound message:\");\n } else {\n out.println(\"\\nInbound message:\");\n }\n SOAPMessage message = smc.getMessage();\n try {\n message.writeTo(out);\n out.println(\"\"); // just to add a newline\n } catch (Exception e) {\n out.println(\"Exception in handler: \" + e);\n }\n }\n}\n```"
}
] |
[
{
"from": "human",
"value": "In Vim, how do I insert characters at the beginning of each line in a selection?\nFor instance, I want to comment out a block of code by prepending ```//``` at the beginning of each line assuming my language's comment system doesn't allow block commenting like ```/* */```. How would I do this?"
},
{
"from": "gpt",
"value": "The general pattern for search and replace is:\n```:s/search/replace/\n```\nReplaces the first occurrence of 'search' with 'replace' for current line\n```:s/search/replace/g\n```\nReplaces all occurrences of 'search' with 'replace' for current line, 'g' is short for 'global'\nThis command will replace each occurrence of 'search' with 'replace' for the current line only. The % is used to search over the whole file. To confirm each replacement interactively append a 'c' for confirm:\n```:%s/search/replace/c\n```\nInteractive confirm replacing 'search' with 'replace' for the entire file\nInstead of the % character you can use a line number range (note that the '^' character is a special search character for the start of line):\n```:14,20s/^/#/\n```\nInserts a '#' character at the start of lines 14-20\nIf you want to use another comment character (like //) then change your command delimiter:\n```:14,20s!^!//!\n```\nInserts a '//' character sequence at the start of lines 14-20\nOr you can always just escape the // characters like:\n```:14,20s/^/\\/\\//\n```\nInserts a '//' character sequence at the start of lines 14-20\nIf you are not seeing line numbers in your editor, simply type the following\n```:set nu\n```\nHope this helps!"
}
] |
[
{
"from": "human",
"value": "Why doesn't the nuclear fusion in a star make it explode?"
},
{
"from": "gpt",
"value": "The fusion that occurs in the core of the Sun occurs in nothing like the conditions you might be thinking of in a bomb, or a fusion reactor. In particular, it occurs at much lower temperatures and at a much lower rate. A cubic metre of material in the solar core is only releasing around 250 W of power by fusion.\nThe fusion rate is set by the temperature (and to a lesser extent, density) of the core. This in turn is set by the need for a pressure gradient to balance the weight of material pressing down on it from above. At 15 million kelvin (the core temperature, which is much lower than the temperatures in nuclear bombs or fusion reactors), the average proton has a lifetime of several billion years before being converted (with three others) into a helium nucleus. There are two reasons this is slow. First, you have to get protons, which repel each other electromagnetically, close enough together to feel the strong nuclear force. This is why high temperatures are needed. Second, because the diproton is unstable, one of the protons needs to change into a neutron via a weak force interaction, whilst it is in the unstable diproton state, to form a deuterium nucleus. This is just inherently unlikely and means the overall reaction chain to helium is very slow.\nThe reason there is no bomb-like explosion is because there is no problem in shifting 250 W per cubic metre away from the core, in the same way that a compost heap, which generates about the same power density, does not spontaneously explode. In the case of a star any additional heat goes into more radiation that diffuses away and in work done in expanding the star. As a result, the temperature of the core is stable. Ultimately, any additional energy emerges as sunlight at the solar photosphere.\nIf for some reason, the opacity to radiation in the core increased, then the temperature would rise and more energy would be generated by fusion. This is exactly what happens in the core as more hydrogen is turned into helium; the core temperature and luminosity do rise, but slowly, on timescales of billions of years."
}
] |
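To make the quoted 250 W per cubic metre figure concrete, here is a back-of-envelope comparison in Python. The human-body numbers are rough assumptions of my own, not figures from the answer.

```
solar_core_w_per_m3 = 250        # figure quoted in the answer above

resting_metabolism_w = 100       # assumption: typical resting human heat output
body_volume_m3 = 0.07            # assumption: roughly 70 litres

human_w_per_m3 = resting_metabolism_w / body_volume_m3
print(f"Solar core: {solar_core_w_per_m3} W/m^3")
print(f"Human body: ~{human_w_per_m3:.0f} W/m^3")   # several times higher than the core
```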
[
{
"from": "human",
"value": "Does hot water freeze faster than cold water?"
},
{
"from": "gpt",
"value": "In certain settings, cold water freezers slower than hot water. This is called the Mpemba effect:\n\nHot water can in fact freeze faster than cold water for a wide range of experimental conditions. This phenomenon is extremely counterintuitive, and surprising even to most scientists, but it is in fact real. It has been seen and studied in numerous experiments. While this phenomenon has been known for centuries, and was described by Aristotle, Bacon, and Descartes, it was not introduced to the modern scientific community until 1969, by a Tanzanian high school student named Mpemba.\n\nHere are some suggested reasons:\n\n1. Evaporation — As the initially warmer water cools to the initial temperature of the initially cooler water, it may lose significant amounts of water to evaporation. The reduced mass will make it easier for the water to cool and freeze. Then the initially warmer water can freeze before the initially cooler water, but will make less ice.\n2. Dissolved Gasses — Hot water can hold less dissolved gas than cold water, and large amounts of gas escape upon boiling. So the initially warmer water may have less dissolved gas than the initially cooler water."
}
] |
[
{
"from": "human",
"value": "O que é Reflection. Por que é útil?\n* É recomendável usar em projetos?\n* Como usar?\n* Em quais situações Reflection pode ser usado?"
},
{
"from": "gpt",
"value": "Reflection é um termo usado para indicar a capacidade de obter metadados sobre o próprio programa compilado, em português pode-se referir a isso como reflexão mesmo.\nComo assim, metadados?\nObter informações sobre os tipos de dados, de forma dinâmica, é muito importante para códigos que atuam sobre objetos que não se conhece em tempo de design. É muito usado em bibliotecas que se propõem a lidar com objetos de tipos que elas desconhecem em sua concepção.\nO ASP.NET MVC por exemplo, só é um projeto possível por causa da reflexão. De que forma ele poderia chamar os métodos dos controllers, que ele nem conhece?\nVisto, isso é preciso dizer que reflexão não é sempre rápido, é necessário verificar permissões, o nível de segurança em que o código está rodando, entre outras. Por isso, quando se obtém um resultado qualquer vindo através da reflexão, geralmente essas informações são colocadas em cache, ou seja, salvas em memória.\nÉ recomendável?\nDepende do uso:\n\n* sim, é recomendável: para criar comportamentos dinâmicos usando atributos, a única opção é reflexão\n* sim, é recomendável: para chamar método que não se conhece durante a concepção do sistema, também é a única opção\n* não, é abuso fazer isso: chamar um método, de uma classe conhecida em design-time usando reflexão... é absurdo\n\nExistem tantas formas de usar reflexão que eu poderia continuar essa lista por muito tempo. Então mais vale o seguinte conselho: bom senso.\nComo usar?\nExistem alguns vetores de uso de reflexão que eu conheço. Para demonstrá-los, vamos considerar primeiramente a seguinte classe:\n```public class Pessoa\n{\n public string Nome { get; set; }\n}\n```\n\n* Expressões Lambda: esse é um vetor de reflexão, pois é possível usar expressões lambda para chegar aos tipos, métodos entre outros objetos refletidos, que tenham sido usados na expressão. É comumente utilizado para tornar fortemente tipada a obtenção de objetos refletidos de membros de uma classe, antes disso só era possível obter esses objetos indicando uma string com o nome do método.\n```Expression> expr = p => p.Nome;\nvar lambda = (LambdaExpression)expr;\nvar cast = lambda.Body as UnaryExpression;\nvar member = (cast != null ? cast.Operand : lambda.Body) as MemberExpression;\nvar propInfo = member.Member as PropertyInfo;\nConsole.WriteLine(propInfo.Name); // \"Nome\"\n```\n* typeof(Tipo): essa é uma das formas mais comuns de obter informações refletidas do sistema. Serve para obter informações sobre o tipo indicado diretamente.\n```var tipo = typeof(Pessoa);\nConsole.WriteLine(tipo.Name); // \"Pessoa\"\n```\n* obj.GetType(): tão comum quando ```typeof```, mas ao invés de se referir a um tipo diretamente, refere-se ao tipo do objeto em questão: mas tem um detalhe, não é do tipo declarado da variável, mas sim do objeto em si.\n```var p = new Pessoa();\nvar tipo = p.GetType();\nConsole.WriteLine(tipo.IsSealed); // \"false\"\n```\n* Assembly: é usada para obter tipos em larga escala: por exemplo, para escanear todos os tipos existentes em um assembly, ou então em todos os assemblies carregados.\n```// localizando um tipo, dentre todos os tipos carregados\nvar todosOsTiposCarregados = AppDomain.CurrentDomain\n .GetAssemblies()\n .SelectMany(a => a.GetTypes())\n .Where(t => t.Name == \"Pessoa\");\n```\n\nE o namespace Reflection.Emit?\nNão está claro na minha mente se considero isso reflexão ou não. 
Mas de qualquer forma, é necessário usar reflexão para gerar código dinamicamente, então tem uma conexão.\nComo disse, esse namespace contém classes usadas para gerar código dinamicamente... e então compilá-los. Lembra do que eu falei sobre cache de reflexão... compilar métodos é um bom alvo de cache, pois depois de compilado os futuros usos serão muitíssimo rápidos."
}
] |
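The answer above is about .NET, but the core idea of reflection, inspecting types and members at runtime, translates directly. Here is a small Python sketch of the same idea (this is not the .NET API, just an analogous illustration; the class and values are invented).

```
import inspect

class Pessoa:
    """Python stand-in for the Pessoa class used in the answer."""
    def __init__(self, nome: str):
        self.nome = nome

p = Pessoa("Ana")

print(type(p).__name__)            # 'Pessoa' -- like obj.GetType()
print(getattr(p, "nome"))          # 'Ana'    -- member lookup by name, resolved at runtime
print([name for name, _ in inspect.getmembers(p) if not name.startswith("_")])
# ['nome'] -- scanning members, loosely analogous to walking an Assembly's types
```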
[
{
"from": "human",
"value": "What is the difference between minimum and infimum?"
},
{
"from": "gpt",
"value": "The minimum is attained, the infimum isn't necessarily. \nExample.\nLet $f(x) = \\frac{1}{x}$. Then $f$ has no minimum value on the interval $(0,\\infty)$. The minimum is the smallest element in the set. That is\n$$\n\\min\\{f(x)\\mid x\\in (0,\\infty)\\}\n$$\ndoesn't exist because there is not smallest number in the set.\nAnother example is the minimum of the set $S = (0,1) = \\{x\\mid 0<x<1\\}$. Here again there isn't a smallest number\n$$\n\\min\\{x\\mid 0<x<1\\}\n$$\ndoesn't exist.\nThe infimum of a set $S$ is defined as the greatest number that is less than or equal to all elements of S (from Wikipedia). The infimum is also sometimes called the greatest lower bound.\nIt is a fact that every non empty set (bounded below) of real numbers has an infimum. But, as we saw, not every real set has a minimum.\nSo in the example\n$$\n\\inf\\{f(x)\\mid x\\in (0,\\infty)\\} = 0.\n$$\nNote that the infimum and the minimum can be the same. Consider for example $S = \\{1,2,3,\\dots\\}$. Then the infimum and minimum is both $1$.\nConsider this other example. If $f$ is a continuous function on a closed interval $[a,b]$, then it is a fact that $f$ attains a minimum over that interval. So here again\n$$\n\\inf\\{f(x)\\mid x\\in [a,b]\\} = \\min\\{f(x)\\mid x\\in [a,b]\\}.\n$$"
}
] |
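For readers who want the definition above in symbols, here is the standard epsilon-characterization of the infimum (not spelled out in the answer, but equivalent to "greatest lower bound"), written as a LaTeX snippet.

```
\[
m = \inf S \iff
\begin{cases}
\forall x \in S:\; m \le x, & \text{($m$ is a lower bound)}\\
\forall \varepsilon > 0\ \exists x \in S:\; x < m + \varepsilon, & \text{(no larger number is a lower bound)}
\end{cases}
\]
% For S = { 1/x : x in (0, infinity) }: 0 <= 1/x for all x > 0 gives the first line;
% given eps > 0, any x > 1/eps yields 1/x < eps, giving the second.
% Hence inf S = 0, while min S does not exist because 0 is not an element of S.
```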
[
{
"from": "human",
"value": "I had a Nespresso Vertuo Next machine. It stopped working properly and during the troubleshooting video call, the Nespresso support agent said that the machines should not be connected to a GFCI outlet because they can potentially damage the machine. As part of our home inspection when we purchased the house, it was recommended to install such outlets anywhere that water is common, including the kitchen. As such, all the outlets in our kitchen are GFCI outlets.\nThis call with Nespresso was the first time I'd ever seen someone claim that GFCI outlets can potentially damage coffee machines.\nCan they damage Nespresso machines? If so, can they damage other coffee machines (I also have a Baratza grinder and a Bonavita drip machine I usually hook into the same outlet)? They sent us a replacement and now I am questioning where to put it."
},
{
"from": "gpt",
"value": "\n> the Nespresso support agent said that the machines should not be connected to a GFCI outlet\n\nWhere do they think Nespresso machines are used? Bedrooms? No, they're kitchen appliances and GFCIs are widely used in kitchens. GFCIs are code requirements and have been for ~20 years. If they are unable to make a GFCI-compatible machine, then their machine is unfit for purpose, and they have breached their implied warranty. Also, the things use water, for Pete's sake.\n\nThat said, machines do break. On a modern "mass produced, not expected to be repaired" machine, that means it is "at end-of-life".\n\n> This call with Nespresso was the first time I'd ever seen someone claim that GFCI outlets can potentially damage coffee machines.\n\nLOL no. Such a claim is both nonsensical and inexcusable, and is certainly not coming from their technical department, nor has it been reviewed by their legal team, nor will you ever get them to put that in writing. Most likely it is because companies put tremendous pressure on customer service agents to have a high rate of good ratings, "close it in 1 call" and "avoid shipping replacement units". A GFCI-trip call is a worst-case scenario for an agent, and so they're going to say whatever they need to to get a satisfactory 1-call close. That is about their only option.\n\nIn a rare case, the GFCI might be end-of-life and not be working properly, but that is easily tested by plugging it into the next outlet over that is on a different GFCI, since kitchens generally have 2 independent circuits.\n\nNow, other things can damage a coffee maker. Notably a "Lost Neutral", where the neutral lead from the power company breaks, causing neutral in the house to wander - the two 120V "halves of the service" still add up to 240V but they are lopsided, with one side higher than 120V. That is particularly hard on resistive heat appliances, since their heat increases by the square of voltage increase. Lost neutrals can be pernicious and difficult to detect, so it can go weeks or months without being noticed. It's not like service panels have voltmeters on them."
}
] |
[
{
"from": "human",
"value": "I have extremely bad posture, what can I do?"
},
{
"from": "gpt",
"value": "You can do stretches, strengthening and breathing exercises to improve your posture. But you also have to become aware of your posture and find a way to interrupt prolonged sitting intervals with brief breaks. A good ergonomic chair is also helpful.\n\n## Musculo-Skeletal Effects of Poor Sitting Posture\n\nPoor sitting posture can create muscle imbalances over time.\n\n* In general, your front stuctures shorten, with muscles like your hip flexors and pectorals tightening up - pulling you into a ball. Your crunched sitting posture caves your lower ribs down towards your abdomen which prevents the diaphragm from expanding freely. \n* At the same time the muscles of your back stretch out and weaken, especially in the range where they should be working to maintain your posture.\n\nAs this posture persists over time, you joints begin to lose normal range of motion as well making it more difficult to assume a good posture.\n\n\n## Correcting Slumped Sitting Posture\nTailor an exercise program to stretch tightened muscles and strengthen weakened muscles. Additionally, you need a method to become aware of your posture and correct it while you are sitting. This is difficult because as you say, your attention is on your work. Exercise programs like Yoga, Tai-Chi and Pilates are good because they all address and make you very aware of your posture, joint alignments, flexibility, core control and breathing.\n\nUse Specific Exercises to Correct Muscle Imbalances:\n\n* Back, Upper Back and Scapular muscles: Back Extensions strengthen your paraspinals. Use different arm positions (Y, T, W, L) to target your lower traps, mid traps, rhomboids and scapular stabilizors. Learn the feel of retracting your scapulas.\nYou can do these on the floor next to your desk. \nOr if you prefer not to get on the floor, use resistance bands for reverse flys, wide rows, narrow rows and rotations. \nAway from work, you can also strengthen these muscles using weights, cables and body weight exercises like inverted rows, cable rows, bent over rows, reverse flys etc. And squats are a good for strengthening multiple muscles important to posture.\n* Core: Plank, Side Plank, Bird Dog and Bridge will stabilize your trunk and spine.\n* Stretches: - Hip Flexors, Hamstrings, Abs and Pecs. The wall pec stretch stretches the pecs but also contracts the rhomboid and trapezius scapular muscles to help improve the positioning of the shoulder and shoulder blade and is easy to do at work. \n* Sitting Posture and Breathing Awareness - This sitting posture exercise helps make you aware of your sitting alignment and expanding your diaphragm. Practice a few diaphramatic breaths throughout the day along with this sitting exercise to elongate your spine.\n\nCreate your ideal short exercise routine that you can do during short breaks during the day. An exercise ball next to your desk can help you target these muscles in just a few minutes:\n\n* Lie back over the ball and stretch out your front \n* Lie face down over the ball and do the Y, W, T, L exercises.\nAnd consider using the ball as your desk chair for short periods.\n\n## Remembering your Posture throughout the day:\n\nThis is the hard part. As your muscle imbalances begin to resolve and your diaphragmatic breathing improves, you’ll find that the slouched posture becomes less comfortable and you will automatically begin to sit better. Until then, use a timed reminder or try tying your posture corrections to tasks that you do at the computer regularly. 
For example, correct your posture each time you check your email or some other specific link. Turn on your web cam for visual reminders.\n\nKeep at it until you feel the improvement. You'll have less problems going forward if you improve your posture. It either gets better - or it gets worse."
}
] |
[
{
"from": "human",
"value": "How to add margin top to ```class=\"row\"``` elements using twitter bootstrap framework?"
},
{
"from": "gpt",
"value": "Editing or overriding the row in Twitter bootstrap is a bad idea, because this is a core part of the page scaffolding and you will need rows without a top margin.\n\nTo solve this, instead create a new class \"top-buffer\" that adds the standard margin that you need.\n\n```.top-buffer { margin-top:20px; }```\n\nAnd then use it on the row divs where you need a top margin.\n\n```<div class=\"row top-buffer\"> ...```"
}
] |
[
{
"from": "human",
"value": "In FTL: Faster Than Light, what triggers crew experience increases?"
},
{
"from": "gpt",
"value": "###Piloting\nA crew member who is controlling a helm gains one point of experience for each incoming projectile that is dodged during combat. This includes asteroids, so long as you are in combat at the time. (Note that the dodges do not count while you ship is under the effects of +60% evasion from a cloak.)\nThis can be explicitly trained by finding an enemy that cannot break through your shields, and letting him fire at you. Increasing your dodge chance (i.e. more power to engines) will help this occur quicker, but even with the default chance you can max out the skill eventually.\n\n###Engines\nExactly as for piloting, a crew member manning the engines gains one point of XP for each projectile evaded while uncloaked in combat - and so it can be trained in the same way.\n\n###Weapons\nA crew member who is manning the weapons station gains one point of experience for each weapon that is fired. It doesn't matter whether it hits or misses, or whether it can do damage (e.g. a beam weapon fired at a shield).\n(Multi-shot weapons such as burst lasers still count as a single fire order and hence a single XP.)\nThis can be trained by firing non-lethal weapons (e.g. ones that do ion damage) repeatedly against a target that cannot damage you.\n\n###Shields\nA crew member manning the shields station gains one point of experience for each shield \"bubble\" that is brought back up during combat. It doesn't matter why the shields were down, be that direct enemy fire, ion damage, asteroid strikes, or even manually removing and then restoring power to the shield systems.\nThis can be trained by letting a target that cannot break through all of your shields repeatedly take one layer down. (For fastest results, move your pilot away from the helm, or reduce your engines to 0 power. This will reduce your dodge chance to 0%, ensuring that everything hits your shields.)\nNote that you used to be able to manually remove power and restore it in combat, to get points, but this has been patched.\n\n###Repair\nA crew member gains one point of experience for getting the \"finishing blow\" when repairing a (sub)system. Note that hull breaches provide no repair experience.\nThere's no specific way to train this as it requires your systems to be damaged. Though you can focus the XP gains by always having the same person run around and repair everything after a battle.\n\n###Combat\nA crew member gains one point of experience for getting the finishing blow when fighting an enemy crew member or (sub)system.\nThis generally gets trained in \"real\" situations. However, since XP is gained for taking out systems, a few extra XP can be gained from boarding if one tries to take out the enemy's systems before the crew."
}
] |
[
{
"from": "human",
"value": "In Adobe Photoshop I am able to select multiple layers at once with Shift+Click.\nHow can I do that in GIMP?"
},
{
"from": "gpt",
"value": "It's not possible. The only thing you can do is link layers together or merge them. \nInstead, there are several alternative methods of handling multiple layers, each appropriate to a different task. You can chain layers to move or transform them, shift-click objects on different layers in the canvas to align them, or create a new layer based on all visible layers.\n\nInstructions:\n\n* Chaining Layers:\n 1) Make sure the Layers dialog box is open. If it is not, click on the Windows menu and select \"Dockable Dialogs\" > \"Layers.\" You can also hold down the \"Ctrl\" key and press \"L.\"\n 2) Hover your mouse between the Layer Visibility icon, which looks like an eye, and the layer thumbnail. A raised box will appear. Click this box to activate the chain.\n 3) Repeat this process on all of the other layers you wish to chain together. Once these layers are chained, you can move or apply transformation effects to all of them simultaneously.\n\n* Selecting for Alignment\n 4) Make sure the Toolbox is open. If it is not, click on the Windows menu and select \"Toolbox.\" You can also hold down the \"Ctrl\" key and press \"B.\"\n 5) Click on the Alignment Tool in the Toolbox. It looks like an empty rectangle with arrows pointing away from all sides of it.\n 6) Press and hold the \"Shift\" key while clicking on the layers in the canvas. Or click somewhere off the layers and drag a selection rectangle around them. Once you have selected these layers, you can choose an alignment method.\n\n* Merging Visible Layers\n 7) Make sure the Layers dialog box is open. If it is not, click on the Windows menu and select \"Dockable Dialogs\" > \"Layers.\" You can also hold down the \"Ctrl\" key and press \"L.\"\n 8) Click on the Visibility icon on any layer you do not want to include in your new layer. This will render that layer invisible.\n 9) Click on the Layers menu and select \"New from Visible.\"\n 10) Select the option that best describes how you want the layers to be merged from the dialog box that appears, and click \"Merge.\" This will merge all of the visible layers into a new layer but will not affect your original layers."
}
] |
[
{
"from": "human",
"value": "In the python built-in open function, what is the exact difference between the modes ```w```, ```a```, ```w+```, ```a+```, and ```r+```?\nIn particular, the documentation implies that all of these will allow writing to the file, and says that it opens the files for \"appending\", \"writing\", and \"updating\" specifically, but does not define what these terms mean."
},
{
"from": "gpt",
"value": "The opening modes are exactly the same as those for the C standard library function ```fopen()```.\n\nThe BSD ```fopen``` manpage defines them as follows:\n\nThe argument mode points to a string beginning with one of the following sequences (Additional characters may follow these sequences.):\n * ``r'' Open text file for reading. The stream is positioned at the beginning of the file.\n * ``r+'' Open for reading and writing. The stream is positioned at the beginning of the file.\n * ``w'' Truncate file to zero length or create text file for writing. The stream is positioned at the beginning of the file.\n * ``w+'' Open for reading and writing. The file is created if it does not exist, otherwise it is truncated. The stream is positioned at the beginning of the file.\n * ``a'' Open for writing. The file is created if it does not exist. The stream is positioned at the end of the file. Subsequent writes to the file will always end up at the then current end of file, irrespective of any intervening fseek(3) or similar.\n * ``a+'' Open for reading and writing. The file is created if it does not exist. The stream is positioned at the end of the file. Subse- quent writes to the file will always end up at the then current end of file, irrespective of any intervening fseek(3) or similar."
}
] |
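A short, runnable Python demonstration of the mode differences quoted above. The file name is an arbitrary choice in a temporary directory; only the behaviour of the modes comes from the manpage text.

```
import os, tempfile

path = os.path.join(tempfile.gettempdir(), "modes_demo.txt")

with open(path, "w") as f:         # truncate/create, write-only
    f.write("alpha\n")

with open(path, "a") as f:         # append: writes always land at the current end of file
    f.write("beta\n")

with open(path, "r+") as f:        # read + write, positioned at the start, no truncation
    print(f.readline().strip())    # -> alpha
    f.seek(0)
    f.write("ALPHA")               # overwrites in place; the rest of the file is untouched

with open(path, "w+") as f:        # read + write, but the file is truncated first
    print(repr(f.read()))          # -> '' (the file was emptied on open)

os.remove(path)
```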
[
{
"from": "human",
"value": "How can democracy not be the rule of the poor?"
},
{
"from": "gpt",
"value": "Here are some possible explanations why democracy is not necessarily the rule of the poor:\n\n* Voters voting by habit, not by analysis. Many people vote the same way their parents voted. Other vote exactly the opposite way. Neither involves analysis of their objective economic interest.\n* A middle class believing to be rich, a lower middle class believing to be middle class. In many countries, voters tend to vote in the interest of the class they aspire to be, not in the interest of the class they actually are. Part of that is self-deception about inequality and the own position. That is visible in tax policy, for instance, where the middle class votes against taxes on the upper class.\n* An upper class better able to work the political system than the poor. In some countries, attempts to simplify the access to the political system (e.g. through direct democracy or participatory budgeting) have backfired when it comes to the poor. These mechanisms remain complicated enough that it takes some familiarity with the political system to get anything done. Knowing how to organize a grassroots movement, how to write minutes of a meeting, how to collect signatures for a ballot initiative. So instead of better welfare in poor parts of town, there are better playgrounds in rich parts of town."
}
] |
[
{
"from": "human",
"value": "How can I write colored text to the Windows console with C++? That is, how can I write different text with different colors?"
},
{
"from": "gpt",
"value": "ANSI escape color codes are: 0=black 1=blue 2=green and so on to 15=white; colorattribute = foreground + background * 16. \nTo get red text on yellow use 4 + 14*16 = 228, light red on yellow would be 12 + 14*16 = 236.\n\nHere is an example for coloring text in the Windows console:\n```\n HANDLE hConsole = GetStdHandle(STD_OUTPUT_HANDLE);\n // you can loop k higher to see more color choices\n for(int k = 1; k < 255; k++)\n {\n // pick the colorattribute k you want\n SetConsoleTextAttribute(hConsole, k);\n cout << k << \" I want to be nice today!\" << endl;\n }\n```"
}
] |
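Just to make the attribute arithmetic above explicit, here is the same calculation as a tiny Python helper (Python only so the numbers can be checked at a glance; the two examples are the ones given in the answer).

```
def color_attribute(foreground: int, background: int) -> int:
    """Windows console attribute: foreground + background * 16, each in 0..15."""
    return foreground + background * 16

print(color_attribute(4, 14))    # 228 -> red text on a yellow background
print(color_attribute(12, 14))   # 236 -> light red on yellow
```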
[
{
"from": "human",
"value": "What's the best way to create a temporary file in Android? \nCan File.createTempFile be used? The documentation is very vague about it.\nIn particular, it's not clear when temporary files created with ```File.createTempFile``` are deleted, if ever."
},
{
"from": "gpt",
"value": "Here is an example:\n```\nFile outputDir = context.getCacheDir(); // context being the Activity pointer\nFile outputFile = File.createTempFile(\"prefix\", \".extension\", outputDir);\n```\nExplanation:\n1. The `getCacheDir()` method returns the directory on the filesystem where your app can create cache files. \n2. The `createTempFile()` function creates a temporary file in a given directory."
}
] |
[
{
"from": "human",
"value": "I have javascript function like this:\n```function myFunction(number) {\n var x=number;\n ...\n ... more initializations\n //here need to wait until flag==true\n while(flag==false)\n {}\n ...\n ... do something\n}\n```\nThe problem is that the javascript is stuck in the while and stuck my program. so my question is how can I wait in the middle of the function until flag is true without \"busy-wait\"?"
},
{
"from": "gpt",
"value": "Because javascript in a browser is single threaded (except for webworkers which aren't involved here) and one thread of javascript execution runs to completion before another can run, your statement:\n```while(flag==false) {}\n```\nwill simply run forever (or until the browser complains about a non-responsive javascript loop), the page will appear to be hung and no other javascript will ever get a chance to run, thus the flag's value can never be changed.\nFor a little more explanation, Javascript is an event driven language. That means that it runs a piece of Javascript until it returns control back to the interpreter. Then, only when it returns back to the interpreter, Javascript gets the next event from the event queue and runs it.\nAll things like timers and network events run through the event queue. So, when a timer fires or a network request arrives, it does not ever \"interrupt\" the currently running Javascript. Instead, an event gets put in the Javascript event queue and then, when the currently running Javascript finishes, the next event is pulled from the event queue and it gets its turn to run.\nSo, when you do an infinite loop such as ```while(flag==false) {}```, the currently running Javascript never finishes and thus the next event is never pulled from the event queue and thus the value of ```flag``` never gets changed. They key here is that Javascript is not interrupt driven. When a timer fires, it does not interrupt the currently running Javascript, run some other Javascript and then let the currently running Javascript continue. It just gets put in the event queue waiting until the currently running Javascript is done to get its turn to run.\n\nWhat you need to do is rethink how your code works and find a different way to trigger whatever code you want to run when the ```flag``` value changes. Javascript is designed as an event-driven language. So, what you need to do is figure out what events you can register an interest in so you can either listen for the event that might cause the flag to change and you can examine the flag on that event or you can trigger your own event from whatever code might change the flag or you can implement a callback function that whatever code changes that flag can call your callback whenever the piece of code responsible for changing the flag value would change it's value to ```true```, it just calls the callback function and thus your code that wants to run when the flag gets set to ```true``` will get to run at the right time. This is much, much more efficient than trying to use some sort of timer to constantly check the flag value.\n```function codeThatMightChangeFlag(callback) {\n // do a bunch of stuff\n if (condition happens to change flag value) {\n // call the callback to notify other code\n callback();\n }\n}\n```"
}
] |
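The same idea, suspend until a flag is set with no busy-waiting, looks like this in Python's asyncio (used here only because Python is the sketch language in this set; in the browser you would use a callback, Promise, or event exactly as the answer describes). The one-second delay is an invented stand-in for whatever eventually changes the flag.

```
import asyncio

async def waiter(flag: asyncio.Event):
    print("waiting for the flag...")
    await flag.wait()              # suspends this coroutine; nothing spins in a loop
    print("flag is set, continuing")

async def setter(flag: asyncio.Event):
    await asyncio.sleep(1)         # stand-in for the code that eventually changes the flag
    flag.set()                     # wakes every waiter

async def main():
    flag = asyncio.Event()
    await asyncio.gather(waiter(flag), setter(flag))

asyncio.run(main())
```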
[
{
"from": "human",
"value": "According to this famous blog post, the effective transcript length is:\n$\\tilde{l}_i = l_i - \\mu$\nwhere $l_i$ is the length of transcript and $\\mu$ is the average fragment length. However, typically fragment length is about 300bp. What if when the transcript $l_i$ is smaller than 300? How do you compute the effective length in this case?\nA related question: when computing the FPKM of a gene, how to choose a transcript? Do we choose a \"canonical\" transcript (how?) or combine the signals from all transcripts to a gene-level FPKM?"
},
{
"from": "gpt",
"value": "The effective length is $\\tilde{l}_i = l_i - \\mu + 1$ (note the R code at the bottom of Harold's blog post), which in the case of $\\mu \n\nRegarding choosing a particular transcript, ideally one would use a method like salmon or kallisto (or RSEM if you have time to kill). Otherwise, your options are (A) choose the major isoform (if it's known in your tissue and condition) or (B) use a \"union gene model\" (sum the non-redundant exon lengths) or (C) take the median transcript length. None of those three options make much of a difference if you're comparing between samples, though they're all inferior to a salmon/kallisto/etc. metric.\n\nWhy are salmon et al. better methods? They don't use arbitrary metrics that will be the same across samples to determine the feature length. Instead, they use expectation maximization (or similarish, since at least salmon doesn't actually use EM) to quantify individual isoform usage. The effective gene length in a sample is then the average of the transcript lengths after weighting for their relative expression (yes, one should remove $\\mu$ in there). This can then vary between samples, which is quite useful if you have isoform switching between samples/groups in such a way that methods A-C above would miss (think of cases where the switch is to a smaller transcript with higher coverage over it...resulting in the coverage/length in methods A-C to be tamped down)."
}
] |
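A small Python sketch of the quantities discussed above: per-transcript effective lengths and an expression-weighted effective gene length. The transcript lengths, TPM values and fragment-length mean are invented example numbers, not real data, and the simple ```l - mu + 1``` formula is only applied where it is well defined (it is not meaningful for transcripts shorter than the mean fragment length, which is what the question asks about).

```
mu = 300.0                                    # mean fragment length (example value)

def effective_length(l, mu=mu):
    return l - mu + 1                         # undefined/negative when l < mu - 1

transcripts = {                               # isoform -> (length, relative expression in TPM)
    "iso1": (1500.0, 80.0),
    "iso2": (900.0, 20.0),
}

eff = {t: effective_length(l) for t, (l, _) in transcripts.items()}
tpm_total = sum(tpm for _, tpm in transcripts.values())
gene_eff_len = sum(eff[t] * tpm / tpm_total for t, (_, tpm) in transcripts.items())

print(eff)              # {'iso1': 1201.0, 'iso2': 601.0}
print(gene_eff_len)     # 1081.0 -> expression-weighted effective gene length
```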
[
{
"from": "human",
"value": "What is the significance of 1/1/1753 in SQL Server?"
},
{
"from": "gpt",
"value": "The decision to use 1st January 1753 (```1753-01-01```) as the minimum date value for a datetime in SQL Server goes back to its Sybase origins.\nThe significance of the date itself though can be attributed to this man.\n\nPhilip Stanhope, 4th Earl of Chesterfield. Who steered the Calendar (New Style) Act 1750 through the British Parliament. This legislated for the adoption of the Gregorian calendar for Britain and its then colonies. \n\nThere were some missing days (internet archive link) in the British calendar in 1752 when the adjustment was finally made from the Julian calendar. September 3, 1752 to September 13, 1752 were lost.\nKalen Delaney explained the choice this way\n\n> So, with 12 days lost, how can you compute dates? For example, how can you compute the number of days between October 12, 1492, and July 4, 1776? Do you include those missing 12 days? To avoid having to solve this problem, the original Sybase SQL Server developers decided not to allow dates before 1753. You can store earlier dates by using character fields, but you can't use any datetime functions with the earlier dates that you store in character fields.\n\nThe choice of 1753 does seem somewhat anglocentric however as many catholic countries in Europe had been using the calendar for 170 years before the British implementation (originally delayed due to opposition by the church). Conversely many countries did not reform their calendars until much later, 1918 in Russia. Indeed the October Revolution of 1917 started on 7 November under the Gregorian calendar.\nBoth ```datetime``` and the new ```datetime2``` datatype mentioned in Joe's answer do not attempt to account for these local differences and simply use the Gregorian Calendar.\nSo with the greater range of ```datetime2```\n```SELECT CONVERT(VARCHAR, DATEADD(DAY,-5,CAST('1752-09-13' AS DATETIME2)),100)\n```\nReturns\n```Sep 8 1752 12:00AM\n```\nOne final point with the ```datetime2``` data type is that it uses the proleptic Gregorian calendar projected backwards to well before it was actually invented so is of limited use in dealing with historic dates.\nThis contrasts with other Software implementations such as the Java Gregorian Calendar class which defaults to following the Julian Calendar for dates until October 4, 1582 then jumping to October 15, 1582 in the new Gregorian calendar. It correctly handles the Julian model of leap year before that date and the Gregorian model after that date. The cutover date may be changed by the caller by calling ```setGregorianChange()```. \nA fairly entertaining article discussing some more peculiarities with the adoption of the calendar can be found here."
}
] |
[
{
"from": "human",
"value": "I saw this video where someone says that electromagnetic wave is a chain reaction of electric and magnetic fields creating each other so the chain of wave moves forward.\nI wonder where the photon is in this explanation. What is the relation between electromagnetic wave and photon?"
},
{
"from": "gpt",
"value": "Both the wave theory of light and the particle theory of light are approximations to a deeper theory called Quantum Electrodynamics (QED for short). Light is not a wave nor a particle but instead it is an excitation in a quantum field.\n\nQED is a complicated theory, so while it is possible to do calculations directly in QED we often find it simpler to use an approximation. The wave theory of light is often a good approximation when we are looking at how light propagates, and the particle theory of light is often a good approximation when we are looking at how light interacts i.e. exchanges energy with something else.\n\nSo it isn't really possible to answer the question where the photon is in this explanation. In general if you're looking at a system, like the one in the video, where the wave theory is a good description of light you'll find the photon theory to be a poor description of light, and vice versa. The two ways of looking at light are complementary."
}
] |
[
{
"from": "human",
"value": "In The Light Fantastic, after talking about the dimensions of the Pyramid of Tsort, it says \n\n All in all, it was a lot of effort to go through just to sharpen a razor.\n\nWhat's the joke here?"
},
{
"from": "gpt",
"value": "In the real world, there is or was a belief, brought on by the Egyptomania of the early 20th century, that pyramids had special powers—including, specifically, the power to sharpen or maintain the sharpness of razor blades.\n\n Pyramid power refers to the belief that the ancient Egyptian pyramids and objects of similar shape can confer a variety of benefits. Among these assumed properties are the ability to preserve foods, sharpen or maintain the sharpness of razor blades, improve health, function \"as a thought-form incubator\", trigger sexual urges, and cause other effects. Such unverified theories regarding pyramids are collectively known as pyramidology.\n\nCzechoslovakian Karel Drbal even patented a pyramid-shaped device specifically designed for razor blades. Among the specifications: \n\n It is beneficial to leave a new blade in the pyramid one to two weeks before using it. It is essential to place it there immediately after the first shave, and not the old, dull one. But it is possible to use an old one, if it is properly resharpened. The blade placed using the method above is left unobstructed until the next shave. The west edge should always face west. It improves the sharpening effect.\n"
}
] |
[
{
"from": "human",
"value": "After a ```git pull origin master```, I get the following message:\n\nwarning: Pulling without specifying how to reconcile divergent branches is\ndiscouraged. You can squelch this message by running one of the following\ncommands sometime before your next pull:\n git config pull.rebase false # merge (the default strategy)\n git config pull.rebase true # rebase\n git config pull.ff only # fast-forward only\nYou can replace "git config" with "git config --global" to set a default\npreference for all repositories. You can also pass --rebase, --no-rebase,\nor --ff-only on the command line to override the configured default per\ninvocation.\nremote: Enumerating objects: 4, done.\nremote: Counting objects: 100% (4/4), done.\nremote: Compressing objects: 100% (4/4), done.\nremote: Total 4 (delta 0), reused 0 (delta 0), pack-reused 0\nUnpacking objects: 100% (4/4), 51.49 KiB | 850.00 KiB/s, done.\n\nThe pull seems successful, but I am unsure.\nWhat can I do to fix this?"
},
{
"from": "gpt",
"value": "This is a new warning added in Git 2.27:\n\n* \"git pull\" issues a warning message until the pull.rebase configuration variable is explicitly given, which some existing users may find annoying---those who prefer not to rebase need to set the variable to false to squelch the warning.\n\nTo remove the warning, set one of the suggested values to your preferred default behaviour for git pull if you don't specify behaviour on the command line (using --ff, --no-ff, --ff-only, --rebase). In all cases, git will attempt a fast-forward merge if possible. The settings control what happens when there are changes in your branch but not present in the remote branch.\n\n```\n git config pull.rebase false # merge (the default strategy)\n```\nThis is the existing default behaviour; set this for no warning, and no change in behaviour; git will merge the remote branch into your local one.\n```\ngit config pull.rebase true # rebase\n```\n\nHere, git will attempt to rebase your changes on top of the remote branch. \n```\ngit config pull.ff only # fast-forward only\n```\nIf a fast-forward merge is not possible, git will refuse to proceed.\n> Refuse to merge and exit with a non-zero status unless the current HEAD is already up-to-date or the merge can be resolved as a fast-forward"
}
] |
[
{
"from": "human",
"value": "In India, we eat rice using our fingers. Generally in the West, a fork or spoon is used. I have tried eating rice with spoon but I don't feel satisfied with it.\nWe eat dry rice but we mix curry and vegetables with it and then eat it with our hands.\nIs there a way to eat rice with your hands in front of Westerners such that it doesn't appear to be disgusting to them? By disgusting I mean that they shouldn't feel like vomiting or looking away to avoid me. Even though in India we eat with our fingers, many people eat such that their whole palm is covered with food. That indeed looks disgusting. \nI cannot avoid hurting them by following different etiquette, but I certainly want to maintain a degree of cleanliness."
},
{
"from": "gpt",
"value": "First and foremost, you can eat the way you want. When it comes to western culture, the only thing that is generally disliked are noises, eating with the mouth wide open (included speaking with full mouth), and (excessively) dirtying the table. Eating rice with hands will be seen (mostly) as strange, not disgusting.\n\nSecond, context. Are you eating at a formal dinner with your colleagues and a client? Eat with the fork, as everybody else. Are you eating at the work's canteen? Eat the way you prefer and let other deal with their limits. And so on.\n\nThird, respect and open mindness: you are worried other peoples find you disgusting for eating rice with hands, but at the same time you show the same \"closed mind\" (in a way, ok? I'm not trying to offend you) in wanting to eat rice with hands at all cost otherwise you will not feel satisfied; believe me, eating with a fork will not starve you as will not change food nutritional content.\n\nEven more if you are eating at a foreign restaurant, e.g. at an italian restaurant rice is cooked and thought to be eatean with a fork. Respect local culture, open yourself to a new experience, and try to learn a different way to enjoy food.\n\nIn the end: feel free to do whatever fit you most, don't be afraid of other people judging you (their fault), but at the same time don't close yourself in your garden (your fault)."
}
] |
[
{
"from": "human",
"value": "The typical argument goes like this:\n\n Without net neutrality, cable companies could censor websites, favoring their own business partners. \n\nTypically, proponents of legislation point to some perceived injustice, and argue that new laws are needed to address it. But the very use of the subjunctive in the quotation (could censor), suggests that this might be considered by its opponents as a solution in search of a problem. If so, why haven't they used that rhetorical tactic? Conversely, if such incidents have occurred, why don't the neutrality supporters cite them?"
},
{
"from": "gpt",
"value": "There have been a few examples, in part, these examples have pushed this more into the spotlight. It's often under the term 'traffic shaping' if you are interested, here is Sandvine, a company that offers those shaping services. If you are interested in the peer2peer domain (which is heavily bringing this issue to the forefront) here is a national list of ISP's and their throttling practices on bit torrent users: Curious if this documented list counts as citing.\nFor the US:\nVerizon has run testing on it. \n\n “YouTube is being throttled to 10Mbps as well,” one person wrote on Thursday. “In the ‘stats for nerds,’ it would load at roughly 1,250KBps which translates to 10Mbps. Put the VPN on and that number tripled easily. Didn’t have an issue playing 1080p in 60fps, though.”\n\n(part of the problem with throttling is if you know your way around it, there isn't an issue. Tax on the non-tech savvy).\nVerizon stated they were not throttling, however, articles suggest they did and still are. Here is an article stating Verizon is actively throttling video connections over their network. Over to Comcast who has been engaging in this practice but not admitting it until much more recently:\n\n* When Comcast throttled BitTorrent users, they had a lawsuit brought against them that won.\n* Sprint: They deny they do, though it's apparent they can. User backlash results in people monitoring if they are being throttled. Of course, there are several accusations.\n* Bell and Rogers in Canada. \n\nTelus has also blocked union sites and have engaged in some pretty underhanded anti-union tactics. Not sure if blocking sites fall under net neutrality though."
}
] |
[
{
"from": "human",
"value": "Does having a longer Ethernet cable slow your connection?"
},
{
"from": "gpt",
"value": "For all practical purposes, there will be no effect on the speed of your connection.\n\nThere will be a very insignificant amount of delay due to long cables. This won't affect the maximum speed of your connection, but it would cause some latency. pjc50 points out that it's about a nanosecond for every foot of cable length, which is a good rule of thumb used by many engineers when developing systems which are very dependent on latencies on those timescales.\n\nIn reality, you will never notice a difference. A \"fast\" ping time on the Internet is 10 ms, which is 10,000,000 ns. Adding even a few hundred feet of cable isn't going to have a noticeable effect at that point. In fact, nearly every step of the way involves delays which are more extreme than those seen from signal propagation. For example, most consumer grade routers will wait for the last byte of an incoming packet to be received and check it for errors before sending the first byte of the packet on its way. This delay will be on the order of 5,000 ns! Given that the maximum length of cable you can run (per the Ethernet spec) is 300 ft, the cable length could never cause more than 300 ns of delay due to the cable!"
}
] |
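To put the figures from the answer above in perspective, here is a minimal sketch of the arithmetic. It assumes the ~1 ns per foot rule of thumb and the 10 ms "fast ping" reference quoted in that answer; the numbers are illustrative, not measurements:

```
# Rough sanity check of cable propagation delay vs. typical Internet latency.
# Assumes ~1 ns of delay per foot of cable (rule of thumb from the answer above).

NS_PER_FOOT = 1.0          # approximate propagation delay per foot of cable
CABLE_FEET = 300           # roughly the maximum Ethernet copper run
PING_NS = 10_000_000       # a "fast" 10 ms Internet ping, expressed in nanoseconds

cable_delay_ns = NS_PER_FOOT * CABLE_FEET
print(f"Cable delay: {cable_delay_ns:.0f} ns")
print(f"Share of a 10 ms ping: {cable_delay_ns / PING_NS:.6%}")
# The cable contributes on the order of hundreds of nanoseconds,
# a vanishing fraction of end-to-end Internet latency.
```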
[
{
"from": "human",
"value": "Border around formatted text in Inkscape"
},
{
"from": "gpt",
"value": "Here is solution step by step:\n\n* Create some flowed text by using the text tool to first drag a rectangle and then type.\n* Separately, create a rectangle roughly around the text (really, the rectangle can be anywhere)\n* With the rectangle selected, chose \"Path\" > \"Linked Offset\". Then grab the small diamond and drag it in to make a second, smaller rectangle. \n* Select the text and the smaller rectangle and choose \"Text\" > \"Flow into frame\". Then select the smaller rectangle and remove the stroke to make it invisible. Now the text will flow within the smaller (invisible) rectangle, which is dynamically linked to the shape of the larger (visible) rectangle. \n* Then, if you like, you can add a small border radius on the visible rectangle and you get this nice looking result. \n* If you do need to modify the space between the text and the rectangle, you can carefully select it by grabbing it with the text and the subtracting the text from the selection, or you can drill down to it within the XML Editor. When you do get it selected, then you can adjust the margin.\n* Finally, group the text and the two rectangles, and you're all set! "
}
] |
[
{
"from": "human",
"value": "I learned about the equilibrium constant. Now, I've seen that the equilibrium constant of burning is extremely small $(K \\ll 1)$. here, I have a question. you see, $K$ is still NOT 0, which means that the forward reactions happen at least a tiny bit. Then, shouldn't we see some parts of anything burning at least a little bit?"
},
{
"from": "gpt",
"value": "The equilibrium constant for combustion of organic matter in air with oxygen is not small, but extremely large ($K_\\mathrm{eq} \\gg 1$), as is expected from a reaction that is simultaneously very exothermic and (usually) increases entropy due to the formation of more gaseous molecules than the input oxygen.\n\nThe major reason carbon-based life can exist at ambient temperature in an oxygen atmosphere is purely kinetic, not thermodynamic. You, the tree outside and everything else made of carbon is right now undergoing continuous combustion. However, in the absence of catalyst, this process is too slow below a couple hundred degrees Celsius for it to be self-sustaining. More technically, combustion of organic matter is a highly exergonic process, but the activation energy is high. The meagre amount of heat generated by the handful of molecules reacting is too quickly diluted into the surroundings, and the reaction does not accelerate and spiral out of control (a fire, as described by the eternal Feynman).\n\nVery luckily for us, Life figured out this vast untapped source of chemical energy held back by kinetics approximately three billion years ago and developed a whole metabolic process to extract this energy in a stepwise fashion using catalysis, which we call aerobic respiration. Without it, multicellular organisms could well never had evolved."
}
] |
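The kinetic argument in the answer above can be illustrated with the Arrhenius relation, k = A·exp(−Ea/RT). A minimal sketch follows; the activation energy and temperatures are illustrative round numbers chosen for the example, not data from the answer:

```
import math

R = 8.314          # gas constant, J/(mol*K)
EA = 150_000       # illustrative activation energy for uncatalysed combustion, J/mol

def arrhenius_factor(t_kelvin: float) -> float:
    """exp(-Ea/RT); the pre-exponential factor cancels when taking ratios."""
    return math.exp(-EA / (R * t_kelvin))

ratio = arrhenius_factor(900) / arrhenius_factor(298)
print(f"Relative rate at 900 K vs 298 K: ~{ratio:.1e}x")
# Even though combustion is strongly favoured thermodynamically, at room
# temperature the exponential factor makes the uncatalysed rate negligible,
# which is why we do not spontaneously burst into flame.
```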
[
{
"from": "human",
"value": "The name \"Bleach\" seems to be having no relevance to the plot unlike most other series. Was it just chosen at Kubo-sensei's whim or does it have some significance? Maybe some cultural significance associated with shinigami, etc. that I am now aware of?"
},
{
"from": "gpt",
"value": "Kubo himself has commented on this. Below is the quote from Bleach Answers Wiki:\n\n> The title wasn't Bleach when Tite Kubo decided to draw a story about Shinigami. This was before he drew the one-shot manga that appeared in Akamaru Jump. The weapon wasn't a sword, but a scythe. Only Rukia had a scythe and the other characters used guns. At that point, the title was Snipe (as in \"Sniper\"). Right before he started drawing, he began to think that a sword would be better and realized he couldn't use Snipe as a title anymore. He began looking for a title that grasped the bigger picture. Shinigami are associated with the color black, but it would have been boring to use \"black\". \"White\", on the other hand, can suggest black as a complementary color. So Tite Kubo chose \"Bleach\" to evoke the impression of the color white.\n\nThere are also other theories, which aren't as well supported by interviews with Kubo but still seem to have some evidence supporting them. One is that shinigami \"cleanse\" souls, similar to what bleach does (Kubo himself mentioned this in an interview, but it's not clear whether he noticed this before or after deciding on the name). Another is that Kubo is a fan of the band Nirvana, and their first album is Bleach. So there might be more than one reason, but the quoted one above is the one he usually gives when asked about it."
}
] |
[
{
"from": "human",
"value": "Why don't rally cars have airbags?"
},
{
"from": "gpt",
"value": "If you are ever lucky enough to drive a competition car, the first thing that will strike you is that you can't move. Once you're in your bucket seat with your fire retardant overalls on, wearing a full face helmet and neck brace and strapped firmly in with your multipoint harness, you'll feel like you've been pinned to the seat.\n\nYou can't look over your shoulder like you can in a road car. You can't reach the location where the glove-box is. In the event of a collision, the chances of your helmet coming into contact with the steering wheel are absolutely minimal.\n\nHowever, if you watch rally cars in action, especially on gravel, you'll see them yumping (rally speak of hitting a jump), you'll see them in and out of ditches, ruts and rivers. Many of the impacts a rally car suffers during normal use would far exceed the trigger points to deploy a standard air-bag. Deployment of such a device when landing an airborne car that's travelling in excess of 120mph (200km/h) with spectator lined roads could have deadly consequences.\n\nSimilarly, in racing when there are multiple cars jostling for position on the same piece of track, deployment would be likely from all the \"door handling\" (a term to describe \"acceptable\" contact between race cars) and if one in the middle of a pack were to deploy, forcing the drivers hands away from the wheel, it would likely cause carnage.\n\nIn summary, air bags are not fitted to competition cars because they could pose a serious and significant risk to both the occupants of the cars and people outside such as other competitors, marshals and spectators."
}
] |
[
{
"from": "human",
"value": "Was the Millennium Falcon a one-off or was it mass produced?"
},
{
"from": "gpt",
"value": "The Millennium Falcon is not unique; it was a model YT-1300f Corellian light freighter. This implies that there are other similar ships, including other variations on the YT-1300 model.\n\nYou can find a lot of information about the Millennium Falcon in a companion book, Star Wars: The Force Awakens: Incredible Cross-Sections. The model number also appears on-screen at least once: in Star Wars: Rebels episode "Fighter Flight" on a poster.\n\nNote, however, that the Millennium Falcon has been highly customized; some of the custom work is mentioned in The Force Awakens. It's not clear just how much of the ship is stock and how much is custom.\n\nIf you're willing to go outside Canon into Legends, the model number of the ship, and other similar Corellian light freighters, can be seen in some of the video games, and there is an entire novel called Millennium Falcon that details its history. The ship was serial #YT-1300 492727ZED, one of many such vessels, but from the beginning the ship was shown to have some "quirks" to distinguish it from its counterparts.\n\nIn Legends, the ship was also destroyed and rebuilt from scratch at least once, so it's arguable as to whether it's still the same ship or not."
}
] |
[
{
"from": "human",
"value": "Usually when I see lists of things to do to be more energy efficient, they require one to own their own home. What can I do to be more energy efficient in an apartment? \nFor example, I can't install solar panels, I can't upgrade/change my appliances, I can't install better insulation or windows, and I can't install a programmable thermostat. \nPretty much the only thing I can do (and have done) is switch all of my bulbs to CFLs. I also keep all of my electronics on power strips which I turn off when I leave my apartment and when I'm sleeping."
},
{
"from": "gpt",
"value": "There are a lot of little things you can do that can add up. Here are some suggestions:\n\n* Wash clothes in cold water only.\n* Turn your freezer/refrigerator up a few degrees.\n* Purchase insulated curtains/shades, and close them during the day when you are at work. \n* Make sure your furniture is arranged so that it is not blocking any vents or radiators.\n* Purchase a draft guard to place at the bottom of your door leading to the outside where temperature-controlled air can escape from your apartment.\n* Install a low flow shower water head. Shower head screw on/off easily. Keep the old head to replace when/if you move.\n* Clean Your Refrigerator: It's unlikely that you'll purchase new energy efficient appliances for your rental, but you can tweak the ones you already have. For example, by cleaning the coils on the back of your fridge you'll help one of the biggest energy hogging appliances run more efficiently.\n* Use Energy Efficient Bulbs, even if you're put off by the slightly higher upfront cost, you can simply unscrew them and take them with you when it's time to move.\n* Use Power Strips: A lot of energy is wasted through electronics left plugged in, and the resulting vampire energy. Put a stop to it by using one of a number of cool, smart energy strips, which will turn off vampire energy when the electronics are not in use.\n* Use a Humidifier & Lower Your Heat: Keeping humidity levels up during the winter is not only good for your health, but also for your air and furniture. It also helps keep the ambient air temperature feeling warmer than dry air, which means you can turn your thermostat down. If your HVAC system doesn't have a humidifier built-in, then you can get an inexpensive and portable humidifier to use during dry conditions."
}
] |
[
{
"from": "human",
"value": "Is there any way to exit ```less``` without clearing the screen?"
},
{
"from": "gpt",
"value": "This is actually a function of the terminal emulator you are using (xterm, gnome-terminal, konsole, screen). An alternate screen, or altscreen, gets launched when programs such as ```less``` or ```vim``` are invoked. This altscreen has no history buffer and exits immediately when you quit the program, switching back to the original screen which restores the previous window content history and placement. \n\nYou can prevent ```less``` from launch in an altscreen by passing the argument \"-X\". \n```less -X /path/to/some/file```\n\nYou can also pass \"-X\" as an environment variable. So if you are using ```bash```, place this in ```~/.bashrc```:\n```export LESS=\"-X\"\n```\nHowever, this disbles the termcap (terminal capability) initialization and deinitialization, so other views when you use ```less``` may appear off.\n\nAnother option would be to use ```screen``` and set the option ```altscreen off``` in your ```~/.screenrc```. ```less``` will not clear the screen and should preserve color formatting. Presumably ```tmux``` will have the same option."
}
] |
[
{
"from": "human",
"value": "How can I do 'insert if not exists' in MySQL?"
},
{
"from": "gpt",
"value": "In MySQL, ```ON DUPLICATE KEY UPDATE``` or ```INSERT IGNORE INTO table``` can be viable solutions.\n\n* An example of ```ON DUPLICATE KEY UPDATE``` update based on mysql.com:\n\n```\nINSERT INTO table (a,b,c) VALUES (1,2,3)\n ON DUPLICATE KEY UPDATE c=c+1;\n\nUPDATE table SET c=c+1 WHERE a=1;\n```\n\n* An example of INSERT IGNORE based on mysql.com:\n\n```\nINSERT [LOW_PRIORITY | DELAYED | HIGH_PRIORITY] [IGNORE]\n [INTO] tbl_name [(col_name,...)]\n {VALUES | VALUE} ({expr | DEFAULT},...),(...),...\n [ ON DUPLICATE KEY UPDATE\n col_name=expr\n [, col_name=expr] ... ]\n```\n\nOr:\n```\nINSERT [LOW_PRIORITY | DELAYED | HIGH_PRIORITY] [IGNORE]\n [INTO] tbl_name\n SET col_name={expr | DEFAULT}, ...\n [ ON DUPLICATE KEY UPDATE\n col_name=expr\n [, col_name=expr] ... ]\n```\n\nOr:\n```\nINSERT [LOW_PRIORITY | HIGH_PRIORITY] [IGNORE]\n [INTO] tbl_name [(col_name,...)]\n SELECT ...\n [ ON DUPLICATE KEY UPDATE\n col_name=expr\n [, col_name=expr] ... ]\n```"
}
] |
[
{
"from": "human",
"value": "What does ```class``` do in Ruby?"
},
{
"from": "gpt",
"value": "First, the ```class``` syntax opens up ```foo```'s singleton class (eigenclass). This allows you to specialise the behaviour of methods called on that specific object.\n```a = 'foo'\nclass << a\n def inspect\n '\"bar\"'\n end\nend\na.inspect # => \"bar\"\na = 'foo' # new object, new singleton class\na.inspect # => \"foo\"\n```\n\nNow, to answer the question: ```class``` opens up ```self```'s singleton class, so that methods can be redefined for the current ```self``` object (which inside a class or module body is the class or module itself). Usually, this is used to define class/module (\"static\") methods:\n```class String\n class << self\n def value_of obj\n obj.to_s\n end\n end\nend\nString.value_of 42 # => \"42\"\n```\nThis can also be written as a shorthand:\n```class String\n def self.value_of obj\n obj.to_s\n end\nend\n```\nOr even shorter:\n```def String.value_of obj\n obj.to_s\nend\n```\n\nWhen inside a function definition, ```self``` refers to the object the function is being called with. In this case, ```class``` opens the singleton class for that object; one use of that is to implement a poor man's state machine:\n```class StateMachineExample\n def process obj\n process_hook obj\n end\nprivate\n def process_state_1 obj\n # ...\n class << self\n alias process_hook process_state_2\n end\n end\n def process_state_2 obj\n # ...\n class << self\n alias process_hook process_state_1\n end\n end\n # Set up initial state\n alias process_hook process_state_1\nend\n```\nSo, in the example above, each instance of ```StateMachineExample``` has ```process_hook``` aliased to ```process_state_1```, but note how in the latter, it can redefine ```process_hook``` (for ```self``` only, not affecting other ```StateMachineExample``` instances) to ```process_state_2```. So, each time a caller calls the ```process``` method (which calls the redefinable ```process_hook```), the behaviour changes depending on what state it's in."
}
] |
[
{
"from": "human",
"value": "\"I have a problem where i'm initialising a variable on the scope in a controller. Then it gets changed in another controller when a user logs in. This variable is used to control things such as the navigation bar and restricts access to parts of the site depending on the type of user, so its important that it holds its value. The problem with it is that the controller that initialises it, gets called again by angular some how and then resets the variable back to its initial value. \nI assume this is not the correct way of declaring and initialising global variables, well its not really global, so my question is what is the correct way and is there any good examples around that work with the current version of angular?\""
},
{
"from": "gpt",
"value": "You've got basically 2 options for \"global\" variables:\n* use a ```$rootScope```\n* use a service\n\n```$rootScope``` is a parent of all scopes so values exposed there will be visible in all templates and controllers. Using the ```$rootScope``` is very easy as you can simply inject it into any controller and change values in this scope. It might be convenient but has all the problems of global variables.\n\nServices are singletons that you can inject to any controller and expose their values in a controller's scope. Services, being singletons are still 'global' but you've got far better control over where those are used and exposed.\n\nUsing services is a bit more complex, but not that much, here is an example:\n```\nvar myApp = angular.module('myApp',[]);\nmyApp.factory('UserService', function() {\n return {\n name : 'anonymous'\n };\n});\n```\nand then in a controller:\n```\nfunction MyCtrl($scope, UserService) {\n $scope.name = UserService.name;\n}\n```"
}
] |
[
{
"from": "human",
"value": "How do I initialize a TypeScript Object with a JSON-Object?"
},
{
"from": "gpt",
"value": "This question is quite broad, here are two solutions.\n\n### Solution 1: Helper Method\nHere's an example of using a Helper Method that you could change to fit your needs:\n```class SerializationHelper {\n static toInstance(obj: T, json: string) : T {\n var jsonObj = JSON.parse(json);\n if (typeof obj[\"fromJSON\"] === \"function\") {\n obj[\"fromJSON\"](jsonObj);\n }\n else {\n for (var propName in jsonObj) {\n obj[propName] = jsonObj[propName]\n }\n }\n return obj;\n }\n}\n```\nThen use it:\n```var json = '{\"name\": \"John Doe\"}',\n foo = SerializationHelper.toInstance(new Foo(), json);\nfoo.GetName() === \"John Doe\";\n```\n\n* Advanced Deserialization\nThis could also allow for some custom deserialization by adding your own ```fromJSON``` method to the class (this works well with how ```JSON.stringify``` already uses the ```toJSON``` method, as shown below):\n```interface IFooSerialized {\n nameSomethingElse: string;\n}\nclass Foo {\n name: string;\n GetName(): string { return this.name }\n toJSON(): IFooSerialized {\n return {\n nameSomethingElse: this.name\n };\n }\n fromJSON(obj: IFooSerialized) {\n this.name = obj.nameSomethingElse;\n }\n}\n```\nThen use it:\n```var foo1 = new Foo();\nfoo1.name = \"John Doe\";\nvar json = JSON.stringify(foo1);\njson === '{\"nameSomethingElse\":\"John Doe\"}';\nvar foo2 = SerializationHelper.toInstance(new Foo(), json);\nfoo2.GetName() === \"John Doe\";\n```\n\n### Solution 2: Base Class\nAnother way you could do this is by creating your own base class:\n```class Serializable {\n fillFromJSON(json: string) {\n var jsonObj = JSON.parse(json);\n for (var propName in jsonObj) {\n this[propName] = jsonObj[propName]\n }\n }\n}\nclass Foo extends Serializable {\n name: string;\n GetName(): string { return this.name }\n}\n```\nThen use it:\n```var foo = new Foo();\nfoo.fillFromJSON(json);\n```\n\nThere are too many different ways to implement a custom deserialization using a base class, and ultimately it depends on how your own choice."
}
] |
[
{
"from": "human",
"value": "Why is digital photography so expensive?"
},
{
"from": "gpt",
"value": "Digital photography is not expensive but buying into it is.\n\nWith film photography camera and lenses were a moderate investment but these lasted quite long. Even a battery in a film camera would last several years. Cameras and lenses were mechanical and much simpler than today's models. This made them require less expertise to manufacture and also made them incredibly sturdy. A lifespan of a decade for a camera and several decades for lenses was not uncommon.\n\nHowever, the real cost of film photography is film and development. Each roll of film costs several dollars and gives up to 36 images. Specialized rolls were more expensive and had fewer exposures. Then, if you did not do development yourself, you had to pay for development. Also prints. Even if you did your own, chemicals and papers were recurring costs. And there was the added expensive of creating a wet darkroom. In the end, the hobby was quite costly.\n\nWith digital photography, you pay nearly all costs up-front and it comes out much cheaper for anyone who is into the art and shoots regularly. A mid-range DSLR with good lens and memory can easily be obtained for less than $2500. Many entry level ones exist for under $1000. If you were to shoot 10,000 frames before both your camera and lens dies, it would cost you between 2.5¢ to 10¢ per frame. It is far far less than even just the film for each frame. Still most cameras have shutters rated to at least 100,000 frames and lenses last much longer than cameras. Even if you only captured 1000 images, you would have a better deal than with film photography. All digital cameras come with rechargeable batteries, good for 500-1000 shots-per-charge (for DSLRs) and can be charged at least 50-100 times before needing to be replaced. That means you actually will not use up the battery before you have taken full advantage of the better value of digital photography.\n\nMarket conditions are forcing the price of digital photography to increase. The loss of market share of compact and ultra-compact digital cameras to cellphones forced manufacturers to discontinue such products and concentrate on higher-end offering. While the number of unit sales of digital cameras has been declining, the average price is going up to compensate for reduced revenue. This trend is expected to continue for a while."
}
] |
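The per-frame arithmetic in the answer above can be made explicit. A small sketch using the rough figures quoted there (the gear prices and frame counts are the answer's illustrative values, not market data):

```
# Rough cost-per-frame comparison using the figures quoted above.

def cost_per_frame(gear_cost_usd: float, frames: int) -> float:
    return gear_cost_usd / frames

for gear, frames in [(1000, 10_000), (2500, 10_000), (2500, 100_000)]:
    cents = cost_per_frame(gear, frames) * 100
    print(f"${gear} of gear over {frames:,} frames: {cents:.1f} cents/frame")

# For comparison, film at a few dollars per 36-exposure roll plus development
# historically worked out to tens of cents per frame before any prints.
```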
[
{
"from": "human",
"value": "If user input is inserted without modification into an SQL query, then the application becomes vulnerable to SQL injection, like in the following example:\n```$unsafe_variable = $_POST['user_input'];\nmysql_query(\"INSERT INTO `table` (`column`) VALUES ('$unsafe_variable')\");\n```\nThat's because the user can input something like ```value'); DROP TABLE table;--```, and the query becomes:\n```INSERT INTO `table` (`column`) VALUES('value'); DROP TABLE table;--')\n```\nWhat can be done to prevent this from happening?"
},
{
"from": "gpt",
"value": "The correct way to avoid SQL injection attacks, no matter which database you use, is to separate the data from SQL, so that data stays data and will never be interpreted as commands by the SQL parser. It is possible to create an SQL statement with correctly formatted data parts, but if you don't fully understand the details, you should always use prepared statements and parameterized queries. These are SQL statements that are sent to and parsed by the database server separately from any parameters. This way it is impossible for an attacker to inject malicious SQL.\n\nYou basically have two options to achieve this:\n\n1. Using PDO (for any supported database driver):\n```\n$stmt = $pdo->prepare('SELECT * FROM employees WHERE name = :name');\n$stmt->execute([ 'name' => $name ]);\n\nforeach ($stmt as $row) {\n // Do something with $row\n}\n```\n\n2. Using MySQLi (for MySQL):\nSince PHP 8.2+ we can make use of execute_query() which prepares, binds parameters, and executes SQL statement in one method:\n```\n$result = $dbConnection->execute_query('SELECT * FROM employees WHERE name = ?', [$name]);\n\nwhile ($row = $result->fetch_assoc()) {\n // Do something with $row\n}\n```\n\nUp to PHP8.1:\n```\n$stmt = $dbConnection->prepare('SELECT * FROM employees WHERE name = ?');\n$stmt->bind_param('s', $name); // 's' specifies the variable type => 'string'\n$stmt->execute();\n\n$result = $stmt->get_result();\nwhile ($row = $result->fetch_assoc()) {\n // Do something with $row\n}\n```\nIf you're connecting to a database other than MySQL, there is a driver-specific second option that you can refer to (for example, pg_prepare() and pg_execute() for PostgreSQL). PDO is the universal option."
}
] |
[
{
"from": "human",
"value": "I want to be able to output the current loop iteration to my template.\nAccording to the docs, there is a ```loop.counter``` variable that I am trying to use:\n```\n{% for user in userlist %}\n * \n {{ user }} {{loop.counter}}\n \n {% if loop.counter == 1 %}\n This is the First user\n {% endif %}\n{% endfor %}\n\n```\nBut is being outputed to my template. What is the correct syntax?"
},
{
"from": "gpt",
"value": "The counter variable inside the loop is called loop.index in Jinja2.\n\n```\n>>> from jinja2 import Template\n\n>>> s = \"{% for element in elements %}{{loop.index}} {% endfor %}\"\n>>> Template(s).render(elements=[\"a\", \"b\", \"c\", \"d\"])\n1 2 3 4\n```\n\nIn addition to `loop.index`, there is also\n\n* `loop.index0` (index starting at `0`)\n* `loop.revindex` (reverse index; ending at `1`)\n* `loop.revindex0` (reverse index; ending at `0`)\netc."
}
] |
[
{
"from": "human",
"value": "Are the players on the same team as the DM?"
},
{
"from": "gpt",
"value": "The GM isn't on a team, they’re the referee.\nThe way Pathfinder, D&D, and similar games work is that there are one or more player characters (the \"party\") that are controlled by the players.\nThe GM (DM, in D&D) usually doesn't run a player character (PC), but they are responsible for representing all aspects of the world, all the non-player characters (NPCs) and monsters and stuff, and adjudicating the rules.\nSo the GM may be controlling a character or creature that is friendly to the PCs, or a character or creature that is hostile to the PCs, or both at once! The GM is the orcs you fight and the friendly cleric that heals you and the rain that falls on you.\n\nGMs aren't supposed to be hostile to you in general; they are generally taught to give the players a fun challenge. It's no good to be \"against\" the players, because you can always kill PCs just by saying \"rocks fall, everyone dies!\" The GM is on your team inasmuch as they want everyone to have fun. Because their role is running monsters and the other stuff that you fight, they are often in opposition to you tactically, as they are responsible for the \"other team\" as part of their job. Some new GMs do get confused by this and decide it's them against the party, but that ends up being a problem that has to be solved by them wising up or their players leaving. \n\nThe context of your question isn't clear, so I'll add a note that might be relevant - some GMs, especially new ones, do run a PC of their own that's part of the party, these are referred to as GMPCs (or DMPCs). This is generally frowned upon because of the conflict of interest inherent in doing it, but it happens. That GMPC may be on your side, on your side most of the time, betray you, have other plans - just like any other person in the game world might. "
}
] |
[
{
"from": "human",
"value": "C++ vs. The Arduino Language?"
},
{
"from": "gpt",
"value": "If you have previous programming experience and you are aware of concepts as OOP, it is better to go for C/C++. The arduino language is really great for beginners, but have some limitations (e.g. you must have all your files in the same folder). And it is basically a simplification of C/C++ (you can practically copy&paste arduino code to a C/C++ file, and it will work). Also it makes sense that you can go and use a full well known IDE such as Eclipse.\n\nInitially it is required a bit more of setup and configuration of your dev environment, but IMHO it is worth it for programmers with experience in any other language.\n\nIn any case, it won't harm you to start using the arduino language and the arduino IDE for a few days to get familiar with the arduino hardware and then move to C/C++ with Eclipse for really developing your project."
}
] |
[
{
"from": "human",
"value": "How can I adapt Ubuntu to a high resolution display?\nI have a display with 3200x1600px on only 11'' and everything looks really tiny."
},
{
"from": "gpt",
"value": "There are some steps to take:\n1. System wide Menu and titlebar scaling\nStarting with Ubuntu 14.04 we have an option that helps a bit:\nScaling Support\nopen the System Settings (here in english:)\n```LANG=c unity-control-center\n```\nGo to \"Displays\" and set the \"Scale for menu and title bars\".\n\nSince Ubuntu 17.10 the scaling can be set in \n```LANG=c gnome-control-center\n```\nGo to ```Settings > Devices > Displays``` there\nsee also: How to find and change the screen DPI?\n\n2. Universal Access\nGo to \"Universal Access\" (```unity-control-center universal-access```) and select \"Large Text\".\nNote: not all applications handle this correctly, some will not reserve the extra space, so some UI elements are not accessible with this option!\n\n3.Increase unity dock size\nIn ```unity-control-center```->```Appearance```->```Look``` at the botom, you can adjust the size.\n\n4. Adapt Firefox\nsee: Adjust Firefox and Thunderbird to a High DPI touchscreen display (retina)\n(or use Chrome, which works fine since Version 41.0.2272.76 Ubuntu 14.10, though Chrome will need to be restarted to take effect)\n\n5. Increase font in Pidgin\nThere is a plugin you can install:\n```sudo apt-get install pidgin-extprefs\n```\nThen you can increase the font in Plugins->Extended Prefs\n\n6. Create starter for applications that still don't scale\nSome applications still don't obey the global scaling (mainly java) for those few applications you can create a starter to only Fix scaling of java-based applications for a high DPI screen\n\nNote: In older Ubuntu versions, with unity-tweak-util in the section \"Fonts\" you can set the \"Text Scaling Factor\" to 2.0. This will scale the fonts in most applications to double size. "
}
] |
[
{
"from": "human",
"value": "Say I want to make a file:\n```filename = "/foo/bar/baz.txt"\nwith open(filename, "w") as f:\n f.write("FOOBAR")\n```\nThis gives an ```IOError```, since ```/foo/bar``` does not exist.\nWhat is the most pythonic way to generate those directories automatically? Is it necessary for me explicitly call ```os.path.exists``` and ```os.mkdir``` on every single one (i.e., /foo, then /foo/bar)?"
},
{
"from": "gpt",
"value": "In Python 3.2+, using the APIs requested by the OP, you can elegantly do the following:\n```\nimport os\nfilename = "/foo/bar/baz.txt"\nos.makedirs(os.path.dirname(filename), exist_ok=True)\nwith open(filename, "w") as f:\n f.write("FOOBAR")\n```\n\nWith the Pathlib module (introduced in Python 3.4), there is an alternate syntax (thanks David258):\n```from pathlib import Path\noutput_file = Path("/foo/bar/baz.txt")\noutput_file.parent.mkdir(exist_ok=True, parents=True)\noutput_file.write_text("FOOBAR")\n```\n\nIn older python, there is a less elegant way:\nThe ```os.makedirs``` function does this. Try the following:\n```import os\nimport errno\nfilename = "/foo/bar/baz.txt"\nif not os.path.exists(os.path.dirname(filename)):\n try:\n os.makedirs(os.path.dirname(filename))\n except OSError as exc: # Guard against race condition\n if exc.errno != errno.EEXIST:\n raise\nwith open(filename, "w") as f:\n f.write("FOOBAR")\n```\nThe reason to add the ```try-except``` block is to handle the case when the directory was created between the ```os.path.exists``` and the ```os.makedirs``` calls, so that to protect us from race conditions."
}
] |
[
{
"from": "human",
"value": "Assume that Jane Doe has published a paper in 2010 where she has developed a model or a theorem or a similar result, let’s say, that it relates to growth.\nNow assume that Jane Doe is writing another paper in 2015, where she refers to the model/theorem from her paper in 2010.\nIs it acceptable for Jane to write something like the following?\n\n Doe’s growth model (2010), implies that ...\n Doe’s growth theorem (2010) implies that ...\n The Doe growth model (2010) implies ...\n"
},
{
"from": "gpt",
"value": "This is, unfortunately, a case where English grammar can be tricky and exactly how you phrase things is going to matter.\n\nIt is often seen a presumptuous to name something after yourself: \"Newton's Laws\" and \"Hawking radiation\" and \"Rayleigh scattering\" are retrospective judgements of significance by the community. Claiming a similar name is an assertion in advance that your work will be at a similar level of significance. \n\nThe problem, then, is that if you say \"Doe's growth model\", it is ambiguous whether that is intended to be a construction like \"Hawking radiation\" or whether it just means \"the growth model that happens to have been developed by Doe.\"\n\nThis potential problem can be avoided by rephrasing to avoid the parallel construction: \"the growth model by Doe et al (2010)\", or better yet, \"the growth model presented in (Doe, 2010).\" The point here is that the significant item under discussion is the growth model, not Doe, and the sentence should be phrased to make that as clear as possible.\n\nFinally, note that these sorts of phrasings can work both for double blind review and for review where the authors' identities are known: even when the identity of the author is known, the important thing should be the relationship between the work, not the fraction of authors that are shared between two papers (unless you are specifically trying to talk about independent co-discovery)."
}
] |
[
{
"from": "human",
"value": "I've been with my current employer for about a year now. Due to the way the company is setup, I'm the only one with knowledge on a process that is quite important to the company. The company is going through some restructuring, and has been letting people go. As the newest guy in my department, I'm obviously concerned. \nMy question though, is if I am let go, am I obligated to spend my time teaching someone else this process that only I know about?"
},
{
"from": "gpt",
"value": "As long as you're still getting paid, you're obligated to do whatever your employer tells you to do during your official work hours (if it's within the context of your job and legal).\n\nRefusing to do so would, at best, burn bridges and, at worst, result in them instantly terminating your contract and no longer paying you or potentially get you into legal trouble.\n\nIf they're no longer paying you, you're not obligated to do anything. You're also not obligated to work overtime (paid or not).\n\nIf you're asked to work overtime or do unpaid work, it would be reasonable to decline or (if it's a non-trivial amount of hours) request additional payment. Although this is not to say they would think it's reasonable.\n\nYour contract may, of course, create different obligations, and you should adhere to those instead of what's written here. Any relevant notice periods should also be specified there, in some other legal document or in some regional employment law.\n\nIf you want to leave a good impression and avoid burning bridges, it would be advisable to try to pass on knowledge (through teaching or documentation) even if you're not specifically asked to do so. You might even want to (gently) push back on other tasks and emphasise the importance of passing on your knowledge. Although some won't be able to see the importance of transferring what you know."
}
] |
[
{
"from": "human",
"value": "Bash test: what does \"=~\" do?"
},
{
"from": "gpt",
"value": "The ```~``` is actually part of the operator ```=~``` which performs a regular expression match of the string to its left to the extended regular expression on its right.\n```[[ \"string\" =~ pattern ]]\n```\nNote that the string should be quoted, and that the regular expression shouldn't be quoted.\nA similar operator is used in the Perl programming language.\nThe regular expressions understood by ```bash``` are the same as those that GNU ```grep``` understands with the ```-E``` flag, i.e. the extended set of regular expressions.\n\nSomewhat off-topic, but good to know:\nWhen matching against a regular expression containing capturing groups, the part of the string captured by each group is available in the ```BASH_REMATCH``` array. The zeroth/first entry in this array corresponds to ```&``` in the replacement pattern of ```sed```'s substitution command (or ```$&``` in Perl), which is the bit of the string that matches the pattern, while the entries at index 1 and onwards corresponds to ```\\1```, ```\\2```, etc. in a ```sed``` replacement pattern (or ```$1```, ```$2``` etc. in Perl), i.e. the bits matched by each parenthesis.\n\nExample:\n```string=$( date +%T )\nif [[ \"$string\" =~ ^([0-9][0-9]):([0-9][0-9]):([0-9][0-9])$ ]]; then\n printf 'Got %s, %s and %s\\n' \\\n \"${BASH_REMATCH[1]}\" \"${BASH_REMATCH[2]}\" \"${BASH_REMATCH[3]}\"\nfi\n```\nThis may output\n```Got 09, 19 and 14\n```\nif the current time happens to be 09:19:14.\nThe ```REMATCH``` bit of the ```BASH_REMATCH``` array name comes from \"Regular Expression Match\", i.e. \"RE-Match\".\n\nIn non-```bash``` Bourne-like shells, one may also use ```expr``` for limited regular expression matching (using only basic regular expressions).\nA small example:\n```$ string=\"hello 123 world\"\n$ expr \"$string\" : \".*[^0-9]\\([0-9][0-9]*\\)\"\n123\n```"
}
] |
[
{
"from": "human",
"value": "If I have a Bash script like:\n```#!/bin/bash\nf() {\n # echo function name, \"f\" in this case\n}\n```\nIs there any way to do this? This could be used in help messages such as\n```printf \"Usage: %s: blah blah blah \\n\" $(basename $0) >&2;\n```\nOnly in this case what I wanted is not ```$0```, which is the file name of the script."
},
{
"from": "gpt",
"value": "From the Bash Reference Manual:\n\nFUNCNAME\nAn array variable containing the names of all shell functions currently in the execution call stack. The element with index 0 is the name of any currently-executing shell function. The bottom-most element (the one with the highest index) is "main". This variable exists only when a shell function is executing. Assignments to FUNCNAME have no effect and return an error status. If FUNCNAME is unset, it loses its special properties, even if it is subsequently reset.\nThis variable can be used with BASH_LINENO and BASH_SOURCE. Each element of FUNCNAME has corresponding elements in BASH_LINENO and BASH_SOURCE to describe the call stack. For instance, ${FUNCNAME[$i]} was called from the file ${BASH_SOURCE[$i+1]} at line number ${BASH_LINENO[$i]}. The caller builtin displays the current call stack using this information.\n\nWhen bash arrays are accessed without an index the first element of the array will be returned, so ```$FUNCNAME``` will work in simple cases to provide the name of the immediately current function, but it also contains all other functions in the call stack. For example:\n```# in a file "foobar"\nfunction foo {\n echo foo\n echo "In function $FUNCNAME: FUNCNAME=${FUNCNAME[*]}" >&2\n}\nfunction foobar {\n echo "$(foo)bar"\n echo "In function $FUNCNAME: FUNCNAME=${FUNCNAME[*]}" >&2\n}\nfoobar\n```\nWill output:\n```$ bash foobar\nIn function foo: FUNCNAME=foo foobar main\nfoobar\nIn function foobar: FUNCNAME=foobar main\n```"
}
] |
[
{
"from": "human",
"value": "I know that the public practice of any religion other than Wahabbi Islam is strictly forbidden in Saudi Arabia, and there would be no places of worship. I also know that the morality police raided a a hotel several years ago where Mass was being celebrated, and arrested the priest and the acolytes.\nBut I am also told that many expats from countries with large Catholic communities such as the Philippines, India, and Sri Lanka do gather in private homes for worship. Is this officially tolerated, or would I endanger the hosts or other participants by asking about them?"
},
{
"from": "gpt",
"value": "Essentially, there is no religious freedom in Saudi Arabia, and nobody is likely to openly change - indeed:\n\n> conversion from Islam to another religion is considered apostasy and punishable by death.\n\nAll citizens are considered Muslim by the state.\nIn 2008, the Vatican tried to get a Catholic Church built in Saudi Arabia, and had discussions with the Saudi government, but the result was that it was not permitted.\nSo that brings us to the Christians (including Catholics) that DO exist in the country. There are considered to be more than a million Roman Catholics in the country, as well as from many others.\n\n> Saudi Arabia allows Christians to enter the country as foreign workers for temporary work, but does not allow them to practice their faith openly. Because of that Christians generally only worship in secret within private homes. Items and articles belonging to religions other than Islam are prohibited. These include Bibles, crucifixes, statues, carvings, items with religious symbols, and others.\n\nSo, to your question about whether this secret practice is allowed or tolerated? Not so much. They have an entire 'government' organisation - The Saudi Arabian Mutaween (Arabic: مطوعين), or Committee for the Propagation of Virtue and the Prevention of Vice (i.e., the religious police) which prohibits the practice of any religion other than Islam.\nThe government also doesn't let clergy of other religions into the country for conducting religious services (naturally this means no Christian baptisms, weddings etc by official clergy legally).\nAnd if they find someone practising in their homes? International Christian Concern (ICC) protested what it reported as the 2001 detention of 11 Christians in Saudi Arabia, for practicing their religion in their homes. In June 2004, ICC reported on what it termed a \"pogrom-like\" crackdown by Saudi police on Christians after media reports of Koran desecration in Guantanamo Bay."
}
] |
[
{
"from": "human",
"value": "Is there a way to achieve protections similar to \"Copyleft\" under the patent system?"
},
{
"from": "gpt",
"value": "\"Copyleft\" is a specific application of copyright law (restricting the license to use copyrighted material by requiring that those obtaining it must also redistribute it for free / under specified terms).\n\nAny work under \"copyleft\" is protected by copyright law (thanks to The Berne Convention copyright is \"automatic\" in most of the world), and those copyright protections are what allow the author to dictate the terms under which the copyrighted work is used.\n\nAs far as I'm aware there is no Berne Convention equivalent in patent law: in order for you to have patent protection and the ability to dictate how your invention is used (by licensing the patent for it to impose Copyleft-style requirements on those using it) you must file for (and receive) a patent.\n\nNote that the sort of \"keeping information free\" protection the Copyleft folks would seek is inherent in US Patent Law and most other patent systems -- it is known as Prior Art.\nIn its most simplistic definition (cribbed from Wikipedia):\n\n> Prior art …, in most systems of patent law, constitutes all information that has been made available to the public in any form before a given date that might be relevant to a patent's claims of originality. If an invention has been described in the prior art, a patent on that invention is not valid.\n\nSo if you have designed and publicly released something patentable (and can document the date of such a release to the satisfaction of the patent authorities) you can seek to invalidate any subsequent patent claims on the grounds that prior art exists."
}
] |
[
{
"from": "human",
"value": "In monopoly, can an opponent put a property up for auction at a higher price than I have in cash?"
},
{
"from": "gpt",
"value": "This is definitely allowed, and a good tactic to leverage a cash advantage.\nThere is no minimum or maximum price for an auction. The rules state:\n\nBidding may start at any price.\n\nMonopoly auctions do not have "the bidding will start at $10". They just have "bidding is open". Once the auction begins, a bid by any player sets the minimum price for the auction at that bid. Your opponent could easily bid $10 on the property, forcing you to either let them have it for $10 or bid more than you have in cash. You could bid first, say starting the bid at $5, but that doesn't meaningfully change the situation, as they can just respond by out-bidding you at $10.\n\nYou may bid more money than you have in cash, but you are still obligated to pay it if you win the auction. This may be a reasonable play if you can afford it by mortgaging property and/or selling houses, especially to drive up the price for your opponent. But be careful not to bid more than you can pay after mortgages and sales, as if you win the auction and can't pay your bid, you go bankrupt. In particular, if you bid more than you can afford, you opponents can all just let you win the bid, at which point you go bankrupt and the property you just bought would go up for auction again (in addition to all your other properties)."
}
] |
[
{
"from": "human",
"value": "What is the purpose of having a countdown during a rocket launch?"
},
{
"from": "gpt",
"value": "There is a window of opportunity for a launch to occur, known as a launch window. This certainly occurs when trying to get somewhere such as the International Space Station, but even earth orbiting satellites have a window of time where the rocket can meet its performance.\n\nBottom line is, the exact time of launch is very important, and a complex sequence of events have to happen in just the right order to make it happen. SpaceFlightNow provided a generic countdown timeline for the Falcon 9, I'll re-post a few of the highlights. Some of these things are done to keep the batteries topped off till the last minute. You don't want to put the fuel in until the last minute, to save it (Most rocket fuel is cryogenic). When the rocket has its fuel loaded, you want to keep non-essential personnel away from it, making it so any tasks they need to do is put off until the end. You don't want to arm the system to launch until you are completely ready, thus, it only happens right before. There are many more such examples of why things are done in a particular order, but hopefully this will give you a sense.\n```L-13:30:00 Dragon Power Up\nL-7:30:00 Launch Vehicle Power Up\nL-3:50:00 Liquid Oxygen Loading\nL-3:40:00 Rocket Propellant 1 Loading\nL-1:00:00 Weather Briefing\nT-0:09:40 Verify that Terminal Countdown has started\nT-0:09:30 Merlin 1C: Lox Bleeder Valves Open\nT-0:07:00 Dragon to Internal Power\nT-0:04:46 Stage 1 & Stage 2 Auto Sequence starts\nT-0:04:30 Transfer to Internal Power\nT-0:04:10 Vehicle Release Auto Sequence\nT-0:03:40 TEA-TEB Ignition System Activation\nT-0:03:25 Flight Termination System to Internal Power\nT-0:03:11 Flight Termination System Armed\nT-0:03:02 LOX Topping Termination\nT-0:03:00 Second Stage Thrust Vector Actuator Test\nT-0:02:00 Range Verification\nT-0:01:30 Final Engine Chilldown, Pre-Valves/Bleeders Open\nT-0:01:00 Flight Computer to start-up\nT-0:00:50 First Stage Thrust Vector Actuator Test\nT-0:00:40 Propellant Tank Pressurization\nT-0:00:20 All Tanks at Flight Pressure\nT-0:00:15 Arm Pyrotechnics\nT-0:00:03 Merlin Engine Ignition\nT-0:00:00 LIFTOFF\n```"
}
] |
[
{
"from": "human",
"value": "How does one attack a two-time pad (i.e. one time pad with key reuse)?\nI am new to cryptography and my problem is with two time pad attacks on OTP.\nThe problem I had in my course was that I have 10 ciphertexts encrypted with the same key $K$. I am then given another ciphertext that I should decrypt.\nI know that XOR-ing two ciphers gives me the XOR of their original messages.\nMy question is what is the correct thing to do after that?\nI tried to take 3 ciphertexts $C_1, C_2$ and $C_3$.\nThen get $S_1 = C_1 \\oplus C_2 \\oplus $```' '```, also get $S_2 = C_1 \\oplus C_3 \\oplus$ ```' '```.\nAfter that I compared all corresponding characters in $S_1$ and $S_2$,\nand if $S_1[i] = S_2[i]$ then I calculate $S_1[i] \\oplus C_2[i]$ to get $K[i]$.\nI tried this on paper before coding and it worked, but I might be missing something.\nIs this the right approach? Why does it work?"
},
{
"from": "gpt",
"value": "The classical answer to \"what is the correct thing to do after you have the XOR of the two original messages\" is crib-dragging.\n\nThat is, you take a guess of a common phrase that may appear in one of the plaintexts (the classical example against ASCII english is the 5 letter \" the \"), and exclusive-or that against the XOR of the two original messages in various locations. If one of the plaintexts had the text of the crib (\" the \" in our example), then the result of the exclusive-or is what the other plaintext had in that position; if neither plaintext had that, it's likely that the result of the exclusive-or is just gibberish. And, once you have a plausible short section, you can extend it (for example, if you know that one of the plaintexts is \" na**\", you can go through the dictionary of all words that start with \"na\", use those as cribs, and see which makes the other plaintext make sense).\n\nIn addition, you can often deduce things directly from the bit pattern. For example, if the messages are in ASCII, then one thing to note that bit 6 of letters is set, but bit 6 of spaces, numbers and (most) punctuation is clear; because spaces are far more common than numbers and punctuation, then that will give you a good guess of where spaces occur in the texts (albeit without telling you which message a specific space appears in).\n\nNow, if you have 11 messages all encrypted with the same pad (an \"11-time pad\"), things get even easier. Obviously, you can grab a crib across one message, and check it against the other 10; if it makes all 10 make sense, then it is almost certainly accurate. Even better, by using the observation that you can distinguish spaces from letters (again, by comparing bit 6), you can find where all the spaces appear in the messages; these all act like 1 character cribs, probably revealing virtually all the text of all the messages."
}
] |
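A minimal sketch of the crib-dragging idea described in the answer above, assuming ASCII plaintexts XOR-ed with the same keystream. The two messages and the crib are made up for illustration:

```
# Toy crib-dragging demo: two plaintexts encrypted with the same keystream.
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"meet me at the usual place tonight"
p2 = b"the package is in the red mailbox."
key = os.urandom(max(len(p1), len(p2)))   # the reused "one-time" pad
c1, c2 = xor(p1, key), xor(p2, key)

x = xor(c1, c2)            # equals p1 XOR p2; the key has dropped out
crib = b" the "            # guess a common English fragment

# Drag the crib across every offset; printable results are candidate
# fragments of the *other* plaintext at that position.
for i in range(len(x) - len(crib) + 1):
    candidate = xor(x[i:i + len(crib)], crib)
    if all(32 <= b < 127 for b in candidate):
        print(i, candidate)
```

Some printable hits will be false positives; in practice you keep the offsets where the recovered fragment reads like natural language and extend from there, exactly as the answer describes.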
[
{
"from": "human",
"value": "I have a small home automation lab (that I keep saying I'll expand, but haven't). In this setup, I have a control system to control lights (utilizing the x10 protocol), blinds, a Nest thermostat and two web cams.\nWith the recent record setting DDoS attacks utilizing unsecured IoT devices, I'd like to secure my small setup a bit. \nWhat can a home user do to secure their network while still maintaining the \"connect from anywhere\" aspect that is a big part of the marketing?"
},
{
"from": "gpt",
"value": "The absolute most common issue with IoT devices is default passwords. So change all the passwords. Pick a unique, random password for every device and write it down on paper (paper is safe from remote attackers and hard drive failures). 12 random (i.e. computer-generated) lowercase letters represent a good compromise between security and being hard to type. Each device should have a different password so that breaking one doesn't let the attacker break all of them. Enter the passwords in a password manager and use that password manager in the computers you use to control the devices.\n\nIf the device has different authorization channels, for example an administration password and a day-to-day usage password, use different passwords for both and only record the administration password on selected devices.\n\nThe second generic security measure is to ensure that all your devices are behind a firewall or at least a NAT device. A typical home router is sufficient, but you should turn off UPnP which can allow inadvertent back channels from the outside. The goal is to ensures that there's no direct way to connect from the Internet to the device. Connections should always go through a gateway that itself requires authentication to cross, and that you keep patched with any security updates.\n\nYou should also apply security updates on all the device if they exist at all, which can be a problem."
}
] |
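For the password advice above, here is a small sketch of generating the kind of 12-character random lowercase password the answer recommends. The device names are hypothetical, and in practice a password manager's own generator works just as well; this only illustrates the idea:

```
# Generate a random 12-letter lowercase password per device, using the
# cryptographically secure `secrets` module.
import secrets
import string

def device_password(length: int = 12) -> str:
    return "".join(secrets.choice(string.ascii_lowercase) for _ in range(length))

for device in ["router-admin", "camera-1", "camera-2", "thermostat"]:  # hypothetical devices
    print(f"{device}: {device_password()}")
```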
[
{
"from": "human",
"value": "What are objective advantages or disadvantages of using the markup language LaTeX instead of a WYSIWYG word processor like MS Word or LibreOffice Writer?\nPlease use objective arguments."
},
{
"from": "gpt",
"value": "LaTeX is better at:\n\n* Dealing with mathematical notation. Layout and entry are generally easier using LaTeX than some other sort of equation editor. \n* Consistent handling of intra-document references and bibliography. As of a couple of years ago the major WYSIWYG editors still had problems with re-numbering cross-references and bibliography items. This is never a problem with BibTeX or LaTeX. \n* Separation of content and style. In principle this means that you can write your document without caring how it is formatted, and at the end of the day wrap it in the style-file provided by the journal publisher before submission to conform to the house style. In practice some of the journal publishers demand special formatting commands that partially moots this process. Furthermore recent versions of Word and LibreOffice Writer, when properly used, should be able to keep track of various levels of section heading separate from the body text, and apply uniform styling to each level. The gap is somewhat closing. \n* Tables and illustrations. With PSTricks or TikZ, one can produce high quality illustrations within the document (though the learning curve is a bit steep there). And I've found LaTeX to be better at preparing complex tables. \n\nWYSIWYG (especially Word and Writer) is better at:\n\n* Collaborative editing. Without using an online site for collaborative LaTeX editing (such as ShareLaTeX), working collaboratively on a LaTeX file ideally requires some sort of separate revision control software. Word and Writer have very good comments/annotations and edit-tracking features. When a large number of authors are commenting on the writing of one file, this can be very useful. \n* Spell check. Admittedly most text editors one uses to edit TeX files also do spell check. But this is generally conveniently built into WYSIWYG editors. \n* Compatibility. Unless you work in mathematics, computer science, or physics (and sometimes even if you work in those fields), it is more likely that your collaborators will know what to do with a Word or Writer file than a LaTeX file. \n* Minimum barrier to entry. If you just care about getting the ideas down on paper, you can use a WYSIWYG editor just like a typewriter. Sure, it may be tedious to fix the formatting later on, compared to LaTeX where one need to first figure out how to setup a bare-minimum file before using, it may be an attractive point. \n\nA wash:\n\n* Most journals provide templates for both Word and LaTeX, so there's no real winner there. \n\nSuggestion: if you want to convince someone to start using LaTeX, start them out first in one of the WYSIWYG environments designed for LaTeX first (for example LyX). This will help somewhat to ameliorate the scary entry barrier. "
}
] |
[
{
"from": "human",
"value": "Could Gandalf not have made his own One Ring?"
},
{
"from": "gpt",
"value": "Possibly\nIn the foreword to The Lord of the Rings, Tolkien observes:\n\nSaruman, failing to get possession of the Ring, would in the confusion and treacheries of the time have found in Mordor the missing links in his own researches into Ring-lore, and before long he would have made a Great Ring of his own with which to challenge the self-styled Ruler of Middle-earth.\n\nTolkien therefore definitely conceives of Saruman as being capable of making a Ring, and therefore Gandalf would presumably have been likewise capable.\nHowever, what we can also take from that statement is that making a Ring is not a simple task; there is some "lore" involved, and we know from elsewhere that Saruman had made especial study of the Rings of Power, but yet Tolkien states that even he would have required knowledge found only in Mordor to complete his own work.\n\nSo, to summarize:\n\n* Yes, but,\n* It's neither quick nor easy, because,\n* There is considerable "lore" involved, and,\n* It requires hundreds or even thousands of years of dedicated study to get to a "Saruman-level" of "lore", and,\n* Even then there will be missing knowledge which may only be found in Mordor.\n* So for all practical purposes it's actually "No".\n"
}
] |
[
{
"from": "human",
"value": "It’s the year 2018, and you live in the good ol’ North American landmass. The fascist landmass. By this year, the dystopian N.A.F party controls all of the landmass and secret police prowl the streets armed with automatic rifles. Protest the rules and NAF makes you disappear -- permanently. \nOnto the subject\nAs you’ve seen in a lot of movies and whatnot, dystopian governments like to make people fit into a mandatory dress code. 1984 did it, a lot of other dystopian media did it, and so on. I plan to do the same, but I want to make my dystopian government a logical one, that only does what’s necessary to keep power. What is a logical reason why mandatory dress codes would be forced upon citizens?"
},
{
"from": "gpt",
"value": "**Control the Supply**\nThe citizens of your dystopia work in a government-controlled factory. They shop in a government-controlled shop with government-controlled prices. What does the shop have in stock? The products of more government-controlled factories, of course.\n\nIt is not illegal to wear non-standard clothes. It is illegal to buy or sell them (that would be black marketeering) and it is usually illegal to make them (that would be diversion of government-supplied raw materials). One might "repair" government-pattern clothes into non-standard shapes, e.g. cutting trousers short or making a skirt out of an old bedsheet.\n\nFew people make their own clothing. A sure sign they have too much time on their hands, why don't they join in a government-sponsored community activity instead?\n\nOf course there is "special store No. 0815" which is open to senior regime officials and their families. These officials work so long hours in their selfless service of the people that they cannot possibly stand in the waiting line of a normal store. The special store opens by appointment, e.g. right between the parade for the Dear Leader's birthday and the official reception. And it stocks suitable clothes for that reception."
}
] |
[
{
"from": "human",
"value": "When would one use an impact driver versus a regular drill?"
},
{
"from": "gpt",
"value": "Drills and impact drivers are similar in that they're both rotating power tools and they have some overlap in usage, but they work differently and have different strengths.\n\nDrills apply a constant torque and tend to have good control across a range of speeds and clutch settings. They're good for work that requires care or precision, like drilling a hole or driving a a smaller screw.\n\nImpact drivers provide much higher torque but in rapid bursts–very much like a hammer provides bursts of linear force. \n\nThis makes them good for heavy-duty jobs like driving long lag bolts (you can even take the lug nuts off your car wheels with most impact drivers), but the banging action makes them a little bit difficult to control. They're also not good for driving into something very soft–between the hammer strikes there's no torque applied and the bit can actually rotate backwards. You can see this by holding the chuck with your hands while the driver is on; you'll feel the pulsing of the strikes but chuck won't actually rotate. Finally, some impact drivers don't have a clutch, which means you need to be very careful if you're driving something that could be over-tightened or broken by the high torque.\n\nBesides the higher torque, impact drivers have another huge advantage in that you don't have to hold them tightly to provide the torque. The momentum of the rotating striker inside is what provides the rotation force, so even when you're driving something very tough it's quite easy to hold the tool. Drills on the other hand provide a constant torque and require the user to provide the reaction and prevent the tool from spinning.\n\nIf you have to pick a single all-purpose tool for general DIY use, a drill is much easier to control and is useful in a wider range of situations."
}
] |
[
{
"from": "human",
"value": "Alternative to Windows Snipping Tool for Mac OSX"
},
{
"from": "gpt",
"value": "Mac OS has a built-in snipping tool similar to the one in Windows:\n* ⌘+⇧+3 captures the whole screen\n* ⌘+⇧+4 captures a custom rectangle\n(click and drag over the screen region you want to capture)\n* ⌘+⇧+4 then space captures a specific window (move the mouse cursor over the desired window, then click)\n\nPress esc to cancel.\nScreenshots are saved onto your Desktop and timestamped.\nHolding control in addition to any of the sequences above causes the image to be copied instead of saved to the desktop.\nBy default the image format is png. However, you can change the format by typing in the Terminal:\n```defaults write com.apple.screencapture type image_format\nkillall SystemUIServer\n```\nWhere ```image_format``` is one of ```jpg```, ```tiff```, ```pdf```, ```png```, ```bmp``` or ```pict``` (among others). If you omit the second line, you will need to log out and in again for the change to take effect.\n\nThe settings for the shortcuts in the system preferences are located at:\nSystem Preferences → Keyboard → Keyboard Shortcuts → Screen Shots\nThe MacOS X application that enables screenshots is called Grab. It is located at ```/Applications/Utilities/Grab.app```"
}
] |
[
{
"from": "human",
"value": "What is the difference between kerning vs. letter spacing?"
},
{
"from": "gpt",
"value": "Any font has built-in spacing determined by the “side bearing” of each character. In metal type, the side bearing is the physical right or left edge of the individual piece of type that determines its spacing from the characters on either side. Digital fonts mimic this in the basic design process.\n\n* “To kern” means to adjust the spacing between a pair of letters, numerals, punctuation, etc. (properly “glyphs”) so that their spacing looks correct. The letters AV, for example, have to overlap in any usual typeface, otherwise at least one of them ends up looking lost and bewildered. Their kerning is tightened to snug them up to one another. An italic f will collide with a following question or quote mark in some fonts, so the kerning must be opened up from the default spacing.\n\nAn ancient scribe could kern effortlessly with pen and ink, and if you read old manuscripts you'll see lots of places where a letter intrudes into the space of the one next to it. With metal type, it wasn't so easy. Kerning required cutting away some of the metal base, or table, of the character. The amount of time, skill and work involved is left as an exercise for readers whose minds are in need of a little boggling. The shapes of the letters in metal type intended for long passages of text such as books, or especially newspapers, were designed to minimize the amount of kerning that would be necessary.\n\nKerning tables built into a digital font assign preset kerning values to specific letter pairs, according to the designer's best estimate of how they should look. There can be a dozen or so pairs (or none!) in a cheapo font, thousands in a high-end one. No matter how thoroughly and well a kerning table is built, though, there will still be situations where some awkward combination requires that the kerning be loosened or tightened from the preset values of the glyphs.\n\n* Letter spacing (often “tracking” in software applications) adjusts the spacing between all the glyphs in a piece of text. This can help to make a page look a little more open and inviting, for example, especially with a similarly open leading. Very large type, such as a big headline, almost always benefits from tightening the tracking. Tiny type, such as in captions or footnotes, is made more readable by opening the letter spacing a bit, especially if the paper is absorbent and will allow the ink to spread a little.\n\nA final note: Use caution with both kerning and tracking. Tracking is very often wildly overdone, kerning is seldom done enough."
}
] |
[
{
"from": "human",
"value": "I read somewhere that C♯ and D♭ actually differ 41 by cents from each other. As far as I know, there should be 2 semitones between C and D. Moreover, C♯ is one semitone above C and D♭ is one semitone below D. Therefore, C♯ and D♭ should be equivalent. If so, how can C♯ and D♭ actually differ by 41 cents from each other?"
},
{
"from": "gpt",
"value": "When we talk about the exact frequencies of each pitch class, we have to know the temperament, and a reference pitch. For example, 12-tone equal temperament (12TET) with A4=440Hz is a standard in modern music. From those two parameters, we can extrapolate the exact frequency of every possible note.\n\n12TET is nearly ubiquitous nowadays (at least in Western music), but it doesn't sound as clean as Just Intonation (JI). In essence, 12TET has made every key sound equally imperfect. JI creates a scale where the intervals in the primary chords are all very nice simple ratios, and so the chords ring out very cleanly, but it only works in that key. Important note: within a given JI tuning, each of the 12 pitch classes still has only one single frequency. There is no difference between C♯ and D♭ in, say, \"Pythagorean tuning based on A, with A=440Hz\".\n\nBut most music doesn't stay in one key. While a piano can't make pitch adjustments on the fly (which is why we've agreed to use 12TET for it), most instruments in an orchestra can. So when the piece is in A major, the orchestra will use JI and adjust C♯ to be a little flatter than it would be if using 12TET. But then if the piece modulates to F♯ minor, they'll start playing it slightly sharp.\n\nWhen people say that C♯ is not the same as D♭, what they really mean (whether they realize it or not) is that context may make for different micro-adjustments. In C major, a C♯ might be the third of an A major chord, perhaps a secondary dominant of the ii chord, while D♭ might be the root of the Neapolitan chord. These would result in different tuning choices."
}
] |
[
{
"from": "human",
"value": "Not sure if this is a Mozilla-specific JS syntax, but I often found variables being declared this way, for example, in add-on SDK docs:\n```var { Hotkey } = require(\"sdk/hotkeys\");\n```\nand in various chrome Javascript (```let``` statement is being used in place of ```var```),\n```let { classes: Cc, interfaces: Ci, results: Cr, utils: Cu } = Components;\n```\nI found it very confusing but I am not being able to find any documentation about both syntax, even on MDN."
},
{
"from": "gpt",
"value": "They're both JavaScript 1.7 features. The first one is block-level variables:\n\n ```let``` allows you to declare variables, limiting its scope to the block, statement, or expression on which it is used. This is unlike the ```var``` keyword, which defines a variable globally, or locally to an entire function regardless of block scope.\n\nThe second one is called destructuring:\n\n Destructuring assignment makes it possible to extract data from arrays or objects using a syntax that mirrors the construction of array and object literals.\n ...\n One particularly useful thing you can do with destructuring assignment is to read an entire structure in a single statement, although there are a number of interesting things you can do with them, as shown in the section full of examples that follows.\n\nFor those familiar with Python, it's similar to this syntax:\n```>>> a, (b, c) = (1, (2, 3))\n>>> a, b, c\n(1, 2, 3)\n```\nThe first code chunk is shorthand for:\n```var {Hotkey: Hotkey} = require(\"sdk/hotkeys\");\n// Or\nvar Hotkey = require(\"sdk/hotkeys\").Hotkey;\n```\nYou can rewrite the second code chunk as:\n```let Cc = Components.classes;\nlet Ci = Components.interfaces;\nlet Cr = Components.results;\nlet Cu = Components.utils;\n```"
}
] |
[
{
"from": "human",
"value": "When reading some documentation about the security of a product, I found that the vendor uses the SHA-2 of a password to encrypt data (AES-256), instead of using this password directly.\nAre there any advantages of doing so?\nAn attacker is not going to crack the encrypted data using this SHA-2-as-a-password key but rather exhaust the password keyspace (if feasible) and try its hash. Therefore the only reason I can think of is that there is an extra computational step (the creation of the hash). I would have rather increased the password entropy if the point is to computationally complexify the attack."
},
{
"from": "gpt",
"value": "It sounds like a primitive version of a key derivation function (KDF), in particular they probably could have avoided reinventing the wheel by using PBKDF2.\n\nThere are several reasons why you don't want to use the password directly as an AES key. \n\n* To distribute the bits. The main property here is that a hash function's output is, statistically speaking, uniformly distributed. People tend to pick passwords that aren't fully random, in particular, most passwords would only contain characters you can type in a keyboard. When used as an encryption key, a non-statistically random key may expose weaknesses in the encryption function.\n* To fit the keys to the encryption key length. Most passwords are going to be either longer or shorter than the key space of the encryption function. By hashing your password, the exact key length will be exactly the size of the input key of your encryption function. While the entropy of the derived key doesn't increase, this avoids the likelihood of exposing weakness in the encryption function if you just simply zero pad the password or worse truncate the password.\n* To slow down key derivation decryption. Per your description, the software is only using a single SHA256 round, which is not much. But with proper password based KDF, like PBKDF2, there are usually tens of thousands or hundreds of thousands of rounds of the underlying hash function. This slows down computing the keys, increasing the effective strength of passwords without increasing its length.\n* To keep the user's plain text password out of memory, thus preventing it from being accidentally dumped to disk during hibernation or crash dump. While this wouldn't protect the hash from being used to decrypt the data you're encrypting, it will prevent the password from being reused to decrypt other files (which presumably uses different salt) or being tried on your online accounts or other devices that you use.\n"
}
] |
[
{
"from": "human",
"value": "My online friend is asking for money in order to visit my home country. Is this a legit request or a scam?"
},
{
"from": "gpt",
"value": "The first rule for every such question is simple:\n\n If you have to ask, it's probably a scam\n\nWhile we cannot possibly examine every possible situation from every angle, online scams are simply far too common to trust someone you have known only on the Internet or even what appears to be a real friend or relative, who may have had their email accounts hacked or might be simply being impersonated. So the basic advice is: don't send out your private details, don't pay for anything and don't send any money for any purposes. \nSome common examples are:\n\n* love interest who wants to meet you.\n* 'supposed' friend in need.\n* prize/lottery.\n* inheritance of some distant relative.\n* person in distress asking you to take care of their financial assets.\n* some government/private fine/tax/loan/offer scam\n\nIf you know the person who seems to have send the email asking for money in person, contact them in an other way or have them answer a personal question only they know the answer to, before you accept their email as real.\nIn many countries there are organisations to whom you can report internet fraud or suspected internet fraud. They also offer advice about common scams and how to recognise them. \nIn the USA: The IC3 (part of the FBI) (further information from the FBI on romance scams)\nIn Canada: The Canadian Anti-Fraud Centre\nIn the UK: Action Fraud \nThe second rule for every such question is also simple:\n\n google this requirement\n\nA legitimate requirement, if exist, would be found on the relevant official sources, including the airline or Embassy web sites. For example, Indian nationals are required to show USD 500/week to enter Thailand. \nMany countries also require visa applicants to show means of financial resources. However cash is rarely accepted as such evidence (since one can just borrow it for a day), and depositing this cash in applicant's bank account a day before applying for a visa usually backfires."
}
] |
[
{
"from": "human",
"value": "When converting from RGB to grayscale, it is said that specific weights to channels R, G, and B ought to be applied. These weights are: 0.2989, 0.5870, 0.1140.\nIt is said that the reason for this is different human perception/sensibility towards these three colors. Sometimes it is also said these are the values used to compute NTSC signal.\nHowever, I didn't find a good reference for this on the web. What is the source of these values?"
},
{
"from": "gpt",
"value": "If you convert RGB -> grayscale with slightly different numbers / different methods, you won't see much difference at all on a normal computer screen under normal lighting conditions -- try it.\n\n### RGB -> ^gamma -> Y -> L*\nIn color science, the common RGB values, as in html rgb( 10%, 20%, 30% ),\nare called "nonlinear" or\nGamma corrected.\n"Linear" values are defined as\n```Rlin = R^gamma, Glin = G^gamma, Blin = B^gamma\n```\nwhere gamma is 2.2 for many PCs.\nThe usual R G B are sometimes written as R' G' B' (R' = Rlin ^ (1/gamma))\n(purists tongue-click) but here I'll drop the '.\nBrightness on a CRT display is proportional to RGBlin = RGB ^ gamma,\nso 50% gray on a CRT is quite dark: .5 ^ 2.2 = 22% of maximum brightness.\n(LCD displays are more complex;\nfurthermore, some graphics cards compensate for gamma.)\nTo get the measure of lightness called ```L*``` from RGB,\nfirst divide R G B by 255, and compute\n```Y = .2126 * R^gamma + .7152 * G^gamma + .0722 * B^gamma\n```\nThis is ```Y``` in XYZ color space; it is a measure of color "luminance".\n(The real formulas are not exactly x^gamma, but close;\nstick with x^gamma for a first pass.)\nFinally,\n\n```L* = 116 * Y ^ 1/3 - 16\n```\n"... aspires to perceptual uniformity [and] closely matches human perception of lightness." --\nWikipedia Lab color space"
}
] |
[
{
"from": "human",
"value": "¿Cuál es la diferencia entre echo, print, print_r, var_dump y var_export en PHP?"
},
{
"from": "gpt",
"value": "Aquí hay una descripción general de estas funciones de PHP:\n\n###echo\n\n* Muestra una o más cadenas separadas por comas\n* No tiene un valor de retorno\n* Ejemplo:\n```echo \"Cadena 1\", \"Cadena 2\";\n```\n\n###print\n\n* Muestra solo una simple cadena\n* Devuelve 1, por lo cual puede ser usada en una expresión\n* Ejemplos:\n```print \"Hello\";\nif($expresion && print \"Cadena\"){\n // Hacer algo\n}\n```\n\n###print_r()\n\n* Muestra una representación más entendible de un solo valor cualquiera\n* No solo acepta cadenas, también acepta arreglos y objetos formateándolos para ser visualizados de una forma más entendible\n* Puede devolver su valor de salida como un valor de retorno si le pasa ```true``` como su segundo argumento\n* Útil para la depuración\n\nEjemplo: Si tenemos el siguiente arreglo:\n```$a = array ('a' => 'manzana', 'b' => 'banana', 'c' => array ('x', 'y', 'z'));\nprint_r ($a);\n```\nNuestro ```print_r($a);``` nos devolverá algo como esto:\n```Array\n(\n [a] => manzana\n [b] => banana\n [c] => Array\n (\n [0] => x\n [1] => y\n [2] => z\n )\n)\n```\n\n###var_dump()\n\n* Muestra una representación más entendible de un valor cualquiera o más separados por comas\n* No solo acepta cadenas, también acepta arreglos y objetos formateándolos para ser visualizados de una forma más entendible\n* Usa un formato diferente al anterior ```print_r()```, por ejemplo, también muestra el tipo del valor\n* Útil para la depuración\n* No tiene un valor de retorno\n\nEjemplo: Si tenemos el siguiente arreglo:\n```$a = array(1, 2, array(\"a\", \"b\", \"c\"));\nvar_dump($a);\n```\nNuestro ```var_dump($a);``` nos devolverá algo como esto: (nótese que nos muestra el tipo de valor de cada elemento en nuestro arreglo)\n```array(3) {\n [0]=>\n int(1)\n [1]=>\n int(2)\n [2]=>\n array(3) {\n [0]=>\n string(1) \"a\"\n [1]=>\n string(1) \"b\"\n [2]=>\n string(1) \"c\"\n }\n}\n```\n\n###var_export()\n\n* Muestra una representación más entendible y ejecutable de un valor cualquiera\n* No solo acepta cadenas, también acepta arreglos y objetos formateándolos para ser visualizados de una forma más entendible\n* Usa un formato de salida diferente de ```var_dump()``` y ```print_r()```, la salida es un código de PHP válido\n* Útil para la depuración\n* Puede devolver su valor de salida como un valor de retorno si le pasa ```true``` como su segundo argumento\n\nEjemplo, si realizamos lo siguiente\n```class A {\n public $var;\n}\n$a = new A;\n$a->var = 5;\nvar_export($a);\n```\nObtendremos como resultado:\n```A::__set_state(array(\n 'var' => 5,\n))\n```\n\n###Notas\n\n* ```echo``` es mucho más rápido que ```print```\n* ```echo``` y ```print``` son constructores de lenguaje\n* ```print_r()``` y ```var_dump()``` son funciones\n* ```echo``` y ```print``` pueden recibir sus argumentos mediante paréntesis como una función ```echo(\"Hello\", \"world\");``` ó ```print(\"Hello World\");```\n"
}
] |
[
{
"from": "human",
"value": "In Android, I defined an ```ImageView```'s ```layout_width``` to be ```fill_parent``` (which takes up the full width of the phone).\nIf the image I put to ```ImageView``` is bigger than the ```layout_width```, Android will scale it, right? But what about the height? When Android scales the image, will it keep the aspect ratio?\nWhat I find out is that there is some white space at the top and bottom of the ```ImageView``` when Android scales an image which is bigger than the ```ImageView```. Is that true? If yes, how can I eliminate that white space?"
},
{
"from": "gpt",
"value": "* Yes, by default Android will scale your image down to fit the ImageView, maintaining the aspect ratio. However, make sure you're setting the image to the ImageView using ```android:src="..."``` rather than ```android:background="..."```. ```src=``` makes it scale the image maintaining aspect ratio, but ```background=``` makes it scale and distort the image to make it fit exactly to the size of the ImageView. (You can use a background and a source at the same time though, which can be useful for things like displaying a frame around the main image, using just one ImageView.)\n\n* You should also see ```android:adjustViewBounds``` to make the ImageView resize itself to fit the rescaled image. For example, if you have a rectangular image in what would normally be a square ImageView, adjustViewBounds=true will make it resize the ImageView to be rectangular as well. This then affects how other Views are laid out around the ImageView.\n\nThen as Samuh wrote, you can change the way it default scales images using the ```android:scaleType``` parameter. \n\nBy the way, the easiest way to discover how this works would simply have been to experiment a bit yourself! Just remember to look at the layouts in the emulator itself (or an actual phone) as the preview in Eclipse is usually wrong.\n\n"
}
] |
[
{
"from": "human",
"value": "I want to compare 2 arrays of objects in JavaScript code. The objects have 8 total properties, but each object will not have a value for each, and the arrays are never going to be any larger than 8 items each, so maybe the brute force method of traversing each and then looking at the values of the 8 properties is the easiest way to do what I want to do, but before implementing, I wanted to see if anyone had a more elegant solution. Any thoughts?"
},
{
"from": "gpt",
"value": "As serialization doesn't work generally (only when the order of properties matches: ```JSON.stringify({a:1,b:2}) !== JSON.stringify({b:2,a:1})```) you have to check the count of properties and compare each property as well:\n\n```const objectsEqual = (o1, o2) =>\n Object.keys(o1).length === Object.keys(o2).length\n && Object.keys(o1).every(p => o1[p] === o2[p]);\nconst obj1 = { name: 'John', age: 33};\nconst obj2 = { age: 33, name: 'John' };\nconst obj3 = { name: 'John', age: 45 };\nconsole.log(objectsEqual(obj1, obj2)); // true\nconsole.log(objectsEqual(obj1, obj3)); // false```\n\nIf you need a deep comparison, you can call the function recursively:\n\n```const obj1 = { name: 'John', age: 33, info: { married: true, hobbies: ['sport', 'art'] } };\nconst obj2 = { age: 33, name: 'John', info: { hobbies: ['sport', 'art'], married: true } };\nconst obj3 = { name: 'John', age: 33 };\nconst objectsEqual = (o1, o2) =>\n typeof o1 === 'object' && Object.keys(o1).length > 0\n ? Object.keys(o1).length === Object.keys(o2).length\n && Object.keys(o1).every(p => objectsEqual(o1[p], o2[p]))\n : o1 === o2;\nconsole.log(objectsEqual(obj1, obj2)); // true\nconsole.log(objectsEqual(obj1, obj3)); // false```\n\nThen it's easy to use this function to compare objects in arrays:\n```const arr1 = [obj1, obj1];\nconst arr2 = [obj1, obj2];\nconst arr3 = [obj1, obj3];\nconst arraysEqual = (a1, a2) =>\n a1.length === a2.length && a1.every((o, idx) => objectsEqual(o, a2[idx]));\nconsole.log(arraysEqual(arr1, arr2)); // true\nconsole.log(arraysEqual(arr1, arr3)); // false\n```"
}
] |
[
{
"from": "human",
"value": "I'm working with another developer on a project, and we're using Github as our remote repo. I'm on a Mac using git 1.7.7.3, he's on Windows using git 1.7.6.\nThis is what's happening\n\n* One of us (let's call him developer A, but it doesn't matter which one) pushes a set of commits to GitHub.\n* The other (developer B) makes some local commits.\n* B does a ```git pull```.\n* B does a ```git push```.\n* Looking at the commit history log, I see Merge branch 'master' of github.com:foo/bar\n\nThe commit log gets littered with \"Merge branch\" messages over time, and also shows developer B as committing changes that developer A made. The only way we've found to prevent this issue has been to do a ```git pull --rebase``` at step 3, but I don't know what side effects rebasing will introduce. This is my first time working on a multi-developer git repo, so is this just normal behavior? Any thoughts on how to solve this issue?"
},
{
"from": "gpt",
"value": "The commit you are seeing is perfectly fine. A ```pull``` effectively runs ```git fetch``` and then ```git merge``` so a merge is usually happening when you run ```git pull```.\n\nThe alternative to use rebasing instead of merging is possible, but usually you should avoid it. Rebasing allows you to keep a linear history, but also removes any information about the branching that originally happened. It will also cause the history of the current branch being rewritten, recreating all commits that are not contained in the target branch (in your case, the remote). As the recreated commits are different commits, this can cause a lot of confusion when developing together with others, especially when people already checked out parts of those commits before they get rewritten (for example with feature branches). So as a rule of thumb, you should never rewrite any commit that was already pushed.\n\nThe commits you see are there to combine two (or more) branches. It is perfectly fine to have a commit that does nothing else then merging multiple branches. In fact it makes it very clear when you have a merge commit that combines branches when looking at the history. In comparison to rebasing, merging also allows you to effectively see the original history as it was developed, including the actual branches that coexisted.\n\nSo, long story short: Yes, having merge commits is perfectly fine and you should not worry about them."
}
] |
[
{
"from": "human",
"value": "You are asleep on your boat on open sea. When you wake up you discover you have been robbed. A quick inspection of security cam footage reveals that the pirates who robbed you left your ship exactly an hour ago. The sea is flat, extends indefinitely, and is fully covered in a thick persistent fog. You have no idea in which direction the pirates fled. But you do know that these pirates always continue in a straight line at full speed away from their victim. Their maximum speed on open water is 20 nautical miles per hour. Your boat can reach 21 nautical miles per hour. \nHow do you catch the pirates?"
},
{
"from": "gpt",
"value": "If we assume the ocean is flat and extends indefinitely in all directions, there is a strategy that guarantees we can catch the pirates in at most 800,000 years.\n\nPut our current location as the origin of a coordinate system. We will describe our position in polar coordinates, as a function of time: $(r(t),\\theta(t))$ (where we have arbitrarily chosen a direction to be $\\theta=0$, and $t=0$ is when we realized we had been robbed).\n\nWe begin by traveling in the $\\theta=0$ direction for 20 hours, putting our position at $(420, 0)$. We are then the same distance from the origin as the pirates. Next, we will travel in a spiral, in a manner so that $r'(t)=20$ at all times. This guarantees we will always be the same distance from the origin as the pirates. For $t\\geq 20$, we will have $r(t)=420+20(t-20)=20t+20$.\n\nOur speed is\n$$\n\\sqrt{(r')^2+r^2(\\theta')^2} = 21\\text{ mph},\n$$\nand $r'(t)=20$ for $t>20$, so\n$$\n\\theta'(t)=\\sqrt{\\frac{41}{r^2}}=\\frac{\\sqrt{41}}{20+20t}.\n$$\nIf there is a $t\\geq 20$ for which $\\theta(t)$ is the angle in which the pirates fled, we will catch them. This means we will certainly catch the pirates by the time $\\theta$ has increased from $0$ to $2\\pi$. If $t_0$ is the time this happens, we have\n$$\n2\\pi=\\int_{20}^{t_0}\\theta'(t)\\,dt=\\int_{20}^{t_0}\\frac{\\sqrt{41}}{20+20t}dt.\n$$\nSolving for $t_0$ gives\n$$\nt_0=21\\mathrm{exp}\\left(\\frac{40\\pi}{\\sqrt{41}}\\right)-1\\approx 7,005,043,026.\n$$\nThis means we can catch the pirates in at most 7,005,043,026 hours, or about 800 millennia. Better later than never!"
}
] |
[
{
"from": "human",
"value": "Was ist der Unterschied zwischen den Fragewörtern \"wieso\", \"weshalb\" und \"warum\"? "
},
{
"from": "gpt",
"value": "Es gibt im heutigen Deutschen keinen semantischen Unterschied zwischen den drei Frageadverbien (genauer: Kausalfrageadverbien) warum, weshalb und wieso. Die drei Wörter sind also Synonyme und somit beliebig gegeneinander austauschbar.\nWenn man sehr feinsinnig ist, könnte man im Ursprung der drei Wörter folgende Unterscheidung finden:\n\n* warum als Frage nach dem Grund oder dem Motiv\n\n Warum freut sich die Frau?\n Weil sie im Lotto gewonnen hat.\n\n* wieso als Frage nach der Ursache\n\n Wieso fällt ein Apfel auf den Boden?\n Weil er der Schwerkraft ausgesetzt ist.\n\n* weshalb als Frage nach dem Zweck\n\n Weshalb besucht er die Schule?\n Weil er etwas lernen möchte.\n\n\nEtymologie\n\n* warum\nbezeugt im Althochdeutschen wār umbe über Mittelhochdeutschen warumbe aus wār wo und umbe um. Gleichgestellt zu worum; Quelle: Grimmsches Wörterbuch\n* wieso\nÜbertragung aus dem lateinischen qui sic? → wie so? alt. Ital. per che?, modern Ital.: perché? = per che ragione? (\"aus welchem Grund\"); Quelle: Grimmsches Wörterbuch\n* weshalb\naus welchem Grunde – Verschmelzung der Präposition halb(en) (wegen) mit dem vorangestellten Genitiv des Pronomens was; Quelle: Grimmsches Wörterbuch \n\nAllerdings sind die Grenzen zwischen den Begriffen häufig verwischt und in der Alltagssprache wird diese Unterscheidung kaum wahrgenommen."
}
] |