id | url | title | author | markdown | downloaded | meta_extracted | parsed | description | filedate | date | image | pagetype | hostname | sitename | tags | categories |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
8,771,954 | http://www.appointfix.com/ | Goldie: The Best Free Appointment Scheduling Software | Goldie; Admin | # Schedule
like a pro.
## The beauty of booking with ease
## Goldie's got everything you need
A better scheduling app, as rated by other independent professionals
## 4.8
## *****
## 4.6
## *****
## 4.9
## *****
## Empowering entrepreneurs in
beauty and beyond
On the Starter plan, you will be limited to 20 appointments / month. If you decide to upgrade to Pro or Pro Plus, you will be able to schedule unlimited appointments in the Goldie app.
Yes, you can use the Goldie booking app on unlimited devices. This flexibility allows you and your team to manage appointments from any location, ensuring you never miss important details related to your business.
Yes, Goldie’s Starter plan is totally free, and it offers basic functionalities like appointment scheduling, unlimited clients, online booking, or manual text appointment reminders.
Yes, Goldie is available worldwide. This means you can book appointments and manage clients, no matter where you are located. | true | true | true | Grow your business and reduce no-shows with automated text reminders, online booking, and payment solutions. It's so easy to use, you'll enjoy scheduling! | 2024-10-12 00:00:00 | 2022-09-10 00:00:00 | website | heygoldie.com | Goldie | null | null |
|
24,502,293 | https://github.com/gnebehay/parser | GitHub - gnebehay/parser: A simple parser for mathematical expressions. | Gnebehay | This repository contains a parser for simple mathematical expressions
of the form `2*(3+4)`
written in 92 lines of Python code.
No dependencies are used except for what can be found in the Python standard library.
It exists solely for educational reasons.
Run
```
python3 compute.py '2*(3+4)'
```
and you should receive `14`
as the result.
This is perhaps not so surprising, but you can also run
```
python graphviz.py '2*(3+4)' > graphviz_input
dot -Tpng graphviz_input -o output.png
```
to get a visual representation of the abstract syntax tree (this requires having Graphviz installed).
@nicolaes ported this project to TypeScript and used it as a DSL for permissions.
To my personal surprise, it seems that most mainstream languages nowadays are being parsed using handwritten parsers, meaning that no compiler generation tools such as ANTLR are used. Since a basic building block of almost all programming languages are math expressions, this repository explores building a handwritten parser for these simple math expressions.
This particular problem must have been solved millions of times by undergrad computer science students all around the world. However, it has not been solved by me until this date, as in my undergrad studies at TU Vienna we were skipping the low-level work and built a parser based on yacc/bison. I really enjoyed doing this small side project because it takes you back to the roots of computer science (this stuff dates back to 1969, according to Wikipedia) and I like a lot how you end up with a beautiful and simple solution.
Be aware that I am by no means an expert in compiler construction and someone who is would probably shudder at some of the things happening here, but to me it was a nice educational exercise.
The literature regarding this topic is very formal, which makes it a bit hard to get into the topic for an uninitiated person. In this description, I have tried to focus more on intuitive explanations. However, to me it is quite clear that if you don't stick to the theory, then you will soon run into things that are hard to make sense of if you cannot connect it to what's going on in the literature.
The problem is to bring algebraic expressions represented as a string
into a form that can be easily reused for doing something interesting with it,
such as computing the result of the expression or visualizing it nicely.
The allowed algebraic operations are `+,-,*,/` as well as using (nested) parentheses `( ... )`. The standard rules for operator precedence apply.
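Before any parsing can happen, the input string has to be turned into a stream of tokens. The later snippets in this document assume an interface with names like `TokenType.T_NUM` and a list of token objects consumed via `tokens.pop(0)`; the following is a minimal, self-contained sketch consistent with that interface (an assumption for illustration; the repository's own tokenizer may be organised differently):

```
import enum
import re


class TokenType(enum.Enum):
    T_NUM = 0
    T_PLUS = 1
    T_MINUS = 2
    T_MULT = 3
    T_DIV = 4
    T_LPAR = 5
    T_RPAR = 6
    T_END = 7   # the '$' end-of-input marker


class Token:
    def __init__(self, token_type, value=None):
        self.token_type = token_type
        self.value = value
        self.children = []  # lets tokens double as AST nodes later on


def tokenize(expression):
    mapping = {'+': TokenType.T_PLUS, '-': TokenType.T_MINUS,
               '*': TokenType.T_MULT, '/': TokenType.T_DIV,
               '(': TokenType.T_LPAR, ')': TokenType.T_RPAR}
    tokens = []
    for part in re.findall(r'\d+|[()+\-*/]', expression):
        if part.isdigit():
            tokens.append(Token(TokenType.T_NUM, int(part)))
        else:
            tokens.append(Token(mapping[part]))
    tokens.append(Token(TokenType.T_END))
    return tokens
```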
There are different ways how this problem can be tackled, but in general LL(1) parsers have a reputation for being very simple to implement.
An LL(1) parser is a top-down parser that keeps replacing elements on the parser stack with the right-hand side of the currently matching grammar rule. This decision is based on two pieces of information:
- The top symbol on the parser stack, which can be either a terminal or a non-terminal. A terminal is a token that appears in the input, such as `+`, while a non-terminal is the left-hand side of a grammar rule, such as `Exp`.
- The current terminal from the input stream that is being processed.
For example, if the current symbol on the stack is `S`
and the current input terminal is `a`
and there is a rule in the grammar that allows
```
S -> a P
```
then `S` should be replaced with `a P`.
Here, `S`
and `P`
are non-terminals, and for the remainder of this document,
capitalized grammar elements are considered non-terminals,
and lower-case grammar elements, such as `a`, are considered terminals.
To continue the example, `a`
on top of the stack is now matched to the input stream terminal `a`
and removed from the stack.
The process continues until the stack is empty (which means the parsing was successful)
or an error occurs (which means that the input stream doesn't conform to the grammar).
As there are usually multiple grammar rules to choose from, the information which rule to apply in which situation needs to be encoded somehow and is typically stored in a parsing table. In our case however the grammar is so simple that this would almost be an overkill and so instead the parsing table is represented by some if-statements throughout the code.
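To make the "parsing table represented by if-statements" remark concrete, this is roughly what a fully explicit, table-driven LL(1) loop looks like. The sketch below is deliberately generic and purely illustrative; the repository never builds such a table, and the function and parameter names here are assumptions, not code from the project:

```
def ll1_parse(tokens, table, terminals, start_symbol):
    """Generic table-driven LL(1) driver.

    tokens:       list of terminal symbols ending with '$'
    table:        maps (non_terminal, lookahead) -> right-hand side (a list);
                  an epsilon rule is simply the empty list
    terminals:    set of terminal symbols
    start_symbol: the grammar's start non-terminal
    """
    stack = ['$', start_symbol]
    pos = 0
    while stack:
        top = stack.pop()
        lookahead = tokens[pos]
        if top in terminals or top == '$':
            if top != lookahead:
                raise SyntaxError(f'expected {top!r}, got {lookahead!r}')
            pos += 1  # matched: consume one input terminal
        else:
            rhs = table.get((top, lookahead))
            if rhs is None:
                raise SyntaxError(f'no rule for {top!r} with lookahead {lookahead!r}')
            # Push the right-hand side in reverse so its first symbol ends up on top.
            stack.extend(reversed(rhs))
    return True  # stack empty and '$' matched: the input conforms to the grammar
```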
Here is the starting point for our grammar:
```
(1) Exp -> Exp [ + | - | * | / ] Exp
(2) Exp -> ( Exp )
(3) Exp -> num
```
The grammar is rather self-explanatory.
It is however ambiguous, because it contains a rule of the form `NtN`.
This means that it is not defined yet whether `2+3*4`
should be interpreted
as `2+3=5`
followed by `5*4=20`
or as `3*4=12` followed by `2+12=14`.
By cleverly re-writing the grammar, the operator precedence can be encoded in the grammar.
```
(1) Exp -> Exp [ + | - ] Exp2
(2) Exp -> Exp2
(3) Exp2 -> Exp2 [ * | / ] Exp3
(4) Exp2 -> Exp3
(5) Exp3 -> ( Exp )
(6) Exp3 -> num
```
For the previous example `2+3*4`
the following derivations would be used from now on:
```
Exp
(1) Exp + Exp2
(2) Exp2 + Exp2
(4) Exp3 + Exp2
(6) num + Exp2
(3) num + Exp2 * Exp3
(4) num + Exp3 * Exp3
(6) num + num * Exp3
(6) num + num * num
```
Compare this to the derivation of `3*4+2`
```
Exp
(1) Exp + Exp2
(2) Exp2 + Exp2
(3) Exp2 * Exp3 + Exp2
(4) Exp3 * Exp3 + Exp2
(6) num * Exp3 + Exp2
(6) num * num + Exp2
(4) num * num + Exp3
(6) num * num + num
```
We see that in both examples the order in which the rules for the operators
`+`
and `*`
are applied is the same.
It is perhaps slightly confusing that `+`
appears first,
but if you look at the resulting parse tree you can convince yourself that
the result of `*`
flows as an input to `+`
and therefore it needs to be computed first.
Here, I used a left-most derivation of the input stream.
This means that you would always try to replace the left-most symbol next
(which corresponds to the symbol on the top of the stack),
and not something in the middle of your parse tree.
This is what one `L`
in LL(1) actually stands for, so this is also how our parser will operate.
However, there is one more catch.
The grammar we came up with is now non-ambiguous, but still it cannot be parsed by an LL(1) parser,
because multiple rules start with the same non-terminal
and the parser would need to look ahead more than one token to figure out which rule to apply.
Indeed, for the example above you have to look ahead more than one rule
to figure out the derivation yourself.
As the `1`
in LL(1) indicates, LL(1)-parsers only look ahead one symbol.
Luckily, one can make the grammar LL(1)-parser-friendly by rewriting all the left recursions
in the grammar rules as right recursions.
```
(0) S -> Exp $
(1) Exp -> Exp2 Exp'
(2) Exp' -> [ + | - ] Exp2 Exp'
(3) Exp' -> ϵ
(4) Exp2 -> Exp3 Exp2'
(5) Exp2' -> [ * | / ] Exp3 Exp2'
(6) Exp2' -> ϵ
(7) Exp3 -> num
(8) Exp3 -> ( Exp )
```
Here, `ϵ`
means that the current symbol of the stack should be just popped off,
but not be replaced by anything else.
Also, we added another rule `(0)`
that makes sure
that the parser understands when the input is finished.
Here, `$`
stands for end of input.
While we are not going to use an explicit parsing table, we still need to know its contents so that the parser can determine which rule to apply next. To simplify the contents of the parsing table, I will use one little trick that I discovered while implementing the whole thing and that is:
*If there is only one grammar rule for a particular non-terminal,
just expand it without caring about what is on the input stream.*
This is a bit different from what you find in the literature,
where you are instructed to only expand non-terminals if the current terminal permits it.
In our case, this means that the non-terminals `S, Exp`
and `Exp2`
will be expanded no matter what.
For the other non-terminals, it is quite clear which rule to apply:
```
+ -> rule (2)
- -> rule (2)
* -> rule (5)
/ -> rule (5)
num -> rule (7)
( -> rule (8)
```
Note that the rules can only be applied when the current symbol on the stack is fitting to the
left-hand side of the grammar rule.
For example, rule `(2)`
can only be applied if currently `Exp'`
is on the stack.
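Written down as an actual table, the same decisions could be captured in a dictionary keyed by the stack symbol and the lookahead terminal. This is an illustration only, not code from the repository:

```
# (non-terminal on top of the stack, lookahead terminal) -> grammar rule to apply.
# S, Exp and Exp2 are absent because they are expanded unconditionally, and the
# epsilon rules (3) and (6) are chosen via FOLLOW sets, as discussed next.
PARSE_TABLE = {
    ("Exp'", '+'): 2,
    ("Exp'", '-'): 2,
    ("Exp2'", '*'): 5,
    ("Exp2'", '/'): 5,
    ('Exp3', 'num'): 7,
    ('Exp3', '('): 8,
}
```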
Since we also have some rules that can be expanded to `ϵ`, we need to figure out when that should actually happen.
For this it is necessary to look at what terminal appears *after* a nullable non-terminal.
The nullable non-terminals in our case are `Exp'` and `Exp2'`. `Exp'` is followed by `)` and `$`, and `Exp2'` is followed by `+, -, )` and `$`.
So whenever we encounter `)`
or `$`
in the input stream while `Exp'`
is on top of the stack,
we just pop `Exp'`
off and move on.
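In code, that FOLLOW-set check is all that is needed to choose between rules (2) and (3). Here is a hedged sketch of what a dedicated `parse_e_prime()` could look like, assuming the `Token`/`TokenType` names sketched earlier and the `parse_e2()` shown further down (the repository itself folds this logic into `parse_e()` instead):

```
def parse_e_prime(tokens):
    """Parse Exp' and return a list of (operator_token, exp2_node) pairs."""
    # Exp' is nullable: FOLLOW(Exp') = { ')', '$' }. On those lookaheads we
    # apply rule (3), Exp' -> epsilon, i.e. produce nothing at all.
    if tokens[0].token_type in [TokenType.T_RPAR, TokenType.T_END]:
        return []
    # Otherwise the lookahead must be '+' or '-': rule (2), Exp' -> [ + | - ] Exp2 Exp'.
    op = tokens.pop(0)
    operand = parse_e2(tokens)
    return [(op, operand)] + parse_e_prime(tokens)
```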
A nice thing about LL(1) parsing is that you can just use the call stack for keeping track
of the current non-terminal.
So in the Python implementation, you will find for the non-terminal `Exp` a function `parse_e()` that in turn calls `parse_e2()`, corresponding to `Exp2`.
In previous versions of this repository (e.g. commit `f1dcad8`), each non-terminal corresponded to exactly one function call. However, since many of those function calls were just passing variables around, it seemed to make sense to refactor the code, and now only `parse_e()`, `parse_e2()` and `parse_e3()` are left.
A look at the function `parse_e3()`
shows us how to handle terminals:
```
def parse_e3(tokens):
if tokens[0].token_type == TokenType.T_NUM:
return tokens.pop(0)
match(tokens, TokenType.T_LPAR)
e_node = parse_e(tokens)
match(tokens, TokenType.T_RPAR)
return e_node
```
Here, it is checked whether the current token from the input stream is a number.
If it is, we consume the input token directly without putting it on some intermediate stack.
This corresponds to rule `(7)`. If it is not a number, it must be a `(`, so we try to consume this instead (the function `match()` raises an exception if the expected and the incoming tokens are different).
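The `match()` helper itself is not part of the excerpt; a minimal version consistent with how it is used above could look like this (the repository's own implementation may differ, for instance in how it reports errors):

```
def match(tokens, expected_type):
    # Consume and return the next token if it has the expected type;
    # otherwise the input does not conform to the grammar.
    if tokens[0].token_type == expected_type:
        return tokens.pop(0)
    raise SyntaxError(f'Expected {expected_type}, got {tokens[0].token_type}')
```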
The abstract syntax tree (AST) can be constructed on the fly during parsing.
The trick here is to only include those elements that are interesting
(in our case `num, +, -, *, /`) and skip over all the elements that are only there for grammatical reasons.
One thing you might find worthwhile to try is to start with the concrete syntax tree that includes all the elements of the grammar and kick out things that you find are useless. Keeping things visualized definitely helps with this.
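Once the tree only contains numbers and operators, "doing something interesting" such as computing the result is a short recursive walk. A sketch, assuming the `Token`/`TokenType` definitions from earlier (the repository's `compute.py` may be organised differently):

```
def evaluate(node):
    # Leaves are number tokens and evaluate to their own value.
    if node.token_type == TokenType.T_NUM:
        return node.value
    # Inner nodes are operators with exactly two children: left and right operand.
    left = evaluate(node.children[0])
    right = evaluate(node.children[1])
    operations = {
        TokenType.T_PLUS: lambda a, b: a + b,
        TokenType.T_MINUS: lambda a, b: a - b,
        TokenType.T_MULT: lambda a, b: a * b,
        TokenType.T_DIV: lambda a, b: a / b,
    }
    return operations[node.token_type](left, right)
```

Evaluating the tree built from `2*(3+4)` this way yields `14`, matching the usage example at the top.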
All of the standard math operators are left-associative, meaning that `3+2+1` should be interpreted as `((3+2)+1)`. For addition, getting this right is not super-crucial, as addition is associative anyway, so the grouping does not change the result. However, once you start playing around with subtractions (or divisions) this becomes really important, as you definitely want `3-2-1` to evaluate to `0` and not to `(3-(2-1))=2`.
Indeed, this aspect is something that I overlooked in the first version.
Interestingly, in vanilla LL(1) parsing there is no support for left recursion.
As you saw before, we actually had to rewrite all left recursions using right recursions.
However, left-associativity essentially means using left recursions and right-associativity means
using right recursions.
If you just blindly use right recursions like I did,
then suddenly all your operators will be right-associative.
Let's look at two different ASTs for the expression `3-2-1`.
This is the default AST, implementing right-associativity.
You can recreate this behaviour and also the picture by going back to commit `14e9b79`.
This is the desired AST, implementing left-associativity.
How can we now implement left-associativity?
The key insight here is that something needs to be done whenever you have two or more
operators of the same precedence level in a row.
So whenever we parse a `-` or `+` operation and the next token to be processed is also either `-` or `+`, then we should actually be using left recursion.
This requires us to step outside of the `LL(1)`
paradigm for a moment and piece together the
relevant subtree differently, for example like so:
```
def parse_e(tokens):
left_node = parse_e2(tokens)
while tokens[0].token_type in [TokenType.T_PLUS, TokenType.T_MINUS]:
node = tokens.pop(0)
node.children.append(left_node)
node.children.append(parse_e2(tokens))
left_node = node
return left_node
```
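Applied one precedence level down, the same while-loop pattern gives a `parse_e2()` along the following lines (a sketch that mirrors the function above; the repository's actual code may differ in details):

```
def parse_e2(tokens):
    left_node = parse_e3(tokens)
    while tokens[0].token_type in [TokenType.T_MULT, TokenType.T_DIV]:
        node = tokens.pop(0)                    # the '*' or '/' token
        node.children.append(left_node)         # everything parsed so far becomes the left child
        node.children.append(parse_e3(tokens))  # the next Exp3 becomes the right child
        left_node = node                        # keep growing the tree towards the left
    return left_node
```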
The same is done in `parse_e2()`
for getting the associativity of multiplication and division right. | true | true | true | A simple parser for mathematical expressions. Contribute to gnebehay/parser development by creating an account on GitHub. | 2024-10-12 00:00:00 | 2020-09-17 00:00:00 | https://opengraph.githubassets.com/5b73ec06f5239b54395958364bee03ceac3a7ec2fe3758c46fd51af01914d16e/gnebehay/parser | object | github.com | GitHub | null | null |
1,554,732 | http://blog.dohop.com/index.php/2010/07/28/over-educated-programmer-does-online-marketing/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
29,230,532 | https://spectrum.ieee.org/the-case-against-quantum-computing | The Case Against Quantum Computing | Mikhail Dyakonov | # The Case Against Quantum Computing
## The proposed strategy relies on manipulating with high precision an unimaginably huge number of variables
Quantum computing is all the rage. It seems like hardly a day goes by without some news outlet describing the extraordinary things this technology promises. Most commentators forget, or just gloss over, the fact that people have been working on quantum computing for decades—and without any practical results to show for it.
We've been told that quantum computers could “provide breakthroughs in many disciplines, including materials and drug discovery, the optimization of complex systems, and artificial intelligence." We've been assured that quantum computers will “forever alter our economic, industrial, academic, and societal landscape." We've even been told that “the encryption that protects the world's most sensitive data may soon be broken" by quantum computers. It has gotten to the point where many researchers in various fields of physics feel obliged to justify whatever work they are doing by claiming that it has some relevance to quantum computing.
Meanwhile, government research agencies, academic departments (many of them funded by government agencies), and corporate laboratories are spending billions of dollars a year developing quantum computers. On Wall Street, Morgan Stanley and other financial giants expect quantum computing to mature soon and are keen to figure out how this technology can help them.
It's become something of a self-perpetuating arms race, with many organizations seemingly staying in the race if only to avoid being left behind. Some of the world's top technical talent, at places like Google, IBM, and Microsoft, are working hard, and with lavish resources in state-of-the-art laboratories, to realize their vision of a quantum-computing future.
In light of all this, it's natural to wonder: When will useful quantum computers be constructed? The most optimistic experts estimate it will take 5 to 10 years. More cautious ones predict 20 to 30 years. (Similar predictions have been voiced, by the way, for the last 20 years.) I belong to a tiny minority that answers, “Not in the foreseeable future." Having spent decades conducting research in quantum and condensed-matter physics, I've developed my very pessimistic view. It's based on an understanding of the gargantuan technical challenges that would have to be overcome to ever make quantum computing work.
**The idea of quantum computing** first appeared nearly 40 years ago, in 1980, when the Russian-born mathematician Yuri Manin, who now works at the Max Planck Institute for Mathematics, in Bonn, first put forward the notion, albeit in a rather vague form. The concept really got on the map, though, the following year, when physicist Richard Feynman, at the California Institute of Technology, independently proposed it.
Realizing that computer simulations of quantum systems become impossible to carry out when the system under scrutiny gets too complicated, Feynman advanced the idea that the computer itself should operate in the quantum mode: “Nature isn't classical, dammit, and if you want to make a simulation of nature, you'd better make it quantum mechanical, and by golly it's a wonderful problem, because it doesn't look so easy," he opined. A few years later, University of Oxford physicist David Deutsch formally described a general-purpose quantum computer, a quantum analogue of the universal Turing machine.
The subject did not attract much attention, though, until 1994, when mathematician Peter Shor (then at Bell Laboratories and now at MIT) proposed an algorithm for an ideal quantum computer that would allow very large numbers to be factored much faster than could be done on a conventional computer. This outstanding theoretical result triggered an explosion of interest in quantum computing. Many thousands of research papers, mostly theoretical, have since been published on the subject, and they continue to come out at an increasing rate.
The basic idea of quantum computing is to store and process information in a way that is very different from what is done in conventional computers, which are based on classical physics. Boiling down the many details, it's fair to say that conventional computers operate by manipulating a large number of tiny transistors working essentially as on-off switches, which change state between cycles of the computer's clock.
The state of the classical computer at the start of any given clock cycle can therefore be described by a long sequence of bits corresponding physically to the states of individual transistors. With *N* transistors, there are 2^*N* possible states for the computer to be in. Computation on such a machine fundamentally consists of switching some of its transistors between their “on” and “off” states, according to a prescribed program.
Illustration: Christian Gralingen
In quantum computing, the classical two-state circuit element (the transistor) is replaced by a quantum element called a quantum bit, or qubit. Like the conventional bit, it also has two basic states*.* Although a variety of physical objects could reasonably serve as quantum bits, the simplest thing to use is the electron's internal angular momentum, or spin, which has the peculiar quantum property of having only two possible projections on any coordinate axis: +1/2 or –1/2 (in units of the Planck constant). For whatever the chosen axis, you can denote the two basic quantum states of the electron's spin as ↑ and ↓.
Here's where things get weird. With the quantum bit, those two states aren't the only ones possible. That's because the spin state of an electron is described by a quantum-mechanical wave function*.* And that function involves two complex numbers, *α* and *β* (called quantum amplitudes), which, being complex numbers, have real parts and imaginary parts. Those complex numbers, *α* and *β*, each have a certain magnitude, and according to the rules of quantum mechanics, their squared magnitudes must add up to 1.
That's because those two squared magnitudes correspond to the *probabilities* for the spin of the electron to be in the basic states ↑ and ↓ when you measure it. And because those are the only outcomes possible, the two associated probabilities must add up to 1. For example, if the probability of finding the electron in the ↑ state is 0.6 (60 percent), then the probability of finding it in the ↓ state must be 0.4 (40 percent)—nothing else would make sense.
In contrast to a classical bit, which can only be in one of its two basic states, a qubit can be in any of a *continuum* of possible states, as defined by the values of the quantum amplitudes *α* and *β*. This property is often described by the rather mystical and intimidating statement that a qubit can exist simultaneously in both of its ↑ and ↓ states.
Yes, quantum mechanics often defies intuition. But this concept shouldn't be couched in such perplexing language. Instead, think of a vector positioned in the *x-y* plane and canted at 45 degrees to the *x*-axis. Somebody might say that this vector simultaneously points in both the *x-* and *y-*directions. That statement is true in some sense, but it's not really a useful description. Describing a qubit as being simultaneously in both ↑ and ↓ states is, in my view, similarly unhelpful. And yet, it's become almost de rigueur for journalists to describe it as such.
In a system with two qubits, there are 2² or 4 *basic* states, which can be written (↑↑), (↑↓), (↓↑), and (↓↓). Naturally enough, the two qubits can be described by a quantum-mechanical wave function that involves four complex numbers. In the general case of *N* qubits, the state of the system is described by 2^*N* complex numbers, which are restricted by the condition that their squared magnitudes must all add up to 1.
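Written out in standard quantum-mechanics notation (added here for concreteness; the article states the same constraints in words), the single-qubit state, its normalization, and the general N-qubit state read:

```
\[
|\psi\rangle = \alpha\,|{\uparrow}\rangle + \beta\,|{\downarrow}\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1,
\qquad
|\Psi\rangle = \sum_{k=1}^{2^N} c_k\,|k\rangle,
\qquad \sum_{k=1}^{2^N} |c_k|^2 = 1.
\]
```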
While a conventional computer with *N* bits at any given moment must be in *one* of its 2^*N* possible states, the state of a quantum computer with *N* qubits is described by the *values* of the 2^*N* quantum amplitudes, which are continuous parameters (ones that can take on any value, not just a 0 or a 1). This is the origin of the supposed power of the quantum computer, but it is also the reason for its great fragility and vulnerability.
How is information processed in such a machine? That's done by applying certain kinds of transformations—dubbed “quantum gates”—that change these parameters in a precise and controlled manner.
Experts estimate that the number of qubits needed for a useful quantum computer, one that could compete with your laptop in solving certain kinds of interesting problems, is between 1,000 and 100,000. So the number of continuous parameters describing the state of such a useful quantum computer at any given moment must be at least 2^1,000, which is to say about 10^300. That's a very big number indeed. How big? It is much, much greater than the number of subatomic particles in the observable universe.
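The arithmetic behind that estimate is a one-line check: since log₁₀ 2 ≈ 0.301,

```
\[
2^{1000} = 10^{1000\,\log_{10} 2} \approx 10^{301},
\]
```

which indeed dwarfs the roughly 10^80 elementary particles usually estimated for the observable universe.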
To repeat: A useful quantum computer *needs to process a set of continuous parameters that is larger than the number of subatomic particles in the observable universe*.
At this point in a description of a possible future technology, a hardheaded engineer loses interest. But let's continue. In any real-world computer, you have to consider the effects of errors. In a conventional computer, those arise when one or more transistors are switched off when they are supposed to be switched on, or vice versa. This unwanted occurrence can be dealt with using relatively simple error-correction methods, which make use of some level of redundancy built into the hardware.
In contrast, it's absolutely unimaginable how to keep errors under control for the 10^300 continuous parameters that must be processed by a useful quantum computer. Yet quantum-computing theorists have succeeded in convincing the general public that this is feasible. Indeed, they claim that something called the threshold theorem *proves* it can be done. They point out that once the error per qubit per quantum gate is below a certain value, indefinitely long quantum computation becomes possible, at a cost of substantially increasing the number of qubits needed. With those extra qubits, they argue, you can handle errors by forming *logical* qubits using multiple *physical* qubits.
How many physical qubits would be required for each logical qubit? No one really knows, but estimates typically range from about 1,000 to 100,000. So the upshot is that a useful quantum computer now needs a million or more qubits. And the number of continuous parameters defining the state of this hypothetical quantum-computing machine—which was already more than astronomical with 1,000 qubits—now becomes even more ludicrous.
Even without considering these impossibly large numbers, it's sobering that no one has yet figured out how to combine many physical qubits into a smaller number of logical qubits that can compute something useful. And it's not like this hasn't long been a key goal.
In the early 2000s, at the request of the Advanced Research and Development Activity (a funding agency of the U.S. intelligence community that is now part of Intelligence Advanced Research Projects Activity), a team of distinguished experts in quantum information established a road map for quantum computing. It had a goal for 2012 that “requires on the order of 50 physical qubits" and “exercises multiple logical qubits through the full range of operations required for fault-tolerant [quantum computation] in order to perform a simple instance of a relevant quantum algorithm…." It's now the end of 2018, and that ability has still not been demonstrated.
Illustration: Christian Gralingen
**The huge amount of scholarly literature** that's been generated about quantum-computing is notably light on experimental studies describing actual hardware. The relatively few experiments that have been reported were extremely difficult to conduct, though, and must command respect and admiration.
The goal of such proof-of-principle experiments is to show the possibility of carrying out basic quantum operations and to demonstrate some elements of the quantum algorithms that have been devised. The number of qubits used for them is below 10, usually from 3 to 5. Apparently, going from 5 qubits to 50 (the goal set by the ARDA Experts Panel for the year 2012) presents experimental difficulties that are hard to overcome. Most probably they are related to the simple fact that 2^5 = 32, while 2^50 = 1,125,899,906,842,624.
By contrast, the *theory* of quantum computing does not appear to meet any substantial difficulties in dealing with millions of qubits. In studies of error rates, for example, various noise models are being considered. It has been proved (under certain assumptions) that errors generated by “local” noise can be corrected by carefully designed and very ingenious methods, involving, among other tricks, massive parallelism, with many thousands of gates applied simultaneously to different pairs of qubits and many thousands of measurements done simultaneously, too.
A decade and a half ago, ARDA's Experts Panel noted that “it has been established, under certain assumptions, that if a threshold precision per gate operation could be achieved, quantum error correction would allow a quantum computer to compute indefinitely." Here, the key words are “*under certain assumptions*." That panel of distinguished experts did not, however, address the question of whether these assumptions could ever be satisfied.
I argue that they can't. In the physical world, continuous quantities (be they voltages or the parameters defining quantum-mechanical wave functions) can be neither measured nor manipulated *exactly*. That is, no continuously variable quantity can be made to have an exact value, including zero. To a mathematician, this might sound absurd, but this is the unquestionable reality of the world we live in, as any engineer knows.
Sure, discrete quantities, like the number of students in a classroom or the number of transistors in the “on" state, can be known exactly. Not so for quantities that vary continuously. And this fact accounts for the great difference between a conventional digital computer and the hypothetical quantum computer.
Indeed, all of the assumptions that theorists make about the preparation of qubits into a given state, the operation of the quantum gates, the reliability of the measurements, and so forth, cannot be fulfilled *exactly*. They can only be approached with some limited precision. So, the real question is: What precision is required? With what exactitude must, say, the square root of 2 (an irrational number that enters into many of the relevant quantum operations) be experimentally realized? Should it be approximated as 1.41 or as 1.41421356237? Or is even more precision needed? There are no clear answers to these crucial questions.
**While various strategies** for building quantum computers are now being explored, an approach that many people consider the most promising, initially undertaken by the Canadian company D-Wave Systems and now being pursued by IBM, Google, Microsoft, and others, is based on using quantum systems of interconnected Josephson junctions cooled to very low temperatures (down to about 10 millikelvins).
The ultimate goal is to create a universal quantum computer, one that can beat conventional computers in factoring large numbers using Shor's algorithm, performing database searches by a similarly famous quantum-computing algorithm that Lov Grover developed at Bell Laboratories in 1996, and other specialized applications that are suitable for quantum computers.
On the hardware front, advanced research is under way, with a 49-qubit chip (Intel), a 50-qubit chip (IBM), and a 72-qubit chip (Google) having recently been fabricated and studied. The eventual outcome of this activity is not entirely clear, especially because these companies have not revealed the details of their work.
While I believe that such experimental research is beneficial and may lead to a better understanding of complicated quantum systems, I'm skeptical that these efforts will ever result in a practical quantum computer. Such a computer would have to be able to manipulate—on a microscopic level and with enormous precision—a physical system characterized by an unimaginably huge set of parameters, each of which can take on a continuous range of values. Could we ever learn to control the more than 10^300 continuously variable parameters defining the quantum state of such a system?
My answer is simple. *No*, *never*.
I believe that, appearances to the contrary, the quantum computing fervor is nearing its end. That's because a few decades is the maximum lifetime of any big bubble in technology or science. After a certain period, too many unfulfilled promises have been made, and anyone who has been following the topic starts to get annoyed by further announcements of impending breakthroughs. What's more, by that time all the tenured faculty positions in the field are already occupied. The proponents have grown older and less zealous, while the younger generation seeks something completely new and more likely to succeed.
All these problems, as well as a few others I've not mentioned here, raise serious doubts about the future of quantum computing. There is a tremendous gap between the rudimentary but very hard experiments that have been carried out with a few qubits and the extremely developed quantum-computing theory, which relies on manipulating thousands to millions of qubits to calculate anything useful. That gap is not likely to be closed anytime soon.
To my mind, quantum-computing researchers should still heed an admonition that IBM physicist Rolf Landauer made decades ago when the field heated up for the first time. He urged proponents of quantum computing to include in their publications a disclaimer along these lines: “This scheme, like all other schemes for quantum computation, relies on speculative technology, does not in its current form take into account all possible sources of noise, unreliability and manufacturing error, and probably will not work."
*Editor's note: A sentence in this article originally stated that concerns over required precision “were never even discussed." This sentence was changed on 30 November 2018 after some readers pointed out to the author instances in the literature that had considered these issues. The amended sentence now reads: “There are no clear answers to these crucial questions."*
## About the Author
Mikhail Dyakonov does research in theoretical physics at Charles Coulomb Laboratory at the University of Montpellier, in France. His name is attached to various physical phenomena, perhaps most famously Dyakonov surface waves.
| true | true | true | The proposed strategy relies on manipulating with high precision an unimaginably huge number of variables | 2024-10-12 00:00:00 | 2018-11-15 00:00:00 | article | ieee.org | IEEE Spectrum | null | null |
|
16,591,697 | http://foreignpolicy.com/2018/03/14/house-proposal-targets-confucius-institutes-as-foreign-agents-china-communist-party-censorship-academic-freedom/ | House Proposal Targets Confucius Institutes as Foreign Agents | Bethany Allen | # House Proposal Targets Confucius Institutes as Foreign Agents
## The draft bill is the first legislative attempt to push back against the Chinese state-run programs.
A new draft proposal in the House of Representatives seeks to require China’s cultural outposts in the United States, the Confucius Institutes, to register as foreign agents.
The effort, spearheaded by U.S. Rep. Joe Wilson (R-S.C.), targets any foreign funding at U.S. universities that aims to promote the agenda of a foreign government.
“The bottom line is transparency,” Wilson tells Foreign Policy in an interview.
The draft bill does not single out Confucius Institutes by name, but according to Wilson it will apply to the Chinese government-run programs, which offer language and culture classes on more than 100 American college and university campuses. The institutes have come under increasing scrutiny in recent months due to their sometimes heavy-handed attempts to censor discussion of topics that the Chinese Communist Party deems off-limits, leading to growing concerns about academic freedom.
Wilson’s initiative would clarify language in the Foreign Agents Registration Act (FARA), a Nazi-era law intended to combat foreign propaganda. FARA requires organizations and individuals engaged in lobbying or public discourse on behalf of a foreign government to register with the Department of Justice, and to disclose their funding and the scope of their activities. FARA does not prohibit such funding or activities but rather seeks to provide transparency about the true source of the messaging.
As currently written, FARA includes an exemption for “bona fide” academic and scholastic pursuits, but what is meant by “bona fide” is not clearly spelled out. The draft proposal would redefine what is meant by a bona fide academic pursuit to exclude any foreign-funded endeavor that promotes the agenda of a foreign government. If enacted, the legislation would, in turn, trigger mandatory registration for the institutes, though it would not interfere with their activities.
“The goal is transparency by the foreign agents themselves and also by the universities,” Wilson says. “The American people need to know that they are being provided propaganda.”
Wilson joins a growing number of lawmakers to express concerns about the Chinese state-funded programs. In February, Republican Florida Sen. Marco Rubio called on his state’s schools to close their Confucius Institutes, citing “China’s aggressive campaign to ‘infiltrate’ American classrooms, stifle free inquiry, and subvert free expression both at home and abroad.” And last week, U.S. Rep. Seth Moulton, a Democrat from Massachusetts, sent a letter to 40 colleges and universities in his state, urging them to close their Confucius Institutes or refrain from opening them in the first place.
The Chinese Communist Party has openly said that Confucius Institutes are used for propaganda. Former top party official Li Changchun has referred to the institutes as “an important part of China’s overseas propaganda set-up.”
“Confucius Institutes in the U.S. have been fully complying [with] the university policies and requirement as open and transparent initiatives,” said Gao Qing, executive director of the Confucius Institute U.S. Center in Washington. “It is wise to further comprehend Confucius Institutes’ operations and impact through people who [are involved with] and participate in the programs, not through speculations. The conclusion should not be drawn upon unfounded allegations.”
The draft proposal is the first legislative measure to address Confucius Institutes, according to Ben Freeman, director of the Foreign Influence Transparency Initiative at the nonprofit Center for International Policy. It is also the first that targets FARA’s academic exemption.
“This is an important issue of language clarification for the FARA statute,” Freeman says. Certain key portions of FARA are written in vague language, causing “a lot of confusion for transparency groups and also for people who think they might have to register but aren’t sure.”
On the surface, Confucius Institutes are analogous to Germany’s Goethe-Institut or France’s Alliance Française, which both receive government funding to teach language and culture classes. But the Chinese programs are embedded in American schools, unlike their freestanding European counterparts, and at times institute staff have sought to block host universities from holding discussions on sensitive political topics such as Tibet or Taiwan.
Within Confucius Institutes, “there is a very strong understanding that certain topics are off limits,” said Rachelle Peterson, the author of a 2017 study about the institutes published by the National Association of Scholars. “To speak about China in a Confucius Institute is to speak about the good things. The other things don’t exist as far as the Confucius Institute is concerned.”
A lack of transparency has made it difficult to assess exactly how much control universities have over Confucius Institute management and curricula, or how much money each school receives from the Chinese government. Agreements between institutes and their host institutions are not typically disclosed; much of what is known about specific conditions come from leaked contracts or from Freedom of Information Act requests filed by journalists.
The draft proposal also seeks to strengthen foreign funding disclosure requirements for universities themselves. Section 117 of the Higher Education Act currently requires universities to disclose any foreign funding and contributions exceeding $250,000; the proposal would lower that amount.
Wilson emphasizes that he strongly supports Chinese language education. “My dad served in China during World War II, so I grew up in a family that truly has a deep affection for the people of China,” Wilson tells FP.
“When I first saw the Confucius Institutes, I thought, hey this is good, we want a good and positive relationship with the people of China,” he says. “But it needs to be known that it has a propaganda side too.”
**Bethany Allen** is a journalist covering China from Taipei. She is the author of *Beijing Rules: How China Weaponized Its Economy to Confront the World.* She was previously an assistant editor and contributing reporter at *Foreign Policy*. X: @BethanyAllenEbr
| true | true | true | The draft bill is the first legislative attempt to push back against the Chinese state-run programs. | 2024-10-12 00:00:00 | 2018-03-14 00:00:00 | article | foreignpolicy.com | Foreign Policy | null | null |
|
17,962,834 | https://temboo.com/blog/ultimate-smart-building-guide | Smart Buildings: The Ultimate Guide With Examples, Techniques, & More | Jessica Califano | What Are Smart Buildings? | The Benefits of Smart Buildings | Smart Buildings Technologies | Examples of Smart Buildings | Temboo’s Smart Building Solutions
Nowadays, more and more companies are considering smart buildings for their offices, hotels, apartment buildings and more.
In fact, according to a recent report, the global smart building market will grow from $5.73 billion in 2016 to $24.73 billion in 2021.
So what’s all the hype about? What are smart buildings and why are so many people talking about them?
Smart building solutions are a large part of the growing IoT and connected sensor ecosystem and yet many people are still unaware of the real scope of the technology.
While there are definitely benefits to designing a building with these technologies from the ground up, the real impact of smart buildings will occur when people start adding ‘brains’ to buildings that exist already.
That’s why we’ve decided to share our expertise in the subject in the hopes that more businesses and building owners will decide to implement this technology in their properties.
In this guide, we’ll answer the question “what are smart buildings?”, go over the benefits of smart buildings, explore the smart building technology that is being used today, and look at some examples of smart buildings from around the world.
## What are Smart Buildings, Anyways?
We kept asking ourselves, your smartphone, your ipad has an OS, why doesn’t this 2 million square foot building have an operating system? Why is data that we collect every single day simply dumped?
-John Gilbert, Executive Vice President, COO, and CTO at the Rudin Management Company in New York City
You probably know what a building is, and have been inside one at some point in your life. But what is it that makes a building ‘smart’?
Well, smart buildings use automation to optimize all or some of the processes that occur inside a building: heating and cooling, security, lighting, ventilation, water usage, and more.
A lot of this comes from data collection. As mentioned in our interview above with John Gilbert, retrieving data from systems that are already in place can have a profound effect on the efficiency, sustainability, and effectiveness of the built environment.
By adding things like connected sensors, microcontrollers, and automation software to building’s control systems, facilities operators and engineers can gain valuable insights into the building’s functions and reap all the benefits of smart building technologies.
## The Benefits of Smart Buildings
You may be asking yourself if setting up these systems is actually worth it in the long run.
Well, take it from John Gilbert who has implemented smart building solutions in multiple locations:
For us at 345 Park Avenue, first year we saved almost a million dollars, $980,000. At 560 Lexington, which is a smaller building, (300,000 sq. feet), we saved a dollar a foot.
…And that’s just the financial benefits of energy use reduction. There are countless other ways that smart buildings can benefit the environment, the building tenets, and the businesses that own the facilities.
- **Proactive maintenance of equipment:** With the data collected from equipment in the building, facilities engineers can see indicators of potential problems and take corrective actions before failure occurs. This switch to conditions-based maintenance in real-time and based on historical performance reduces downtime and ensures that things stay running smoothly.
- **Reduced environmental impact:** Sustainability is on everyone's minds lately and for good reason. According to the US Energy Information Administration, commercial buildings account for nearly 20% of US energy consumption and 12% of greenhouse gas emissions in the country. By reducing waste and conserving energy, smart buildings create benefits for not only those who profit from or occupy them, but the global community at large.
- **Productivity and comfort of occupants:** Speaking of those who work or live in the building, smart building technologies can supply a whole new level of comfort, optimizing the space for comfort and productivity. Air quality, lighting levels, heating, and cooling can all be automated and optimized for maximum cognitive function, enabling those who use the building to benefit from the smart systems as well.
- **Reduced energy use and costs:** As illustrated by John Gilbert's quote above, reducing energy use in buildings can save building owners a lot! By connecting electrical and mechanical systems in buildings to the cloud, they can automatically switch on and off, reducing waste.
- **Increased safety of inhabitants:** Imagine if the elevators in the building you're in were able to detect power outages and safely get riders to the closest floor before shutting down. With smart building technologies, safety measures like those are capable of being implemented across the board.
- **Data visibility and insights:** Smart buildings can do things like output data on structural integrity, merge data from disparate systems into a common platform for analytics and reporting, and offer a visual snapshot of which facilities are experiencing things like high energy usage, unusual maintenance costs, and more. This visibility into your building's data offers actionable information that can provide cost-saving solutions and innovations.
## Putting Smart Buildings Technology To Work
If you’re sold on the reasons why smart buildings are so valuable, you’re not the only one. But what are the actual methods of gaining the benefits offered from smart building solutions?
Smart building technology can be used in different ways in different types of buildings. For example, smart office buildings might focus on increasing the productivity of workers while a hotel or residential buildings might try to mimic circadian rhythms to achieve optimal comfort for those inside.
There are many different methodologies of implementing smart building technologies:
- **Water Supply Systems** can be automated to detect leaks, monitor quality, and automate heating and cooling.
- **Chiller plants** can be optimized to incorporate outside weather data to reduce energy use while cooling the building.
- **Air conditioning and heating systems** can be set up to turn on and off based on the occupancy of a room.
- A building's **electrical loads** can be categorized and grouped by priority to better understand how critical and non-essential loads are working.
- **Connected weather stations** can be added to the outside of buildings to optimize internal systems like temperature and air quality.
- Sensors can be used to check for **room occupancy** and match patterns to energy use throughout the day.
- **Infrastructure** can be added to the cloud for storage and data management.
- Multiple **internal systems** like lighting, air conditioning, water, and ventilation can be connected to see how they affect each other throughout the day and optimize for efficiency.
- **Structural integrity** can be monitored by tracking how the building responds to ambient vibrations.
- **Data collection** can be used to maintain optimal comfort settings for residents in the building while also reducing waste.
- **Remote control** over systems can shorten the response times for building managers and allow them to address issues in the building from a distance.
## Examples of Smart Buildings from Around The World
As you can imagine, more and more businesses are adding smart building technologies to their properties in many different ways. Below are some of our favorite examples of smart buildings throughout the world.
### The Mirage, Las Vegas
The Mirage in Las Vegas uses smart building technology to lower their energy costs through load-shedding. They have weather stations that monitor wind, temperature, humidity and more which can do things like chill water in advance of demand on extremely hot days, reducing operation during peak times.
### UNIQA Tower, Vienna
UNIQA Tower is equipped with a heating and cooling system that is automated and based on the temperature of the outside environment. This has reduced their annual CO2 emissions by 84 tons and has made the operation of the building more cost-effective.
### MIT Green Building, Cambridge
Unsurprisingly, MIT is on the cutting edge of developing and testing new smart building technologies. In 2010, they added sensors to the campus’ Green Building to allow it to sense its own internal damage over time. Lead author of the paper on the study, Hao Sun, told MIT News, “I would envision that, in the future, such a monitoring system will be instrumented on all our buildings, city-wide. Outfitted with sensors and central processing algorithms, those buildings will become intelligent, and will feel their own health in real time and possibly be resilient to extreme events.”
### Deloitte’s The Edge, Amsterdam
Back in 2015, Bloomberg called The Edge ‘The Smartest Building in the World’. If you learn anything about the building, it’s easy to see why. Not only did The Edge get the highest sustainability score ever awarded by BREEAM, it’s also optimized for the prime conditions of the humans who work there every day. The building even has a smartphone app that knows each worker’s preferences for light and temperature and adjusts the rooms to those settings as they move throughout the building.
### Siemens’ The Crystal, London
The Crystal is considered to be one of the most efficient buildings in the world. It produces about 70% less CO2 than comparable office buildings in the UK and incorporates rainwater harvesting, black water treatment, solar heating and automated building management systems.
## Temboo’s Smart Building IoT Solutions
Over the years, Temboo has created IoT applications and solutions that can be used to enhance buildings in many different ways. Below are some of our easy-to-implement smart building IoT solutions:
### Water Usage Management
Can your faucet tell you when you’re wasting water? In this video, we show you how to build a simple prototype to monitor water waste. A more scaled up version of this project could easily be used in larger buildings and structures to reduce waste.
### Smart Building Management with Amazon AWS
Using Temboo and Amazon Web Services, we built a multi-functional smart building application to help building managers ensure that their buildings are safe for living and working.
### Gas Leak Monitor
Gas leaks in buildings can present a serious threat to public safety, and even in less dangerous situations they can be costly and damaging to the environment. We’ve created an Internet of Things application that monitors gas pipes for leaks, and allows a building manager to remotely shut off a pipe if a leak is detected. This video will show you how to build it.
### Control Water Systems at Any Scale
We’ve built a system that senses the water levels in a tank, calls you when water levels are too low, and allows you to remotely refill from a reserve. If the water volume in a tank falls below a specified level, the application will check the weather forecast in order to determine whether rain is expected in the area; if no rain is expected, a call is placed to the tank superintendent to allow him or her to remotely refill the tank from a reserve water source.
These solutions are just the start. Temboo’s IoT platform can be used for many different smart building solutions from remote control, to data visualization, to alerts and monitoring.
If you are interested in hearing more, head to automatedbuildings.com for an in-depth interview with our CEO, Trisala Chandaria, about building automation, IoT, and our new Kosmos System.
Contact us today to learn more about what Temboo’s Kosmos platform can do for your building. And make sure to follow our blog, Twitter, Instagram, Facebook, and YouTube channel for news, tutorials, the latest IoT solutions, and more.
| true | true | true | Nowadays, more and more companies are turning to smart buildings for their offices, hotels, apartment buildings and more. So what's all the hype about? | 2024-10-12 00:00:00 | 2018-06-07 00:00:00 | article | temboo.com | The Temboo Blog | null | null |
|
35,535,407 | https://en.wikipedia.org/wiki/Wikipedia:List_of_citogenesis_incidents | Wikipedia:List of citogenesis incidents - Wikipedia | null | # Wikipedia:List of citogenesis incidents
In 2011, Randall Munroe in his comic *xkcd* coined the term "**citogenesis**" to describe the creation of "reliable" sources through circular reporting.[1][2] This is a list of some well-documented cases where Wikipedia has been the source.
## Known citogenesis incidents
- Did Sir Malcolm Thornhill make the first cardboard box? A one-day editor said so in 2007 in this edit. Though it was removed a year later, it kept coming back, from editors who also invested a lot in vandalizing the user page of the editor who removed it.
*Thornhill* propagated to at least two books by 2009, and appears on hundreds of web pages. A one-edit editor cited one of the books in the article in 2016.[3]
[4] - Karl-Theodor zu Guttenberg: A Wikipedia editor added "Wilhelm" as an 11th name to his full name. Journalists picked it up, and then the "reliable sources" from the journalists were used to argue for its inclusion in the article.
[5][6]
- Diffs from German Wikipedia: :de:Diskussion:Karl-Theodor zu Guttenberg/Archiv/001 § Diskussionen zum korrekten vollständigen Namen
- Sacha Baron Cohen: Wikipedia editors added fake information that comedian Sacha Baron Cohen worked at the investment banking firm Goldman Sachs, a claim which news sources picked up and was then later added back into the article citing those sources.
[7] - Korma: A student added 'Azid' to Korma as an alternative name as a joke. It began to appear across the internet, which was eventually used as justification for keeping it as an alternative name.
[8] - Is the radio broadcast where Emperor Hirohito announced Japan's WWII surrender referred to as the "Jewel Voice Broadcast" in English? Google Books search results and the Google Books Ngram Viewer reveal that this moniker appears to have been non-existent until a user added this literal translation of the Japanese name to the Wikipedia article in 2006
[9](and another user moved the article itself to that title in 2016[10]). Since then, the usage of this phrase has skyrocketed[11]and when it was suggested in 2020 that the article should be moved because this "literal translation" is actually incorrect—a better translation of the original Japanese would have been "the emperor's voice broadcast"—it was voted down based on it appearing in plenty of recent reliable sources.[12] - Roger Moore: A student added 'The College of the Venerable Bede' to the early life of Roger Moore, repeatedly editing the page to cause citogenesis. This has been ongoing since April 2007 and was so widely believed that reporters kept asking him about it in interviews.
[13] - Maurice Jarre: When Maurice Jarre died in 2009, a student inserted fake quotes in his Wikipedia biography that multiple obituary writers in the mainstream press picked up. The student "said his purpose was to show that journalists use Wikipedia as a primary source and to demonstrate the power the internet has over newspaper reporting." The fakes only came to light when the student emailed the publishers, causing widespread coverage.
[14] - Invention of QALYs, the quality-adjusted life year. An article published in the Serbian medical journal
*Acta facultatis medicae Naissensis*stated that "QALY was designed by two experts in the area of health economics in 1956: Christopher Cundell and Carlos McCartney".[15]These individuals—along with a third inventor, "Toni Morgan" (an anagram of 'Giant Moron')—were listed on Wikipedia long before the publication of the journal article which was subsequently used as a citation for this claim.[16] - Invention of the butterfly swimming stroke: credited to a "Jack Stephens" in
*The Guardian*(archive), based on an undiscovered joke edit.[17][18] - Glucojasinogen: invented medical term that made its way into several academic papers.
[19] - Founder of
*The Independent*: the name of a student, which was added as a joke, found its way into the Leveson Inquiry report as being a co-founder of*The Independent*newspaper.[20][21] - Jar'Edo Wens: fictitious Australian Aboriginal deity (presumably named after a "Jared Owens") that had an almost ten-year tenure in Wikipedia and acquired mentions in (un)learned books.
[22][18] - Inventor of the hair straightener: credited to Erica Feldman or Ian Gutgold on multiple websites and, for a time, a book, based on vandalism edits to Wikipedia.
[23][24][8] - Boston College point shaving scandal: For more than six years, Wikipedia named an innocent man, Joe Streater, as a key culprit in the 1978–79 Boston College basketball point shaving scandal. When Ben Koo first investigated the case, he was puzzled by how many retrospective press and web sources mentioned Streater's involvement in the scandal, even though Streater took part in only 11 games in the 1977–78 season, and after that never played for the team again. Koo finally realised that the only reason that Streater was mentioned in Wikipedia and in every other article he had read was because it was in Wikipedia.
[25] - The Chaneyverse: Series of hoaxes relying in part on circular referencing. Discovered in December 2015 and documented at User:ReaderofthePack/Warren Chaney.
[26] - Dave Gorman hitch-hiking around the Pacific Rim: Gorman described on his show
*Modern Life is Goodish*(first broadcast 22 November 2016) that his Wikipedia article falsely described him as having taken a career break for a sponsored hitch-hike around the Pacific Rim countries, and that after it was deleted, it was reposted with a citation to*The Northern Echo*newspaper which had published the claim.[27] - The Dutch proverb "de hond de jas voorhouden" ("hold the coat up to the dog") did not exist before January 2007
[28]as the author confessed on national television.[29] - 85% of people attempting a water speed record have died in the attempt: In 2005, an unsourced claim in the Water speed record article noted that 50% of aspiring record holders died trying. In 2008, this was upped, again unsourced, to 85%. The claim was later sourced to sub-standard references and removed in 2018 but not before being cited in
*The Grand Tour*episode "Breaking, Badly." - Mike Pompeo served in the Gulf War: In December 2016, an anonymous user edited the Mike Pompeo article to include the claim that Pompeo served in the Gulf War. Various news outlets and senator Marco Rubio picked up on this claim, but the CIA refuted it in April 2018.
[30][18] - The Casio F-91W digital watch was long listed as having been introduced in 1991, whereas the correct date was June 1989. The error was introduced in March 2009 and repeated in sources such as the BBC,
*The Guardian*, and Bloomberg, before finally being corrected in June 2019 thanks to vintage watch enthusiasts.[31] - The Urker vistaart (fish pie from Urk) was in the article namespace on Dutch Wikipedia from 2009 to 2017. There were some doubts about the authenticity in 2009, but no action was taken. After someone mentioned in 2012 that
*Topchef*, a Dutch show on national television featured the*Urker vistaart*, the article was left alone until 2017 when*Munchies*, Vice Media-owned food website published the confession of the original authors.[32]The article was subsequently moved to the Wikipedia: namespace. - Karl-Marx-Allee: In February 2009, an anonymous editor on the German Wikipedia introduced a passage that said Karl-Marx-Allee (a major boulevard lined with tiled buildings) was known as "Stalin's bathroom". The nickname was repeated in several publications, and later, when the anonymous editor that added it as a joke tried to retract it, other editors restored it due to "reliable" citations. A journalist later revealed that he was the anonymous editor in an article taking credit for it.
[18] - In May 2008, the English Wikipedia article Mottainai was edited to include a claim that the word
*mottainai*appeared in the classical Japanese work*Genpei Jōsuiki*in a portion of the text where the word would have had its modern meaning of "wasteful". (The word actually does appear at two completely different points in the text, with different meanings, and the word used in the passage in question is actually a different word.) Later (around October 2015), at least one third-party source picked up this claim. The information was challenged in 2018 (talk page consensus was to remove it in February, but the actual removal took place in April), and re-added with the circular citation in November 2019. - In June 2006, the English Wikipedia article
*Eleagnus*was edited to include an unreferenced statement "Goumi is among the "nutraceutical" plants that Chinese use both for food and medicine." An immediate subsequent edit replaced the word "Goumi" in the statement with "*E. multiflora*". An equivalent statement was included in the article*Elaeagnus multiflora*when it was created in August 2006. The version of the statement in the article*Eleagnus*was later included in*The Illustrated Encyclopedia of Trees and Shrubs*, a collation of Wikipedia articles MobileReference published in January 2008. In May 2013, after the statement in the article*Elaeagnus multiflora*had been removed for the lack of a long-requested citation, it was immediately reinstated with a citation to MobileReference's*The Illustrated Encyclopedia of Trees and Shrubs*. - Entertainer Poppy posted a tweet in 2020 that showed only ring, party and bride emojis. Someone later edited her article by suffixing the last name of her at-the-time boyfriend, Ghostemane, to hers assuming she was married; it was reverted citing the vagueness of her tweet. The suffix was later restored, now citing an article from Access Hollywood which at the time said that was her legal name, though it has since been corrected.
[33][34][35][36] - In 2009, the English Amelia Bedelia Wikipedia article was edited to falsely claim the character was based on a maid in Cameroon. This claim had subsequently been repeated among different sources, including the current author of the books, Herman Parish. In July 2014, the claim was removed from Wikipedia after the original author of the hoax wrote an article debunking it.
[37][38] - Origin of band Vulfpeck: Jack Stratton created the Wikipedia article for his band Vulfpeck in 2013 under the username Jbass3586; the article claimed that "the members met in a 19th-century German literature class at the University of Michigan" to add to the mythology of the band.
*Billboard*picked this up in a 2013 interview article, and it was eventually added as a citation in the Wikipedia article.[39] - In 2020, an editor inserted a false quote in the article of Antony Blinken, chosen by then President-elect Joe Biden for the position of Secretary of State. The quote, calling Vladimir Putin an "international criminal", was repeated in Russian media like Gazeta.Ru.
[40] - In 2007, an editor inserted (diff) an anecdote into Joseph Bazalgette's page about the diameter of London sewage pipes. The misinformation made it into newspaper articles and books - from the Institution of Civil Engineers, from
*The Spectator*, from The Hindu, from the Museum of London (in modified form), two books these two, both about 'creative thinking.' Further details on the case can be found here. The information was not removed until 2021. - In 2016, an IP-editor added three unsourced statements to Chamaki, a place in Iran: that 600 Assyrians used to populate the village, that the language spoken was Modern Assyrian and that the local church is called "Saint Merry". [
*sic*] In a 2020 article from the*Tehran Times*, these same three statements were repeated.[41]No other sources have been found for these statements. While sources in Farsi may or may not exist for the population and language, this is unlikely for the "Saint Merry" spelling. The*Tehran Times*article was briefly used as a source before the likelihood of citogenesis was realized. - It is well known that Zimbabwe experienced severe hyperinflation in 2008, but could you
*really*trade one US dollar for 2,621,984,228,675,650,147,435,579,309,984,228 Zimbabwe dollars? A single-edit IP said so in 2015, along with reporting the country's unemployment rate to be*800%*and quantitatively using the word "zillion". While the latter two remarks were innocently corrected within 3 days, the 34-digit figure stayed in place for 10 months before it was manually reverted, enough time to make it into at least one book.[42] - Wikipedia has claimed at various times that Bill Gates's house is nicknamed Xanadu 2.0 and many online articles have repeated the claim, some of which are now cited by Wikipedia. No articles quote Gates or another authoritative source, but the moniker was used as the title of a 1997 article about the house.
- In 2006, an article entitled Onna bugeisha was added to English Wikipedia with the claim that it was a term referring to "a female samurai". The term does not exist in Japanese (it occasionally appears in works of fiction, referring to "female martial artists",
*onna*meaning "woman" and*bugeisha*meaning "martial artist") and does not appear to have carried the meaning "female samurai" before the creation of this Wikipedia article. The article was translated into several foreign-language editions of Wikipedia—though not Japanese Wikipedia—and eventually the term started to appear in third-party blogs and online magazine articles, including*National Geographic*(both English and Spanish editions). In 2021, the article was moved to a compromise title, using a Japanese term that is used to refer to women warriors in pre-modern Japan, but seems to have been rarely used in English prior to 2020; within a few months, this term had*also*found its way into a number of online magazines in languages such as English and Polish. - Since May 2010 the Playboy Bunny article claimed that Hugh Hefner "has stated that the idea for the Playboy bunny was inspired by Bunny's Tavern in Urbana, Illinois. [...] Hefner formally acknowledged the origin of the Playboy Bunny in a letter to Bunny's Tavern, which is now framed and on public display in the bar". No sources have been cited to support it. This information spread to a 2011 book, an article of the
*New Straits Times*dated 22 January 2011, and an article of*The Sun*written in September 2017 and titled "This is the real reason that the Playboy girls were called Bunnies" (copied by the*New York Post*, too). Oldest and more reliable sources proved that the costume has been inspired by the Playboy mascot used since 1953. A partially readable photo of the tavern's letter showed that Hefner did not "formally acknowledged the origin of the Playboy Bunny" at all. Unfortunately this information spread to the French, the Spanish, the Catalan, and the Italian Wikipedias. The Catalan Wikipedia used the 2011*New Straits Times*article to support it, while on the Italian Wikipedia*The Sun*article has been used as source. - From 2008, the article on former Canadian prime minister Arthur Meighen claimed that he had been educated at Osgoode Hall Law School. In 2021, a reference to a 2012 newspaper article was added to support this claim. In fact, Meighen never attended law school, as several biographies (including a meticulously detailed three-volume work by Roger Graham) make clear. The author of the newspaper article seems to have found the inaccurate information on Wikipedia.
- In 2008, an uncited claim that a Croatian named "Mirko Krav Fabris" became Conclavist pope in 1978 was added to the article Conclavism. This information ended up in the second (but not the first) edition of the
*Historical Dictionary of New Religious Movements*(2012). In 2014, the information was added that said Mirko Krav Fabris was a stand-up comedian with the stage name "Krav" who became pope as a joke and had died in 2012; the source given for this latter claim was a presentation of a stand-up comedian called Mirko Krav Fabris — who looks way too young to have been born before 1978 — in which it is neither mentioned that the person ever was a papal claimant in any form, nor that the person is dead. The 2015 book*True or False Pope? Refuting Sedevacantism and Other Modern Errors*published by the St. Thomas Aquinas Seminary of the SSPX makes an uncited claim that Mirko Krav Fabris was a Conclavist pope, a comedian with the stage name "Krav", and died in 2012.[43] - According to Richard Herring, Wikipedia was the originator of the claim that he is primarily known as a (professional) ventriloquist. This was repeated in
*Stuff*and then used as a reference for the claim in the article. He mentioned the incident on his podcast in 2021 (Nish Kumar - RHLSTP #315, at 54:47), describing the process of citogenesis without using the term itself. - From 2012, Wikipedia claimed that the electric toaster was invented by a Scotsman named Alan MacMasters (archived Wikipedia biography, AfD resulting in deletion on July 22, 2022). This was a complete fabrication, which entered over a dozen books and numerous online sources, among them a BBC article and the website of the Hagley Museum and Library in Delaware, both subsequently cited in Wikipedia's MacMasters biography. As late as August 2022, Google still named MacMasters as the inventor of the electric toaster, citing the Hagley Museum.
[44]Wikipedia criticism website Wikipediocracy published an interview with the hoaxer.[45] - The term "Sproftacchel" was added as a synonym for photo stand-in on 15 February 2021 by an anon with no other contributions. No trace of the term before that date has been found, but as of July 2022 the term has been used by
*The New Zealand Herald*,[46]Books for Keeps,[47]the Islington council,[48]the Hingham town council,[49]the city of Lincoln council[50]and we have "Sproftacchel Park" now.[51] - In August 2008, an anonymous editor vandalised the page of Cypriot association football team AC Omonia to read that they have "a small but loyal group of fans called 'The Zany Ones'". They drew Manchester City in the UEFA Cup first round that year and a journalist at the
*Daily Mirror*repeated the claim in both his match preview and report. The preview was then used as a source for this false statement. - Triboulet: in 2007 an anonymous editor added a story about the jester slapping the king's buttocks to the article without citation. The story got later picked up in several articles, two of which were later added as citations to the article.
- U.S. National Public Radio was discovered to have published an article with false information lifted from a poorly-edited Wikipedia entry on the Turnspit Dog. The article referred to nonsensical Latin. "Vernepator Cur" is not real Latin. The phrase first appeared in an unsourced Wikipedia article in 2006 [1], and has since spawned hundreds of news articles repeating the false information.[2][3]
- Between November 2007 and April 2014, an anonymous editor added "hairy bush fruit" (毛木果,
*máo mù guǒ*) to a list of Chinese names for kiwifruit. This term was repeated by*The Guardian*, which was later cited by the Wikipedia article as a source for the name.[52] - Terry Phelan, an Irish international footballer, was reported in a book about Eric Cantona to be nicknamed "The Scuttler".
[53]. Cycling statistician and journalist acknowledged in 2024 that this name was invented as an act of wikivandalism by he and his cousin, and that it was picked up from Wikipedia by the author of that book[54]Kelly stated that he had re-added it to the article duly sourced to the book.
## Terms that became real
In some cases, terms or nicknames created on Wikipedia have since entered common parlance, with false information thus becoming true.
- The term "Dunning–Kruger effect" did not originate on Wikipedia but it was standardized and popularized here. The underlying article had been created in July 2005 as Dunning-Kruger Syndrome, a clone of a 2002 post on everything2 that used both the terms "Dunning-Kruger effect" and "Dunning-Kruger syndrome".[55] Neither of these terms appeared at that time in scientific literature; the everything2 post and the initial Wikipedia entry summarized the findings of one 1999 paper by Justin Kruger and David Dunning. The Wikipedia article shifted entirely from "syndrome" to "effect" in May 2006 with this edit because of a concern that "syndrome" would falsely imply a medical condition. By the time the article name was criticised as original research in 2008, Google Scholar was showing a number of academic sources describing the Dunning–Kruger effect using explanations similar to the Wikipedia article. The article is usually in the top twenty most popular Wikipedia articles in the field of psychology, reaching number 1 at least once.
- In 2005, Wikipedia editors collectively developed a periodization of video game console generations, and gradually implemented it at History of video games and other articles.[56]
- In 2006, a Wikipedia editor claimed as a prank that the Pringles mascot was named "Julius Pringles." After the brand was sold from Procter & Gamble to Kellogg's, the name (sometimes modified slightly to "Julius Pringle") was adopted by official Pringles marketing materials.[57][58]
- In 2007, an editor added that "A group of rabbits or hares are often called a 'fluffle' in parts of Northern Canada" to Rabbit. This statement was removed in 2010, but found its way back to the article in 2015 with a source dated 2011. The expression "a fluffle of rabbits/bunnies" can be found in books published since the 2010s.[59] Linguist Ben Zimmer traced its origin to the 2007 edit in 2023.[60]
- In 2008, a then 17-year-old student added a claim that the coati was also known as "Brazilian aardvark". Although the edit was done as a private joke, the false information lasted for six years and was propagated by hundreds of websites, several newspapers, and even books by a few university presses.[61][24] The spread was such that the joke indeed became a common name for the animal and was cited in several sources. After the initial removal, the name was reinserted multiple times by users who believed it had become legitimate.
- Mike Trout's nickname: Mike Trout's article was edited in June 2012 with a nonexistent nickname for the Major League Baseball player, the "Millville Meteor"; media began using it, providing the article with real citations to replace the first fake ones. Although Trout was surprised, he did not dislike the nickname, signing autographs with the title.[62]
- Michelle/MJ's surname: the article for the then-unreleased film *Spider-Man: Homecoming* was edited in July 2017 with an unsourced claim that the mononymous character Michelle/MJ (portrayed by Zendaya) had the surname "Jones", a name used in works of fan fiction made in the lead-up to the film; media reporting on the film then began using the surname, in spite of the character being kept mononymous by Sony Pictures and Marvel Studios through to its 2019 sequel *Spider-Man: Far From Home*. In the 2021 film *Spider-Man: No Way Home*, in confirming Michelle/MJ as a loose adaptation of Mary Jane Watson, the character was provided the expanded name of "Michelle Jones-Watson", making "Jones" a canon surname.[63]
- Riddler's alias: In November 2013, a Wikipedia editor named Patrick Parker claimed as a prank that "Patrick Parker" was also an alias of the DC Comics supervillain the Riddler,[64] the claim going unnoticed on the page for nine years.[65] In the 2022 film *The Batman*, this alias was made canon as a name used by Paul Dano's Riddler on his fake IDs, with a report by *Comic Book Resources* in April 2022 uncovering the act of citogenesis.[66] The name would see further use in the prequel comic book miniseries *The Riddler: Year One*.[67]
## See also
## References
[edit]**^**Michael V. Dougherty (21 May 2024).*New Techniques for Proving Plagiarism: Case Studies from the Sacred Disciplines at the Pontifical Gregorian University*. Leiden, Boston: Brill Publishers. p. 209. doi:10.1163/9789004699854. ISBN 978-90-04-69985-4. LCCN 2024015877. Wikidata Q126371346.A published monograph that apparently copies many Wikipedia articles is now treated as an authority for later Wikipedia articles. This state of affairs is arguably not an optimal development.
**^**Munroe, Randall. "Citogenesis".*xkcd*. Retrieved 30 April 2017.**^**Special:Diff/719628400/721070227**^**McCauley, Ciaran (8 February 2017). "Wikipedia hoaxes: From Breakdancing to Bilcholim". BBC. Retrieved 4 June 2017.**^**"False fact on Wikipedia proves itself". 11 February 2009. Archived from the original on 18 January 2012.**^**"Medien: "Mich hat überrascht, wie viele den Fehler übernahmen"".*Die Zeit*. 13 February 2009. Retrieved 11 September 2014.**^**"Wikipedia article creates circular references".- ^
**a**"How pranks, hoaxes and manipulation undermine the reliability of Wikipedia".**b***Wikipediocracy*. 20 July 2014. **^**"Hirohito surrender broadcast".**^**"Hirohito surrender broadcast".**^**"Jewel Voice".*GoogleBooks Ngram Viewer*.**^**Talk:Hirohito surrender broadcast**^**Whetstone, David (8 November 2016). "Sir Roger Moore remembers co-star Tony Curtis and reveals his favourite Bond film".*ChronicleLive*. Retrieved 7 June 2021.**^**Butterworth, Siobhain (3 May 2009). "Open door: The readers' editor on ... web hoaxes and the pitfalls of quick journalism".*The Guardian*– via www.theguardian.com.**^**Višnjić, Aleksandar; Veličković, Vladica; Milosavljević, Nataša Šelmić (2011). "QALY ‐ Measure of Cost‐Benefit Analysis of Health Interventions".*Acta Facultatis Medicae Naissensis*.**28**(4): 195–199.**^**Dr Panik (9 May 2014). "Were QALYs invented in 1956?".*The Academic Health Economists' Blog*.**^**Bartlett, Jamie (16 April 2015). "How much should we trust Wikipedia?".*The Daily Telegraph*.- ^
**a****b****c**Harrison, Stephen (7 March 2019). "The Internet's Dizzying Citogenesis Problem". Future Tense - Source Notes.**d***Slate Magazine*. Retrieved 30 June 2020. **^**Ockham, Edward (2 March 2012). "Beyond Necessity: The medical condition known as glucojasinogen".**^**Allen, Nick. "Wikipedia, the 25-year-old student and the prank that fooled Leveson". The Daily Telegraph.**^**"Leveson's Wikipedia moment: how internet 'research' on The Independent's history left him red-faced".*The Independent*. 30 November 2012.**^**Dewey, Caitlin. "The story behind Jar'Edo Wens, the longest-running hoax in Wikipedia history". The Washington Post.**^**Michael Harris (7 August 2014).*The End of Absence: Reclaiming What We've Lost in a World of Constant Connection*. Penguin Publishing Group. p. 48. ISBN 978-0-698-15058-4.- ^
**a**Kolbe, Andreas (16 January 2017). "Happy birthday: Jimbo Wales' sweet 16 Wikipedia fails. From aardvark to Bicholim, the encylopedia of things that never were".**b***The Register*. Archived from the original on 8 July 2017. Retrieved 12 August 2022. **^**Ben Koo (9 October 2014). "Guilt by Wikipedia: How Joe Streater Became Falsely Attached To The Boston College Point Shaving Scandal".*Awful Announcing*.**^**Feiburg, Ashley (23 December 2015). "The 10 Best Articles Wikipedia Deleted This Week".*Gawker*.**^**Hardwick, Viv (9 September 2014). "Mears sets his sights on UK".*The Northern Echo*. Archived from the original on 29 September 2014. Retrieved 30 August 2017.He once hitchhiked around the Pacific Rim countries
**^**Lijst van uitdrukkingen en gezegden F-J, diff on Dutch Wikipedia**^**NPO (23 March 2018). "De Tafel van Taal, de hond de jas voorhouden" – via YouTube.**^**Timmons, Heather; Yanofsky, David (21 April 2018). "Mike Pompeo's Gulf War service lie started on Wikipedia".*Quartz*. Retrieved 22 April 2018.**^**Moyer, Phillip (15 June 2019). "The case of an iconic watch: how lazy writers and Wikipedia create and spread fake "facts"".*KSNV*. Retrieved 6 July 2019.**^**Iris Bouwmeester (26 July 2017). "Door deze smiechten trapt heel Nederland al jaren in de Urker vistaart-hoax".**^**Special:Diff/966969824**^**Special:Diff/967708571**^**"YouTuber Poppy Is Engaged To Eric Ghoste".*Access Hollywood*. 10 July 2020. Retrieved 22 September 2020.**^**Special:Diff/967760280/968057663**^**Dickson, EJ (29 July 2014). "I accidentally started a Wikipedia hoax".*The Daily Dot*. Retrieved 2 August 2020.**^**Okyle, Carly. "Librarians React to 'Amelia Bedelia' Hoax".*School Library Journal*. Retrieved 2 August 2020.**^**State of the Vulf 2016**^**"Unreliable sources".*meduza.io*. Meduza. 27 November 2020. Retrieved 28 November 2020.**^**"Historical churches in West Azarbaijan undergo rehabilitation works".*Tehran Times*. 4 August 2020. Archived from the original on 25 January 2021. Retrieved 18 April 2021.**^**See "quotations" section: trillionaire**^**More information at: Talk:Conclavism § Pope Krav?**^**Rauwerda, Annie (12 August 2022). "A long-running Wikipedia hoax and the problem of circular reporting".*Input*.**^**"Wikipedia's Credibility Is Toast | Wikipediocracy".*wikipediocracy.com*.**^**Hanne, Ilona (2 April 2022). "Shakespeare celebrated throughout April in Stratford New Zealand".*Stratford Press*. Archived from the original on 4 April 2022. Retrieved 25 December 2022.**^**"The Midnight Fair" (PDF). Reviews.*Books for Keeps*. No. 253. London. March 2022. p. 23. Retrieved 25 December 2022.**^**"3. Public consultation analysis". Consultation Results (PDF).*islington.gov.uk*(Report). Islington Council. 2022. p. 19. Retrieved 25 December 2022.**^**"Hingham Santa's Grotto". Reports (PDF).*hinghamtowncouncil.norfolkparishes.gov.uk*(Report). Annual Town Meetings. Hingham Town Council. April 2022. Retrieved 25 December 2022.**^**"IMP Trail 2021" (PDF).*Lincoln BIG Annual Report*. Lincoln Business Improvement Group. June 2021. p. 10. Retrieved 25 December 2022.**^**"Comfort Station presents: "Sproftacchel Park"".*logansquareartsfestival.com*. Logan Square Arts Festival. June 2022. Archived from the original on 28 July 2022. Retrieved 25 December 2022.**^**"In Praise of the Gooseberry".*The Guardian*. 28 July 2010. Retrieved 29 May 2024.**^**Auclair, Phillipe (21 August 2009).*Cantona: The Rebel Who Would Be King*. Pan Macmillan. p. 326. ISBN 9780230747012. Archived from the original on 11 October 2021. Retrieved 22 September 2022.**^**Kelly, Cillian (13 September 2024). "Wikipedia Wikismhedia".*The Cycling Website*. Retrieved 13 September 2024.**^**https://everything2.com/title/Dunning-Kruger+Effect**^**"Is Wikipedia Really To Blame For Video Game Console Generations?".*Time Extension*. 12 December 2022. Retrieved 31 January 2024.**^**Heinzman, Andrew (25 March 2022). "The Pringle Man's Name Is an Epic Wikipedia Hoax".*Review Geek*. Retrieved 26 March 2022.**^**Morse, Jack (25 March 2022). "The secret Wikipedia prank behind the Pringles mascot's first name".*Mashable*. Retrieved 26 March 2022.**^**Google Books**^**Zimmer, Ben (28 June 2023). "fluffle".*Ads-l*(Mailing list).**^**Randall, Eric (19 May 2014). 
"How a raccoon became an aardvark".*The New Yorker*. Archived from the original on 29 December 2016. Retrieved 24 November 2016.**^**Lewis, Peter H. (20 September 2012). "Los Angeles Angels centerfielder Mike Trout is a phenom, but will it last?".*ESPN*.**^**Special:Diff/788391600/788391711**^**Special:Diff/580902127/581421492**^**Mannix, J. (14 November 2022).*Are we gonna talk about the “Patrick Parker” portion? I’m pretty sure that was never an identity of his in the comics…**– via Reddit.***Patrick D. Parker:**Oh yeah, that was never in the comics, shows, movies, games or anything. [I] added that name, technically my name, to the wiki page for the Riddler back in 2013ish as one of the Riddler's aliases [as] a fun but dumb social experiment [to] test out how reliable Wikipedia was as a source [and] figured it would be taken down ages ago. Years go by and I forget about it. Jump to*The Batman*release and a friend texts me with a pic of the Riddler ID with my name [and] then it hit me. The writers must have done base level research on the Riddler, saw the name and thought it would be a neat little Easter egg for eagle eyed fans [only] what they ended up doing was taking a lie from the internet and made it into a truth by using that name as an alias for the Riddler so I have retroactively been made correct.**^**Cronin, Brian (2 April 2022). "Where Did Riddler Get the Aliases He Used in*The Batman*?".*Comic Book Resources*.**^**Jackson, Trey (28 February 2023). "*The Riddler: Year One*#3 Review".*Batman On Film*. | true | true | true | null | 2024-10-12 00:00:00 | 2017-04-30 00:00:00 | null | website | wikipedia.org | en.wikipedia.org | null | null |
21,666,476 | https://github.com/systemd/systemd/blob/master/NEWS | systemd/NEWS at main · systemd/systemd | Systemd | -
# NEWS
systemd System and Service Manager
CHANGES WITH 257 in spe:
Incompatible changes:
* The --purge switch of systemd-tmpfiles (which was added in v256) has
been reworked: it will now only apply to tmpfiles.d/ lines marked
with the new "$" flag. This is an incompatible change, and means any
tmpfiles.d/ files which shall be used together with --purge need to
be updated accordingly. This change has been made to make it harder
to accidentally delete too many files when using --purge incorrectly.
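For illustration only, a tmpfiles.d line opted in to purging and a
matching invocation might look like the following (the placement of
the '$' flag next to the line type, and the "mypkg" names, are
assumptions made for this sketch, not taken from these notes):

    d$ /var/lib/mypkg 0755 root root -

    systemd-tmpfiles --purge /usr/lib/tmpfiles.d/mypkg.conf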
* The systemd-creds 'cat' verb now expects base64-encoded encrypted
credentials for consistency with the 'decrypt' verb and the
LoadCredentialEncrypted= service setting. Previously it could only
read raw binary data.
Announcements of Future Feature Removals and Incompatible Changes:
* To work around limitations of X11's keyboard handling systemd's
keyboard mapping hardware database (hwdb.d/60-keyboard.hwdb) so far
mapped the microphone mute and touchpad on/off/toggle keys to the
function keys F20, F21, F22, F23 instead of their correct key
codes. This key code mangling will be removed in the next systemd
release v258. To maintain compatibility with X11 applications that
rely on the old function key code mappings, this mangling has now
been moved to the relevant X11 keyboard driver modules instead. Thus,
in order to ensure these keys continue to work as before make sure to
update the xf86-input-evdev and xf86-input-libinput packages to the
newest version before updating systemd to v258.
* Support for automatic flushing of the nscd user/group database caches
has been dropped.
* Support for cgroup v1 ('legacy' and 'hybrid' hierarchies) is now
considered obsolete and systemd by default will refuse to boot under
it. To forcibly reenable cgroup v1 support,
SYSTEMD_CGROUP_ENABLE_LEGACY_FORCE=1 must be set on kernel command
line. The meson option 'default-hierarchy=' is also deprecated, i.e.
only cgroup v2 ('unified' hierarchy) can be selected as build-time
default.
* Support for System V service scripts is deprecated and will be
removed in a future release. Please make sure to update your software
*now* to include a native systemd unit file instead of a legacy
System V script to retain compatibility with future systemd releases.
* Support for the SystemdOptions EFI variable is deprecated.
'bootctl systemd-efi-options' will emit a warning when used. It seems
that this feature is little-used and it is better to use alternative
approaches like credentials and confexts. The plan is to drop support
altogether at a later point, but this might be revisited based on
user feedback.
* systemd-run's switch --expand-environment= which currently is disabled
by default when combined with --scope, will be changed in a future
release to be enabled by default.
* The FileDescriptorName= setting for socket units is now honored by
Accept=yes sockets too, where it was previously silently ignored and
"connection" was used unconditionally.
* systemd-logind now always obeys inhibitor locks, where previously it
ignored locks taken by the caller or when the caller was root. A
privileged caller can always close the other sessions, remove the
inhibitor locks, or use --force or --check-inhibitors=no to ignore the
inhibitors. This change thus doesn't affect security, since everything
that was possible before at a given privilege level is still possible,
but it should make the inhibitor logic easier to use and understand,
and also help avoiding accidental reboots and shutdowns. New 'delay-weak'
and 'block-weak' inhibitor modes were added, if taken they will make
the inhibitor lock work as in the previous versions. Inhibitor locks
can also be taken by remote users (subject to polkit policy).
* systemd-nspawn will now mount the unified cgroup hierarchy into a
container if no systemd installation is found in a container's root
filesystem. $SYSTEMD_NSPAWN_UNIFIED_HIERARCHY=0 can be used to override
this behavior.
* D-Bus method org.freedesktop.systemd1.StartAuxiliaryScope() becomes
deprecated (reach out if you have use cases).
libsystemd:
* New sd-json component is now available as part of libsystemd. The
goal of the library is to allow structures to be conveniently
created in C code and serialized to JSON, and for JSON to
conveniently deserialized into in-memory structures, using callbacks
to handle specific keys. Various data types like integers, floats,
booleans, strings, UUIDs, hex-encoded strings, and arrays are
supported natively.
Service and system management:
* Environment variable $REMOTE_ADDR is now set when using socket
activation for AF_UNIX sockets.
* Multipath TCP (MPTCP) is now supported as a socket protocol.
* New crypttab options fido2-pin=, fido2-up=, fido2-uv= can be used to
enable/disable the PIN query, User Presence check, and User
Verification.
* New crypttab option password-cache=yes|no|read-only can be used to
customize password caching.
* New fstab option x-systemd.wants= creates "Wants" dependencies.
(This is similar to the previously available x-systemd.requires=.)
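Illustrative /etc/crypttab and /etc/fstab lines using these options
(device paths, the volume name and the unit name are made up for the
example):

    data   /dev/sda2   none   fido2-device=auto,fido2-pin=no,fido2-up=yes,password-cache=no

    /dev/sdb1   /srv/backup   ext4   defaults,x-systemd.wants=backup-prepare.service   0 2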
* The initialization of the system clock during boot and updates has
been simplified: either pid1 or systemd-timesyncd will pick the
latest time as indicated by the compiled-in epoch,
/usr/lib/clock-epoch, and /var/lib/systemd/timesync/clock. See
systemd(1) for a detailed, updated description.
* Ctrl-Alt-Delete is re-enabled during late shutdown, so that the user
can still initiate a reboot if the system freezes.
* Unit option PrivateUsers=identity can be used to request a user
namespace with an identity mapping for the first 65536 UIDs/GIDs.
This is analogous to the systemd-nspawn's --private-users=identity.
* Unit option PrivateTmp=disconnected can be used to specify that a
separate tmpfs instance should be used for /tmp/ and /var/tmp/ for
the unit.
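A minimal drop-in combining the two new sandboxing options could look
like this (service and file names are arbitrary):

    # /etc/systemd/system/myapp.service.d/sandbox.conf
    [Service]
    PrivateUsers=identity
    PrivateTmp=disconnected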
* A new sleep.conf HibernateOnACPower= option has been added, which
when disabled would suppress hibernation in suspend-then-hibernate
mode until the system is disconnected from a power source.
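A sketch of a drop-in disabling it, assuming the option lives in the
[Sleep] section like the other systemd-sleep.conf settings:

    # /etc/systemd/sleep.conf.d/ac.conf
    [Sleep]
    HibernateOnACPower=no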
* udev rules now set 'uaccess' for /dev/udmabuf, giving locally
logged-in users access to the hardware. This is necessary to support
IPMI cameras with libcamera.
* New RELEASE_TYPE= and EXPERIMENT= fields are documented for the
os-release file. For example, "RELEASE_TYPE=development|stable|lts"
can be used to indicate various stages of the release life cycle,
and "RELEASE_TYPE=experimental" can indicate experimental builds,
with the EXPERIMENT= field providing a human-readable description of
the nature of the experiment.
* The manager (and various other tools too) use pidfds in more places
to refer to processes.
* A bunch of patches to ease building against musl have been merged.
* A build option -D link-executor-shared=false can be used to build
the systemd-executor binary (added in the previous release) in a way
where it does not link to shared libsystemd-shared-….so library.
PID1 holds a reference to the executor binary that was on disk when
the manager was started or restarted, but the shared libraries it is
linked to are not loaded until the executor binary needs to be used.
This partial static linking is a workaround for the issue where,
during upgrades, the old libsystemd-shared-….so may have already
been removed and the pinned executor binary will just fail to
execute.
systemd-logind:
* New DesignatedMaintenanceTime= configuration option allows
shutdowns to be automatically scheduled at the specified time.
* logind now reacts to Ctrl-Alt-Shift-Esc being pressed. It will send
out a org.freedesktop.login1.SecureAttentionKey signal, indicating a
request by the user for the system to display a secure login dialog.
The handling of SAK can be suppressed in logind configuration.
systemd-machined:
* Unprivileged clients are now allowed to register VMs and containers.
Machines started via the [email protected] unit will now be
registered with systemd-machined.
systemd-resolved:
* resolvconf command now supports '-p' switch. If specified, the
interface will not be used as the default route.
* resolvectl now allows interactive polkit authorization. It gained a
--no-ask-password option to suppress it.
systemd-networkd and networkctl:
* IPv6 address labels can be configured in a new [IPv6AddressLabel]
section with Prefix= and Label= settings.
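An illustrative .network fragment (interface name, prefix and label
value are arbitrary):

    [Match]
    Name=eth0

    [IPv6AddressLabel]
    Prefix=2001:db8::/32
    Label=99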
* 'networkctl edit' can now read the new contents from standard input
with the new --stdin option.
* 'networkctl edit' and 'cat' now supports editing .netdev files by
link. 'networkctl cat' can also list all configuration files
associated with an interface at once with ':all'.
* networkctl gained a --no-ask-password option to suppress interactive
polkit authorization.
systemd-boot, systemd-stub, and related tools:
* The EFI stub now supports loading of .ucode sections with microcode
from addons.
* A new .profile PE section type is now documented and supported in
systemd-measure, ukify, systemd-stub and systemd-boot. Those new
sections allow multiple "profiles" to be stored together in the UKI,
with .profile sections creating groupings of sections in the UKI,
allowing some sections to be shared and other sections like .cmdline
or .initrd unique to the profile.
* ukify gained an --extend switch to import an existing UKI to
be extended, and a --measure-base= switch to support measurement
of multi-profile UKIs.
The journal:
* journalctl can now list invocations of a unit with the
--list-invocation options and show logs for a specific invocation
with the new --invocation/-I option. (This is analogous to the
--list-boots/--boot/-b options.)
systemd-sysupdate and related tools:
* systemd-sysupdate can be run as system service, allowing
unprivileged clients to update the system via D-Bus calls.
A new updatectl command-line tool can be used to control the
service.
* systemd-sysupdate gained a new --offline option to force it to
operate locally. This is useful when listing locally installed
versions.
* systemd-sysupdate gained a new --transfer-source= option to set the
directory to which transfer sources configured with
PathRelativeTo=explicit will be interpreted.
Miscellaneous:
* systemctl now supports the --now option with the 'reenable' verb.
* systemd-analyze will now show the SMBIOS #11 vendor strings set for
the machine with a new 'smbios11' verb.
* systemd-analyze gained a new --instance= option that can be used to
provide an instance name to analyze multiple templates instantiated
with the same instance name.
* The 'tpm2' verb which lists usable TPM2 devices has been moved from
systemd-creds to systemd-analyze.
* varlinkctl gained a new verb 'list-methods' to show a list of
methods implemented by a service.
* varlinkctl gained a --quiet/-q option to suppress method call
replies.
* varlinkctl gained a --graceful= option to suppress specified Varlink
errors.
* varlinkctl gained a --timeout= option to limit how long the
invocation can take.
* varlinkctl allows remote invocations over ssh, via the new
"ssh-exec:" address specification. It'll make an ssh connection,
start the specified executable on the remote, and communicate with
the remote process using the Varlink protocol.
"ssh:" address specification has been renamed to "ssh-unix:".
(The old syntax is still supported for backwards compatibility.)
* bootctl gained a --random-seed=yes|no option to control provisioning
of the random seed file in ESP. (This is useful when producing an
image that will be used multiple times.)
* systemd-cryptenroll gained new options --fido2-salt-file= and
--fido2-parameters-in-header= to simplify manual enrollment of FIDO2
tokens.
* systemd-cryptenroll, systemd-repart, and systemd-storagetm gained a
new --list-devices option to list appropriate candidate block
devices.
* systemd-repart's CopyBlocks= directive can now use a char device as
source (in addition to previously supported regular files and block
devices).
* systemd-repart gained a new Compression= and CompressionLevel=
settings to enable internal compression in filesystems created
offline.
* systemd-repart understands a new MakeSymlinks= option to create one
or more symlinks (each specified as a symlink name and target).
* systemd-mount can now output JSON with a new --json= switch.
* A new generator systemd-import-generator has been added to
synthesize image download jobs. This provides functionality
similar to importctl, but configured via the kernel command line and
system credentials.
* systemd-inhibit now allows interactive polkit authorization. It
gained a --no-ask-password option to suppress it.
* systemd-id128 gained a new 'var-partition-uuid' verb to calculate
the DPS UUID for /var/ keyed by the local machine-id.
* localectl gained a -l/--full option to show output without
ellipsization.
* 'busctl monitor' gained new options --num-matches= and --timeout=
to set the number of matches or limit the runtime of the command.
This is intended to be used in scripts.
* systemd-run can output some data as JSON via the new --json= option.
* timedatectl now supports interactive polkit authorization.
— <place>, <date>
CHANGES WITH 256:
Announcements of Future Feature Removals and Incompatible Changes:
* Support for automatic flushing of the nscd user/group database caches
will be dropped in a future release.
* Support for cgroup v1 ('legacy' and 'hybrid' hierarchies) is now
considered obsolete and systemd by default will refuse to boot under
it. To forcibly reenable cgroup v1 support,
SYSTEMD_CGROUP_ENABLE_LEGACY_FORCE=1 must be set on kernel command
line. The meson option 'default-hierarchy=' is also deprecated, i.e.
only cgroup v2 ('unified' hierarchy) can be selected as build-time
default.
* Support for System V service scripts is deprecated and will be
removed in a future release. Please make sure to update your software
*now* to include a native systemd unit file instead of a legacy
System V script to retain compatibility with future systemd releases.
* Support for the SystemdOptions EFI variable is deprecated.
'bootctl systemd-efi-options' will emit a warning when used. It seems
that this feature is little-used and it is better to use alternative
approaches like credentials and confexts. The plan is to drop support
altogether at a later point, but this might be revisited based on
user feedback.
* systemd-run's switch --expand-environment= which currently is disabled
by default when combined with --scope, will be changed in a future
release to be enabled by default.
* Previously, systemd-networkd did not explicitly remove any bridge
VLAN IDs assigned on bridge master and ports. Since version 256, if a
.network file for an interface has at least one valid setting in the
[BridgeVLAN] section, then all assigned VLAN IDs on the interface
that are not configured in the .network file are removed.
* IPForward= setting in .network file is deprecated and replaced with
IPv4Forwarding= and IPv6Forwarding= settings. These new settings are
supported both in .network file and networkd.conf. If specified in a
.network file, they control corresponding per-link settings. If
specified in networkd.conf, they control corresponding global
settings. Note, previously IPv6SendRA= and IPMasquerade= implied
IPForward=, but now they imply the new per-link settings. One of the
simplest ways to migrate configurations, that worked as a router with
the previous version, is enabling both IPv4Forwarding= and
IPv6Forwarding= in networkd.conf. See systemd.network(5) and
networkd.conf(5) for more details.
* systemd-gpt-auto-generator will stop generating units for ESP or
XBOOTLDR partitions if it finds mount entries for or below the /boot/
or /efi/ hierarchies in /etc/fstab. This is to prevent the generator
from interfering with systems where the ESP is explicitly configured
to be mounted at some path, for example /boot/efi/ (this type of
setup is obsolete, but still commonly found).
* The behavior of systemd-sleep and systemd-homed has been updated to
freeze user sessions when entering the various sleep modes or when
locking a homed-managed home area. This is known to cause issues with
the proprietary NVIDIA drivers. Packagers of the NVIDIA proprietary
drivers may want to add drop-in configuration files that set
SYSTEMD_SLEEP_FREEZE_USER_SESSIONS=false for systemd-suspend.service
and related services, and SYSTEMD_HOME_LOCK_FREEZE_SESSION=false for
systemd-homed.service.
* systemd-tmpfiles and systemd-sysusers, when given a relative
configuration file path (with at least one directory separator '/'),
will open the file directly, instead of searching for the given
partial path in the standard locations. The old mode wasn't useful
because tmpfiles.d/ and sysusers.d/ configuration has a flat
structure with no subdirectories under the standard locations and
this change makes it easier to work with local files with those
tools.
* systemd-tmpfiles now properly applies nested configuration to 'R' and
'D' stanzas. For example, with the combination of 'R /foo' and 'x
/foo/bar', /foo/bar will now be excluded from removal.
* systemd.crash_reboot and related settings are deprecated in favor of
systemd.crash_action=.
* Stable releases for version v256 and newer will now be pushed in the
main repository. The systemd-stable repository will be used for existing
stable branches (v255-stable and lower), and when they reach EOL it will
be archived.
General Changes and New Features:
* Various programs will now attempt to load the main configuration file
from locations below /usr/lib/, /usr/local/lib/, and /run/, not just
below /etc/. For example, systemd-logind will look for
/etc/systemd/logind.conf, /run/systemd/logind.conf,
/usr/local/lib/systemd/logind.conf, and /usr/lib/systemd/logind.conf,
and use the first file that is found. This means that the search
logic for the main config file and for drop-ins is now the same.
Similarly, kernel-install will look for the config files in
/usr/lib/kernel/ and the other search locations, and now also
supports drop-ins.
systemd-udevd now supports drop-ins for udev.conf.
* A new 'systemd-vpick' binary has been added. It implements the new
vpick protocol, where a "*.v/" directory may contain multiple files
which have versions (following the UAPI version format specification)
embedded in the file name. The files are ordered by version and
the newest one is selected.
systemd-nspawn --image=/--directory=, systemd-dissect,
systemd-portabled, and the RootDirectory=, RootImage=,
ExtensionImages=, and ExtensionDirectories= settings for units now
support the vpick protocol and allow the latest version to be
selected automatically if a "*.v/" directory is specified as the
source.
* Encrypted service credentials can now be made accessible to
unprivileged users. systemd-creds gained new options --user/--uid=
for encrypting/decrypting a credential for a specific user.
* New command-line tool 'importctl' to download, import, and export
disk images via systemd-importd is added with the following verbs:
pull-tar, pull-raw, import-tar, import-raw, import-fs, export-tar,
export-raw, list-transfers, and cancel-transfer. This functionality
was previously available in "machinectl", where it was used
exclusively for machine images. The new "importctl" generalizes this
for sysext, confext, and portable service images.
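For instance, based on the verbs listed above (the URL and local image
name are placeholders):

    importctl pull-raw https://example.com/images/base.raw base
    importctl list-transfers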
* The systemd sources may now be compiled cleanly with all OpenSSL 3.0
deprecations removed, including the OpenSSL engine logic turned off.
Service Management:
* New system manager setting ProtectSystem= has been added. It is
analogous to the unit setting, but applies to the whole system. It is
enabled by default in the initrd.
Note that this means that code executed in the initrd cannot naively
expect to be able to write to /usr/ during boot. This affects
dracut <= 101, which wrote "hooks" to /lib/dracut/hooks/. See
https://github.com/dracut-ng/dracut-ng/commit/a45048b80c27ee5a45a380.
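A sketch of enabling this explicitly, assuming the setting is accepted
in the [Manager] section of system.conf like other manager options:

    # /etc/systemd/system.conf.d/10-protect-system.conf
    [Manager]
    ProtectSystem=yes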
* New unit setting WantsMountsFor= has been added. It is analogous to
RequiresMountsFor=, but creates a Wants= dependency instead of
Requires=. This new logic is now used in various places where mounts
were added as dependencies for other settings (WorkingDirectory=-…,
PrivateTmp=yes, cryptsetup lines with 'nofail').
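For example (the paths are arbitrary):

    [Unit]
    WantsMountsFor=/var/lib/myapp /srv/cache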
* New unit setting MemoryZSwapWriteback= can be used to control the new
memory.zswap.writeback cgroup knob added in kernel 6.8.
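For example, in a service unit (the value is chosen arbitrarily):

    [Service]
    MemoryZSwapWriteback=no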
* The manager gained a org.freedesktop.systemd1.StartAuxiliaryScope()
D-Bus method to devolve some processes from a service into a new
scope. This new scope will remain running, even when the original
service unit is restarted or stopped. This allows a service unit to
split out some worker processes which need to continue running.
Control group properties of the new scope are copied from the
originating unit, so various limits are retained.
* Units now expose properties EffectiveMemoryMax=,
EffectiveMemoryHigh=, and EffectiveTasksMax=, which report the
most stringent limit systemd is aware of for the given unit.
* A new unit file specifier %D expands to $XDG_DATA_HOME (for user
services) or /usr/share/ (for system services).
* AllowedCPUs= now supports specifier expansion.
* What= setting in .mount and .swap units now accepts fstab-style
identifiers, for example UUID=… or LABEL=….
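An illustrative mount unit using such an identifier (label and paths
are made up):

    # data.mount
    [Unit]
    Description=Data volume

    [Mount]
    What=LABEL=data
    Where=/data
    Type=ext4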
* RestrictNetworkInterfaces= now supports alternative network interface
names.
* PAMName= now implies SetLoginEnvironment=yes.
* systemd.firstboot=no can be used on the kernel command-line to
disable interactive queries, but allow other first boot configuration
to happen based on credentials.
* The system's hostname can be configured via the systemd.hostname
system credential.
* The systemd binary will no longer chainload sysvinit's "telinit"
binary when called under the init/telinit name on a system that isn't
booted with systemd. This previously has been supported to make sure
a distribution that has both init systems installed can reasonably
switch from one to the other via a simple reboot. Distributions
apparently have lost interest in this, and the functionality has not
been supported for a long time on the primary distribution it was
still intended for, and hence has been removed now.
* A new concept called "capsules" has been introduced. "Capsules" wrap
additional per-user service managers, whose users are transient and
are only defined as long as the service manager is running. (This is
implemented via DynamicUser=1), allowing a user manager to be used to
manage a group of processes without needing to create an actual user
account. These service managers run with home directories of
/var/lib/capsules/<capsule-name> and can contain regular services and
other units. A capsule is started via a simple "systemctl start
capsule@<name>.service". See the [email protected](5) man page for
further details.
Various systemd tools (including, and most importantly, systemctl and
systemd-run) have been updated to interact with capsules via the new
"--capsule="/"-C" switch.
* .socket units gained a new setting PassFileDescriptorsToExec=, taking
a boolean value. If set to true, the file descriptors the socket unit
encapsulates are passed to the ExecStartPost=, ExecStopPre=, and
ExecStopPost= commands using the usual $LISTEN_FDS interface. This may
be used for doing additional initializations on the sockets once they
are allocated, for example to install an additional eBPF program on
them. (A sketch of a helper consuming $LISTEN_FDS appears at the end
of this section.)
* The .socket setting MaxConnectionsPerSource= (which so far put a
limit on concurrent connections per IP in Accept=yes socket units),
now also has an effect on AF_UNIX sockets: it will put a limit on the
number of simultaneous connections from the same source UID (as
determined via SO_PEERCRED). This is useful for implementing IPC
services in a simple Accept=yes mode.
* The service manager will now maintain a counter of soft reboot cycles
the system went through. It may be queried via the D-Bus APIs.
* systemd's execution logic now supports the new pidfd_spawn() API
introduced by glibc 2.39, which allows us to invoke a subprocess in a
target cgroup and get a pidfd back in a single operation.
* systemd/PID 1 will now send an additional sd_notify() message to its
supervising VMM or container manager reporting the selected hostname
("X_SYSTEMD_HOSTNAME=") and machine ID ("X_SYSTEMD_MACHINE_ID=") at
boot. Moreover, the service manager will send additional sd_notify()
messages ("X_SYSTEMD_UNIT_ACTIVE=") whenever a target unit is
reached. This can be used by VMMs/container managers to schedule
access to the system precisely. For example, the moment a system
reports "ssh-access.target" being reached a VMM/container manager
knows it can now connect to the system via SSH. Finally, a new
sd_notify() message ("X_SYSTEMD_SIGNALS_LEVEL=2") is sent the moment
PID 1 has successfully completed installation of its various UNIX
process signal handlers (i.e. the moment where SIGRTMIN+4 sent to
PID 1 will start to have the effect of shutting down the system
cleanly). X_SYSTEMD_SHUTDOWN= is sent shortly before the system shuts
down, and carries a string identifying the type of shutdown,
i.e. "poweroff", "halt", "reboot". X_SYSTEMD_REBOOT_PARAMETER= is
sent at the same time and carries the string passed to "systemctl
--reboot-argument=" if there was one.
* New D-Bus properties ExecMainHandoffTimestamp and
ExecMainHandoffTimestampMonotonic are now published by services
units. This timestamp is taken as the very last operation before
handing off control to invoked binaries. This information is
available for other unit types that fork off processes (i.e. mount,
swap, socket units), but currently only via "systemd-analyze dump".
* An additional timestamp is now taken by the service manager when a
system shutdown operation is initiated. It can be queried via D-Bus
during the shutdown phase. It's passed to the following service
manager invocation on soft reboots, which will then use it to log the
overall "grey-out" time of the soft reboot operation, i.e. the time
when the shutdown began until the system is fully up again.
* "systemctl status" will now display the invocation ID in its usual
output, i.e. the 128bit ID uniquely assigned to the current runtime
cycle of the unit. The ID has been supported for a long time, but is
now more prominently displayed, as it is a very useful handle to a
specific invocation of a service.
* systemd now generates a new "taint" string "unmerged-bin" for systems
that have /usr/bin/ and /usr/sbin/ separate. It's generally
recommended to make the latter a symlink to the former these days.
* A new systemd.crash_action= kernel command line option has been added
that configures what to do after the system manager (PID 1) crashes.
This can also be configured through CrashAction= in systemd.conf.
* "systemctl kill" now supports --wait which will make the command wait
until the signalled services terminate.
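To illustrate the $LISTEN_FDS interface mentioned in the
PassFileDescriptorsToExec= item above, here is a minimal Python sketch
of a helper as it might be invoked from ExecStartPost= of such a socket
unit. It is purely illustrative and not part of systemd; it relies only
on the documented convention that $LISTEN_PID names the intended
receiver of the descriptors and that the passed file descriptors start
at fd 3:

    #!/usr/bin/env python3
    # Sketch: consume the sockets a .socket unit passes via $LISTEN_FDS.
    import os, socket

    def listen_fds():
        # Only take the fds if they were really meant for this process.
        if os.environ.get("LISTEN_PID") != str(os.getpid()):
            return []
        count = int(os.environ.get("LISTEN_FDS", "0"))
        return list(range(3, 3 + count))   # passed fds start at fd 3

    for fd in listen_fds():
        sock = socket.socket(fileno=fd)    # wrap the inherited descriptor
        print("inherited socket:", sock.getsockname())
        # ... additional initialization (setsockopt(), attaching eBPF, ...)
        sock.detach()                      # leave the fd open for the service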
Journal:
* systemd-journald can now forward journal entries to a socket
(AF_INET, AF_INET6, AF_UNIX, or AF_VSOCK). The socket can be
specified in journald.conf via a new option ForwardToSocket= or via
the 'journald.forward_to_socket' credential. Log records are sent in
the Journal Export Format. A related setting MaxLevelSocket= has been
added to control the maximum log levels for the messages sent to this
socket. (A sketch of a minimal receiver for this format appears at
the end of this section.)
* systemd-journald now also reads the journal.storage credential when
determining where to store journal files.
* systemd-vmspawn gained a new --forward-journal= option to forward the
virtual machine's journal entries to the host. This is done over an
AF_VSOCK socket, i.e. it does not require networking in the guest.
* journalctl gained option '-i' as a shortcut for --file=.
* journalctl gained a new -T/--exclude-identifier= option to filter
out certain syslog identifiers.
* journalctl gained a new --list-namespaces option.
* systemd-journal-remote now also accepts AF_VSOCK and AF_UNIX sockets
(so it can be used to receive entries forwarded by systemd-journald).
* systemd-journal-gatewayd allows restricting the time range of
retrieved entries with a new "realtime=[<since>]:[<until>]" URL
parameter.
* systemd-cat gained a new option --namespace= to specify the target
journal namespace to which the output shall be connected.
* systemd-bsod gained a new option --tty= to specify the output TTY.
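To illustrate the ForwardToSocket= item above, the following minimal
Python sketch receives forwarded records and parses the Journal Export
Format (one FIELD=value line per field, an empty line terminating each
entry; binary-valued fields are sent as the field name, a 64-bit
little-endian length, the raw payload, and a trailing newline). It is
purely illustrative and not part of systemd; it assumes a stream
socket, and the bind address and port are arbitrary placeholders for
whatever address was configured in ForwardToSocket=:

    #!/usr/bin/env python3
    # Sketch: accept one connection and print MESSAGE= of each entry.
    import socket, struct

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 19999))        # placeholder address/port
    srv.listen(1)
    conn, _ = srv.accept()
    f = conn.makefile("rb")

    entry = {}
    while True:
        line = f.readline()
        if not line:                      # sender closed the connection
            break
        line = line.rstrip(b"\n")
        if not line:                      # blank line: entry is complete
            print(entry.get("MESSAGE", ""))
            entry = {}
        elif b"=" in line:                # ordinary text field
            name, value = line.split(b"=", 1)
            entry[name.decode()] = value.decode(errors="replace")
        else:                             # binary field: length-prefixed
            (size,) = struct.unpack("<Q", f.read(8))
            entry[line.decode()] = f.read(size)
            f.read(1)                     # trailing newline after payload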
Device Management:
* /dev/ now contains symlinks that combine by-path and by-{label,uuid}
information:
/dev/disk/by-path/<path>/by-<label|uuid|…>/<label|uuid|…>
This allows distinguishing partitions with identical contents on
multiple storage devices. This is useful, for example, when copying
raw disk contents between devices.
* systemd-udevd now creates persistent /dev/media/by-path/ symlinks for
media controllers. For example, the uvcvideo driver may create
/dev/media0 which will be linked as
/dev/media/by-path/pci-0000:04:00.3-usb-0:1:1.0-media-controller.
* A new unit systemd-udev-load-credentials.service has been added
to pick up udev.conf drop-ins and udev rules from credentials.
* 'udevadm test' and 'udevadm test-builtin' commands now do not change
any settings: sysfs attributes, sysctls, the udev database, and so on.
E.g. 'udevadm test-builtin net_setup_link /sys/class/net/INTERFACE'
does not change any interface settings, but only prints which .link
file matches the interface. So, even privileged users can safely
invoke the commands.
* An allowlist/denylist may be specified to filter which sysfs
attributes are used when crafting network interface names. Those
lists are stored as hwdb entries
ID_NET_NAME_ALLOW_<sysfsattr>=0|1
and
ID_NET_NAME_ALLOW=0|1.
The goal is to avoid unexpected changes to interface names when the
kernel is updated and new sysfs attributes become visible.
* A new unit tpm2.target has been added to provide a synchronization
point for units which expect the TPM hardware to be available. A new
generator "systemd-tpm2-generator" has been added that will insert
this target whenever it detects that the firmware has initialized a
TPM, but Linux hasn't loaded a driver for it yet.
* systemd-backlight now properly supports numbered devices which the
kernel creates to avoid collisions in the leds subsystem.
* systemd-hwdb update operation can be disabled with a new environment
variable SYSTEMD_HWDB_UPDATE_BYPASS=1.
systemd-hostnamed:
* systemd-hostnamed now exposes the machine ID and boot ID via
D-Bus. It also exposes the host's AF_VSOCK CID, if available.
* systemd-hostnamed now provides a basic Varlink interface.
* systemd-hostnamed exports the full data in os-release(5) and
machine-info(5) via D-Bus and Varlink.
* hostnamectl now shows the system's product UUID and hardware serial
number if known.
Network Management:
* systemd-networkd now provides a basic Varlink interface.
* systemd-networkd's ARP proxy support gained a new option to configure
a private VLAN variant of the proxy ARP supported by the kernel under
the name IPv4ProxyARPPrivateVLAN=.
* systemd-networkd now exports the NamespaceId and NamespaceNSID
properties via D-Bus and Varlink (these expose the inode number and
NSID of the network namespace that the networkd instance manages).
* systemd-networkd now supports IPv6RetransmissionTimeSec= and
UseRetransmissionTime= settings in .network files to configure
retransmission time for IPv6 neighbor solicitation messages.
* networkctl gained new verbs 'mask' and 'unmask' for masking networkd
configuration files such as .network files.
* 'networkctl edit --runtime' allows editing volatile configuration
under /run/systemd/network/.
* The implementation behind TTLPropagate= network setting has been
removed and the setting is now ignored.
* systemd-network-generator will now pick up .netdev/.link/.network/
networkd.conf configuration from system credentials.
* systemd-networkd will now pick up wireguard secrets from
credentials.
* systemd-networkd's Varlink API now supports enumerating LLDP peers.
* .link files now support new Property=, ImportProperty=,
UnsetProperty= fields for setting udev properties on a link.
* The various .link files that systemd ships for interfaces that are
supposed to be managed by systemd-networkd only now carry a
ID_NET_MANAGED_BY=io.systemd.Network udev property ensuring that
other network management solutions honouring this udev property do
not come into conflict with networkd, trying to manage these
interfaces.
* .link files now support a new ReceivePacketSteeringCPUMask= setting
for configuring which CPUs to steer incoming packets to.
* The [Network] section in .network files gained a new setting
UseDomains=, which is a single generic knob for controlling the
settings of the same name in the [DHCPv4], [DHCPv6], and
[IPv6AcceptRA] sections.
* The 99-default.link file we ship by default (that defines the policy
for all network devices to which no other .link file applies) now
lists "mac" among AlternativeNamesPolicy=. This means that network
interfaces will now by default gain an additional MAC-address based
alternative device name. (i.e. enx…)
systemd-nspawn:
* systemd-nspawn now provides a /run/systemd/nspawn/unix-export/
directory where the container payload can expose AF_UNIX sockets to
allow them to be accessed from outside.
* systemd-nspawn will tint the terminal background for containers in a
blueish color. This can be controlled with the new --background=
switch or the new $SYSTEMD_TINT_BACKGROUND environment variable.
* systemd-nspawn gained support for the 'owneridmap' option for --bind=
mounts to map the target directory owner from inside the container to
the owner of the directory bound from the host filesystem.
* systemd-nspawn now supports moving Wi-Fi network devices into a
container, just like other network interfaces.
systemd-resolved:
* systemd-resolved now reads RFC 8914 EDE error codes provided by
upstream DNS services.
* systemd-resolved and resolvectl now support RFC 9460 SVCB and HTTPS
records, as well as RFC 2915 NAPTR records.
* resolvectl gained a new option --relax-single-label= to allow
querying single-label hostnames via unicast DNS on a per-query basis.
* systemd-resolved's Varlink IPC interface now supports resolving
DNS-SD services as well as an API for resolving raw DNS RRs.
* systemd-resolved's .dnssd DNS_SD service description files now
support DNS-SD "subtypes" via the new SubType= setting.
* systemd-resolved's configuration may now be reloaded without
restarting the service. (i.e. "systemctl reload systemd-resolved" is
now supported)
SSH Integration:
* An sshd_config drop-in has been added to allow SSH keys acquired via
userdbctl (for example, exposed by homed accounts) to be used for
authorization of incoming SSH connections. This uses the
AuthorizedKeysCommand stanza of sshd_config. Note that sshd only
allows a single command to be configured this way, hence this drop-in
might conflict with other uses of the logic. It is possible to
chainload another, similar tool of another subsystem via the --chain
switch of userdbctl, to support both in parallel. See the
"INTEGRATION WITH SSH" section in userdbctl(1) for details on this.
Our recommendation for how to combine other subsystems' use of the
SSH authorized keys logic with systemd's userdbctl functionality,
however, is to implement the APIs described here:
https://systemd.io/USER_GROUP_API – in that case this newly added
sshd_config integration would just work and do the right thing for
all backends.
* A small new unit generator "systemd-ssh-generator" has been added. It
checks if the sshd binary is installed. If so, it binds it via
per-connection socket activation to various sockets depending on the
execution context:
• If the system is run in a VM providing AF_VSOCK support, it
automatically binds sshd to AF_VSOCK port 22.
• If the system is invoked as a full-OS container and the container
manager pre-mounts a directory /run/host/unix-export/, it will
bind sshd to an AF_UNIX socket /run/host/unix-export/ssh. The
idea is the container manager bind mounts the directory to an
appropriate place on the host as well, so that the AF_UNIX socket
may be used to easily connect from the host to the container.
• sshd is also bound to an AF_UNIX socket
/run/ssh-unix-local/socket, which may be used to access resources of
other local users via ssh/sftp in a "sudo"-like fashion.
• Via the kernel command line option "systemd.ssh_listen=" and the
system credential "ssh.listen", sshd may be bound to additional,
explicitly configured sockets, including AF_INET/AF_INET6 ports.
In particular the first two mechanisms should make dealing with local
VMs and full OS containers a lot easier, as SSH connections will
*just* *work* from the host – even if no networking is available
whatsoever.
systemd-ssh-generator optionally generates a per-connection
socket activation service file wrapping sshd. This is only done if
the distribution does not provide one on its own under the name
"sshd@.service". The generated unit only works correctly if the SSH
privilege separation ("privsep") directory exists. Unfortunately,
distributions vary wildly in where they place this directory. An
incomplete list:
• /usr/share/empty.sshd/ (new fedora)
• /var/empty/
• /var/empty/sshd/
• /run/sshd/ (debian/ubuntu?)
If the SSH privsep directory is placed below /var/ or /run/ care
needs to be taken that the directory is created automatically at boot
if needed, since these directories possibly or always come up
empty. This can be done via a tmpfiles.d/ drop-in. You may use the
"sshdprivsepdir" meson option provided by systemd to configure the
directory, in case you want systemd to create the directory as needed
automatically, if your distribution does not cover this natively.
Recommendations to distributions, in order to make things just work:
• Please provide a per-connection SSH service file under the name
"sshd@.service".
• Please move the SSH privsep dir into /usr/ (so that it is truly
immutable on image-based operating systems, is strictly under
package manager control, and never requires recreation if the
system boots up with an empty /run/ or /var/).
• As an extension of this: please consider following Fedora's lead
here, and use /usr/share/empty.sshd/ to minimize needless
differences between distributions.
• If your distribution insists on placing the directory in /var/ or
/run/ then please at least provide a tmpfiles.d/ drop-in to
recreate it automatically at boot, so that the sshd binary just
works, regardless in which context it is called.
* A small tool "systemd-ssh-proxy" has been added, which is supposed to
act as counterpart to "systemd-ssh-generator". It's a small plug-in
for the SSH client (via ProxyCommand/ProxyUseFdpass) to allow it to
connect to AF_VSOCK or AF_UNIX sockets. Example: "ssh vsock/4711"
connects to a local VM with cid 4711, or "ssh
unix/run/ssh-unix-local/socket" to connect to the local host via the
AF_UNIX socket /run/ssh-unix-local/socket.
systemd-boot and systemd-stub and Related Tools:
* TPM 1.2 PCR measurement support has been removed from systemd-stub.
TPM 1.2 is obsolete and – due to the (by today's standards) weak
cryptographic algorithms it only supports – does not actually provide
the security benefits it's supposed to provide. Given that the rest
of systemd's codebase never supported TPM 1.2, the support has now
been removed from systemd-stub as well.
* systemd-stub will now measure its payload via the new EFI
Confidential Computing APIs (CC), in addition to the pre-existing
measurements to TPM.
* confexts are loaded by systemd-stub from the ESP as well.
* kernel-install gained support for --root= for the 'list' verb.
* bootctl now provides a basic Varlink interface and can be run as a
daemon via a template unit.
* systemd-measure gained new options --certificate=, --private-key=,
and --private-key-source= to allow using OpenSSL's "engines" or
"providers" as the signing mechanism to use when creating signed
TPM2 PCR measurement values.
* ukify gained support for signing of PCR signatures via OpenSSL's
engines and providers.
* ukify now supports zboot kernels.
* systemd-boot now supports passing additional kernel command line
switches to invoked kernels via an SMBIOS Type #11 string
"io.systemd.boot.kernel-cmdline-extra". This is similar to the
pre-existing support for this in systemd-stub, but also applies to
Type #1 Boot Loader Specification Entries.
* systemd-boot's automatic SecureBoot enrollment support gained support
for enrolling "dbx" too (Previously, only db/KEK/PK enrollment was
supported). It also now supports UEFI "Custom" and "Audit" modes.
* The pcrlock policy is saved in an unencrypted credential file
"pcrlock.<entry-token>.cred" under XBOOTLDR/ESP in the
/loader/credentials/ directory. It will be picked up at boot by
systemd-stub and passed to the initrd, where it can be used to unlock
the root file system.
* systemd-pcrlock gained an --entry-token= option to configure the
entry-token.
* systemd-pcrlock now provides a basic Varlink interface and can be run
as a daemon via a template unit.
* systemd-pcrlock's TPM nvindex access policy has been modified, this
means that previous pcrlock policies stored in nvindexes are
invalidated. They must be removed (systemd-pcrlock remove-policy) and
recreated (systemd-pcrlock make-policy). For the time being
systemd-pcrlock remains an experimental feature, but it is expected
to become stable in the next release, i.e. v257.
* systemd-pcrlock's --recovery-pin= switch now takes three values:
"hide", "show", "query". If "show" is selected the automatically
generated recovery PIN is shown to the user. If "query" is selected
then the PIN is queried from the user.
* sd-stub gained support for the new ".ucode" PE section in UKIs, that
may contain CPU microcode data. When control is handed over to the
Linux kernel this data is prepended to the set of initrds passed.
systemd-run/run0:
* systemd-run is now a multi-call binary. When invoked as 'run0', it
provides an interface similar to 'sudo', with all arguments starting
at the first non-option parameter being treated as the command to
invoke as root. Unlike 'sudo' and similar tools, it does not make use
of setuid binaries or other privilege escalation methods, but instead
runs the specified command as a transient unit, which is started by
the system service manager, so privileges are dropped, rather than
gained, thus implementing a much more robust and safe security
model. As usual, authorization is managed via Polkit.
* systemd-run/run0 will now tint the terminal background on supported
terminals: in a reddish tone when invoking a root service, in a
yellowish tone otherwise. This may be controlled and turned off via
the new --background= switch or the new $SYSTEMD_TINT_BACKGROUND
environment variable.
* systemd-run gained a new option '--ignore-failure' to suppress
command failures.
Command-line tools:
* 'systemctl edit --stdin' allows creation of unit files and drop-ins
with contents supplied via standard input. This is useful when creating
configuration programmatically; the tool takes care of figuring out
the file name, creating any directories, and reloading the manager
afterwards.
* 'systemctl disable --now' and 'systemctl mask --now' now work
correctly with template units.
* 'systemd-analyze architectures' lists known CPU architectures.
* 'systemd-analyze --json=…' is supported for 'architectures',
'capability', 'exit-status'.
* 'systemd-tmpfiles --purge' will purge (remove) all files and
directories created via tmpfiles.d configuration.
* systemd-id128 gained new options --no-pager, --no-legend, and
-j/--json=.
* hostnamectl gained '-j' as shortcut for '--json=pretty' or
'--json=short'.
* loginctl now supports -j/--json=.
* resolvectl now supports -j/--json= for --type=. | true | true | true | The systemd System and Service Manager . Contribute to systemd/systemd development by creating an account on GitHub. | 2024-10-12 00:00:00 | 2015-03-25 00:00:00 | https://repository-images.githubusercontent.com/32873313/aa4c3b00-128e-11ea-8853-4f32296220f0 | object | github.com | GitHub | null | null |
36,803,251 | https://99percentinvisible.org/article/guerrilla-drive-ins-mobile-urban-movie-theaters-animate-disused-spaces/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
17,717,569 | https://www.huckmag.com/art-and-culture/film-2/the-enduring-legacy-of-wild-style-hip-hops-first-ever-film/ | The enduring legacy of Wild Style, hip hop’s first ever film | null | # The enduring legacy of Wild Style, hip hop’s first ever film
- Text by Miss Rosen
Back in 1978, artist Charlie Ahearn saw a couple of vibrant murals in the handball courts of the Smith Projects in New York’s Lower East Side. The word “LEE” appeared across them in big bold letters. Ahearn was intrigued, and quickly realised it was the work of Lee Quinones, one of graffiti’s greatest writers.
A year later, Ahearn met Lee and Fab 5 Freddy during the historic *Times Square Show.* The trio immediately started collaborating. At the time, the words “wild style” were on everybody’s lips – it was the name for the colorful, hyper-stylised letterforms dominating graffiti that most people could not read.
Simultaneously, hip hop music was sizzling in the clubs and parks, as the first generation of DJs spun breakbeats while MCs tore up the mic and b-boys rocked the floor. As all of this was happening on his doorstep in New York, Ahearn decided to turn it into *Wild Style –* the first ever hip hop feature film.
Now, to celebrate the 35th anniversary of the film that launched the culture around the world, SummerStage will host a reunion concert and film screening at the very location the movie’s climactic final scene was filmed. On Thursday, August 9, Grandmaster Caz will host a night of live performances by Cold Crush Brothers, GrandWizzard Theodore & the Fantastic 5, Double Trouble, and a special surprise from the Chief Rocker Busy Bee Starski, followed by a screening of the film.
“In *Wild Style*, it was all outlaw parties,” Ahearn says, with a laugh. “The Amphitheater was painted illegally. We performed there illegally in both 1981 and 1982, and we had the place mobbed with people mostly from Alphabet City, which was notorious. The shows were done with giant lights, giant speakers, and no permissions. That was hip hop and people should not forget that.”
In fact, *Wild Style* is more than the story of characters in the script – it’s a document of the way the people themselves were living. The famed scene with Grandmaster Flash DJing in the kitchen was actually shot in his own apartment. The stick-up kids were cast on the spot, provided their own weapons, and ad-libbed the scene. Double Trouble’s famed “Stoop Rap” was about the Funky Four’s breakup, which was happening in real time.
“What has impressed me the most about the film is that it comes very close to what we did back in the days,” GrandWizzard Theodore, the inventor of the scratch, remembers. “Charlie Ahearn captured the essence of hip hop and the things that went on in our lives other than music – like single-parent homes, going to school for so many years and still not learning anything, just being in the street.”
“When Busy Bee Starski first started bringing Charlie around, Charlie was taking a lot of pictures. Everyone was trying to figure out why he was taking so many pictures, but I was looking ahead and realised one day people were going to be looking back at this. He was documenting our lives.”
Busy Bee laughs when he thinks back on the first time he met Ahearn in 1980. “I thought Charlie was a policeman,” he says. “Where I come from, in the ghetto back in the ’70s, there were no white guys coming around looking for black guys unless they were the police. Charlie was looking for me and I started ducking him until he finally caught up to me. He told me who he was and what he wanted to do and I was like, ‘Huhh? What? You ain’t the police?’ and we started laughing. After that, I acted like I knew him for 30 years.”
That sense of belonging is what made *Wild Style* iconic from its very release, making it not only a hit in the United States, but also across the world. For the first time, international audiences were seeing DJs rock a party, MCs turn it up, B-boys battle, and graffiti writers bomb.
And the cherry on top of it all is the original soundtrack laid down in the studio, featuring Chris Stein of Blondie on guitar. 12 songs were cut and two copies a piece given out to Flash, Theodore, Charlie Chase, Tony Tone, and DJ AJ. Then the MCs got to work and created a masterpiece, with classic cuts like “Down by Law” and “Gangbusters.”
Over the years, the film has been the source of inspiration and influence for countless artists, dancers, and musicians. The iconic logo, designed by ZEPHYR and painted by SHARP and REVOLT in Riverside Park in the dead of night, has been remade by countless companies, while the music from the film has been sampled by everyone from Nas, Missy Elliott, and Gang Starr to the Beastie Boys, the Roots, and Cypress Hill.
“Charlie Ahearn *is* hip hop,” adds Busy Bee, fondly. “He kept the culture going across the earth. We were the first MCs, breakdancers, graffiti writers, beatboxers, and DJS to ever go to Japan. We took it to another world so that today, there is no place can go and not see hip hop. We took over the planet.”
*(Header image: Kase 2, Busy Bee, Fab 5 Freddy, and friends at the cheeba spot, 1980. Photo © Charlie Ahearn.)*
4,332,896 | http://dhruvbird.blogspot.com/2012/08/sql-query-execution-optimization.html | SQL Query Execution Optimization opportunities missed: Part-1 | Dhruv Matani | An example of one such query is the quintessential pagination query. Assume you have a schema for comments that looks like this:
CREATE TABLE COMMENTS(
  post_id      INT,
  comment_id   INT,
  comment_text TEXT,
  poster_id    INT,
  posted_at    TIMESTAMP,
  PRIMARY KEY (comment_id),
  UNIQUE KEY (post_id, posted_at, comment_id)
);
Now, you display the first page (say the most recent 10 comments) for a post (with id=33) using a query such as:
SELECT * FROM COMMENTS WHERE post_id=33 ORDER BY posted_at DESC LIMIT 0,10;
The `LIMIT 0,10` syntax fetches at most 10 comments from offset 0 in the result set.
The most natural way to fetch the 2nd page would be to say:
SELECT * FROM COMMENTS WHERE post_id=33 ORDER BY posted_at DESC LIMIT 10,10;
Similarly, you can get the 3rd page by saying:
SELECT * FROM COMMENTS WHERE post_id=33 ORDER BY posted_at DESC LIMIT 20,10;
and so on...
You must have noticed that in case `LIMIT 1000,10` is specified, a naive execution of the query plan involves fetching all 1000 preceding rows before the 10 interesting rows are returned by the database. In fact, most databases will do just that. There is, in fact, a better way to execute such queries.
(Let's leave aside the discussion of whether pagination queries should be implemented this way. The observant reader will notice that we have an index on `(post_id, posted_at, comment_id)`, and that it can be used to fetch the next 10 results given the `post_id`, `posted_at`, and `comment_id` of the immediately preceding comment.)
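For readers curious about that index-friendly alternative, here is a small
illustrative sketch of my own (it is not from the original post) using
Python's built-in sqlite3 module, with the schema above adapted to SQLite
syntax. The values `last_posted_at` and `last_comment_id` are assumed to come
from the last row of the previously displayed page:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE COMMENTS(
        post_id INT, comment_id INTEGER PRIMARY KEY,
        comment_text TEXT, poster_id INT, posted_at TIMESTAMP)""")
    conn.execute("CREATE UNIQUE INDEX comments_page"
                 " ON COMMENTS(post_id, posted_at, comment_id)")
    conn.executemany(
        "INSERT INTO COMMENTS VALUES (?,?,?,?,?)",
        [(33, i, "comment %d" % i, 1, "2012-08-%02d" % (i % 28 + 1))
         for i in range(1, 101)])

    # "Next page": the 10 comments immediately older than the last one shown,
    # located via the (post_id, posted_at, comment_id) index instead of OFFSET.
    last_posted_at, last_comment_id = "2012-08-05", 60
    page = conn.execute(
        """SELECT * FROM COMMENTS
           WHERE post_id = ?
             AND (posted_at < ? OR (posted_at = ? AND comment_id < ?))
           ORDER BY posted_at DESC, comment_id DESC
           LIMIT 10""",
        (33, last_posted_at, last_posted_at, last_comment_id)).fetchall()
    print(page)

Because the WHERE clause only ever seeks into the index at the position of the
last row already shown, the cost of fetching page N does not grow with N.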
We know that most (if not all) databases use B-Trees to store indexed information. It is easy to augment the B-Tree to hold information such as
*"how many elements are under this node (including the node itself)?"* This information alone will let us very quickly (in O(log n) I/O look-ups) locate the node of interest. Suppose we want the entry from the COMMENTS table at offset 1000 with post_id=33: we will first perform a look-up for the first entry with post_id=33. Once we find this key, we can quickly (in O(1) time) determine how many entries are less than the located entry (since we already have this information cached at each node). Let the found node be at offset OFF1. Subsequently, we can query the B-Tree to find the entry that has an offset of OFF1 + 1000 (since we have the cumulative counts cached for every value in the B-Tree node!).
For example, consider the Binary Tree shown below:
The letters in the top half of every node are the keys, and the number in the bottom half of each node is the count of nodes in the sub-tree rooted at that node.
We can answer the following query on this tree in O(log n) time:
SELECT KEY FROM BINARY_TREE WHERE KEY > D ORDER BY KEY LIMIT 3,2;
i.e. We want to fetch the 5 smallest keys greater than 'D', but we want only the last 2 from this set. i.e. The 2 greatest keys that are a subset of the set containing the 5 smallest keys greater than 'D' (read that a few times to get a hang of it).
We proceed as follows:
- Find D
- Find Rank(D) = 4
- Find the KEY such that Rank(K) = Rank(D)+3 = 7. This happens to be the KEY 'G'
- Perform an in-order traversal to find the in-order successor of the node that we are on till we either run out of nodes or we have walked 2 nodes (whichever comes first)
Rank(Node) = SubtreeSize(Node->Left) + 1   [if Node->Left exists]
Rank(Node) = 1                             [otherwise]
Hence, we can see that we are able to satisfy this query in time `O(log n + Result_Set_Size)`, where `n` is the number of elements in the Binary Tree.
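To make the idea concrete, here is a small self-contained Python sketch of my
own (the post describes the technique but does not give code; the function
names and the demo tree with keys A through I are invented for illustration
and are not the tree from the figure above). Each node caches the size of its
subtree, which is enough to answer both "how many keys are <= X" and "which
key has rank k" in time proportional to the height of the tree:

    class Node:
        def __init__(self, key):
            self.key = key
            self.left = None
            self.right = None
            self.size = 1                  # nodes in the subtree rooted here

    def size(node):
        return node.size if node else 0

    def insert(node, key):
        # Plain unbalanced BST insert; a real index would rebalance and
        # update the counters during rotations / page splits.
        if node is None:
            return Node(key)
        if key < node.key:
            node.left = insert(node.left, key)
        else:
            node.right = insert(node.right, key)
        node.size = 1 + size(node.left) + size(node.right)
        return node

    def count_le(node, key):
        """How many keys are <= key."""
        if node is None:
            return 0
        if key < node.key:
            return count_le(node.left, key)
        return size(node.left) + 1 + count_le(node.right, key)

    def select(node, k):
        """The k-th smallest key (1-based), or None if k is out of range."""
        if node is None or k < 1 or k > node.size:
            return None
        left = size(node.left)
        if k == left + 1:
            return node.key
        if k <= left:
            return select(node.left, k)
        return select(node.right, k - left - 1)

    def limit_after(root, key, offset, count):
        """Equivalent of: WHERE KEY > key ORDER BY KEY LIMIT offset, count."""
        start = count_le(root, key) + offset + 1
        out = []
        for pos in range(start, start + count):
            k = select(root, pos)
            if k is None:
                break
            out.append(k)
        return out

    root = None
    for k in "ABCDEFGHI":
        root = insert(root, k)
    print(limit_after(root, "D", 3, 2))    # -> ['H', 'I']

A production version would maintain the same per-node counter inside a
balanced structure (a red-black tree, or the B-Tree pages themselves),
updating it during rotations or page splits, so that the O(log n) bound
actually holds.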
## 2 comments:
You know of any DB which provide these features with their indexes or is it a new proposal. Is this what you have doing in Fb, if of course it is not confidential :P.
BTW is it not true that MySQL creates a temporary table for order by clause? What about other Db's?
> You know of any DB which provide these features
> with their indexes or is it a new proposal.
This is a new proposal.
> Is this what you have doing in Fb, if of course it
> is not confidential :P.
All opinions mentioned on this blog are my own, and not of my employers. Further, I can neither confirm nor deny that :-p
> BTW is it not true that MySQL creates a temporary table for
> order by clause? What about other Db's?
MySql is quite a dumb SQL execution engine, and is not able to perform many optimizations. For example, if you do a group by with an order by (even on a proper prefix of the same field), it will use filesort and a temporary even thought it doesn't need to if the column you are trying to sort & group by is already indexed. See this page on how MySql uses internal temporary tables for an enumerated set of situations where MySql may use a temporary table.
|
5,568,195 | http://www.minimallyminimal.com/blog/america-elect-preview-2 | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
33,801,751 | https://www.axios.com/2022/11/29/sam-bankman-fried-100000-ftx-cftc-regulation | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
3,496,084 | http://theinvisiblethings.blogspot.com/2012/01/thoughts-on-deepsafe.html | Thoughts on DeepSafe | Joanna Rutkowska | Several people asked me recently what I
thought about DeepSafe.
So, below I present my opinion...
First, for any AV system (or Host IPS,
or Personal Firewall, etc) to work effectively, there are three problems that must be addressed:
- How to protect the AV agent (code and data) from tampering (from the rest of the OS)?
- How can the AV agent get reliable access to (sensitive pieces of) the system memory and registers, and/or provide reliable memory protection for (sensitive pieces of) the OS?
- What are those "sensitive pieces of” memory that should be monitored or protected?
From reading various PR materials, it
seems like the #1 above is the primary differentiation factor for
DeepSafe (DS). So, let's consider this problem in the context of e.g.
a Windows OS. In order to protect its code and data, DS uses, as it
is heavily advertised, Intel VT-x virtualization technology. Now,
that sounds really secure -- after all, what can be more secure than
hardware virtualization, right? ;)
But VT-x (including EPT) is only about
CPU virtualization, which in our case translates to protecting the DS
memory and registers from CPU-originating accesses. But, as every
regular reader of this blog knows, there is also another method of accessing
memory on any PC system, and this is through DMA transactions from
devices. The OS (so also the kernel malware) is free to program one
of the many devices in the system to issue DMA reads or writes to
*any* physical memory it wants...
Now, in order to protect some portion
of the system memory (DRAM, cache) against DMA accesses, we have the
Intel VT-d technology... So, one would think that DS must be also
using VT-d in order to protect itself.
Very well, let's assume then that DeepSafe is not a total ripoff,
and that it also implements VT-d protection for its agent, although I
haven't found this mentioned in any of the public papers or press
materials I found on the web...
This, however, would be a bit complex
to do correctly, because the OS (so, also the kernel malware) still
has full control over the chipset (MCH), which is the entity...
that controls the VT-d.
Now, in order to
prevent the OS (or the kernel malware) from playing with the chipset
for fun and profit, and e.g. disabling VT-d protection, DS would have
to virtualize the chipset.
If you look at some consumer VMMs, such
as VMware or Xen/Qemu, you would see that they all virtualize the
chipset for their guests (of course), but that the chipset they
provide this way is some kind of an ancient Pentium MCH. I don't
think any of the consumers would be especially happy if they found
out that after installing DS on their brand new 2012 laptop, Windows
suddenly sees a Pentium-era chipset... And this is not without a
reason – chipsets, specifically MCHs, are one of the most complex
devices, perhaps only beaten by GPUs in this category. There are
virtually hundreds of configuration registers exposed by the chipset,
some of them control VT-d, others control system memory maps
and permissions, PCIe configuration, and many other things that I
have no idea about, and all of this makes virtualizing the chipset
a very challenging task.
So, it's either that McAfee and Intel
found some interesting way of how to securely virtualize the chipset
while preserving all of its (very rich) functionality, or that
they... don't bother with VT-d protection and chipset virtualization
at all, assuming that even without VT-d, DeepSafe is good enough and
“raises the bar” anyway (sarcasm intended).
(Can somebody from McAfee or Intel
confirm in the comments below what DS really does?)
Anyway, let's assume they
*do*have VT-d protection and they*do*virtualize the chipset somehow...
Now, we're moving on to the #2 point
from the list of tasks above -- about the reliable
memory access or reliable protection.
So, let's say that the DS agent decided
that some part of the system memory, e.g. the IDT table, is
*sensitive* and should be monitored/protected. So it sets up EPT traps to trigger a VT-x/EPT intercept on any access to that memory (or the IDT base register), in order to find kernel malware that tried to modify the IDT. That sounds really nice, but what if the malware uses DMA to modify the IDT? DS would not be able to catch such access! (So far we have considered the hypothetical use of VT-d only to protect the DS agent code.)
One might think that DS is programming
VT-d to sandbox each and every device in the system (so including
GPU, USB controllers, NICs, SATA, etc.) so that they are
*never* allowed to touch any of those sensitive parts of the system, such as the IDT. Let's assume they do it this way...
And here we've arrived at the last
point from the list at the beginning: which parts of the system memory
constitute those "sensitive pieces" that should be
protected/monitored? The IDT? Sure. What about all the code sections of
all the kernel modules? Yes. Are we fine now? Well, no, because
the malware can hook some pointers other than the well known IDT.
Some public NDIS data structure? Ok, we can add those to the
protected areas. But, what about some undocumented NDIS structures?
And this is just the NDIS subsystem, one of the many subsystems in the
Windows kernel... When we think about it, it should be intuitively
obvious that in a general purpose Operating System like Windows, it
is not possible (at least for a 3rd party) to make a satisfactory list of all the sensitive pieces of memory that should be monitored/protected, in order to detect all the system compromises.
Greg Hoglund, Jamie Butler, Alex
Tereshkin, and myself, have been researching this area actively in
the early years of this millennium. In addition to Alex's paper
linked above, you might also check out one of my last slides from
this
period.
I don't think anything has changed
since that time. It was also the reason why I gave up on writing
Windows compromise detectors, or forensic tools, and moved on to
researching other ways to secure OSes, which finally gave birth to
Qubes OS, many years later.
So, back to DS -- in order to provide a
somehow satisfactory protection level for your general purpose OS,
such as Windows, it would need to:
- Use VT-d to protect its own memory,
- Virtualize the chipset, at least some (sensitive) parts of it,
- Program VT-d permissions for each device to exclude all the sensitive areas in the system that should be protected, and also protect one device from DMAing into/from another device's memory (e.g. a NIC stealing the GPU framebuffer, or inserting instructions into the GPU instruction buffer, or keystrokes into a USB controller buffer). Ideally, this could be done by programming VT-d to grant each device access only to its own DMA buffer, but as far as I know, this would be very hard to implement, if not impossible for a 3rd party, on a Windows OS (in contrast to Linux, which mostly supports this model). Please correct me if recent Windows versions allow for such use of VT-d.
- Finally, and the hardest thing to solve: how to define all the "sensitive pieces of memory" in the system that should be protected and/or monitored? This is a somewhat more generic problem, not specific to DS, applying to any A/V, HIPS, or forensic tool.
So, is DeepSafe another piece of crap not worth any special attention, or have McAfee
and Intel come up with some novel methods, e.g. for chipset
virtualization and the other problems? Unless I see some technical info to back up the latter, I would have to assume,
unfortunately, the former. But I would like to be mistaken – after
all, DeepSafe seems to be just a new incarnation of my Bluepill ;)
## 9 comments:
Hi,
I'm a assemnbly programmer and operating systems developer ,
I like to have full control of the hardware and i am bored and tired of all this security.
If governments, the military and any other institution wants
such assurance, Intel should produce models of processors and chipsets targeted to these needs
without affecting the,desktop models, where nobody want to steal the latest mp3s that you downloaded :-)
Much of the security of a PC is operating system dependent ... chipset and processors are becoming
more complicated with unnecessary things like virtualization ..
Before I could easily handle all the hardware
including SMI (System Management Interrupt),
now has become more difficult to take control
over SMI,Intel has added new MSR registers in the new chipset.
I spend the money for the computer to use it,
not to run a shit code like the bios, or stupid high level windows and linux.
BIOS is not a useless (or obsolete) piece of code as most people think -- it handles essential tasks, such as DRAM initialization, which is a very complex task, requiring probably thousands lines of code on modern chipsets. This is probably the price we pay for ultra fast DRAMs we all got used so easily.
But another thing is e.g. the SMI handler, where OEMs often put lots of USELESS CRAP, and this should definitely be somehow banned. Anybody willing to sue an OEM for putting Tetris into your SMI?
Vapourware comes to mind when I look at Deepsafe.
Ross Anderson author of the classic 'Security Engineering hit the nail on the head when it come to Windows in particular in the article, 'Security and your mother's Linux box'.
"LXF: OK then, what steps should an ordinary citizen take to improve their data security?
RA: Buy a Linux box or a Mac. I bought my wife a Mac, last time the Windows box got filled up with loads of spyware."
RA "While to be fair to them they have improved they have too much legacy to ever really fix themselves."
"http://www.techradar.com/news/computing/pc/security-and-your-mother-s-linux-box-496204?artc_pg=2
http://www.techradar.com/news/computing/pc/security-and-your-mother-s-linux-box-496204?artc_pg=2
There is really nothing that makes Linux or Mac (except iOS) any more secure than Windows. All of those OSes are inherently insecure, architecturally. They are only
saferthan Windows, because of the much smaller user base. iOS is a different story, though -- I consider it to be significantly safer, architecturally (but not necessarily implementation-wise).[continued]
2) Another question that comes to mind is how the Deepsafe software would know it is indeed running at the bare metal. A rookit could, through nested virtualization, fool Deepsafe into thinking it was in control even though it wasn't. Bluepill is a proof of this concept.
One defence against this might be Trusted Computing technologies... such a bluepill
attack would leave a trace in the PCR registers. However, many/most PC's don't have TPM's so for those it wouldn't be an option. Further, even for PC's with TPM's, the rootkit could still fool Deepsafe into thinking it was running on bare metal (there are many ways, one could be to virtualize the TPM, and issuing it a certificate from a "fake vendor", and - at runtime - patching Deepsafe into trusting this certificate... or it could make Deepsafe bypass the checks altogether). The fact that the system was compromised can only be seen by an external, trusted computer who can carry out a challenge-response protocol against the host computer's TPM (using the TPM_QUOTE facility) which would ultimately be able to reveal the forgery. I guess the Deepsafe software could be made to contact a remote system (maybe as part of the regular AV signture update) which could use the TPM_QUOTE mechanism - but the rootkit could patch Deepsafe to bypass this facility also, while still ensuring the user (in the "Windows security center") that everything was fine and dandy! One way to provide some protection against this would be to make the servers sent out an email to the owner of the computer in case of missing/unsuccessful authentication from the machine in question since this would provide for an out-of-band notification of the exposure (if the user could read this from a phone etc.). However, I can see some practical problems in this mechanism but they might not be impossible to overcome.
[continued]
So does Deepsafe provide any benefit at all? Maybe, maybe not. If we assume a PC with a TPM, and that Deepsafe does remote attestation to the AV servers who can send an out-of-band notification to the owner in case of compromise, they could theoretically realize
the property of "being able to guarentee that the security software is running as a hypervisor". The server would be able to setup a secure tunnel end-to-end with the AV software to get info on how much has been scanned, how much has been found etc. All this info could be relayed to the user out-of-band. This would be a new property that was not possible before. The question is then, how much is it worth? In light of Johannas points and my own points on the difficulty of detecting viruses even if you have "full memory and I/O access", I don't think this is the panacea of antivirus software. But I still think it's nice to have this extra property since AV's are in general pretty good at detecting viruses even if they aren't perfect. So I still think this is a step in the right direction if Intel doesn't oversell the benefits.
On a side note, as far as I've been able to read, Deepsafe doesn't rely on any new CPU features. It seems to only rely on the existing VMX etc. features. This rasises the question why McAfee (or another vendor) couldn't have done this without being bought by Intel? After all, VMX is fully documented. Intel could just have entered into a collaboration with McAfee and help them built this tech into the CPU. What is preventing
other AV vendors from doing this (patents? by Intel? If so, Intel could have taken out those patents and licensed them to all vendors, potentially earning more than through buying ;cAfee). This makes me think there might be more to this Deepsafe tech than we think (I had executed the collaboration to result in some sort of 'AV-features-in-the-CPU' type of thing).
It could also have been realized by other means than VMX, for instance through Intel AMT. If the point is to remotely inventory the antivirus, AMT seems like an excellent choice since a lot of groundwork in those respects have already been done by Intel. Another possibility could be pure software virtualization which can also be pretty low-overhead.
Hi Joanna,
thanks for your thoughts! I too have wondered about how Deepsafe could possibly work and add security over what is possible now. The following is just speculation and is
based on the assumption that Deepsafe is simply a "AV-in-a-hypervisor" solution.
Regarding your own points, I agree that Deepsafe couldn't know how to protect all operating system vectors (see below). But it would probably at least be able to protect itself from being overwritten (even by DMA etc.) so it could stay in control (assuming it was running on the bare metal, see pt 2 below). Also,
other VM vendors are able to make hypervisors that can protect themselves from the guest. The real question is -- how much can Deepsafe protect the guest, even if it can guarentee that it itself can not be compromied?
1)
I've been thinking of the problem more in the context of malware detection, not memory protection which you primarily discuss. However, malware detection has its own set of problems that it would seem
challenging for Deepsafe to address. While running at the bare metal provides ultimate power
to inspect I/O and memory and in principle control all software that is executed,
it also presents the following problem: How can Deepsafe "know" what the guest software
(i.e. the normal operating system/software stack) is up to? It could scan e.g. network
and disk I/O for viruses but what if this is encrypted? What if the user opens an
encrypted Truecrypt volume and executes the file from there? Deepsafe I/O monitoring
wouldn't be able to detect a virus flowing over the I/O in this case. Or how about if a virus is downloaded over a https:// connection? Another strategy could be for Deepsafe to do memory scanning. After all,
it would have unrestricted access to the physical address space. Still, this is hardly
fool-proof. Some viruses are known to rewrite their own x86 machine code and can produce
an infinity of variants. Other viruses encrypt themself and dynamically generates a
decoder which is only used when the virus activates - making its time in cleartext in memory very brief and unlikely to be caught by periodic memory scanning. Maybe Deepsafe could be very smart and through clever use of the paging virtualization facilities scan all pages containing code on the first access - and then every time they are rewritten (or if the page tables are updated to contain new executable pages). This
would still not help it catch the metamorphic viruses.
If Deepsafe were to implement a more behavior-based type of detection it becomes even more difficult. How can it hook into the proper vectors in Windows? It would have to, as an "outsider" look at the running Windows kernel, drivers, page tables, try to make sense of it all and do some brain surgery on it. Doesn't seem like a recipe for stability in the face of the many versions, fixes, drivers etc. It could be made more stable if it could have a piece of helper software running on the "inside" (like a kind of VMWare tools) which could help it interact with windows etc. put this immediately raises the question whether you can trust any information gathered by such
a helper, since he's after all running in the untrusted space you are trying to safeguard!
But, as every regular to this blog knows, there is also another method of accessing memory on any PC system, and this is through DMA transactions from devices.
Good point.
To put this in context how many rootkits exploit this method to compromise a system?
Thanks
@Anonymous_asking_stupid_questions:
How many? How do you count/distinguish them? By different hash value of their installer code? Or in-memory code fingerprint? In any case I can generate easily as many variants as you want...
18,076,523 | https://aeon.co/essays/why-fake-miniatures-depicting-islamic-science-are-everywhere | Why fake miniatures depicting Islamic science are everywhere | Aeon Essays | Nir Shafir | As I prepared to teach my class ‘Science and Islam’ last spring, I noticed something peculiar about the book I was about to assign to my students. It wasn’t the text – a wonderful translation of a medieval Arabic encyclopaedia – but the cover. Its illustration showed scholars in turbans and medieval Middle Eastern dress, examining the starry sky through telescopes. The miniature purported to be from the premodern Middle East, but something was off.
Besides the colours being a bit too vivid, and the brushstrokes a little too clean, what perturbed me were the telescopes. The telescope was known in the Middle East after Galileo developed it in the 17th century, but almost no illustrations or miniatures ever depicted such an object. When I tracked down the full image, two more figures emerged: one also looking through a telescope, while the other jotted down notes while his hand spun a globe – another instrument that was rarely drawn. The starkest contradiction, however, was the quill in the fourth figure’s hand. Middle Eastern scholars had always used reed pens to write. By now there was no denying it: the cover illustration was a modern-day forgery, masquerading as a medieval illustration.
The fake miniature depicting Muslim astronomers is far from an isolated case. One popular image floating around Facebook and Pinterest has worm-like demons cavorting inside a molar. It claims to illustrate the Ottoman conception of dental cavities, a rendition of which has now entered Oxford’s Bodleian Library as part of its collection on ‘Masterpieces of the non-Western book’. Another shows a physician treating a man with what appears to be smallpox. These contemporary images are in fact not ‘reproductions’ but ‘productions’ and even fakes – made to appeal to a contemporary audience by claiming to depict the science of a distant Islamic past.
From Istanbul’s tourist shops, these works have ventured far afield. They have found their way into conference posters, education websites, and museum and library collections. The problem goes beyond gullible tourists and the occasional academic being duped: many of those who study and publicly present the history of Islamic science have committed themselves to a similar sort of fakery. There now exist entire museums filled with reimagined objects, fashioned in the past 20 years but intended to represent the venerable scientific traditions of the Islamic world.
The irony is that these fake miniatures and objects are the product of a well-intentioned desire: a desire to integrate Muslims into a global political community through the universal narrative of science. That wish seems all the more pressing in the face of a rising tide of Islamophobia. To be clear, Muslims always conducted science, despite the claims of Islamophobes to the contrary, but often it wasn’t visually expressed in a way that we find easy to recognise today. But what happens when we start fabricating objects for the tales we want to tell? Why do we reject the real material remnants of the Islamic past for their confected counterparts? What exactly is the picture of science in Islam that are we hoping to find? These fakes reveal more than just a preference for fiction over truth. Instead, they point to a larger problem about the expectations that scholars and the public alike saddle upon the Islamic past and its scientific legacy.
There aren’t many books left in the old booksellers’ market in Istanbul today – but there are quite a few fake miniatures, sold to the tourists flocking to the Grand Bazaar next door. Some of these miniatures show images of ships or monsters, while others prompt a juvenile giggle with their display of sexual acts. Often, they’re accompanied by some gibberish Arabic written in a shaky hand. Many, perhaps the majority, are depictions of science in the Middle East: a pharmacist selling drugs to turbaned men, a doctor castrating a hermaphrodite, a group of scholars gazing through a telescope or gathering around a map.
To the discerning eye, most of the miniatures these men sell are recognisably fake. The artificial pigments are too bright, the subject matter too crude. Unsurprisingly, they still find willing buyers among local and foreign tourists. Some images, on occasion, state that they are modern creations, with the artist signing off with a recent date in the Islamic calendar. Others are more duplicitous. The forgers tear pages out of old manuscripts and printed books, and paint over the text to give the veneer of old writing and paper. They can even stamp fake ownership seals onto the image.
With these additions, the miniatures quickly become difficult to identify as fraudulent once they leave the confines of the market and make their way on to the internet. Stock photo services in particular play a key role in disseminating these images, making them readily available to use in presentations and articles in blogs and magazines. From there, the pictures move on to the main platforms of our vernacularised visual culture: Instagram, Facebook, Pinterest, Google. In this digital environment, even experts on the Islamic world can mistake these images for the authentic and antique.
The forger transforms a scholar raising a sextant to his eye into a man using a telescope
The internet itself has become a source of fantastic inspiration for forgers. The drawing supposedly depicting the Ottoman view of dental cavities, for example, emerged after a similar picture of an 18th-century French ivory surfaced on the internet. Other forgers simply copy well-known miniatures, such as the illustration of the short-lived observatory in 16th-century Istanbul, in which turbaned men take measurements with a variety of instruments on a table. This miniature – reliably located in the Rare Books Library of Istanbul University – is found in a Persian chronicle praising Sultan Murad III, who ordered the observatory built in 1574, and subsequently had it demolished a few years afterwards.
Even if its imitations look crude, they still find audiences – such as those who visit the 2013 ‘Science in Islam’ exhibition website at the Museum of the History of Science in Oxford. The site, which aims to educate secondary-school children, took the image from a similar site run by the Whipple Museum of the History of Science at the University of Cambridge – which in turn acquired it a year earlier from a dealer in Istanbul, according to the museum’s own records. Meanwhile, another well-respected institution, the Wellcome Collection in London, specialises in objects from the history of medicine; it includes several poorly copied miniatures demonstrating Islamic models of the body, written over with a bizarre pseudo-Arabic and with no given source.
A few images, though, are invented from whole cloth, such as a depiction of a man with what appears to be smallpox, nervously consulting with a pharmacist and doctor. More troubling still are the images that artists alter to match our own expectations. The picture on the cover of the book I was going to assign my students, with men looking at the night sky through a telescope, borrows from the figures in the Istanbul observatory miniature. However, the forger easily transforms a scholar raising a sextant to his eye, to measure the angular distance between astronomical bodies, into a man using a telescope in the same pose. It is a subtle change but it alters the meaning of the image significantly – pasting in an instrument of which we have no visual depictions in Islamic sources, but that we readily associate with the act of astronomy today.
In the corner of Gülhane Park in Istanbul, down the hill from the former Ottoman palace and Hagia Sofia, lies the Museum for the History of Science and Technology in Islam (*İslam Bilim ve Teknoloji Tarihi Müzesi*). A visitor begins with astronomical instruments – astrolabes and quadrants (thankfully, no telescopes). As you move through the displays, the exhibits shift from instruments of war and optics to examples of chemistry and mechanics, becoming increasingly fantastical with each room. Glass cages of beakers follow alembics in elaborate contraptions. At the end, one reaches the section on engineering. Here, you find the bizarre machines of Ismail al-Jazari, a 12th-century scholar often called the Muslim father of engineering. His contraptions resemble medieval versions of Rube Goldberg machines: think of a water clock in the shape of a *mahout* sitting on top of an elephant, among other such pieces.
There’s only one catch. All the objects on display are actually reproductions or completely imagined objects. None of the objects is older than a decade or two, and indeed there are no historical objects in the museum at all. Instead, the astrolabes and quadrants, for example, are recreated from pieces in other museums. The war machines and the giant astronomical instruments are typically scaled-down models that can fit in a medium-sized room. The intricate chemistry contraptions, of which no copy has ever been found in the Middle East, are created solely to populate the museum.
By itself, this conjuring act isn’t necessarily a problem. Some of the pieces are genuinely rare, and others might not exist today but are useful to see recreated in models and miniatures. What makes this museum unique is its near total refusal to collect actual historical objects. The museum never explicitly addresses or justifies the fact that its entire collection is composed of recreations; it simply presents them in glass display cases, with no attempt to situate them in a narrative about the history of the Middle East, other than simply stating the dates and location of their originals.
Both the fakes and the museum are meant to evoke wonder in the viewers today
The origins of the bulk of the museum’s collection become clearer when you look at the photographs behind the displays: many objects were recreated from the illustrations of medieval manuscripts containing similar-looking devices. The most famous of these are the extraordinary images of al-Jazari’s contraptions, taken from his book *The Book of Knowledge of Ingenious Mechanical Devices*. While the machines should work, in theory, none has been known to survive. It might even be that their designer didn’t intend them to be built in the first place.
Just what is the role of a museum, specifically a history museum, that contains no genuine historical objects? Istanbul’s museum of Islamic science isn’t an isolated case. The same approach marks the Sabuncuoğlu History of Medicine Museum in Amasya in northern Turkey, as well as the Leonardo da Vinci museum in Milan, which brings to life the feverish mechanisms that the inventor sketched out in the pages of his notebooks.
Unlike the fake miniatures, these institutions weren’t built with the purpose of duping unsuspecting tourists and museums. The man behind the Islamic science museum in Istanbul is the late Fuat Sezgin, formerly at the University of Frankfurt. He was a respected scholar who compiled and published multiple sources on Islamic science. But his project shares certain key qualities with the fake miniatures. Both create objects that adhere to our contemporary understandings of what ‘doing science’ looks like, and treat images of Islamic science as if they are literal and direct representations of objects and people that existed in the past.
Most importantly, perhaps, both the fakes and the museum are meant to evoke wonder in the viewers today. There is nothing inherently wrong with wonder, of course; it can spur viewers to question and investigate the natural world. Zakariya al-Qazwini, the 13th-century author who described the world’s curious and spectacular phenomena in his book *Wonders of Creation*, defined wonder as that ‘sense of bewilderment a person feels because of his inability to understand the cause of a thing’. Princes used to read the heavily illustrated books of al-Jazari in this manner, not as practical engineering manuals, but as descriptions of devices that were beyond their comprehension. And we still look at al-Jazari’s recreated items with a sense of awe, even if we now grasp their mechanics – just that, today, we marvel at the fact that they were made by Muslims.
What drives the spread of these images and objects is the desire to use some totemic vision of science to redeem Islam – as a religion, culture or people – from the Islamophobia of recent times. Equating science and technology with modernity is common enough. Before the current political toxicity took hold, I would have taught a class on *Arab* Science, rather than *Islam* and Science. Yet in a world that’s all too willing to vilify Islam as the antithesis of civilisation, it seems better to try and uphold a message that science is a global project in which all of humanity has participated.
This embracing sentiment sits behind ‘1,001 Inventions’, a travelling exhibit on Islamic science that has frequented many of the world’s museums, and has now become a permanent, peripatetic entity. The feel-good motto reads: ‘Discover a Golden Age, Inspire a Better Future’. To non-Muslims, this might suggest that the followers of Islam are rational beings after all, capable of taking part in a shared civilisation. To Muslim believers, meanwhile, it might imply that a lost world of technological mastery was indeed available to them, had they remained on the straight path. In this way, ‘1,001 Inventions’ draws an almost direct line between the reported flight from the top of Galata Tower in 17th-century Istanbul and 20th-century Moon exploration.
With these ideals in mind, do the ends justify the means? Using a reproduction or fake to draw attention to the rich and oft-overlooked intellectual legacy of the Middle East and South Asia might be a small price to pay for widening the circle of cross-cultural curiosity. If the material remains of the science do not exist, or don’t fit the narrative we wish to construct, then maybe it’s acceptable to imaginatively reconstruct them. Faced with the gap between our scant knowledge of the actual intellectual endeavours of bygone Muslims, and the imagined Islamic past upon which we’ve laid our weighty expectations, we indulge in the ‘freedom’ to recreate. Textbooks and museums rush to publish proof of Muslims’ scientific exploits. In this way, wittingly and unwittingly, they propagate images that they believe exemplify an idealised version of Islamic science: those telescopes, clocks, machines and medical instruments that cry ‘modernity!’ to even the most casual or skeptical observer.
However, there is a dark side to this progressive impulse. It is an offshoot of a creeping, and paternalistic, tendency to reject the *real* pieces of Islamic heritage for its reimagined counterparts. Something is lost when we reduce the Islamic history of science to a few recognisably modern objects, and go so far as to summon up images from thin air. We lose sight of important traditions of learning that were *not* visually depicted, whether artisanal or scholastic. We also leave out those domains later deemed irrational or unmodern, such as alchemy and astrology.
This selection is not just a question of preferences, but also of priorities. Instead of spending millions of dollars to build and house these reimagined productions, museums could have bought, collected and gathered actual objects. Until recently, for instance, Rebul Pharmacy in Istanbul displayed its own private collection of historical medical instruments – whereas the Museum for the History of Science and Technology in Islam chose to craft new ones. A purposeful choice has therefore been made to ignore existing objects, because what *does* remain doesn’t lend itself to the narrative that the museum wishes to tell.
The false miniatures are painted on the ripped-out pages of centuries-old manuscripts to add to their historicity
Perhaps there’s a worry that the actual remnants of Islamic science simply can’t arouse the necessary wonder; perhaps they can’t properly reveal that Muslims, too, created works of recognisable genius. Using actual artefacts to achieve this end might demand more of viewers, and require a different and more involved mode of explanation. But failing to embrace this challenge means we lose an opportunity to expand the scope of what counted as genius or reflected wonder in the Islamic past. This flattening of time and space impoverishes audiences and palliates their prejudices, without their knowledge and even while posing as enrichment.
We’re still left with the question, though, of the harm done by the proliferation of these reimagined images and objects. When I’ve raised it with colleagues, some have argued that, even if these works are inauthentic, at least they invite students to learn about the premodern Middle East. The sentiment would be familiar to the historian Anthony Grafton, who has observed that the line between the forger and critic is extremely thin. Each sets out, with many of the same tools, to make the past relevant according to the changing circumstances of the present. It’s just that, while the forger dresses new objects in the clothes of the past to fit our current concerns, the critic explains that today’s circumstances differ from those of the past, and retains and discards certain aspects as she sees fit.
Grafton ultimately sides with the critic: the forger, he says, is fundamentally ‘irresponsible; however good his ends and elegant his techniques, he lies. It seems inevitable, then, that a culture that tolerates forgery will debase its own intellectual currency, sometimes past redemption.’ As fakes and fictions enter our digital bloodstream, they start to replace the original images, and transform our baseline notions of what actually was the science of the past. In the case of the false miniatures, many are painted on the ripped-out pages of centuries-old manuscripts to add to their historicity, literally destroying authentic artefacts to craft new forgeries.
In an era when merchants of doubt and propagators of fake news manipulate public discourse, recommitting ourselves to transparency and critique seems like the only solution. Certainly, a good dose of these virtues is part of any cure. But in all these cases, as with the museum, it’s never quite clear who bears responsibility for the deception. We often wish to discover a scheming mastermind behind every act of forgery, whether the Russian state or a disgruntled pseudo-academic – exploiting the social bonds of our trust, and whose fraud can be rectified only by a greater authority. The responsibility to establish truth, however, doesn’t only lie in the hands of the critics and forgers, but also in our own actions as consumers and disseminators. Each time we choose to share an image online, or patronise certain museums, we lend them credibility. Yet, the solution might also demand more than a simple reassertion of the value of truth over fiction, of facts over lies. After all, every work of history, whether a book or a museum, is also partially an act of fiction in its attempt to recount a past that we can no longer access.
A mile away from the museum of Islamic science in Istanbul, nestled in the alleys of the Çukurcuma neighbourhood, resides another museum of invented objects and tales. This one, though, is dedicated not to Islamic scientific inventions but to an author’s melancholic vision of love and, as it happens, Istanbul’s material past. The Museum of Innocence is the handiwork of the Turkish Nobel Prize-winning novelist Orhan Pamuk, whose collected and created objects form the skeleton upon which his 2008 book of the same name is built. Its protagonist, Kemal, slowly leads the reader and the museum-goer through his aborted relationship with his beloved, Füsun. Each chapter corresponds to one of the museum’s small dioramas, which exhibit a collection of objects from the novel. Vintage restaurant cards, old *rakı* bottles and miniature ceramic dogs to be placed atop television sets are delicately arranged in little displays, often with Pamuk’s own paintings as a backdrop.
Behind the museum, though, lies a fictional narrative – and that very fact destabilises our expectations of what the objects in a museum can and should do. Did Pamuk write the novel and then collect objects to fit it, or vice versa? It’s never entirely clear which came first. Pamuk’s opus confronts us with a question: do we tell stories from the objects we collect, or do we collect the objects to tell the stories we desire? The different approaches are, in fact, two sides of the same coin. We collect materials that adhere to our imagined stories, and we craft our narratives according to the objects and sources at hand.
The Museum of Innocence occupies a special place on a spectrum of possibility about how we interact with history. At opposite ends of this spectrum sit the fake miniatures and the fantastical objects of Islamic science, respectively. The miniatures circulate on the internet on their own, often removed from any narrative and divorced from their original sources, open to any interpretation that a viewer sees fit. By contrast, the constructed objects in museums of Islamic science have been consciously brought into this world to serve a defensive narrative of Muslim genius – a narrative that the museums’ founders believed they couldn’t extract from the actual historical objects.
Pamuk’s museum, though, strikes a balance. As one stands in front of Pamuk’s exhibits of pocket watches and photographs of beauty pageants, one slowly examines the objects, imagining how they were used, perhaps listening to a recording of Pamuk’s stories to animate them. It is through his display cases, paintings and writing that the objects come to life. Yet, viewers also see the bottles of *rakı* and other ephemera outside the confines of Pamuk’s narrative. He displays a commitment to the objects themselves, and lets them tell their tale without holding a naive belief in their objective power. This approach grants Pamuk’s museum an intellectual honesty lacking in Sezgin’s museum of Islamic science.
By neglecting actual historical objects, and championing their reimagined counterparts, we efface the past
What is ultimately missing from the fake miniatures, and from the Museum for the History of Science and Technology in Islam, are the very lives of the individuals that fill Pamuk’s museum. Faced with fantasy or forgery, we are left to stand in awe of the telescopes and alembics, marvelling that Muslims built them, but knowing little of the actual artisans and scholars, Muslim and non-Muslim alike. In these lives lies the true history of science in the Islamic world: a midwife’s preparation of herbs; a hospital doctor’s list of medicines for the pious poor; an astrologer’s horoscope for an aspiring lieutenant; an imam’s astronomical measurements for timing the call to prayer; a logician’s trial of a new syllogism; a silversmith’s metallurgic experimentation; an encyclopaedist’s classification of plants; or a judge’s algebraic calculations for dividing an inheritance. These lives are not easily researched, as demonstrated by the anaemic state of the field. However, by refusing to collect and display actual historical objects, and instead championing their reimagined counterparts, we efface these people of the past.
Focusing on these lives requires some fiction, to be sure. A museum or book would have to embrace the absences and gaps in our knowledge. Instead of shyly nudging the actual objects out of view, and filling the lacunae with fabrications, it would need to bring actual historical artefacts to the fore. It might take inspiration from the Whipple Museum and even collect forgeries of scientific instruments as important cultural objects in their own right. Yes, we might have to abandon the clickbaity pictures of turbaned astronomers with telescopes that our image-obsessed culture seems to crave. We would have to adopt a different vision of science and of visual culture, a subtler one that does not reduce scientific practice to a few emblems of modernity. But perhaps this is what it means to cultivate a ‘sense of bewilderment’, to use al-Qazwini’s phrase – a new sense of wonder that elicits marvel from the lives of women and men in the past. That would be a genuinely fresh form of seeing; an acknowledgement that something can be valuable, even when we do not recognise it.
*The writer wishes to thank the following people for their help in tracking down the origins of some of the fake miniatures: Elias Muhanna, the author of the book at the beginning of the essay; Josh Nall, the curator of the Whipple Museum of History of Science in Cambridge; and Christiane Gruber, a professor in the history of art at the University of Michigan.* | true | true | true | Fake miniatures depicting Islamic science have found their way into the most august of libraries and history books. How? | 2024-10-12 00:00:00 | 2018-09-11 00:00:00 | article | aeon.co | Aeon Magazine | null | null |
|
6,235,394 | http://www.mittal.vc/2013/08/18/9-critical-fundraising-tips-for-first-time-startup-founders/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
13,656,483 | http://freepascal.org/news.var | Free Pascal | null | # What's new
If you aren't subscribed to the mailing list, you can read all the important news here.
### Previous news (continued from home page)
-
*November 28th, 2017* -
FPC version 3.0.4 has been released!
This version is a point update to 3.0 and contains bug fixes and updated packages, some of which are high priority.
There is a list of changes that may break backward compatibility. You can also have a look at the FPC 3.0.4 documentation.
Downloads are available at the download section.
-
*February 15th, 2017* -
FPC version 3.0.2 has been released!
This version is a point update to 3.0 and contains bug fixes and updated packages
There is a list of changes that may break backward compatibility. You can also have a look at the FPC 3.0.2 documentation.
Downloads are available at the download section.
-
*November 25th, 2015* -
FPC version 3.0.0 "Pestering Peacock" has been released!
This version includes support for several new platforms, codepage-aware strings and an integrated Pascal source-repository.
We have the traditional lists of new features and changes that may break backward compatibility. Finally, you can view the FPC 3.0.0 documentation.
Downloads are available at the download section. -
*October 21th, 2015* -
FPC 3.0.0-rc2 has been released!
The most important change since the first release candidate is a change in the unicode resource-string handling. There is also a new Windows to Android cross-compiler installer.
You can help improve the upcoming 3.0.0 release by downloading and testing this release candidate. If you want, you can report what you have done on the wiki, but first you may want to check the known issues. Next, we have the traditional lists of new features and of changes that may break backward compatibility. Finally, you can also preview the FPC 3.0.0 documentation.
Downloads are available at:
-
*August 25th, 2015* -
FPC 3.0.0-rc1 has been released!
You can help improve the upcoming 3.0.0 release by downloading and testing this release candidate. If you want, you can report what you have done on the wiki, but first you may want to check the known issues. Next, we have the traditional lists of new features and of changes that may break backward compatibility. Finally, you can also preview the FPC 3.0.0 documentation.
Downloads are available at:
-
*May 29th, 2015* - Support for the Linux/AArch64 platform is now also available in svn trunk, thanks to patches provided by Edmund Grimley Evans.
-
*February 24, 2015* - Support for the iOS/AArch64 platform has been added to svn trunk. More information can be found in the announcement posted to the fpc-devel mailing list.
-
*October 15, 2014* - Some FPC users have observed issues with delivery of e-mails from FPC (and other) mailing lists. Yahoo has decided to implement a DMARC policy, which effectively disallows the use of e-mail accounts hosted by them to subscribe to mailing lists. Unfortunately, none of the proposed solutions for mailing lists are usable at this point in time.
- Note that not only Yahoo (and AOL) e-mail accounts are affected, but also e-mail accounts hosted by other companies that have implemented DMARC policy support, because they have to respect Yahoo's DMARC policy (blocking mailing lists) and thus they reject e-mails sent from Yahoo addresses to the mailing lists. Addresses from such DMARC-compliant providers may in turn also be disabled and/or unsubscribed by the mailing list software when their mail servers start rejecting e-mails sent from Yahoo addresses to the mailing lists. Among others, such providers include e.g. Microsoft (hosting e.g. outlook.com and hotmail.com addresses).
-
*March 11, 2014* -
FPC 2.6.4 has been released! Free Pascal 2.6.4 is a point release from the 2.6.0 fixes branch. Some highlights are:
- Packages:
- lots and lots of fixes and improvements for fcl-db
- web and json packages synchronized
- improvements to the chmcmd compiler
- several fixes for winunits (and winceunits)
- Documentation:
- many additions
- fpjson documented
- Downloads are available from the download page (mirrors should follow soon). Some archives are still being uploaded.
- A list of changes that may require changes to existing code is also available.
-
*February 23, 2013* -
FPC 2.6.2 has been released! 2.6.2 is a fixes update from the 2.6.x branch.
The new features include, amongst others:
- Compiler:
- improvements and fixes for the ARM architecture
- Packages:
- new package fpindexer (indexing engine)
- support for observer pattern added to fcl-base (and base classes in RTL)
- lots and lots of fixes and improvements for fcl-db
- support for JSON dataset added among others
- fixes and improvements for fcl-passrc (and fpdoc)
- updates for PTCPas and gtk2
- fpmkunit improvements (better support for future switch to fpmake)
- several fixes for x11
- several fixes for winunits (and winceunits)
- Platforms:
- improvements to the NativeNT target (newly introduced as alpha in 2.6.0)
- many fixes for OpenBSD and NetBSD (considered in beta state now)
- internal ELF writer supported for more BSD targets
- fixes and improvements for gba and nds
- Downloads are available from the download page (mirrors should follow soon). Some archives are still being uploaded.
- A list of changes that may require changes to existing code is also available.
-
*February 13, 2013* - The FreePascal team is pleased to announce official support for native Android targets in the trunk SVN repository.
- In addition to the existing Android support using the Java VM target, you can now use the FreePascal compiler to generate native executables and libraries. You can now speed up your performance-critical code on x86 and ARM CPUs by writing in Object Pascal.
- We hope that the Android target will attract new and old developers. It may still be a little rough on the edges. We appreciate your feedback and further contributions.
- Read more about how to use native Android support at Android Wiki page.
-
*October 21, 2012* - Recently, considerable progress has been reached with regard to support of new CPU architectures. This includes not only new support for MIPS processors running in both little endian and big endian modes (contributed mostly by Florian, Pierre and external contributors like Fuxin Zhang), but most notably also revived support for Motorola 68000 family. M68k was originally the second architecture supported by FPC even before release 1.0, but this target has been dormant since the transition to FPC 2.x and now is resurrected mostly by Sven. The compiler can be natively compiled for M68000 and M68020 (although not necessarily fully working there yet) and support for modern Coldfire CPUs is now in mostly working state too. Some functionality still needs finishing (e.g. threadvars which impacts StdIO among others), but created binaries already run successfully in QEMU. As of now, the goal is to support m68k-linux and maybe also m68k-embedded. Contributors for any other operating systems like Amiga, AROS or even classic Mac OS are - as usual - welcome.
-
*March 23, 2012* - The Free Pascal and Lazarus wiki has been moved to a new server. Also, the wiki software has been upgraded to the latest MediaWiki version. Because porting the custom Free Pascal skin to the new version was too time-consuming, the default monoskin is now used. Therefore you will notice changes in its appearance.
-
*January 1, 2012* -
FPC 2.6.0 has been released! 2.6.0 is a major new version, which adds many
post-Delphi 7 language features and adds or improves the support for various
platforms. The new features include, amongst others:
- Objective-Pascal dialect, supported on all Mac OS X and iOS targets
- Delphi compatibility mode improvements
- nested types, class variables and class local constants
- advanced records syntax (no constructors yet)
- (for..in) enumerators in records
- class and record helpers
- generic records, arrays and procedural types
- Delphi-compatibility of general generics syntax improved
- scoped enumerations
- custom messages for "deprecated" directive
- ability to use "&" for escaping keywords
- New ARM code generator features
- ARM VFPv2 and VFPv3 floating point unit support
- Thumb-2 support (embedded targets only)
- The rtl and packages also got a lot of attention, see the release manifest.
- Downloads are available from the download page (mirrors should follow soon).
- A list of changes that may require changes to existing code is also available.
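- As a brief, hedged illustration of a few of the language features listed above (for..in, class helpers and scoped enumerations): the sketch below is not taken from the release notes, all identifiers are made up for demonstration, and the exact mode directives required can vary slightly between compiler versions.

```pascal
program Features260;
{$mode objfpc}{$H+}
{$scopedenums on}

type
  // Scoped enumeration: members must be qualified, e.g. TColor.Green.
  TColor = (Red, Green, Blue);

  // Class helper: adds a method to the existing TObject class.
  TObjectHelper = class helper for TObject
    function Describe: string;
  end;

function TObjectHelper.Describe: string;
begin
  Result := 'instance of ' + ClassName;
end;

var
  Values: array[1..3] of Integer = (10, 20, 30);
  V, Sum: Integer;
  C: TColor;
  Obj: TObject;
begin
  Sum := 0;
  for V in Values do            // for..in over a static array
    Inc(Sum, V);
  WriteLn('sum = ', Sum);

  C := TColor.Green;            // scoped enumeration member
  WriteLn('ordinal of Green = ', Ord(C));

  Obj := TObject.Create;
  WriteLn(Obj.Describe);        // method supplied by the class helper
  Obj.Free;
end.
```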
-
*August 20, 2011* - The Free Pascal Compiler now can generate byte code for a Java Virtual Machine.
- The code generator works and supports most Pascal language constructs. The FPC backend for the Java Virtual Machine (JVM) generates Java byte code that conforms to the specifications of the JDK 1.5 (and later). While not all FPC language features work when targeting the JVM, most do and we have done our best to introduce as few differences as possible.
- More information about the JVM backend can be found on the wiki.
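- As a rough sketch of the workflow only – the compiler binary name, flags and classpath below are assumptions, so consult the wiki page mentioned above for the authoritative instructions – ordinary Object Pascal source is compiled straight to JVM class files:

```pascal
// hello.pp – plain Object Pascal, compiled to JVM byte code by the new backend
program hello;
{$mode objfpc}
begin
  WriteLn('Hello from Free Pascal on the JVM');
end.

// Assumed usage (verify against the wiki; binary names and paths may differ):
//   ppcjvm hello.pp                          -> produces hello.class
//   java -cp <FPC JVM RTL classes>:. hello   -> runs it on a standard JVM
```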
-
*May 30, 2011* - A book on programming Lazarus is available: "Lazarus Complete Guide".
- It is the translation of the earlier German edition by C&L and is published by the Dutch pascal user group. Several of the Lazarus/Free Pascal developers participated in the creation of this book. It can be ordered on-line here.
-
*May 22, 2011* -
A new release 2.4.4 is available from our sites. 2.4.4 is the second and probably last fixes release from the 2.4 series.
Improvements, amongst others:
- Many improvements to the XML units
- Many improvements to the database units.
- Especially sqlite got quite some fixes.
- Many improvements to the chm units.
- Including a commandline CHM compiler
- Many improvements to fppkg and fpmake for another round of testing.
- Fixes for multi-threading support in OS/2 RTL.
- Downloads are available from the download page (mirrors should follow soon).
- A list of changes that may require changes to existing code is available here.
-
*November 12, 2010* -
A new release 2.4.2 is available from our sites. 2.4.2 is the first fixes release from the 2.4 series.
Improvements, amongst others:
- Delphi 2006 like for..in support
- Support for sealed and abstract class modifiers,
- New targets:
- 64-bit FreeBSD (x86_64)
- Many improvements and fixes to the XML, database and CHM packages
- Long-term bug in the OS/2 implementation of unit Video finally fixed, which among other things allows inclusion of the text-mode IDE (without debugger) for this platform as part of the distribution again.
- Many compiler bugfixes and more than half a year of library updates (since 2.4.0)
- A list of changes that may require changes to existing code is available here.
-
*January 1, 2010* -
Happy New Year! A new major version 2.4.0 has been released. The 2.4.x series adds, among others:
- Delphi like resources for all platforms,
- Dwarf debug information improvements,
- Several new targets
- 64-bit Mac OS X (x86_64/ppc64)
- iPhone (Mac OS X/Arm)
- Haiku (from the BeOS family)
- Improved ARM EABI support
- Whole program optimization
- Many compiler bugfixes and half a year of library updates (since 2.2.4)
A list of changes that may require changes to existing code is available here. -
*November 9, 2009* -
The first FPC 2.4.0 release candidate has been posted, please give your feedback! While FPC 2.4.0 will
primarily offer under-the-hood changes and bug fixes, the current svn trunk has seen
quite some work recently on the new features front:
- For..in-loops are now supported (including some FPC-specific extensions).
- The compiler now understands sealed and abstract classes, and final methods.
- Together with the Mac Pascal community, we have designed and implemented a basic Objective-Pascal dialect for directly interfacing with Objective-C on Mac OS X (including header translations of several Cocoa frameworks).
- The Mac OS X interfaces have been updated to their Mac OS X 10.6 state (including 64 bit and iPhoneOS support).
-
*September 17, 2009* -
*(The previously posted information about Mac OS X 10.6 compatibility was unfortunately incorrect, which is why it was removed.)* FPC 2.2.4 has been tested with Mac OS X 10.6 (Snow Leopard) and generally works fine. There is however an issue when compiling dynamic libraries with FPC under Mac OS X 10.6 due to a bug in the Xcode 3.2 linker. Unfortunately, there is no easy fix when using FPC 2.2.4. The full discussion can be found in this bug report, with a summary in the last comment.
*August 20, 2009* - The 2009 International Olympiad in Informatics has been won by the 14-year-old Henadzi Karatkevich using Free Pascal. For this contest only the gcc and Free Pascal compilers were allowed. Lazarus was available as editor.
-
*June 25, 2009* - During the last months a lot of work on the embedded support of Free Pascal has been done. FPC can now be used to program microcontrollers without any operating system. The current status, an explanation of how to use it and the supported controllers (only a few so far) can be found on the FPC Wiki.
-
*April 12, 2009* - The new stable version 2.2.4 has been released. Downloads are available from the download page (mirrors should follow soon). This is mostly a bugfix version, although some new features have been backported as well. A list of changes that may require changes to existing code is available here. With this release we also want to test our new package system. More information about this test can be found here
-
*February 14, 2009* - Computer & Literatur Verlag has translated the Free Pascal manuals to German and bound them in a book. The book also contains the reference guide for the 17 most important units distributed with Free Pascal. It should be available in book shops in the German-speaking countries in Europe.
-
*January 17, 2009* -
The FPC team is happy to announce the first widely distributed beta of the
*FPC iPhone SDK Integration Kit*, which allows you to compile Pascal code for the iPhone and iPod Touch. It supports both the Simulator and the real devices, and includes an Xcode template with an OpenGL ES demo. It requires an Intel Mac with FPC 2.2.2 (or a later FPC 2.2.x) and the iPhone SDK 2.x installed. Please visit the wiki page for more information and the download link. -
*August 11, 2008* - The new stable version 2.2.2 is released. Downloads are available from the download page (mirrors should follow soon). This is mostly a bugfix version, although some new features have been backported as well. Some code suspected of Borland copyright infringement was replaced with a cleanroom implementation. A list of changes that may require changes to existing code is available here.
-
*September 10, 2007* - OS-News has published an article about the new FPC compiler and cross-platform development. A Dutch version is available on our Wiki.
-
The Free Pascal Compiler team is pleased to announce the release of FPC 2.2.0!
An overview of most changes is available here, but some highlights are:
- Architectures: PowerPC/64 and ARM support
- Platforms: Windows x64, Windows CE, Mac OS X/Intel, Game Boy Advance, and Nintendo DS support
- Linker: fast and lean internal linker for Windows platforms
- Debugging: Dwarf support and the ability to automatically fill variables with several values to more easily detect uninitialised uses
- Language: support for interface delegation, bit packed records and arrays and support for COM/OLE variants and dispinterfaces
- Infrastructure: better variants support, multiple resource files support, widestrings are COM/OLE compatible on Windows, improved database support
- Downloads are available at https://www.freepascal.org/download.html
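- A small, hedged sketch (not taken from the release notes; the identifiers are made up) of one of the language additions listed above, bit packed records, where each field occupies only the bits its declared range requires:

```pascal
program BitPackedDemo;
{$mode objfpc}

type
  // Bit packed record: fields are stored with bit-level granularity.
  TRGB = bitpacked record
    R, G, B: 0..31;      // 5 bits each
    Opaque: Boolean;     // 1 bit
  end;

var
  Pixel: TRGB;
begin
  Pixel.R := 31;
  Pixel.G := 12;
  Pixel.B := 3;
  Pixel.Opaque := True;
  WriteLn('size in bytes: ', SizeOf(TRGB));
  WriteLn('size in bits : ', BitSizeOf(TRGB));
end.
```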
-
*May 20, 2007* -
After years of development the new FPC version 2.2.0, version
*2.1.4* aka *2.2.0-beta* is released. The beta will be available for about two months whereafter 2.2.0 will be released. We ask all our users to test this release, and report bugs on the bug-tracker. If you want to know if your bug is already solved, you can look it up in mantis, or try one of the daily snapshots, based on the fixes_2_2 branch. So please help us make version 2.2.0 the most stable version of Free Pascal to date. List of changes can be found here. The release notes are also available. Please note that there are some intentional incompatibilities with previous versions, for an overview click here.
*March 28, 2007* - MORFIK has released version 1.0.0.7 of its WebOS AppsBuilder. It is the first version of AppsBuilder that uses FPC to create the backend.
-
*February 1, 2007* - The Pascal Game Development annual contest is starting. This year's theme is "Multiplexity": write a game that combines multiple game genres. Can you write a game in Free Pascal? Then sign up now!
-
*January 27, 2007* - MSEGUI and MSEIDE version 1.0 has been released. MSEIDE is a Rapid Application Development tool to build graphical Windows and Linux applications using the MSEGUI user interface framework. The Free Pascal team congratulates the MSEGUI/MSEIDE developers and sends them its best wishes for this milestone.
-
*January 15, 2007* - The Pascal Game Development annual contest will start on 1 February. Can you write a game with Free Pascal? You might win several prizes. More information will follow.
-
*December 24, 2006* - A book about Free Pascal has been published in Hungary. The 270-page book teaches the Pascal language from the start and also covers the advanced language features.
-
*December 14, 2006* - Ido Kanner will be giving an FPC lecture at HAIFUX, which is a Linux club at the Technion University in Haifa, on Monday, January 15, 2007. This lecture will be repeated at Telux, a (University) Linux club in Tel Aviv.
-
*November 25-26, 2006* - Lazarus and FPC will be at the HCC in Utrecht, Netherlands, in the HCC Pascal booth.
-
*September 27, 2006* - Lazarus and FPC will be at the Systems 2006 in Munich in October, in hall A3, booth 542. We will try to be there on all 5 days. You can find more information about the Systems 2006 here.
-
*September 25, 2006* - Francesco Lombardi is writing an extensive guide on how to develop games on the Game Boy Advance using Free Pascal.
-
*September 20, 2006* - In addition to the originally published builds for release 2.0.4, powerpc-macos and x86_64-linux .deb packages have been made available (thanks to Olle Raab and Stefan Kisdaroczi). As usual, go to the download page to select your nearest mirror.
-
*August 28, 2006* - Long awaited release 2.0.4 is finally out (go here to select the nearest mirror), bringing you lots of fixes and some enhancements (remember this is primarily a bug-fix release for 2.0.x series, whereas new development is happening in 2.1.x branch) over the previous released version 2.0.2 (or even 2.0.0, because builds for more platforms than in version 2.0.2 are available this time). List of changes can be found here.
-
*August 10, 2006* - The Free Pascal compiler (version 2.1.1) first compiled itself on AmigaOS 4.0 (PowerPC).
-
*July 19 2006* - We are approaching a new release (2.0.4) in our bug-fixing branch. The release candidate 2 cycle is running at the moment; the final release is expected during August.
-
*June 1 2006* - Francesco Lombardi has released a snapshot of his Gameboy Advance Free Pascal port; download it here.
-
*April 2006 summary* - The first WIN64 Snapshot has been uploaded to the FTP site.
-
*March 2006 summary* - Lots of new progress in March, but too busy coding to update the website, so this entry will be a summary of progress on the main (2.1.1) branch:
- Thomas Schatzl is making good progress with the linux 64-bit PowerPC port. A snapshot is here.
- Peter did a titanic amount of work and crafted an internal linker for win32 and win64, reducing linking times tremendously. For such a complex new subsystem, it is already quite stable.
- DWARF debugging info support is slowly starting to work. Stabs will be phased out in time.
- Florian just showed a first "Hello world" program for Win64. This is remarkable since GCC and the binutils don't even support this target. (Internal linker!)
- Jonas reported that he has ported to Darwin/i386 with remarkably little effort. Snapshots are expected in the coming weeks.
-
*February 15, 2006* - An FPC port for Solaris/Sparc has been created. Get a snapshot here.
-
*February 7, 2006* - Francesco Lombardi is making great progress porting Free Pascal to the Game Boy Advance. Check out this forum thread on Pascal Game Development to see some screenshots.
-
*February 6, 2006* -
*January 10, 2006* - The Pascal Game Development Annual Competition is about to start. Can you code a game in Free Pascal? Then join the competition!
-
*December 8, 2005* - FPC 2.0.2 is ready for download. 2.0.2 is mainly a bug fix release for 2.0.0. The whatsnew.txt can be found here.
-
*September 22, 2005* - The Pixel image editor is one of the projects which show the power of FPC: Pavel Kanzelsberger made an image editing program using FPC which works on 8 platforms and which beat even programs like GIMP, PaintShop Pro and PhotoImpact according to a recent test in a Czech computer magazine. Today, version 1.0 beta 5 was released.
-
*August 22, 2005* - The ARM port of Free Pascal can now be used to develop games for the Game Boy Advance. See the Pascal Game Development site for more information.
-
*August 18, 2005* - Free Pascal can be installed on Fedora from the Fedora Extras. To do so,
add Extras to your Yum-repository (see here for instructions)
and then install using
yum install fpc
Documentation and the src-package for Lazarus are available with yum install fpc-doc and yum install fpc-src
-
*16 May 2005* - OSNews features an article written by Free Pascal developer Daniël Mantione today.
-
*15 May 2005* -
Free Pascal
**2.0** is released! Go get it at one of the mirrors! This is the first stable release of the development branch we started five years ago. The 2.0 release is technically vastly superior to version 1.0 and has everything in it to belong to the select group of big compilers.
*30 March 2005* - The task of porting the Free Pascal Compiler to Linux/PowerPC64 has been added to IBM's Linux On Power contest, which means you can earn a PowerMac dual G5/2GHz by completing this port! More information in this post to the fpc-devel mailing list.
-
*24 February 2005* - A second release candidate for 2.0 has been released. It has been released as version 1.9.8. Read more. Get it.
-
*10 February 2005* - Because of an IP address change the freepascal.org domain was not reachable for a day. The DNS changes will take a couple of days until things settle down again.
-
*1 January 2005* - The FPC development team wishes you a happy new year and announces the first 2.0 release candidate. It has been released as version 1.9.6. Read more.
-
*1 January 2005* - For the first time there is a Free Pascal beta release for classic Mac OS.
-
*5 November 2004* - There is now also an arm/linux cross-compiler available. You can download the i386/linux to arm/linux cross-compiler snapshot from here.
-
*3 November 2004* - The x86_64 (aka. AMD64) compiler is progressing nicely so we created a snapshot. You can download the x86-64/Linux snapshot here.
*22 September 2004* - Today the Sparc compiler compiled itself on a Sparcstation 5 and an UltraSparc, both running Linux.
-
**Update:** You can download a Sparc/Linux snapshot here.
*6 June 2004* - Today the PowerPC compiler first compiled itself on a Pegasos II/G4 computer running MorphOS.
-
*31 May 2004* - A third public beta for 2.0 has been released as version 1.9.4. PowerPC is stable and now also has support for Mac OS X.
-
*2 May 2004* -
The first
**64-bit** port has arrived. Tonight, FPC compiled itself for the first time on a 64-bit system. The system was of the AMD64 type.
*16 March 2004* - The missing compiler versions for 1.0.10 are now uploaded; these consist of the AmigaOS, Solaris, QNX and BeOS compilers. Sorry for the delay (Carl).
-
*30 January 2004* - The quest continues: The 1.9.3 compiler runs on the ARM processor. The Zaurus is now capable of running FPC and FPC-compiled programs!
-
*11 January 2004* - A second public beta for 2.0 has been released as version 1.9.2, with improvements all around; powerpc is coming along more than nicely, so there is a linux/powerpc release. This release is also the first where the x86 code generator has register parameters.
-
*5 November 2003* - A first public beta, taken from the development branch, has been released. To celebrate that, the version has been upped to 1.9. For now only full archives for the Go32V2, win32, FreeBSD and Linux platforms on the Intel (x86) architecture are available. We hope the number of platforms and architectures will steadily expand during the 1.9.x beta series, which will culminate ultimately in the 2.0 release.
-
*21 October 2003* - The work on the first 2.0 beta is progressing nicely and a first release is scheduled for 1st November. However, this first beta will be available only for linux-i386, win32-i386 and freebsd-i386. Preparing beta releases for more OSes would take too much of the core developers' time. Of course, any volunteer is welcome to help us prepare beta releases for other OSes. Beta releases of linux-powerpc and linux-sparc will be released a few weeks later.
- To avoid confusion: this will be the first release of the 1.1 development branch compiler. The packages, compiler etc. will get the version number 1.9.x. As soon as the final release is released, the version will be changed to 2.0.0.
-
*25 September 2003* - Merging of the new register allocator has largely finished, and focus is shifting to get a first 2.0 beta for testing purposes out later this year. The Future Plans (Roadmap) page has been updated with some details about what to expect for the 2.0 series of compilers.
-
*1 September 2003* - www.freepascal.org took part in an online demonstration against the introduction of software patents in Europe. The protest page can still be seen here. Note that while the situation is not as bad as it looks there, it is a realistic possibility. The mentioned patent is real, and the currently proposed directive would make it (and many other already - illegally - granted trivial software patents) enforceable in Europe. However, thanks to the massive protest, the vote has been delayed till 22nd September and several politicians are opening their eyes. Thanks for your understanding!
-
*11 July 2003* -
Finally, the long awaited successor to 1.0.6 is out. It is called 1.0.10, and is a (mostly) fixes release.
The reason for skipping 1.0.8 is that the release process took too long, and temporary files have been
exposed too long on the FTP site, so the FPC team decided to make the final release 1.0.10.
This release is expected to be the absolute last release in the 1.0 fixbranch. Development will now be completely focused on the main branch (1.1) where significant progress has been made lately (SPARC and PPC ports).
-
*7 July 2003* - Today, "Hello world!" worked for the first time under Linux/SPARC. This means the SPARC code generator is now minimally working!
-
*25 May 2003* - Yesterday, "make cycle" worked for the first time under Linux/PPC. This means the PowerPC code generator is now fairly stable. Someone (Olle Raab) is working on a classic Mac OS Run Time Library and the Darwin RTL is being worked on as well. Hopefully we'll have something distributable in the next few weeks!
-
*24 January 2003* - The mailing lists are now working again.
-
*22 January 2003* - The mailing lists are currently out of order because the server which runs the mailing lists has a problem. We'll take this opportunity to update this server on Friday, so the mailing lists will be back at the weekend.
-
*19 January 2003* - A 1.0.7 compiler snapshot for classic Amiga is now available from the freepascal ftp site.
-
*12 January 2003* - Debugging of the 1.0.7 compiler is coming along nicely. Normally, a release candidate for 1.0.8 should be done very soon. Also, with the 1.0.8 release, linux-m68k and Amiga-m68k versions of the compiler will also be released.
- The 1.1 compiler is also coming along nicely, currently a new register allocator scheme is being designed to help optimize register usage.
-
*17 October 2002* - A linux-m68k snapshot (version 1.0.7) of the compiler is now available here. An amiga-m68k snapshot (version 1.0.7) should follow soon.
-
*24 September 2002* - During the last days, the 1.0.x compiler compiled itself for the first time. The job was done on a 50 MHz Mac IIci (68030), under NetBSD, and the compilation took over 3 hours.
-
**It seems that the multiplatform FPC compiler is finally starting to become reality.** -
*23 September 2002* - The PalmOS port of the compiler has been removed, as it is not in a usable state.
-
*6 September 2002* - The PowerPC port is finally progressing nicely. Under Linux, we can already get a "Hello world" on screen (followed by a number of "RunTime Error" messages and a kernel crash :), but we're making progress. The Darwin RTL has also been started.
-
*10 July 2002* - There were problems with the 1.0.6 installers for both OS/2 (Warp 3.0 or earlier) and Windows (Win95/98/Me). These are now fixed in the FPC 1.0.6 distributions. Sorry for the inconvenience.
-
*24 May 2002* - There were problems with the FPC 1.0.6 Linux RPM, where the 1.0.6 beta build was actually released as an official release. On the command line, if ppc386 -i reports Free Pascal Compiler version 1.0.6-beta, you do not have the latest official release of the Linux compiler. Please re-install the Linux RPM version which is located on the FTP site (that one should really now be the 1.0.6 release). Sorry for the inconvenience.
-
*4 May 2002* - QNX version has been released, based on the 1.0.6a source. 1.0.6a is similar to version 1.0.6, except for patches applied to make the QNX version compile.
-
*30 April 2002* - After some weeks of pre-release testing, 1.0.6 is finally released.
-
*27 Februrary 2002* - The Freepascal site was down for a few days, due to an ISP change.
-
*17 December 2001* - Solaris-intel port of Free Pascal is completed. Snapshots (v1.0.x) can be downloaded in the development section.
- System unit of BeOS is now called system instead of sysbeos.
-
*15 November 2001* - New FAQ which is more comprehensive.
-
*19 September 2001* - Web sites updated (removed invalid links and some general cleanup).
- Freepascal version 1.0.6 will be released very soon (with a much more stable IDE and updated documentation).
- We are working hard on stabilizing the Motorola 680x0 port (the compiler can complete a full compile cycle of itself on linux-m68k).
- Version 1.1 of Freepascal is still being worked on, stay tuned for more information.
-
*23 August 2001* - The m68k version has been updated, a beta version for the PalmOS is available here.
- A short description can be found here.
-
*21 May 2001* - The website has a new layout. Because the 1.0.4 version is quite stable there is no new release and we're working mainly on the 1.1 development branch.
-
*31 December 2000* - Version 1.0.4 of the Free Pascal Compiler has been officially released. Hit the download link and select a mirror close to you to download your copy.
-
*7 November 2000* - A
**beta** FreeBSD version 1.0.2 is now available for download.
-
*19 October 2000* - The OS/2 version 1.0.2 is now also available for download.
-
*12 October 2000* - Version 1.0.2 of the Free Pascal Compiler has been officially released. Hit the download link and select a mirror close to you to download your copy.
-
*12 September 2000* - A community site has been set up. For now there are online discussion forums; in the future we will expand the community site.
-
*3 September 2000* - The (online) HTML documentation now also contains the examples, just like the PDF documentation.
-
*31 July 2000* - We've updated the Dos version of the installer (which is also included in the OS/2 and DosW32 packages) to fix the reported crashes under OS/2 and WindowsNT. Also the false "Error -2 -- Disk full?" errors have been fixed. If you had problems with the old install.exe, you can download a new one from the download page. See also this FAQ question.
-
*12 July 2000* - Version 1.00 of the Free Pascal Compiler has been officially released. Hit the download link and select a mirror close to you to download your copy.
-
*16 April 2000* - The FreeBSD compiler compiled itself for the first time. From now on, development FreeBSD snapshots will be uploaded to the FreePascal ftp site when important features are implemented or fixed.
-
*25 February 2000* - Version 0.99.14a has been released for Dos and Win32. It contains fixes
for the readln and graph bugs, and the installer problems have been
fixed.
Get it at one of the mirrors (USA mirrors of the FTP site only can be found in the links section). -
*22 February 2000* - **C&L** ships the first edition of the German translation of the Free Pascal manuals. German users can order the book from the C&L website.
*7 February 2000* -
There was a bug in the compiler which caused the graph unit of the Dos version
to crash on startup of any graphical program if you didn't turn on smart linking.
The solutions are to always use smart linking when compiling graph programs
(use the -XX command line option) or to get a snapshot (or wait for the next release).
Some other things already fixed/added since the 0.99.14 release:
- bug in the installer which caused false insufficient diskspace alerts
- bug in readln which causes empty lines to be returned when the end of the text buffer is reached (and in some other cases, even if there are none) (already in the OS/2 release, because it came out later)
- bug in the Linux graph unit which caused a crash on many systems and only B&W output on others
- addition of a lineinfo unit, which, when your program crashes, adds file/line number information behind the printed addresses (also for the heaptrace output)!!
-
*4 February 2000* - The OS/2 version of Free Pascal 0.99.14 has been released today! Go get it at one of the mirrors! (USA mirrors of the FTP site only can be found in the links section).
-
*31 January 2000* - A fixed version of install.sh for the linux tar installation is available here. This version will fix the problem with the wrong symlink and sources.tar typo.
-
*27 January 2000* - Version 0.99.14 (aka 1.0beta4) has been released for Dos, Win32 and Linux! Go get it at one of the mirrors! (USA mirrors of the FTP site only can be found in the links section).
-
*7 January 2000* - As you may have noticed, it's been a long time since the news has been updated. In the meantime, development has continued though! Soon, we'll release version 0.99.14 of the compiler which will be a final candidate for version 1.0. Some weeks later, we'll (finally) release version 1.0.
- As always, you can get the latest compiler and RTL in the form of a snapshot in the development section if you want to see what we've done since the last official release.
- A new section has been added: programmer tools. It contains documentation for all helper programs included with FPC. Check it out!
-
*26 July 1999* - Bugfix version 0.99.12b has been released for Dos, Win32 and Linux. You can download it here.
- For Linux, the source RPMs and Debian sources are also available on the FTP and download pages.
- Default documentation is now in PDF format which looks much better.
-
*25 June 1999* - Version 0.99.12 (aka 1.0beta3) has been released! Go get it at one of the mirrors! (USA mirrors of the FTP site only can be found in the links section).
- The Free Pascal newsletter 06/99 is available.
-
*9 April 1999* - Snapshots for the IDE are now available from the development page.
- If you want to use the daily source packages you also need the "base" package; installation notes are included in that package.
-
*15 January 1999* - 0.99.10 for OS/2 & Dos released
-
*23 December 1998* - Released Version 0.99.10 (aka 1.0 beta 2).
- The IDE is progressing nicely and will be included with the next version. | true | true | true | null | 2024-10-12 00:00:00 | 2023-01-01 00:00:00 | null | null | null | null | null | null |
31,992,822 | https://papereconomy.substack.com/p/a-real-mean-reversion | A Real Mean Reversion | SoldAtTheTop | # A Real Mean Reversion
### The housing run-up post-COVID panic was so absurd that even highly qualified buyers with loans made using traditionally sound lending standards are in the cross-hairs as prices simply revert to mean.
One of the major differences between our current housing boom and the epic housing run-up and associated crash that ultimately led to the “Great Recession” is in the supposed quality of buyers, or more precisely, the underwriting standards lenders use when making home loans as well as the soundness of the loan products and features they offer.
Without a doubt, lending standards by 2005/06/07 had become completely detached from reality with 100% loan-to-value, no-interest, reverse amortization, no-income verification, sub-prime being some of the various means by which the housing finance system kept the party going (loan volume) even after prices had risen well out of reach for many prospective home buyers.
In the aftermath of the epic financial meltdown that ensued, there was a notable effort by Federal regulators to understand what went wrong, and further, to take measures to prevent such egregious activities in the future.
The Dodd-Frank “Wall Street Reform and Consumer Protection” Act had as a provision the “Mortgage Reform and Anti-Predatory Lending Act” which sought in part to tighten up the process of mortgage origination in order to:
It is the purpose of this section and section 129C to assure that consumers are offered and receive residential mortgage loans on terms that reasonably reflect their ability to repay the loans and that are understandable and not unfair, deceptive or abusive.
To that end there are the following two important sub-sections (NOTE: Dodd-Frank is an enormous act with many provisions; these are just two small samples of items focused specifically on mortgage origination standards):
Subtitle B: Minimum Standards for Mortgages
IN GENERAL.—In accordance with regulations prescribed by the Board, no creditor may make a residential mortgage loan unless the creditor makes a reasonable and good faith determination based on verified and documented information that, at the time the loan is consummated, the consumer has a reasonable ability to repay the loan, according to its terms, and all applicable taxes, insurance (including mortgage guarantee insurance), and assessments.
This section goes on to detail tighter rules for regulating, multiple loans (borrowers has subordinate loans on same property), income verification, no-interest options, negative amortization options and other lending phenomena that contributed to the mortgage mania of that era.
Subtitle F: Appraisal Activities
IN GENERAL.—A creditor may not extend credit in the form of a subprime mortgage to any consumer without first obtaining a written appraisal of the property to be mortgaged prepared in accordance with the requirements of this section.
This section goes on to detail tighter rules for regulating the appraisal process of a property that will be financed using a “high-risk” subprime loan.
Taken together, these two sub-sections point to one major flaw in the legislative financial regulatory process in that it is at best always backward looking, specifically addressing practices that occurred in the past that had led to financial crisis.
Whether there is any sense to even trying to regulate future financial activities, is not the topic of this post, but suffice to say, the Dodd-Frank mortgage origination related provisions focused heavily on the activities that led to the Great Recession and didn’t attempt to regulate other novel methods that might be employed in future speculative episodes.
One area that I have taken an interest in is in the non-conforming “jumbo” loan market and particularly in my own market of Boston where throughout 2020, 2021 and on into 2022, I have witnessed some truly reckless behavior on the part of borrowers and lenders.
Following up on my prior post on the subject, I dug a little deeper into the homes that I have highlighted previously to better understand the nature of the debt that financed these reckless purchases.
From a traditional mortgage lending perspective, these purchases appear pretty sound given that the average loan-to-value (LTV) was about 70% (i.e. 70% loan, 30% deposit), all mortgages were simple 30-year fixed rate loans (no risky options or even a single ARM) and I assume that all incomes were verified.
But considering that each of these buyers over-bid the listing price by on average a whopping 30%, one might draw a different conclusion.
First, below is a *simple average model* for pricing the Boston housing market that I developed using the S&P CoreLogic Case Shiller Home Price Index (CSI) for Boston.
This model, superimposed on a Zillow price history, simply uses the CSI to calculate the price trend line with 2016-2019 based on actual annual price changes and 2020-2022 based on an average annual price change from the trailing eight (8) years.
My general premise with this model is that 2020 through 2022 represented an absurdly anomalous period where lending rates were exceptionally low and buyers (as a result of COVID-19 hysteria) were simply not acting prudently.
As you can see from the model, there is a significant risk of “mean reversion” for the nation’s severely overheated housing markets as interest rates rise and prices fall meeting a more fundamental balance that, once materialized, will call into question the financial soundness of the loans backing up these properties.
If, given the emerging headwinds for the macro economy and housing specifically, we see a 25% pullback in home prices in Boston (not inconceivable given the ferocity of the price appreciation in just the last two years as well as the scale of prior pullbacks of about 16% after the late-80s S&L crisis and 18% after the Great Recession), then these homes would all have an average LTV of about 92%, a much less sound financial footing for these buyers and lenders.
Further, many of the homes could fall even more sharply given how far off the simple average price trend (the model) the buyers pushed the purchase with their manic bidding.
The following is a selection of three homes from my original post, annotated with the simple average price model as well as the mortgage details to illustrate the point:
Note that I calculated the LTVs as they stood on the day of settlement, as well as what they would be after a 25% pullback in prices and with respect to the simple average price model's forecast of the more fundamentals-based value.
The loophole in the lending standards for this cycle appears to have been that home appraisals were either unable to accurately account for the outlandish price appreciation occurring in the market or were simply roundly ignored given the fact that borrowers came to the table with 20%-30% down-payments and had good credit histories.
Prudent lenders would have evaluated the appraisal to determine a more fundamental valuation and either demanded more down-payment from the bid-happy borrowers to account for the additional risk or simply rejected the transactions outright.
For example, for the first property above, the fundamental value of the home at the time of purchase was probably closer to about $1,300,000 but it was bid up to $1,720,000 or about $420,000 over its fundamental price.
If the lender had been more prudent, they would have either walked away from the transaction or demanded that the borrower come up with about $800,000 for the deposit instead of the meager $400,000 that the borrower actually paid.
This would have resulted in a loan of about $920,000 against an $800,000 deposit for a very healthy 53% LTV which would have been well fortified against any oncoming economic crisis-driven home price decline.
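For readers who want to check the arithmetic, here is a small Python sketch that reproduces the numbers in this example; the figures are the round numbers used above, and the "fundamental" value is simply the model estimate discussed earlier.

```python
# Rough arithmetic check for the first property above (round numbers only).
purchase_price = 1_720_000
deposit = 400_000
loan = purchase_price - deposit                         # $1,320,000 actually borrowed

ltv_at_settlement = loan / purchase_price               # ~0.77
ltv_after_25pct_drop = loan / (purchase_price * 0.75)   # ~1.02, i.e. underwater
ltv_vs_model_value = loan / 1_300_000                   # ~1.02 vs. the model estimate

prudent_loan = purchase_price - 800_000
prudent_ltv = prudent_loan / purchase_price             # ~0.53, the "healthy" case

print(ltv_at_settlement, ltv_after_25pct_drop, ltv_vs_model_value, prudent_ltv)
```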
Obviously, the lender didn't perform this level of due diligence because the borrower could not afford the additional deposit and all parties involved (lender, broker, lawyers, buyer and seller) simply wanted to stack up another closed transaction.
I suspect that my small sample is clearly demonstrating something important about the housing market dynamics this cycle, namely that there is real, yet to be fully realized, systemic risk coming from the prime lending market, particularly for the performance of privately funded "jumbo" loans.
Great research. Another potential issue that I would love to hear the numbers on: many short-term-rental (i.e. airbnb) buyers used hard money and private lending to purchase properties - thus feeding the "cash" buying frenzy. I've seen offers to get approved for a large DSCR loan with nothing but an airDNA rentalizer estimate, and there are numerous facebook groups and so-called "mentors" for hire connecting up these amateur investor/hoteliers with services to set up their LLCs and get bridge loans for the down payments. Dodd Frank carved out provisions for real business investors to take on risker loans for property assets, but it appears folks have figured out they'll just teach yesterday's sub-prime buyers how to be "real business investors," at least on paper. Not to mention, these are businesses reliant on: 1) residentially zoned properties and the whims of municipal governments to decide how or if they can be used for hospitality purposes, and 2) that the travel industry and demand for short-term rentals somehow remains robust through a continuing pandemic, inflation, political unrest, highest than ever gas prices, airline pilot shortages, and a likely recession...
Congrats on the MarketWatch mention | true | true | true | The housing run-up post-COVID panic was so absurd that even highly qualified buyers with loans made using traditionally sound lending standards are in the cross-hairs as prices simply revert to mean. | 2024-10-12 00:00:00 | 2022-07-05 00:00:00 | article | substack.com | Paper Economy | null | null |
|
6,198,883 | http://www.bbc.co.uk/news/technology-23665490 | City of London calls halt to smartphone tracking bins | Joe Miller | # City of London calls halt to smartphone tracking bins
**The City of London Corporation has asked a company to stop using recycling bins to track the smartphones of passers-by.**
Renew London had fitted devices into 12 "pods", which feature LCD advertising screens, to collect footfall data by logging nearby phones.
Chief executive Kaveh Memari said the company had "stopped all trials in the meantime".
The corporation has taken the issue to the Information Commissioner's Office.
The action follows concerns raised by privacy campaign group Big Brother Watch, after details of the technology used in the bins emerged in the online magazine Quartz.
Mr Memari told the BBC that the devices had only recorded "extremely limited, encrypted, aggregated and anonymised data" and that the current technology was just being used to monitor local footfall, in a similar way as a web page monitors traffic.
He added that more capabilities could be developed in the future, but that the public would be made aware of any changes.
The bins, which are located in the Cheapside area of central London, log the media access control (MAC) address of individual smartphones - a unique identification code carried by all devices that can connect to a network.
A spokesman for the City of London Corporation said: "Irrespective of what's technically possible, anything that happens like this on the streets needs to be done carefully, with the backing of an informed public."
## Legal 'grey area'
Mr Memari insisted that the bins were just "glorified people-counters in the street" and that his company held no personal information about the smartphone owners.
While the collection of anonymous data through MAC addresses is legal in the UK, the practice has been described as a "grey area".
The UK and the EU have strict laws about mining personal data using cookies, which involves effectively installing a small monitoring device on people's phones or computers, but the process of tracking MAC codes leaves no trace on individuals' handsets.
Websites or companies wanting to use cookies to tracks users' habits have to ask for permission. By monitoring MAC addresses, which just keeps a log of each time a wi-fi enabled device connects to another device, they can work around this requirement.
Presence Orb, the company that provides the tracking technology to Renew London, calls its service "a cookie for the real world".
## 'Data and revenue'
Nick Pickles, director of Big Brother Watch, said: "I am pleased the City of London has called a halt to this scheme, but questions need to be asked about how such a blatant attack on people's privacy was able to occur in the first place.
"Systems like this highlight how technology has made tracking us much easier, and in the rush to generate data and revenue there is not enough of a deterrent for people to stop and ensure that people are asked to give their consent before any data is collected."
Reacting to the City of London Corporation's call, an Information Commissioner's Office spokesperson said: "Any technology that involves the processing of personal information must comply with the Data Protection Act.
"We are aware of the concerns being raised over the use of these bins and will be making inquiries to establish what action, if any, is required."
- Published21 June 2013 | true | true | true | A company using technology in recycling bins to track smartphones has been asked to stop by the City of London Corporation. | 2024-10-12 00:00:00 | 2013-08-12 00:00:00 | article | bbc.com | BBC News | null | null |
|
23,339,801 | https://mender.io/blog/a-quick-dive-into-k3s-and-the-mender-update-module-for-kubernetes | A quick dive into k3s and the Mender Update Module for Kubernetes | Mender | Fabio Tranchitella | All of you know about Kubernetes, the open-source container-orchestration system for automating application deployment, scaling, and management. Over time, it is becoming the de-facto standard to automate the deployment, scaling, and operations of container-based workloads in the cloud. The Cloud-Native Computing Foundation (CNCF) is responsible for the certification of the Kubernetes distributions to provide a compatible environment across different cloud-blade and on-premises installations.
One of the most interesting Kubernetes distributions is k3s from Rancher, a lightweight certified Kubernetes distribution specifically built for IoT and Edge computing. Still retaining full Kubernetes compatibility and an impressive feature set, the single binary which provides k3s is less than 40MB and is available for both ARM64 and ARMv7.
k3s gives IoT and Edge computing engineers the ability to deploy and manage container-based applications on their fleets of devices by describing the application workload with Kubernetes manifest files, a collection of YAML files describing the desired state of the deployment. It's a future-proof, more feature-rich, and more expressive alternative to docker-compose, the ubiquitous tool used by developers approaching the Docker world, for managing containerized applications in the field.
By default, it uses an sqlite3 database as the lightweight backend storage. It optionally supports other backend storage, namely, etcd, MySQL and PostgreSQL, which are less suitable for embedded IoT and Edge devices, so they are out of scope for this article.
Curiosity about the naming: we abbreviate Kubernetes as k8s (k and s with eight letters in the middle, making it a ten-character word). When developing k3s, Rancher had the goal of creating a Kubernetes distribution that could run with half the resources needed to run stock Kubernetes. As half of ten is five, they named it k3s (k and s with three letters in the middle, making it a five-character word). Nobody knows what the three middle letters are, though. :)
### Getting started with k3s
Installing k3s on a device, as small as a Raspberry Pi or as big as a 32-CPU virtual server running in the cloud, is a matter of executing the following command:
```
$ curl -sfL https://get.k3s.io | sh -
[INFO] Finding release for channel stable
[INFO] Using v1.18.2+k3s1 as release
[INFO] Downloading hash https://github.com/rancher/k3s/releases/download/v1.18.2+k3s1/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/rancher/k3s/releases/download/v1.18.2+k3s1/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Creating /usr/local/bin/ctr symlink to k3s
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s.service
[INFO] systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO] systemd: Starting k3s
```
In less than a minute, our single-node Kubernetes cluster is ready to run our containers. We can use the stock Kubernetes kubectl CLI to get the list of pods (the smallest deployable units of computing in Kubernetes) by running:
```
$ kubectl get pods
No resources found in default namespace.
```
### Install and upgrade applications in k3s using Mender
Mender is our robust and secure over-the-air software updater for IoT devices. We support the installation and upgrade of containerized workloads running on Kubernetes, including k3s, through the Kubernetes Update module.
The easiest way to try the Mender OTA platform is by signing up for the managed service. The Mender Starter plan offers a three-month evaluation period free of charge.
### Package k8s/k3s configuration as Mender artifacts
First of all, let's define a sample application workload which runs the stock nginx Docker image by creating the file `nginx-deployment.yaml` as follows:
```
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
selector:
matchLabels:
app: nginx
replicas: 1
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
```
We can now use the `k8s-artifact-gen` utility to generate a Mender Artifact, which we'll be able to deploy using the Mender OTA platform, from the Kubernetes manifest file defined above:
```
$ wget https://raw.githubusercontent.com/mendersoftware/mender-update-modules/master/k8s/module-artifact-gen/k8s-artifact-gen
$ chmod 775 k8s-artifact-gen
$ ./k8s-artifact-gen \
-n nginx-sample-app \
-t device_type \
-o nginx-sample-app.mender \
nginx-deployment.yaml
Artifact nginx-sample-app.mender generated successfully:
Mender artifact:
Name: nginx-sample-app
Format: mender
Version: 3
Signature: no signature
Compatible devices: '[device_type]'
Provides group:
Depends on one of artifact(s): []
Depends on one of group(s): []
State scripts:
Updates:
0:
Type: k8s
Provides: Nothing
Depends: Nothing
Metadata: Nothing
Files:
name: nginx-deployment.yaml
size: 316
modified: 2020-05-27 06:17:57 +0200 CEST
checksum: 529d78bd4aac18e2343426fa6267960a74f3c5a450ab04f32c4acd42efb9bdfe
```
**Please note**: you have to replace the string `device_type` with the device type you set for your own device, for example `beaglebone` or `raspberrypi3`. For more information about setting the device type, please refer to the official Mender documentation.
The file `nginx-sample-app.mender` is the Mender Artifact that embeds the Kubernetes manifest file(s) that you can deploy to your k3s devices using the Mender OTA platform.
You can upload the artifact using the Mender user interface or using the `mender-cli`, running:
```
$ mender-cli login --server https://hosted.mender.io --username [email protected]
Password:**********
login successful
$ mender-cli artifacts upload nginx-sample-app.mender --server https://hosted.mender.io
Processing uploaded file. This may take around one minute.
upload successful
```
### Install the Mender Client and the k8s Update Module on the device
To deploy and upgrade Kubernetes manifests on a device running Kubernetes, you need to install the Mender client. You can follow the instructions in our documentation page to Install the Mender client.
We are going to use the k8s Update module to deploy our application workload, described in the Mender jargon as Application updates. As full system updates are out of scope for this article, we can easily install the Mender client using a Debian package as described in the Install Mender using the Debian package documentation page.
Once we have installed the Mender client and authenticated it to connect to our Mender server, either on-premises or our hosted Mender service, we are ready to install the k8s Update module on the device:
```
$ mkdir -p /usr/share/mender/modules/v3
$ wget -P /usr/share/mender/modules/v3 https://raw.githubusercontent.com/mendersoftware/mender-update-modules/master/k8s/module/k8s
$ chmod +x /usr/share/mender/modules/v3/k8s
```
### Deploy a Kubernetes update using Mender
At this point, we can create a deployment from the Mender graphical user interface for our device, and the Mender client will apply the Kubernetes manifest file using the k8s Update module.
Internally, the k8s Update module uses the `kubectl` CLI to apply the manifest files bundled in the Mender artifact.
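If you are curious about what that amounts to, the sketch below shows the same idea in a few lines of Python. It is only an illustration with a hypothetical payload path; the actual module is a small shell script in the mender-update-modules repository.

```python
import pathlib
import subprocess

def apply_manifests(payload_dir: str) -> None:
    """Illustration only: apply every YAML manifest found in an artifact's
    payload directory with kubectl, which is conceptually what the k8s
    Update Module does when an install is triggered."""
    for manifest in sorted(pathlib.Path(payload_dir).glob("*.yaml")):
        subprocess.run(["kubectl", "apply", "-f", str(manifest)], check=True)

# Hypothetical usage:
# apply_manifests("path/to/extracted/artifact/files")
```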
We can finally verify the deployment is in place on our device:
```
$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 1/1 1 1 74s
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deployment-6b474476c4-kzq66 1/1 Running 0 83s
```
### Conclusion
Containers are gaining a lot of traction in the embedded world. The most common and well-known path to deploying containerized applications on IoT and Edge devices is using either Docker directly or docker-compose. Still, Kubernetes represents a valuable alternative for use cases where, despite the limited resources, a full container orchestrator can provide developers with a stateful environment to run platform-agnostic workloads.
Discover how Mender empowers both you and your customers with secure and reliable over-the-air updates for IoT devices. Focus on your product, and benefit from specialized OTA expertise and best practices. | true | true | true | All of you know about Kubernetes, the open-source container-orchestration system for automating application deployment, scaling, and management. Over time, | 2024-10-12 00:00:00 | 2020-05-28 00:00:00 | null | article | mender.io | Northern.tech AS | null | null |
10,883,691 | http://wcm1.web.rice.edu/plain-text-citations.html | Not found ... | null | The resource you requested cannot be found. Please be certain the URL is correct, or contact the owner of the resource if you need more information about where it resides. | true | true | true | null | 2024-10-12 00:00:00 | 2008-01-01 00:00:00 | null | null | null | null | null | null |
15,780,740 | https://github.com/Rich-Harris/degit#degit-straightforward-project-scaffolding | GitHub - Rich-Harris/degit: Straightforward project scaffolding | Rich-Harris | **degit** makes copies of git repositories. When you run `degit some-user/some-repo`
, it will find the latest commit on https://github.com/some-user/some-repo and download the associated tar file to `~/.degit/some-user/some-repo/commithash.tar.gz`
if it doesn't already exist locally. (This is much quicker than using `git clone`
, because you're not downloading the entire git history.)
*Requires Node 8 or above, because async and await are the cat's pyjamas*
`npm install -g degit`
The simplest use of degit is to download the master branch of a repo from GitHub to the current working directory:
```
degit user/repo
# these commands are equivalent
degit github:user/repo
degit [email protected]:user/repo
degit https://github.com/user/repo
```
Or you can download from GitLab and BitBucket:
```
# download from GitLab
degit gitlab:user/repo
degit [email protected]:user/repo
degit https://gitlab.com/user/repo
# download from BitBucket
degit bitbucket:user/repo
degit [email protected]:user/repo
degit https://bitbucket.org/user/repo
# download from Sourcehut
degit git.sr.ht/user/repo
degit [email protected]:user/repo
degit https://git.sr.ht/user/repo
```
The default branch is `master`.
```
degit user/repo#dev # branch
degit user/repo#v1.2.3 # release tag
degit user/repo#1234abcd # commit hash
```
If the second argument is omitted, the repo will be cloned to the current directory.
`degit user/repo my-new-project`
To clone a specific subdirectory instead of the entire repo, just add it to the argument:
`degit user/repo/subdirectory`
If you have an `https_proxy` environment variable, Degit will use it.
Private repos can be cloned by specifying `--mode=git` (the default is `tar`). In this mode, Degit will use `git` under the hood. It's much slower than fetching a tarball, which is why it's not the default.
Note: this clones over SSH, not HTTPS.
`degit --help`
- Private repositories
Pull requests are very welcome!
A few salient differences:
- If you `git clone`, you get a `.git` folder that pertains to the project template, rather than your project. You can easily forget to re-init the repository, and end up confusing yourself
- Caching and offline support (if you already have a `.tar.gz` file for a specific commit, you don't need to fetch it again)
- Less to type (`degit user/repo` instead of `git clone --depth 1 [email protected]:user/repo`)
- Composability via actions
- Future capabilities — interactive mode, friendly onboarding and postinstall scripts
You can also use degit inside a Node script:
```
const degit = require('degit');
const emitter = degit('user/repo', {
cache: true,
force: true,
verbose: true,
});
emitter.on('info', info => {
console.log(info.message);
});
emitter.clone('path/to/dest').then(() => {
console.log('done');
});
```
You can manipulate repositories after they have been cloned with *actions*, specified in a `degit.json` file that lives at the top level of the working directory. Currently, there are two actions — `clone` and `remove`. Additional actions may be added in future.
```
// degit.json
[
{
"action": "clone",
"src": "user/another-repo"
}
]
```
This will clone `user/another-repo`, preserving the contents of the existing working directory. This allows you to, say, add a new README.md or starter file to a repo that you do not control. The cloned repo can contain its own `degit.json` actions.
```
// degit.json
[
{
"action": "remove",
"files": ["LICENSE"]
}
]
```
Remove a file at the specified path.
- zel by Vu Tran
- gittar by Luke Edwards
MIT. | true | true | true | Straightforward project scaffolding. Contribute to Rich-Harris/degit development by creating an account on GitHub. | 2024-10-12 00:00:00 | 2017-07-31 00:00:00 | https://opengraph.githubassets.com/27aa2e1c410c7e8fa020a1d10ebe8ec10f0d0919cacee383dceaeb7db972839d/Rich-Harris/degit | object | github.com | GitHub | null | null |
876,931 | http://www.justin.tv/startupbootcamp | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
22,768,779 | https://www.cnn.com/2020/04/02/world/coronavirus-earth-seismic-noise-scn-trnd/index.html | The coronavirus pandemic is making Earth vibrate less | CNN | Harmeet Kaur | Once-crowded city streets are now empty. Highway traffic has slowed to a minimum. And fewer and fewer people can be found milling about outside.
Global containment measures to combat the spread of the coronavirus have seemingly made the world much quieter. Scientists are noticing it, too.
Around the world, seismologists are observing a lot less ambient seismic noise – meaning, the vibrations generated by cars, trains, buses and people going about their daily lives. And in the absence of that noise, Earth’s upper crust is moving just a little less.
Thomas Lecocq, a geologist and seismologist at the Royal Observatory in Belgium, first pointed out this phenomenon in Brussels.
Brussels is seeing about a 30% to 50% reduction in ambient seismic noise since mid-March, around the time the country started implementing school and business closures and other social distancing measures, according to Lecocq. That noise level is on par with what seismologists would see on Christmas Day, he said.
## Less noise means seismologists can detect smaller events
The reduction in noise has had a particularly interesting effect in Brussels: Lecocq and other seismologists are able to detect smaller earthquakes and other seismic events that certain seismic stations wouldn’t have registered.
Take, for example, the seismic station in Brussels. In normal times, Lecocq said, it’s “basically useless.”
Seismic stations are typically set up outside urban areas, because the reduced human noise makes it easier to pick up on subtle vibrations in the ground. The one in Brussels, however, was built more than a century ago and the city has since expanded around it.
The daily hum of city life means that the station in Brussels wouldn’t typically pick up on smaller seismic events. Seismologists would instead rely on a separate borehole station, which uses a pipe deep in the ground to monitor seismic activity.
“But for the moment, because of the city’s quietness, it’s almost as good as the one on the bottom,” Lecocq said.
Seismologists in other cities are seeing similar effects in their own cities.
Paula Koelemeijer posted a graph on Twitter showing how noise in West London has been affected, with drops in the period after schools and social venues in the United Kingdom closed and again after a government lockdown was announced.
Celeste Labedz, a PhD student at the California Institute of Technology, posted a graph showing an especially stark drop in Los Angeles.
Still, seismologists say the reduction in noise is a sobering reminder of a virus that has sickened more than one million people, killed tens of thousands and brought the normal rhythms of life to a halt.
## It shows people are heeding lockdown rules
Lecocq said the graphs charting human noise are evidence that people are listening to authorities’ warnings to stay inside and minimize outside activity as much as possible.
“From the seismological point of view, we can motivate people to say, ‘OK look, people. You feel like you’re alone at home, but we can tell you that everyone is home. Everyone is doing the same. Everyone is respecting the rules,’” he said.
The data can also be used to identify where containment measures might not be as effective, said Raphael De Plaen, a postdoctoral researcher at Universidad Nacional Autónoma de México.
“That could be used in the future by decision makers to figure out, ‘OK, we’re not doing things right. We need to work on that and make sure that people respect that because this is in the interest of everyone.’” | true | true | true | Around the world, seismologists are observing a lot less ambient seismic noise – meaning, the vibrations generated by cars, trains, buses and people going about their daily lives. And in the absence of that noise, Earth’s upper crust is moving just a little less. | 2024-10-12 00:00:00 | 2020-04-02 00:00:00 | article | cnn.com | CNN | null | null |
|
16,178,667 | https://theblog.adobe.com/15-rules-every-ux-designer-know/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
29,903,362 | https://ai.googleblog.com/2022/01/google-research-themes-from-2021-and.html | Google Research: Themes from 2021 and Beyond | null | # Google Research: Themes from 2021 and Beyond
January 11, 2022
Posted by Jeff Dean, Senior Fellow and SVP of Google Research, on behalf of the entire Google Research community
## Quick links
Over the last several decades, I've witnessed a lot of change in the fields of machine learning (ML) and computer science. Early approaches, which often fell short, eventually gave rise to modern approaches that have been very successful. Following that long-arc pattern of progress, I think we'll see a number of exciting advances over the next several years, advances that will ultimately benefit the lives of billions of people with greater impact than ever before. In this post, I’ll highlight five areas where ML is poised to have such impact. For each, I’ll discuss related research (mostly from 2021) and the directions and progress we’ll likely see in the next few years.
## Trend 1: More Capable, General-Purpose ML Models
Researchers are training larger, more capable machine learning models than ever before. For example, just in the last couple of years models in the language domain have grown from billions of parameters trained on tens of billions of tokens of data (e.g., the 11B parameter T5 model), to hundreds of billions or trillions of parameters trained on trillions of tokens of data (e.g., dense models such as OpenAI’s 175B parameter GPT-3 model and DeepMind’s 280B parameter Gopher model, and sparse models such as Google’s 600B parameter GShard model and 1.2T parameter GLaM model). These increases in dataset and model size have led to significant increases in accuracy for a wide variety of language tasks, as shown by across-the-board improvements on standard natural language processing (NLP) benchmark tasks (as predicted by work on neural scaling laws for language models and machine translation models).
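As a rough illustration of what such a scaling law means in practice, the toy sketch below fits a power law to a handful of invented (model size, loss) pairs; the numbers are made up and only the shape of the argument matters.

```python
import numpy as np

# Invented (model size, validation loss) pairs, purely for illustration.
params = np.array([1e8, 1e9, 1e10, 1e11])
loss = np.array([3.1, 2.6, 2.2, 1.9])

# Fit loss ~ a * N**(-alpha) by linear regression in log-log space.
slope, intercept = np.polyfit(np.log(params), np.log(loss), 1)
alpha, a = -slope, np.exp(intercept)

print(f"fitted exponent alpha ~ {alpha:.3f}")
# Extrapolating to larger models is the risky part of any scaling-law argument.
print(f"predicted loss at 1e12 params ~ {a * 1e12 ** (-alpha):.2f}")
```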
Many of these advanced models are focused on the single but important modality of written language and have shown state-of-the-art results in language understanding benchmarks and open-ended conversational abilities, even across multiple tasks in a domain. They have also shown exciting capabilities to generalize to new language tasks with relatively little training data, in some cases, with few to no training examples for a new task. A couple of examples include improved long-form question answering, zero-label learning in NLP, and our LaMDA model, which demonstrates a sophisticated ability to carry on open-ended conversations that maintain significant context across multiple turns of dialog.
A dialog with LaMDA mimicking a Weddell seal with the preset grounding prompt, “Hi I’m a weddell seal. Do you have any questions for me?” The model largely holds down a dialog in character. (Weddell Seal image cropped from Wikimedia CC licensed image.)
Transformer models are also having a major impact in image, video, and speech models, all of which also benefit significantly from scale, as predicted by work on scaling laws for visual transformer models. Transformers for image recognition and for video classification are achieving state-of-the-art results on many benchmarks, and we’ve also demonstrated that co-training models on both image data and video data can improve performance on video tasks compared with video data alone. We’ve developed sparse, axial attention mechanisms for image and video transformers that use computation more efficiently, found better ways of tokenizing images for visual transformer models, and improved our understanding of visual transformer methods by examining how they operate compared with convolutional neural networks. Combining transformer models with convolutional operations has shown significant benefits in visual as well as speech recognition tasks.
The outputs of generative models are also substantially improving. This is most apparent in generative models for images, which have made significant strides over the last few years. For example, recent models have demonstrated the ability to create realistic images given just a category (e.g., "irish setter" or "streetcar", if you desire), can "fill in" a low-resolution image to create a natural-looking high-resolution counterpart ("computer, enhance!"), and can even create natural-looking aerial nature scenes of arbitrary length. As another example, images can be converted to a sequence of discrete tokens that can then be synthesized at high fidelity with an autoregressive generative model.
Example of cascade diffusion models that generate novel images from a given category and then use those as the seed to create high-resolution examples: the first model generates a low-resolution image, and the rest perform upsampling to the final high-resolution image.
The SR3 super-resolution diffusion model takes as input a low-resolution image, and builds a corresponding high-resolution image from pure noise.
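To make the "image as a sequence of discrete tokens" idea above concrete, the toy sketch below samples such a sequence autoregressively; the learned transformer and the image decoder are replaced by random stand-ins, so this only illustrates the sampling loop, not any particular model.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, seq_len = 512, 16            # e.g. 16 codebook indices standing in for image patches

# Toy "model": fixed bigram logits instead of a learned transformer.
bigram_logits = rng.normal(size=(vocab, vocab))

def sample_tokens():
    """Autoregressive sampling: each discrete token is drawn conditioned on
    the previous one; a real model conditions on the whole prefix, and a
    separate decoder maps the tokens back to pixels."""
    tokens = [int(rng.integers(vocab))]
    for _ in range(seq_len - 1):
        logits = bigram_logits[tokens[-1]]
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        tokens.append(int(rng.choice(vocab, p=probs)))
    return tokens

print(sample_tokens())
```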
Because these are powerful capabilities that come with great responsibility, we carefully vet potential applications of these sorts of models against our AI Principles.
Beyond advanced single-modality models, we are also starting to see large-scale multi-modal models. These are some of the most advanced models to date because they can accept multiple different input modalities (e.g., language, images, speech, video) and, in some cases, produce different output modalities, for example, generating images from descriptive sentences or paragraphs, or describing the visual content of images in human languages. This is an exciting direction because like the real world, some things are easier to learn in data that is multimodal (e.g., reading about something and seeing a demonstration is more useful than just reading about it). As such, pairing images and text can help with multi-lingual retrieval tasks, and better understanding of how to pair text and image inputs can yield improved results for image captioning tasks. Similarly, jointly training on visual and textual data can also help improve accuracy and robustness on visual classification tasks, while co-training on image, video, and audio tasks improves generalization performance for all modalities. There are also tantalizing hints that natural language can be used as an input for image manipulation, telling robots how to interact with the world and controlling other software systems, portending potential changes to how user interfaces are developed. Modalities handled by these models will include speech, sounds, images, video, and languages, and may even extend to structured data, knowledge graphs, and time series data.
Example of a vision-based robotic manipulation system that is able to generalize to novel tasks. Left: The robot is performing a task described in natural language to the robot as “place grapes in ceramic bowl”, without the model being trained on that specific task. Right: As on the left, but with the novel task description of “place bottle in tray”.
Often these models are trained using self-supervised learning approaches, where the model learns from observations of “raw” data that has not been curated or labeled, e.g., language models used in GPT-3 and GLaM, the self-supervised speech model BigSSL, the visual contrastive learning model SimCLR, and the multimodal contrastive model VATT. Self-supervised learning allows a large speech recognition model to match the previous Voice Search automatic speech recognition (ASR) benchmark accuracy while using only 3% of the annotated training data. These trends are exciting because they can substantially reduce the effort required to enable ML for a particular task, and because they make it easier (though by no means trivial) to train models on more representative data that better reflects different subpopulations, regions, languages, or other important dimensions of representation.
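To give a flavor of what a contrastive self-supervised objective looks like, here is a simplified, one-directional InfoNCE-style loss in NumPy. It is a toy stand-in for the SimCLR-family objectives mentioned above, with invented batch and embedding sizes, not any production implementation.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """Simplified, one-directional contrastive loss: z1[i] and z2[i] are
    embeddings of two augmented views of the same example, and every other
    row of z2 acts as a negative."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature                     # pairwise similarities
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.diag(log_probs).mean()                    # positives sit on the diagonal

rng = np.random.default_rng(0)
views_a = rng.normal(size=(8, 32))                       # stand-ins for encoder outputs
views_b = views_a + 0.05 * rng.normal(size=(8, 32))
print(info_nce_loss(views_a, views_b))
```

Minimizing a loss of this shape pulls the two views of each example together while pushing them away from the rest of the batch, which is what lets the encoder learn from unlabeled data.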
All of these trends are pointing in the direction of training highly capable general-purpose models that can handle multiple modalities of data and solve thousands or millions of tasks. By building in sparsity, so that the only parts of a model that are activated for a given task are those that have been optimized for it, these multimodal models can be made highly efficient. Over the next few years, we are pursuing this vision in a next-generation architecture and umbrella effort called Pathways. We expect to see substantial progress in this area, as we combine together many ideas that to date have been pursued relatively independently.
Pathways: a depiction of a single model we are working towards that can generalize across millions of tasks.
## Trend 2: Continued Efficiency Improvements for ML
Improvements in efficiency — arising from advances in computer hardware design as well as ML algorithms and meta-learning research — are driving greater capabilities in ML models. Many aspects of the ML pipeline, from the hardware on which a model is trained and executed to individual components of the ML architecture, can be optimized for efficiency while maintaining or improving on state-of-the-art performance overall. Each of these different threads can improve efficiency by a significant multiplicative factor, and taken together, can reduce computational costs, including CO2 equivalent emissions (CO2e), by orders of magnitude compared to just a few years ago. This greater efficiency has enabled a number of critical advances that will continue to dramatically improve the efficiency of machine learning, enabling larger, higher quality ML models to be developed cost effectively and further democratizing access. I’m very excited about these directions of research!
### Continued Improvements in ML Accelerator Performance
Each generation of ML accelerator improves on previous generations, enabling faster performance per chip, and often increasing the scale of the overall systems. Last year, we announced our TPUv4 systems, the fourth generation of Google’s Tensor Processing Unit, which demonstrated a 2.7x improvement over comparable TPUv3 results in the MLPerf benchmarks. Each TPUv4 chip has ~2x the peak performance per chip versus the TPUv3 chip, and the scale of each TPUv4 pod is 4096 chips (4x that of TPUv3 pods), yielding a performance of approximately 1.1 exaflops per pod (versus ~100 petaflops per TPUv3 pod). Having pods with larger numbers of chips that are connected together with high speed networks improves efficiency for larger models.
ML capabilities on mobile devices are also increasing significantly. The Pixel 6 phone features a brand new Google Tensor processor that integrates a powerful ML accelerator to better support important on-device features.
Left: TPUv4 board; Center: Part of a TPUv4 pod; Right: Google Tensor chip found in Pixel 6 phones.
Our use of ML to accelerate the design of computer chips of all kinds (more on this below) is also paying dividends, particularly to produce better ML accelerators.
### Continued Improvements in ML Compilation and Optimization of ML Workloads
Even when the hardware is unchanged, improvements in compilers and other optimizations in system software for machine learning accelerators can lead to significant improvements in efficiency. For example, “A Flexible Approach to Autotuning Multi-pass Machine Learning Compilers” shows how to use machine learning to perform auto-tuning of compilation settings to get across-the-board performance improvements of 5-15% (and sometimes as much as 2.4x improvement) for a suite of ML programs on the same underlying hardware. GSPMD describes an automatic parallelization system based on the XLA compiler that is capable of scaling most deep learning network architectures beyond the memory capacity of an accelerator and has been applied to many large models, such as GShard-M4, LaMDA, BigSSL, ViT, MetNet-2, and GLaM, leading to state-of-the-art results across several domains.
### Human-Creativity–Driven Discovery of More Efficient Model Architectures
Continued improvements in model architectures give substantial reductions in the amount of computation needed to achieve a given level of accuracy for many problems. For example, the Transformer architecture, which we developed in 2017, was able to improve the state of the art on several NLP and translation benchmarks while simultaneously using 10x to 100x less computation to achieve these results than a variety of other prevalent methods, such as LSTMs and other recurrent architectures. Similarly, the Vision Transformer was able to show improved state-of-the-art results on a number of different image classification tasks despite using 4x to 10x less computation than convolutional neural networks.
### Machine-Driven Discovery of More Efficient Model Architectures
Neural architecture search (NAS) can automatically discover new ML architectures that are more efficient for a given problem domain. A primary advantage of NAS is that it can greatly reduce the effort needed for algorithm development, because NAS requires only a one-time effort per search space and problem domain combination. In addition, while the initial effort to perform NAS can be computationally expensive, the resulting models can greatly reduce computation in downstream research and production settings, resulting in greatly reduced resource requirements overall. For example, the one-time search to discover the Evolved Transformer generated only 3.2 tons of CO2e (much less than the 284t CO2e reported elsewhere; see Appendix C and D in this joint Google/UC Berkeley preprint), but yielded a model for use by anyone in the NLP community that is 15-20% more efficient than the plain Transformer model. A more recent use of NAS discovered an even more efficient architecture called Primer (that has also been open-sourced), which reduces training costs by 4x compared to a plain Transformer model. In this way, the discovery costs of NAS searches are often recouped from the use of the more-efficient model architectures that are discovered, even if they are applied to only a handful of downstream uses (and many NAS results are reused thousands of times).
The Primer architecture discovered by NAS is 4x as efficient compared with a plain Transformer model. This image shows (in red) the two main modifications that give Primer most of its gains: depthwise convolution added to attention multi-head projections and squared ReLU activations (blue indicates portions of the original Transformer).
NAS has also been used to discover more efficient models in the vision domain. The EfficientNetV2 model architecture is the result of a neural architecture search that jointly optimizes for model accuracy, model size, and training speed. On the ImageNet benchmark, EfficientNetV2 improves training speed by 5–11x while substantially reducing model size over previous state-of-the-art models. The CoAtNet model architecture was created with an architecture search that uses ideas from the Vision Transformer and convolutional networks to create a hybrid model architecture that trains 4x faster than the Vision Transformer and achieves a new ImageNet state of the art.
EfficientNetV2 achieves much better training efficiency than prior models for ImageNet classification.
The broad use of search to help improve ML model architectures and algorithms, including the use of reinforcement learning and evolutionary techniques, has inspired other researchers to apply this approach to different domains. To aid others in creating their own model searches, we have open-sourced Model Search, a platform that enables others to explore model search for their domains of interest. In addition to model architectures, automated search can also be used to find new, more efficient reinforcement learning algorithms, building on the earlier AutoML-Zero work that demonstrated this approach for automating supervised learning algorithm discovery.
### Use of Sparsity
Sparsity, where a model has a very large capacity, but only some parts of the model are activated for a given task, example or token, is another important algorithmic advance that can greatly improve efficiency. In 2017, we introduced the sparsely-gated mixture-of-experts layer, which demonstrated better results on a variety of translation benchmarks while using 10x less computation than previous state-of-the-art dense LSTM models. More recently, Switch Transformers, which pair a mixture-of-experts–style architecture with the Transformer model architecture, demonstrated a 7x speedup in training time and efficiency over the dense T5-Base Transformer model. The GLaM model showed that transformers and mixture-of-expert–style layers can be combined to produce a model that exceeds the accuracy of the GPT-3 model on average across 29 benchmarks using 3x less energy for training and 2x less computation for inference. The notion of sparsity can also be applied to reduce the cost of the attention mechanism in the core Transformer architecture.
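To make the expert-routing side of this concrete, here is a minimal NumPy sketch of the top-1, Switch-style routing described above. The sizes and weights are placeholders, and a real implementation would add load-balancing losses and run inside an ML framework rather than plain NumPy.

```python
import numpy as np

def switch_layer(tokens, w_router, experts):
    """Toy top-1 ("Switch"-style) mixture-of-experts layer: each token is
    routed to a single expert, so only a fraction of the parameters are
    touched per token even though total capacity is large."""
    router_logits = tokens @ w_router                      # [n_tokens, n_experts]
    probs = np.exp(router_logits - router_logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    chosen = probs.argmax(axis=1)                          # top-1 expert per token
    out = np.zeros_like(tokens)
    for e, expert_weights in enumerate(experts):
        mask = chosen == e
        if mask.any():
            # Scale by the router probability, which is what keeps routing
            # differentiable in a framework-based implementation.
            out[mask] = (tokens[mask] @ expert_weights) * probs[mask, e:e + 1]
    return out

rng = np.random.default_rng(0)
d, n_experts = 16, 4
tokens = rng.normal(size=(32, d))
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
w_router = rng.normal(size=(d, n_experts))
print(switch_layer(tokens, w_router, experts).shape)       # (32, 16)
```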
The BigBird sparse attention model consists of global tokens that attend to all parts of an input sequence, local tokens, and a set of random tokens. Theoretically, this can be interpreted as adding a few global tokens on a Watts-Strogatz graph.
The use of sparsity in models is clearly an approach with very high potential payoff in terms of computational efficiency, and we are only scratching the surface in terms of research ideas to be tried in this direction.
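As a companion sketch, the block-sparse attention pattern in the figure above can be written down as a simple boolean mask; the sizes below are arbitrary, and real implementations work block-wise for efficiency rather than materializing a dense mask.

```python
import numpy as np

def bigbird_mask(seq_len, n_global=2, window=3, n_random=2, seed=0):
    """Toy BigBird-style mask: query i may attend to key j if either token is
    global, j is within a local window of i, or (i, j) is one of a few random
    links; everything else is masked out."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    mask[:n_global, :] = True                      # global tokens attend everywhere
    mask[:, :n_global] = True                      # everyone attends to global tokens
    for i in range(seq_len):
        lo, hi = max(0, i - window), min(seq_len, i + window + 1)
        mask[i, lo:hi] = True                      # local sliding window
        mask[i, rng.choice(seq_len, size=n_random, replace=False)] = True  # random links
    return mask

m = bigbird_mask(16)
print(m.sum(), "allowed pairs out of", m.size)     # far fewer than full attention
```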
Each of these approaches for improved efficiency can be combined together so that equivalent-accuracy language models trained today in efficient data centers are ~100 times more energy efficient and produce ~650 times less CO2e emissions, compared to a baseline Transformer model trained using P100 GPUs in an average U.S. datacenter using an average U.S. energy mix. And this doesn’t even account for Google’s carbon-neutral, 100% renewable energy offsets. We’ll have a more detailed blog post analyzing the carbon emissions trends of NLP models soon.
## Trend 3: ML Is Becoming More Personally and Communally Beneficial
A host of new experiences are made possible as innovation in ML and silicon hardware (like the Google Tensor processor on the Pixel 6) enable mobile devices to be more capable of continuously and efficiently sensing their surrounding context and environment. These advances have improved accessibility and ease of use, while also boosting computational power, which is critical for popular features like mobile photography, live translation and more. Remarkably, recent technological advances also provide users with a more customized experience while strengthening privacy safeguards.
More people than ever rely on their phone cameras to record their daily lives and for artistic expression. The clever application of ML to computational photography has continued to advance the capabilities of phone cameras, making them easier to use, improving performance, and resulting in higher-quality images. Advances, such as improved HDR+, the ability to take pictures in very low light, better handling of portraits, and efforts to make cameras more inclusive so they work for all skin tones, yield better photos that are more true to the photographer’s vision and to their subjects. Such photos can be further improved using the powerful ML-based tools now available in Google Photos, like cinematic photos, noise and blur reduction, and the Magic Eraser.
In addition to using their phones for creative expression, many people rely on them to help communicate with others across languages and modalities in real-time using Live Translate in messaging apps and Live Caption for phone calls. Speech recognition accuracy has continued to make substantial improvements thanks to techniques like self-supervised learning and noisy student training, with marked improvements for accented speech, noisy conditions or environments with overlapping speech, and across many languages. Building on advances in text-to-speech synthesis, people can listen to web pages and articles using our Read Aloud technology on a growing number of platforms, making information more available across barriers of modality and languages. Live speech translations in the Google Translate app have become significantly better by stabilizing the translations that are generated on-the-fly, and high quality, robust and responsible direct speech-to-speech translation provides a much better user experience in communicating with people speaking a different language. New work on combining ML with traditional codec approaches in the Lyra speech codec and the more general SoundStream audio codec enables higher fidelity speech, music, and other sounds to be communicated reliably at much lower bitrate.
Everyday interactions are becoming much more natural with features like automatic call screening and ML agents that will wait on hold for you, thanks to advances in Duplex. Even short tasks that users may perform frequently have been improved with tools such as Smart Text Selection, which automatically selects entities like phone numbers or addresses for easy copy and pasting, and grammar correction as you type on Pixel 6 phones. In addition, Screen Attention prevents the phone screen from dimming when you are looking at it and improvements in gaze recognition are opening up new use cases for accessibility and for improved wellness and health. ML is also enabling new methods for ensuring the safety of people and communities. For example, Suspicious Message Alerts warn against possible phishing attacks and Safer Routing detects hard-braking events to suggest alternate routes.
Recent work demonstrates the potential of gaze recognition as an important biomarker of mental fatigue.
Given the potentially sensitive nature of the data that underlies these new capabilities, it is essential that they are designed to be private by default. Many of them run inside of Android's Private Compute Core — an open source, secure environment isolated from the rest of the operating system. Android ensures that data processed in the Private Compute Core is not shared to any apps without the user taking an action. Android also prevents any feature inside the Private Compute Core from having direct access to the network. Instead, features communicate over a small set of open-source APIs to Private Compute Services, which strips out identifying information and makes use of privacy technologies, including federated learning, federated analytics, and private information retrieval, enabling learning while simultaneously ensuring privacy.
Federated Reconstruction is a novel partially local federated learning technique in which models are partitioned into global and local parameters. For each round of Federated Reconstruction training: (1) The server sends the current global parameters g to each user i; (2) Each user i freezes g and reconstructs their local parameters l_i; (3) Each user i freezes l_i and updates g to produce g_i; (4) Users’ g_i are averaged to produce the global parameters for the next round.
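A toy NumPy rendering of the four steps in that caption is shown below. The additive split into global and local parameters and the least-squares loss are simplifications for illustration only; the actual technique partitions model layers and uses real client training.

```python
import numpy as np

def federated_reconstruction_round(global_params, user_datasets, lr=0.1, steps=5):
    """Toy version of the partially local round described in the figure: the
    model is split into shared global parameters g and local parameters l_i
    that never leave the device. The loss is a least-squares stand-in."""
    updates = []
    for x, y in user_datasets:
        # (2) Reconstruct local parameters with the global ones frozen.
        local = np.zeros(x.shape[1])
        for _ in range(steps):
            err = x @ (global_params + local) - y
            local -= lr * x.T @ err / len(y)
        # (3) Update the global parameters with the local ones frozen.
        g = global_params.copy()
        for _ in range(steps):
            err = x @ (g + local) - y
            g -= lr * x.T @ err / len(y)
        updates.append(g)
    # (4) The server averages the users' global updates; each l_i stays on-device.
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
g = np.zeros(4)
users = [(rng.normal(size=(20, 4)), rng.normal(size=20)) for _ in range(3)]
for _ in range(10):
    g = federated_reconstruction_round(g, users)
print(g)
```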
These technologies are critical to evolving next-generation computation and interaction paradigms, whereby personal or communal devices can both learn from and contribute to training a collective model of the world without compromising privacy. A federated unsupervised approach to privately learn the kinds of aforementioned general-purpose models with fine-tuning for a given task or context could unlock increasingly intelligent systems that are far more intuitive to interact with — more like a social entity than a machine. Broad and equitable access to these intelligent interfaces will only be possible with deep changes to our technology stacks, from the edge to the datacenter, so that they properly support neural computing.
## Trend 4: Growing Impact of ML in Science, Health and Sustainability
In recent years, we have seen an increasing impact of ML in the basic sciences, from physics to biology, with a number of exciting practical applications in related realms, such as renewable energy and medicine. Computer vision models have been deployed to address problems at both personal and global scales. They can assist physicians in their regular work, expand our understanding of neural physiology, and also provide better weather forecasts and streamline disaster relief efforts. Other types of ML models are proving critical in addressing climate change by discovering ways to reduce emissions and improving the output of alternative energy sources. Such models can even be leveraged as creative tools for artists! As ML becomes more robust, well-developed, and widely accessible, its potential for high-impact applications in a broad array of real-world domains continues to expand, helping to solve some of our most challenging problems.
### Large-Scale Application of Computer Vision for New Insights
The advances in computer vision over the past decade have enabled computers to be used for a wide variety of tasks across different scientific domains. In neuroscience, automated reconstruction techniques can recover the neural connective structure of brain tissues from high resolution electron microscopy images of thin slices of brain tissue. In previous years, we have collaborated to create such resources for fruit fly, mouse, and songbird brains, but last year, we collaborated with the Lichtman Lab at Harvard University to analyze the largest sample of brain tissue imaged and reconstructed in this level of detail, in any species, and produced the first large-scale study of synaptic connectivity in the human cortex that spans multiple cell types across all layers of the cortex. The goal of this work is to produce a novel resource to assist neuroscientists in studying the stunning complexity of the human brain. The image below, for example, shows six neurons out of about 86 billion neurons in an adult human brain.
A single human chandelier neuron from our human cortex reconstruction, along with some of the pyramidal neurons that make a connection with that cell. Here’s an interactive version and a gallery of other interactive examples.

Computer vision technology also provides powerful tools to address challenges at much larger, even global, scales. A deep-learning–based approach to weather forecasting that uses satellite and radar imagery as inputs, combined with other atmospheric data, produces weather and precipitation forecasts that are more accurate than traditional physics-based models at forecasting times up to 12 hours. They can also produce updated forecasts much more quickly than traditional methods, which can be critical in times of extreme weather.
Comparison of 0.2 mm/hr precipitation on March 30, 2020 over Denver, Colorado. Left: Ground truth, source MRMS. Center: Probability map as predicted by MetNet-2. Right: Probability map as predicted by the physics-based HREF model. MetNet-2 is able to predict the onset of the storm earlier in the forecast than HREF as well as the storm’s starting location, whereas HREF misses the initiation location, but captures its growth phase well.

Having an accurate record of building footprints is essential for a range of applications, from population estimation and urban planning to humanitarian response and environmental science. In many parts of the world, including much of Africa, this information wasn’t previously available, but new work shows that using computer vision techniques applied to satellite imagery can help identify building boundaries at continental scales. The results of this approach have been released in the Open Buildings dataset, a new open-access data resource that contains the locations and footprints of 516 million buildings with coverage across most of the African continent. We’ve also been able to use this unique dataset in our collaboration with the World Food Programme to provide fast damage assessment after natural disasters through application of ML.
A common theme across each of these cases is that ML models are able to perform specialized tasks efficiently and accurately based on analysis of available visual data, supporting high impact downstream tasks.
### Automated Design Space Exploration
Another approach that has yielded excellent results across many fields is to allow an ML algorithm to explore and evaluate a problem’s design space for possible solutions in an automated way. In one application, a Transformer-based variational autoencoder learns to create aesthetically-pleasing and useful document layouts, and the same approach can be extended to explore possible furniture layouts. Another ML-driven approach automates the exploration of the huge design space of tweaks for computer game rules to improve playability and other attributes of a game, enabling human game designers to create enjoyable games more quickly.
A visualization of the Variational Transformer Network (VTN) model, which is able to extract meaningful relationships between the layout elements (paragraphs, tables, images, etc.) in order to generate realistic synthetic documents (e.g., with better alignment and margins).

Other ML algorithms have been used to evaluate the design space of computer architectural decisions for ML accelerator chips themselves. We’ve also shown that ML can be used to quickly create chip placements for ASIC designs that are better than layouts generated by human experts and can be generated in a matter of hours instead of weeks. This reduces the fixed engineering costs of chips and lowers the barrier to quickly creating specialized hardware for different applications. We’ve successfully used this automated placement approach in the design of our upcoming TPU-v5 chip.
Such exploratory ML approaches have also been applied to materials discovery. In a collaboration between Google Research and Caltech, several ML models, combined with a modified inkjet printer and a custom-built microscope, were able to rapidly search over hundreds of thousands of possible materials to hone in on 51 previously uncharacterized three-metal oxide materials with promising properties for applications in areas like battery technology and electrolysis of water.
These automated design space exploration approaches can help accelerate many scientific fields, especially when the entire experimental loop of generating the experiment and evaluating the result can all be done in an automated or mostly-automated manner. I expect to see this approach applied to good effect in many more areas in the coming years.
### Application to Health
In addition to advancing basic science, ML can also drive advances in medicine and human health more broadly. The idea of leveraging advances in computer science in health is nothing new — in fact some of my own early experiences were in developing software to help analyze epidemiological data. But ML opens new doors, raises new opportunities, and yes, poses new challenges.
Take for example the field of genomics. Computing has been important to genomics since its inception, but ML adds new capabilities and disrupts old paradigms. When Google researchers began working in this area, the idea of using deep learning to help infer genetic variants from sequencer output was considered far-fetched by many experts. Today, this ML approach is considered state-of-the-art. But the future holds an even more important role for ML — genomics companies are developing new sequencing instruments that are more accurate and faster, but also present new inference challenges. Our release of open-source software DeepConsensus and, in collaboration with UCSC, PEPPER-DeepVariant, supports these new instruments with cutting-edge informatics. We hope that more rapid sequencing can lead to near term applicability with impact for real patients.
A schematic of the Transformer architecture for DeepConsensus, which corrects sequencing errors to improve yield and correctness.

There are other opportunities to use ML to accelerate our use of genomic information for personalized health outside of processing the sequencer data. Large biobanks of extensively phenotyped and sequenced individuals can revolutionize how we understand and manage genetic predisposition to disease. Our ML-based phenotyping method improves the scalability of converting large imaging and text datasets into phenotypes usable for genetic association studies, and our DeepNull method better leverages large phenotypic data for genetic discovery. We are happy to release both as open-source methods for the scientific community.
The process for generating large-scale quantification of anatomical and disease traits for combination with genomic data in biobanks.

Just as ML helps us see hidden characteristics of genomics data, it can help us discover new information and glean new insights from other health data types as well. Diagnosis of disease is often about identifying a pattern, quantifying a correlation, or recognizing a new instance of a larger class — all tasks at which ML excels. Google researchers have used ML to tackle a wide range of such problems, but perhaps none of these has progressed farther than the applications of ML to medical imaging.
In fact, Google’s 2016 paper describing the application of deep learning to screening for diabetic retinopathy was selected by the editors of the Journal of the American Medical Association (JAMA) as one of the top 10 most influential papers of the decade — not just among the most influential papers on ML and health, but among the most influential JAMA papers of the decade overall. The strength of our research doesn’t end at contributions to the literature; it extends to our ability to build systems operating in the real world. Through our global network of deployment partners, this same program has helped screen tens of thousands of patients in India, Thailand, Germany and France who might otherwise have been untested for this vision-threatening disease.
We expect to see this same pattern of assistive ML systems deployed to improve breast cancer screening, detect lung cancer, accelerate radiotherapy treatments for cancer, flag abnormal X-rays, and stage prostate cancer biopsies. Each domain presents new opportunities to be helpful. ML-assisted colonoscopy procedures are a particularly interesting example of going beyond the basics. Colonoscopies are not just used to diagnose colon cancer — the removal of polyps during the procedure are the front line of halting disease progression and preventing serious illness. In this domain we’ve demonstrated that ML can help ensure doctors don’t miss polyps, can help detect elusive polyps, and can add new dimensions of quality assurance, like coverage mapping through the application of simultaneous localization and mapping techniques. In collaboration with Shaare Zedek Medical Center in Jerusalem, we’ve shown these systems can work in real time, detecting an average of one polyp per procedure that would have otherwise been missed, with fewer than four false alarms per procedure.
Another ambitious healthcare initiative, Care Studio, uses state-of-the-art ML and advanced NLP techniques to analyze structured data and medical notes, presenting clinicians with the most relevant information at the right time — ultimately helping them deliver more proactive and accurate care.
As important as ML may be to expanding access and improving accuracy in the clinical setting, we see a new, equally important trend emerging: ML applied to help people in their daily health and well-being. Our everyday devices have powerful sensors that can help democratize health metrics and information so people can make more informed decisions about their health. We’ve already seen launches that enable a smartphone camera to assess heart rate and respiratory rate to help users without additional hardware, and Nest Hub devices that support contactless sleep sensing and allow users to better understand their nighttime wellness. We’ve seen that we can, on the one hand, significantly improve speech recognition quality for disordered speech in our own ASR systems, and on the other, use ML to help recreate the voice of those with speech impairments, empowering them to communicate in their own voice. ML-enabled smartphones that help people better research emerging skin conditions or help those with limited vision go for a jog seem to be just around the corner. These opportunities offer a future too bright to ignore.
### ML Applications for the Climate Crisis
Another realm of paramount importance is climate change, which is an incredibly urgent threat for humanity. We need to all work together to bend the curve of harmful emissions to ensure a safe and prosperous future. Better information about the climate impact of different choices can help us tackle this challenge in a number of different ways.
To this end, we recently rolled out eco-friendly routing in Google Maps, which we estimate will save about 1 million tons of CO2 emissions per year (the equivalent of removing more than 200,000 cars from the road). A recent case study shows that using Google Maps directions in Salt Lake City results in both faster and more emissions-friendly routing, which saves 1.7% of CO2 emissions and 6.5% travel time. In addition, making our Maps products smarter about electric vehicles can help alleviate range anxiety, encouraging people to switch to emissions-free vehicles. We are also working with multiple municipalities around the world to use aggregated historical traffic data to help suggest improved traffic light timing settings, with an early pilot study in Israel and Brazil showing a 10-20% reduction in fuel consumption and delay time at the examined intersections.
With eco-friendly routing, Google Maps will show you the fastest route and the one that’s most fuel-efficient — so you can choose whichever one works best for you.

On a longer time scale, fusion holds promise as a game-changing renewable energy source. In a long-standing collaboration with TAE Technologies, we have used ML to help maintain stable plasmas in their fusion reactor by suggesting settings of the more than 1000 relevant control parameters. With our collaboration, TAE achieved their major goals for their Norman reactor, which brings us a step closer to the goal of breakeven fusion. The machine maintains a stable plasma at 30 million Kelvin (don’t touch!) for 30 milliseconds, which is the extent of available power to its systems. They have completed a design for an even more powerful machine, which they hope will demonstrate the conditions necessary for breakeven fusion before the end of the decade.
We’re also expanding our efforts to address wildfires and floods, which are becoming more common (like millions of Californians, I’m having to adapt to having a regular “fire season”). Last year, we launched a wildfire boundary map powered by satellite data to help people in the U.S. easily understand the approximate size and location of a fire — right from their device. Building on this, we’re now bringing all of Google’s wildfire information together and launching it globally with a new layer on Google Maps. We have been applying graph optimization algorithms to help optimize fire evacuation routes to help keep people safe in the presence of rapidly advancing fires. In 2021, our Flood Forecasting Initiative expanded its operational warning systems to cover 360 million people, and sent more than 115 million notifications directly to the mobile devices of people at risk from flooding, more than triple our outreach in the previous year. We also deployed our LSTM-based forecast models and the new Manifold inundation model in real-world systems for the first time, and shared a detailed description of all components of our systems.
The wildfire layer in Google Maps provides people with critical, up-to-date information in an emergency.

We’re also working hard on our own set of sustainability initiatives. Google was the first major company to become carbon neutral in 2007. We were also the first major company to match our energy use with 100 percent renewable energy in 2017. We operate the cleanest global cloud in the industry, and we’re the world’s largest corporate purchaser of renewable energy. Further, in 2020 we became the first major company to make a commitment to operate on 24/7 carbon-free energy in all our data centers and campuses worldwide. This is far more challenging than the traditional approach of matching energy usage with renewable energy, but we’re working to get this done by 2030. Carbon emissions from ML model training are a concern for the ML community, and we have shown that making good choices about model architecture, datacenter, and ML accelerator type can reduce the carbon footprint of training by ~100-1000x.
## Trend 5: Deeper and Broader Understanding of ML
As ML is used more broadly across technology products and society more generally, it is imperative that we continue to develop new techniques to ensure that it is applied fairly and equitably, and that it benefits all people and not just select subsets. This is a major focus for our Responsible AI and Human-Centered Technology research group and an area in which we conduct research on a variety of responsibility-related topics.
One area of focus is recommendation systems that are based on user activity in online products. Because these recommendation systems are often composed of multiple distinct components, understanding their fairness properties often requires insight into individual components as well as how the individual components behave when combined together. Recent work has helped to better understand these relationships, revealing ways to improve the fairness of both individual components and the overall recommendation system. In addition, when learning from implicit user activity, it is also important for recommendation systems to learn in an unbiased manner, since the straightforward approach of learning from items that were shown to previous users exhibits well-known forms of bias. Without correcting for such biases, for example, items that were shown in more prominent positions to users tend to get recommended to future users more often.
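As a purely illustrative aside (a standard textbook recipe, not necessarily the approach used in the work linked above), one common way to learn from logged clicks without inheriting position bias is inverse-propensity weighting: each click is down-weighted by the estimated probability that its display position was examined at all, so items that happened to be shown prominently do not automatically dominate future recommendations.

```python
# Inverse-propensity-weighted (IPS) scoring of items from logged clicks.
# The examination probabilities and the click log below are invented examples.
from collections import defaultdict

examine_prob = {1: 0.9, 2: 0.6, 3: 0.3}   # assumed P(position is examined)

log = [  # (item_id, display_position, clicked)
    ("a", 1, True), ("a", 1, False), ("b", 3, True),
    ("b", 3, False), ("b", 2, True), ("c", 2, False),
]

naive, debiased, shown = defaultdict(float), defaultdict(float), defaultdict(int)
for item, pos, clicked in log:
    shown[item] += 1
    if clicked:
        naive[item] += 1.0                         # biased toward items shown at the top
        debiased[item] += 1.0 / examine_prob[pos]  # IPS-corrected credit for the click

for item in sorted(shown):
    print(item, naive[item] / shown[item], debiased[item] / shown[item])
```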
As in recommendation systems, surrounding context is important in machine translation. Because most machine translation systems translate individual sentences in isolation, without additional surrounding context, they can often reinforce biases related to gender, age or other areas. In an effort to address some of these issues, we have a long-standing line of research on reducing gender bias in our translation systems, and to help the entire translation community, last year we released a dataset to study gender bias in translation based on translations of Wikipedia biographies.
Another common problem in deploying machine learning models is distributional shift: if the statistical distribution of data on which the model was trained is not the same as that of the data the model is given as input, the model’s behavior can sometimes be unpredictable. In recent work, we employ the Deep Bootstrap framework to compare the real world, where there is finite training data, to an "ideal world", where there is infinite data. Better understanding of how a model behaves in these two regimes (real vs. ideal) can help us develop models that generalize better to new settings and exhibit less bias towards fixed training datasets.
Although work on ML algorithms and model development gets significant attention, data collection and dataset curation often gets less. But this is an important area, because the data on which an ML model is trained can be a potential source of bias and fairness issues in downstream applications. Analyzing such data cascades in ML can help identify the many places in the lifecycle of an ML project that can have substantial influence on the outcomes. This research on data cascades has led to evidence-backed guidelines for data collection and evaluation in the revised PAIR Guidebook, aimed at ML developers and designers.
Arrows of different colors indicate various types of data cascades, each of which typically originates upstream, compounds over the ML development process, and manifests downstream.
The general goal of better understanding data is an important part of ML research. One thing that can help is finding and investigating anomalous data. We have developed methods to better understand the influence that particular training examples can have on an ML model, since mislabeled data or other similar issues can have outsized impact on the overall model behavior. We have also built the Know Your Data tool to help ML researchers and practitioners better understand properties of their datasets, and last year we created a case study of how to use the Know Your Data tool to explore issues like gender bias and age bias in a dataset.
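To give a flavor of what "influence of a training example" can mean in practice, the sketch below scores influence by the similarity between a training point's loss gradient and a test point's loss gradient for a toy linear model. This gradient-similarity heuristic is one common approach in the literature, used here only as an illustration rather than as the specific method behind the work cited above.

```python
# Toy gradient-similarity estimate of training-example influence (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
X_train = rng.normal(size=(8, 4))
y_train = X_train @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.1 * rng.normal(size=8)
w = np.linalg.lstsq(X_train, y_train, rcond=None)[0]   # fitted linear model

def grad(x, y, w):
    # Gradient of the squared error 0.5 * (x.w - y)^2 with respect to w.
    return (x @ w - y) * x

x_test, y_test = rng.normal(size=4), 0.3
g_test = grad(x_test, y_test, w)

influence = np.array([grad(x, y, w) @ g_test for x, y in zip(X_train, y_train)])
# Training points with unusually large |influence| are good candidates to inspect
# for label errors or other data issues.
print(np.argsort(-np.abs(influence)))
```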
A screenshot from Know Your Data showing the relationship between words that describe attractiveness and gendered words. For example, “attractive” and “male/man/boy” co-occur 12 times, but we expect ~60 times by chance (the ratio is 0.2x). On the other hand, “attractive” and “female/woman/girl” co-occur 2.62 times more than chance.
Understanding dynamics of benchmark dataset usage is also important, given the central role they play in the organization of ML as a field. Although studies of individual datasets have become increasingly common, the dynamics of dataset usage across the field have remained underexplored. In recent work, we published the first large scale empirical analysis of dynamics of dataset creation, adoption, and reuse. This work offers insights into pathways to enable more rigorous evaluations, as well as more equitable and socially informed research.
Creating public datasets that are more inclusive and less biased is an important way to help improve the field of ML for everyone. In 2016, we released the Open Images dataset, a collection of ~9 million images annotated with image labels spanning thousands of object categories and bounding box annotations for 600 classes. Last year, we introduced the More Inclusive Annotations for People (MIAP) dataset in the Open Images Extended collection. The collection contains more complete bounding box annotations for the *person* class hierarchy, and each annotation is labeled with fairness-related attributes, including perceived gender presentation and perceived age range. With the increasing focus on reducing unfair bias as part of responsible AI research, we hope these annotations will encourage researchers already leveraging the Open Images dataset to incorporate fairness analysis in their research.
Because we also know that our teams are not the only ones creating datasets that can improve machine learning, we have built Dataset Search to help users discover new and useful datasets, wherever they might be on the Web.
Tackling various forms of abusive behavior online, such as toxic language, hate speech, and misinformation, is a core priority for Google. Being able to detect such forms of abuse reliably, efficiently, and at scale is of critical importance both to ensure that our platforms are safe and also to avoid the risk of reproducing such negative traits through language technologies that learn from online discourse in an unsupervised fashion. Google has pioneered work in this space through the Perspective API tool, but the nuances involved in detecting toxicity at scale remains a complex problem. In recent work, in collaboration with various academic partners, we introduced a comprehensive taxonomy to reason about the changing landscape of online hate and harassment. We also investigated how to detect covert forms of toxicity, such as microaggressions, that are often ignored in online abuse interventions, studied how conventional approaches to deal with disagreements in data annotations of such subjective concepts might marginalize minority perspectives, and proposed a new disaggregated modeling approach that uses a multi-task framework to tackle this issue. Furthermore, through qualitative research and network-level content analysis, Google’s Jigsaw team, in collaboration with researchers at George Washington University, studied how hate clusters spread disinformation across social media platforms.
Another potential concern is that ML language understanding and generation models can sometimes also produce results that are not properly supported by evidence. To confront this problem in question answering, summarization, and dialog, we developed a new framework for measuring whether results can be attributed to specific sources. We released annotation guidelines and demonstrated that they can be reliably used in evaluating candidate models.
Interactive analysis and debugging of models remains key to responsible use of ML. We have updated our Language Interpretability Tool with new capabilities and techniques to advance this line of work, including support for image and tabular data, a variety of features carried over from our previous work on the What-If Tool, and built-in support for fairness analysis through the technique of Testing with Concept Activation Vectors. Interpretability and explainability of ML systems more generally is also a key part of our Responsible AI vision; in collaboration with DeepMind, we made headway in understanding the acquisition of human chess concepts in the self-trained AlphaZero chess system.
Explore what AlphaZero might have learned about playing chess using this online tool.
We are also working hard to broaden the perspective of Responsible AI beyond western contexts. Our recent research examines how various assumptions of conventional algorithmic fairness frameworks based on Western institutions and infrastructures may fail in non-Western contexts and offers a pathway for recontextualizing fairness research in India along several directions. We are actively conducting survey research across several continents to better understand perceptions of and preferences regarding AI. Western framing of algorithmic fairness research tends to focus on only a handful of attributes, thus biases concerning non-Western contexts are largely ignored and empirically under-studied. To address this gap, in collaboration with the University of Michigan, we developed a weakly supervised method to robustly detect lexical biases in broader geo-cultural contexts in NLP models that reflect human judgments of offensive and inoffensive language in those geographic contexts.
Furthermore, we have explored applications of ML to contexts valued in the Global South, including developing a proposal for farmer-centered ML research. Through this work, we hope to encourage the field to be thoughtful about how to bring ML-enabled solutions to smallholder farmers in ways that will improve their lives and their communities.
Involving community stakeholders at all stages of the ML pipeline is key to our efforts to develop and deploy ML responsibly and keep us focused on tackling the problems that matter most. In this vein, we held a Health Equity Research Summit among external faculty, non-profit organization leads, government and NGO representatives, and other subject matter experts to discuss how to bring more equity into the entire ML ecosystem, from the way we approach problem-solving to how we assess the impact of our efforts.
Community-based research methods have also informed our approach to designing for digital wellbeing and addressing racial equity issues in ML systems, including improving our understanding of the experience of Black Americans using ASR systems. We are also listening to the public more broadly to learn how sociotechnical ML systems could help during major life events, such as by supporting family caregiving.
As ML models become more capable and have impact in many domains, the protection of the private information used in ML continues to be an important focus for research. Along these lines, some of our recent work addresses privacy in large models, both highlighting that training data can sometimes be extracted from large models and pointing to how privacy can be achieved in large models, e.g., as in differentially private BERT. In addition to the work on federated learning and analytics, mentioned above, we have also been enhancing our toolbox with other principled and practical ML techniques for ensuring differential privacy, for example private clustering, private personalization, private matrix completion, private weighted sampling, private quantiles, private robust learning of halfspaces, and in general, sample-efficient private PAC learning. Moreover, we have been expanding the set of privacy notions that can be tailored to different applications and threat models, including label privacy and user versus item level privacy.
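For readers unfamiliar with the mechanics, the common thread in the differentially private methods above is adding carefully calibrated noise so that any single person's data has a provably bounded effect on what is released. The sketch below shows the classic Laplace mechanism for a simple counting query; it illustrates the general principle only and is not the API of any particular library mentioned here.

```python
# Laplace mechanism: release a count with epsilon-differential privacy (illustrative).
import numpy as np

def dp_count(values, predicate, epsilon, rng=None):
    rng = rng or np.random.default_rng()
    true_count = sum(1 for v in values if predicate(v))
    sensitivity = 1.0  # adding or removing one person changes a count by at most 1
    return true_count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

ages = [23, 35, 41, 29, 62, 54, 37]
print(dp_count(ages, lambda a: a >= 40, epsilon=0.5))  # noisy count of people aged 40+
```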
A visual illustration of the differentially private clustering algorithm.
### Datasets
Recognizing the value of open datasets to the general advancement of ML and related fields of research, we continue to grow our collection of open source datasets and resources and expand our global index of open datasets in Google Dataset Search. This year, we have released a number of datasets and tools across a range of research areas:
Datasets & Tools | Description |
--- | --- |
AIST++ | 3D keypoints with corresponding images for dance motions covering 10 dance genres |
AutoFlow | 40k image pairs with ground truth optical flow |
Balloon Learning Environment | A stratospheric balloon simulator and benchmark environment for RL algorithms |
C4_200M | A 200 million sentence synthetic dataset for grammatical error correction |
CIFAR-5M | Dataset of ~6 million synthetic CIFAR-10–like images (RGB 32 x 32 pix) |
Crisscrossed Captions | Set of semantic similarity ratings for the MS-COCO dataset |
Disfl-QA | Dataset of contextual disfluencies for information seeking |
Distilled Datasets | Distilled datasets from CIFAR-10, CIFAR-100, MNIST, Fashion-MNIST, and SVHN |
EvolvingRL | 1000 top performing RL algorithms discovered through algorithm evolution |
GoEmotions | A human-annotated dataset of 58k Reddit comments labeled with 27 emotion categories |
H01 Dataset | 1.4 petabyte browsable reconstruction of the human cortex |
Know Your Data | Tool for understanding biases in a dataset |
Lens Flare | 5000 high-quality RGB images of typical lens flare |
More Inclusive Annotations for People (MIAP) | Improved bounding box annotations for a subset of the person class in the Open Images dataset |
Mostly Basic Python Problems | 1000 Python programming problems, incl. task description, code solution & test cases |
NIH ChestX-ray14 dataset labels | Expert labels for a subset of the NIH ChestX-ray14 dataset |
Open Buildings | Locations and footprints of 516 million buildings with coverage across most of Africa |
Optical Polarization from Curie | 5GB of optical polarization data from the Curie submarine cable |
Readability Scroll | Scroll interactions of ~600 participants reading texts from the OneStopEnglish corpus |
RLDS | Tools to store, retrieve & manipulate episodic data for reinforcement learning |
Room-Across-Room (RxR) | Multilingual dataset for vision-and-language navigation in English, Hindi and Telugu |
Soft Attributes | ~6k sets of movie titles annotated with single English soft attributes |
TimeDial | Dataset of multiple choice span-filling tasks for temporal commonsense reasoning in dialog |
ToTTo | English table-to-text generation dataset with a controlled text generation task |
Translated Wikipedia Biographies | Dataset for analysis of common gender errors in NMT for English, Spanish and German |
UI Understanding Data for UIBert | Datasets for two UI understanding tasks, AppSim & RefExp |
WikiFact | Wikipedia & WikiData–based dataset to train relationship classifiers and fact extraction models |
WIT | Wikipedia-based Image Text dataset for multimodal multilingual ML |
## Research Community Interaction
To realize our goal for a more robust and comprehensive understanding of ML and related technologies, we actively engage with the broader research community. In 2021, we published over 750 papers, nearly 600 of which were presented at leading research conferences. Google Research sponsored over 150 conferences, and Google researchers contributed directly by serving on program committees and organizing workshops, tutorials and numerous other activities aimed at collectively advancing the field. To learn more about our contributions to some of the larger research conferences this year, please see our recent conference blog posts. In addition, we hosted 19 virtual workshops (like the 2021 Quantum Summer Symposium), which allowed us to further engage with the academic community by generating new ideas and directions for the research field and advancing research initiatives.
In 2021, Google Research also directly supported external research with $59M in funding, including $23M through Research programs to faculty and students, and $20M in university partnerships and outreach. This past year, we introduced new funding and collaboration programs that support academics all over the world who are doing high impact research. We funded 86 early career faculty through our Research Scholar Program to support general advancements in science, and funded 34 faculty through our Award for Inclusion Research Program who are doing research in areas like accessibility, algorithmic fairness, higher education and collaboration, and participatory ML. In addition to the research we are funding, we welcomed 85 faculty and post-docs, globally, through our Visiting Researcher program, to come to Google and partner with us on exciting ideas and shared research challenges. We also selected a group of 74 incredibly talented PhD student researchers to receive Google PhD Fellowships and mentorship as they conduct their research.
As part of our ongoing racial equity commitments, making computer science (CS) research more inclusive continues to be a top priority for us. In 2021, we continued expanding our efforts to increase the diversity of Ph.D. graduates in computing. For example, the CS Research Mentorship Program (CSRMP), an initiative by Google Research to support students from historically marginalized groups (HMGs) in computing research pathways, graduated 590 mentees, 83% of whom self-identified as part of an HMG, who were supported by 194 Google mentors — our largest group to date! In October, we welcomed 35 institutions globally leading the way to engage 3,400+ students in computing research as part of the 2021 exploreCSR cohort. Since 2018, this program has provided faculty with funding, community, evaluation and connections to Google researchers in order to introduce students from HMGs to the world of CS research. We are excited to expand this program to more international locations in 2022.
We also continued our efforts to fund and partner with organizations to develop and support new pathways and approaches to broadening participation in computing research at scale. From working with alliances like the Computing Alliance of Hispanic-Serving Institutions (CAHSI) and CMD-IT Diversifying LEAdership in the Professoriate (LEAP) Alliance to partnering with university initiatives like UMBC’s Meyerhoff Scholars, Cornell University’s CSMore, Northeastern University’s Center for Inclusive Computing, and MIT’s MEnTorEd Opportunities in Research (METEOR), we are taking a community-based approach to materially increase the representation of marginalized groups in computing research.
## Other Work
In writing these retrospectives, I try to focus on new research work that has happened (mostly) in the past year while also looking ahead. In past years’ retrospectives, I’ve tried to be more comprehensive, but this time I thought it could be more interesting to focus on just a few themes. We’ve also done great work in many other research areas that don’t fit neatly into these themes. If you’re interested, I encourage you to check out our research publications by area below or by year (and if you’re interested in quantum computing, our Quantum team recently wrote a retrospective of their work in 2021):
## Conclusion
Research is often a multi-year journey to real-world impact. Early stage research work that happened a few years ago is now having a dramatic impact on Google’s products and across the world. Investments in ML hardware accelerators like TPUs and in software frameworks like TensorFlow and JAX have borne fruit. ML models are increasingly prevalent in many different products and features at Google because their power and ease of expression streamline experimentation and productionization of ML models in performance-critical environments. Research into model architectures to create Seq2Seq, Inception, EfficientNet, and Transformer or algorithmic research like batch normalization and distillation is driving progress in the fields of language understanding, vision, speech, and others. Basic capabilities like better language and visual understanding and speech recognition can be transformational, and as a result, these sorts of models are widely deployed for a wide variety of problems in many of our products including Search, Assistant, Ads, Cloud, Gmail, Maps, YouTube, Workspace, Android, Pixel, Nest, and Translate.
These are truly exciting times in machine learning and computer science. Continued improvement in computers’ ability to understand and interact with the world around them through language, vision, and sound opens up entire new frontiers of how computers can help people accomplish things in the world. The many examples of progress along the five themes outlined in this post are waypoints in a long-term journey!
## Acknowledgements
*Thanks to Alison Carroll, Alison Lentz, Andrew Carroll, Andrew Tomkins, Avinatan Hassidim, Azalia Mirhoseini, Barak Turovsky, Been Kim, Blaise Aguera y Arcas, Brennan Saeta, Brian Rakowski, Charina Chou, Christian Howard, Claire Cui, Corinna Cortes, Courtney Heldreth, David Patterson, Dipanjan Das, Ed Chi, Eli Collins, Emily Denton, Fernando Pereira, Genevieve Park, Greg Corrado, Ian Tenney, Iz Conroy, James Wexler, Jason Freidenfelds, John Platt, Katherine Chou, Kathy Meier-Hellstern, Kyle Vandenberg, Lauren Wilcox, Lizzie Dorfman, Marian Croak, Martin Abadi, Matthew Flegal, Meredith Morris, Natasha Noy, Negar Saei, Neha Arora, Paul Muret, Paul Natsev, Quoc Le, Ravi Kumar, Rina Panigrahy, Sanjiv Kumar, Sella Nevo, Slav Petrov, Sreenivas Gollapudi, Tom Duerig, Tom Small, Vidhya Navalpakkam, Vincent Vanhoucke, Vinodkumar Prabhakaran, Viren Jain, Yonghui Wu, Yossi Matias, and Zoubin Ghahramani for helpful feedback and contributions to this post, and to the entire Research and Health communities at Google for everyone’s contributions towards this work.*
# Currency substitution
**Currency substitution** is the use of a foreign currency in parallel to or instead of a domestic currency.[1]
Currency substitution can be full or partial. Full currency substitution can occur after a major economic crisis, such as in Ecuador, El Salvador, and Zimbabwe. Some small economies, for whom it is impractical to maintain an independent currency, use the currencies of their larger neighbours; for example, Liechtenstein uses the Swiss franc.
Partial currency substitution occurs when residents of a country choose to hold a significant share of their financial assets denominated in a foreign currency. It can also occur as a gradual conversion to full currency substitution; for example, Argentina and Peru were both in the process of converting to the U.S. dollar during the 1990s.
## Name
"Dollarization", when referring to currency substitution, does not necessarily involve use of the United States dollar.[2] The major currencies used as substitutes are the US dollar and the euro.
## Origins
After the gold standard was abandoned at the outbreak of World War I and the Bretton Woods Conference following World War II, some countries sought exchange rate regimes to promote global economic stability, and hence their own prosperity. Countries usually peg their currency to a major convertible currency. "Hard pegs" are exchange rate regimes that demonstrate a stronger commitment to a fixed parity (i.e. currency boards) or relinquish control over their own currency (such as currency unions) while "soft pegs" are more flexible and floating exchange rate regimes.[3] The collapse of "soft" pegs in Southeast Asia and Latin America in the late 1990s led to currency substitution becoming a serious policy issue.[4]
A few cases of full currency substitution prior to 1999 had been the consequence of political and historical factors. In all long-standing currency substitution cases, historical and political reasons have been more influential than an evaluation of the economic effects of currency substitution.[5] Panama adopted the US dollar as legal tender after independence as the result of a constitutional ruling.[6] Ecuador and El Salvador became fully dollarized economies in 2000 and 2001 respectively, for different reasons.[5] Ecuador underwent currency substitution to deal with a widespread political and financial crisis resulting from massive loss of confidence in its political and monetary institutions. By contrast, El Salvador's official currency substitution was a result of internal debates and in a context of stable macroeconomic fundamentals and long-standing unofficial currency substitution. The eurozone adopted the euro (€) as its common currency and sole legal tender in 1999, which might be considered a variety of full-commitment regime similar to full currency substitution despite some evident differences from other currency substitutions.[7]
## Measures
There are two common indicators of currency substitution. The first measure is the share of foreign currency deposits (FCD) in the domestic banking system in the broad money including FCD. The second is the share of all foreign currency deposits held by domestic residents at home and abroad in their total monetary assets.[6]
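Expressed as formulas (an illustrative formalization of the description above, with symbol names chosen here for convenience rather than taken from the literature):

$$CS_1 = \frac{FCD_{dom}}{M_{broad}}, \qquad CS_2 = \frac{FCD_{dom} + FCD_{abroad}}{A_{total}}$$

where $FCD_{dom}$ is foreign currency deposits held in the domestic banking system, $M_{broad}$ is broad money including those deposits, $FCD_{abroad}$ is residents' foreign currency deposits held abroad, and $A_{total}$ is residents' total monetary assets. With hypothetical figures of 40 in domestic foreign currency deposits and broad money of 120 (in billions of local currency units), the first indicator would be 40/120, roughly 33%.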
## Types
**Unofficial currency substitution** or **de facto currency substitution** is the most common type of currency substitution. Unofficial currency substitution occurs when residents of a country choose to hold a significant share of their financial assets in foreign currency, even though the foreign currency is not legal tender there.[8] They hold deposits in the foreign currency because of a bad track record of the local currency, or as a hedge against inflation of the domestic currency.
**Official currency substitution** or **full currency substitution** happens when a country adopts a foreign currency as its sole legal tender, and ceases to issue the domestic currency. Another effect of a country adopting a foreign currency as its own is that the country gives up all power to vary its exchange rate. There are a small number of countries adopting a foreign currency as legal tender.
Full currency substitution has mostly occurred in Latin America, the Caribbean and the Pacific, as many countries in those regions see the United States Dollar as a stable currency compared to the national one.[9] For example, Panama underwent full currency substitution by adopting the US dollar as legal tender in 1904. This type of currency substitution is also known as **de jure currency substitution**.
Currency substitution can be used **semiofficially** (or officially, in bimonetary systems), where the foreign currency is legal tender alongside the domestic currency.[10]
In literature, there is a set of related definitions of currency substitution such as **external liability currency substitution**, **domestic liability currency substitution**, **banking sector's liability currency substitution** or **deposit currency substitution**, and **credit dollarization**. External liability currency substitution measures total external debt (private and public) denominated in foreign currencies of the economy.[10][11] Deposit currency substitution can be measured as the share of foreign currency deposits in the total deposits of the banking system, and credit currency substitution can be measured as the share of dollar credit in the total credit of the banking system.[12]
## Effects
### On trade and investment
One of the main advantages of adopting a strong foreign currency as sole legal tender is to reduce the transaction costs of trade among countries using the same currency.[13] There are at least two ways to infer this impact from data. The first is the significantly negative effect of exchange rate volatility on trade in most cases, and the second is an association between transaction costs and the need to operate with multiple currencies.[14] Economic integration with the rest of the world becomes easier as a result of lowered transaction costs and stabler prices.[2] Rose (2000) applied the gravity model of trade and provided empirical evidence that countries sharing a common currency engage in significantly increased trade among them, and that the benefits of currency substitution for trade may be large.[15]
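For context, the gravity specification behind this kind of estimate relates bilateral trade to the two countries' economic size and the distance between them, augmented with a currency-union indicator. A simplified, illustrative form (not necessarily the exact regression used in that work) is:

$$\ln T_{ij} = \beta_0 + \beta_1 \ln(Y_i Y_j) + \beta_2 \ln D_{ij} + \gamma \, CU_{ij} + \varepsilon_{ij}$$

where $T_{ij}$ is trade between countries $i$ and $j$, $Y_i$ and $Y_j$ are their GDPs, $D_{ij}$ is the distance between them, and $CU_{ij}$ equals 1 when the two countries share a currency; a significantly positive estimate of $\gamma$ is the evidence that a common currency is associated with substantially more trade.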
Countries with full currency substitution can invoke greater confidence among international investors, inducing increased investments and growth. The elimination of the currency crisis risk due to full currency substitution leads to a reduction of country risk premiums and then to lower interest rates.[2] These effects result in a higher level of investment. However, there is a positive association between currency substitution and interest rates in a dual-currency economy.[16]
### On monetary and exchange rate policies
Official currency substitution helps to promote fiscal and monetary discipline and thus greater macroeconomic stability and lower inflation rates, to lower real exchange rate volatility, and possibly to deepen the financial system.[14] Firstly, currency substitution helps developing countries, providing a firm commitment to stable monetary and exchange rate policies by forcing a passive monetary policy. Adopting a strong foreign currency as legal tender will help to "eliminate the inflation-bias problem of discretionary monetary policy".[17] Secondly, official currency substitution imposes stronger financial constraint on the government by eliminating deficit financing by issuing money.[18] An empirical finding suggests that inflation has been significantly lower in economies with full currency substitution than nations with domestic currencies.[19] The expected benefit of currency substitution is the elimination of the risk of exchange rate fluctuations and a possible reduction in the country's international exposure. Currency substitution cannot eliminate the risk of an external crisis but provides steadier markets as a result of eliminating fluctuations in exchange rates.[2]
On the other hand, currency substitution leads to the loss of seigniorage revenue, the loss of monetary policy autonomy, and the loss of the exchange rate instruments. Seigniorage revenues are the profits generated when monetary authorities issue currency. When adopting a foreign currency as legal tender, a monetary authority needs to withdraw the domestic currency and give up future seigniorage revenue. The country loses the rights to its autonomous monetary and exchange rate policies, even in times of financial emergency.[2][20] For example, former chairman of the Federal Reserve Alan Greenspan has stated that the central bank considers the effects of its decisions only on the US economy.[21] In a full currency substituted economy, exchange rates are indeterminate and monetary authorities cannot devalue the currency.[22] In an economy with high currency substitution, devaluation policy is less effective in changing the real exchange rate because of significant pass-through effects to domestic prices.[2] However, the cost of losing an independent monetary policy exists when domestic monetary authorities can commit an effective counter-cyclical monetary policy, stabilizing the business cycle. This cost depends adversely on the correlation between the business cycle of the client country (the economy with currency substitution) and the business cycle of the anchor country.[13] In addition, monetary authorities in economies with currency substitution diminish the liquidity assurance to their banking system.[2][23]
### On banking systems
In an economy with full currency substitution, monetary authorities cannot act as lender of last resort to commercial banks by printing money. The alternatives to lending to the bank system may include taxation and issuing government debt.[24] The loss of the lender of last resort is considered a cost of full currency substitution. This cost depends on the initial level of unofficial currency substitution before moving to a full currency substituted economy. This relation is negative because in a heavily currency substituted economy, the central bank already fears difficulties in providing liquidity assurance to the banking system.[25] However, literature points out the existence of alternative mechanisms to provide liquidity insurance to banks, such as a scheme by which the international financial community charges an insurance fee in exchange for a commitment to lend to a domestic bank.[26]
Commercial banks in countries where savings accounts and loans in foreign currency are allowed may face two types of risks:
- Currency mismatch risk: Assets and liabilities on the balance sheets may be in different denominations. This may arise if the bank converts foreign currency deposits into local currency and lends in local currency or vice versa.
- Default risk: Arises if the bank uses the foreign currency deposits to lend in foreign currency.[27]
However, currency substitution eliminates the probability of a currency crisis that negatively affects the banking system through the balance sheet channel. Currency substitution may reduce the possibility of systematic liquidity shortages and the optimal reserves in the banking system.[28] Research has shown that official currency substitution has played a significant role in improving bank liquidity and asset quality in Ecuador and El Salvador.[29]
## Determinants of the process
### The dynamics of the flight from domestic money
High and unanticipated inflation rates decrease the demand for domestic money and raise the demand for alternative assets, including foreign currency and assets denominated in foreign currency. This phenomenon is called the "flight from domestic money". It results in a rapid and sizable process of currency substitution.[30] In countries with high inflation rates, the domestic currency tends to be gradually displaced by a stable currency. At the beginning of this process, the store-of-value function of the domestic currency is replaced by the foreign currency. Then, the unit-of-account function of the domestic currency is displaced when many prices are quoted in a foreign currency. A prolonged period of high inflation will induce the domestic currency to lose its function as medium of exchange when the public carries out many transactions in foreign currency.[31]: 1
Ize and Levy-Yeyati (1998) examine the determinants of deposit and credit currency substitution, concluding that currency substitution is driven by the volatility of inflation and the real exchange rate. Currency substitution increases with inflation volatility and decreases with the volatility of the real exchange rate.[32]
### Institutional factors
The flight from domestic money depends on a country's institutional factors. The first factor is the level of development of the domestic financial market. An economy with a well-developed financial market can offer a set of alternative financial instruments denominated in domestic currency, reducing the role of foreign currency as an inflation hedge. The pattern of the currency substitution process also varies across countries with different foreign exchange and capital controls. In a country with strict foreign exchange regulations, the demand for foreign currency will be satisfied in the holding of foreign currency assets abroad and outside the domestic banking system. This demand often puts pressure on the parallel market of foreign currency and on the country's international reserves.[30] Evidence for this pattern is given in the absence of currency substitution during the pre-reform period in most transition economies, because of constricted controls on foreign exchange and the banking system.[31]: 13 In contrast, by increasing foreign currency reserves, a country might mitigate the shift of assets abroad and strengthen its external reserves in exchange for a currency substitution process. However, the effect of this regulation on the pattern of currency substitution depends on the public's expectations of macroeconomic stability and the sustainability of the foreign exchange regime.[30]
## Anchor currencies
### Australian dollar
- Kiribati (since 1943; also uses its own coins)[33]: 17
- Nauru (since 1914; also issues non-circulating Nauruan collector coins pegged to the Australian dollar)[33]: 17[34]
- Tuvalu (since 1892; also uses its own coins)[33]: 17
- Zimbabwe (alongside the United States dollar, South African rand, Botswana pula, Japanese yen, several other currencies and U.S. dollar-denominated bond coins and bond notes of the Real Time Gross Settlement (RTGS) dollar)
### Euro
- Andorra (formerly French franc and Spanish peseta; issued non-circulating Andorran diner coins; issues its own euro coins). Since 1278, Andorra has used its neighbours' currencies, at the time the Counties of Foix, in present-day France, and of Urgell, in Catalonia.[33]: 17
- Cyprus (used in the Sovereign Base Areas of Akrotiri and Dhekelia; formerly used the Cypriot pound)
- France (used in the overseas territories of the French Southern Territories, Saint Barthélemy, and Saint Pierre and Miquelon. The euro is also used in the French overseas department and region of Guadeloupe)
- French Polynesia (uses the CFP franc, which is pegged to the euro at a fixed exchange rate)
- New Caledonia (uses the CFP franc, which is pegged to the euro at a fixed exchange rate)
- Wallis and Futuna (uses the CFP franc, which is pegged to the euro at a fixed exchange rate)
- Kosovo (formerly German mark and Yugoslav dinar)
- Luxembourg (used the Luxembourgish franc from 1839 to 2001 and the Belgian franc at par; issues its own euro coins)
- Monaco (formerly French franc from 1865 to 2002 and Monégasque franc;[33]: 17 issues its own euro coins)
- North Korea (alongside the Chinese renminbi, North Korean won, and United States dollar)[35]
- Sovereign Military Order of Malta (issues non-circulating Maltese scudo coins at €0.24 = 1 scudo)
- Vatican City (formerly Italian lira and Vatican lira; issues its own euro coins)
- Zimbabwe (alongside the United States dollar, South African rand, Botswana pula, Japanese yen, several other currencies and U.S. dollar-denominated bond coins and bond notes of the Real Time Gross Settlement (RTGS) dollar)
### Indian rupee
- Bhutan (alongside the Bhutanese ngultrum, pegged at par with the rupee)
- Nepal (alongside the Nepali rupee, pegged at ₹0.625)
- Zimbabwe (alongside the United States dollar, South African rand, Botswana pula, Japanese yen, several other currencies and U.S. dollar-denominated bond coins and bond notes of the Real Time Gross Settlement (RTGS) dollar)
### New Zealand dollar
- Cook Islands (issues its own coins and some notes.)
- Niue (also issues its own non-circulating commemorative and collector coins minted at the New Zealand Mint, pegged to the New Zealand dollar.)
- Pitcairn Islands (also issues its own non-circulating commemorative and collector coins pegged to the New Zealand dollar.)
- Tokelau (also issues its own non-circulating commemorative and collector coins pegged to the New Zealand dollar.)
### Pound sterling
British Overseas Territories using the pound, or a local currency pegged to the pound, as their currency:
- *British Antarctic Territory* (issues non-circulating collector coins for the British Antarctic Territory.)[36]
- British Indian Ocean Territory (
*de jure*, U.S. dollar used*de facto*; also issues non-circulating collector coins for the British Indian Ocean Territory.)[37] - Falkland Islands (alongside the Falkland Islands pound)
- Gibraltar (alongside the Gibraltar pound)
- Saint Helena, Ascension and Tristan da Cunha (Tristan da Cunha; alongside the Saint Helena pound in Saint Helena and Ascension; also issues non-circulating collector coins for Saint Helena, Ascension and Tristan da Cunha.)
[38] - South Georgia and the South Sandwich Islands (alongside the Falkland Islands pound; also issues non-circulating collector coins for South Georgia and the South Sandwich Islands.)
[39]
The Crown Dependencies use a local issue of the pound as their currency:
- Guernsey (Guernsey pound)
- Alderney (issues non-circulating Alderney pound collector coins, backed by both the Pound sterling and Guernsey pound.)[40]
- Isle of Man (Manx pound)
- Jersey (Jersey pound)
Under plans published in the Sustainable Growth Commission report by the Scottish National Party, an independent Scotland would use the pound as its currency for the first 10 years of independence. This has become known as sterlingisation.
Other countries:
- Zimbabwe (alongside the United States dollar, South African rand, Botswana pula, Japanese yen, several other currencies and U.S. dollar-denominated bond coins and bond notes of the Real Time Gross Settlement (RTGS) dollar)
### South African rand
- Eswatini (alongside the Swazi lilangeni)
- Lesotho (alongside the Lesotho loti)
- Namibia (alongside the Namibian dollar)
- Zimbabwe (alongside the United States dollar, South African rand, Botswana pula, Japanese yen, several other currencies and U.S. dollar-denominated bond coins and bond notes of the Real Time Gross Settlement (RTGS) dollar)
### United States dollar
#### Used exclusively
- British Virgin Islands (also issues non-circulating British Virgin Islands collector coins pegged to the U.S. dollar)[41]
- Caribbean Netherlands (since 1 January 2011)
- Marshall Islands (issued non-circulating collector coins of the Marshall Islands pegged to the U.S. dollar since 1986)[42]
- Micronesia (since 1944)[33]: 17
- Palau (since 1944; issued non-circulating Palauan collector coins pegged to the U.S. dollar since 1992)[33][43]
- Turks and Caicos Islands (issued non-circulating Turks and Caicos Islands collector coins denominated in "Crowns" and pegged to the U.S. dollar since 1969)[44]
#### Used partially
- Argentina (the United States dollar is used for major purchases such as buying properties)
- Bahamas (Bahamian dollar pegged at 1:1 but the United States dollar is accepted)
- Barbados (Barbadian dollar pegged at 2:1 but the United States dollar is accepted)
- Belize (Belizean dollar pegged at 2:1 but the United States dollar is accepted)
- Bermuda (Bermudian dollar pegged at 1:1 but the United States dollar is accepted)
- Cambodia (uses the Cambodian riel for many official transactions but most businesses deal exclusively in dollars for all but the cheapest items. Change is often given in a combination of U.S. dollars and Cambodian riel. ATMs yield U.S. dollars rather than Cambodian riel)[45][46]
- Canada (a modest amount of United States coinage circulates alongside the Canadian dollar and is accepted at par by most retailers, banks and coin redemption machines)
- Congo-Kinshasa (many institutions accept both the Congolese franc and U.S. dollars)
- Costa Rica (uses alongside the Costa Rican colón)
- East Timor (uses its own coins)
- Ecuador (since 2000; also uses its own coins)[33]: 1
- El Salvador (both the U.S. dollar and bitcoin are legal tender) (see Bitcoin Law and Bitcoin in El Salvador)[47]
- Haiti (uses the U.S. dollar alongside its domestic currency, the gourde)
- Honduras (uses alongside the Honduran lempira)[48]
- Iraq (alongside the Iraqi dinar)
- Lebanon (alongside the Lebanese pound)
- Liberia (exclusively used the U.S. dollar during the early PRC period, but the National Bank of Liberia began issuing five dollar coins in 1982;[33]: 3 United States dollar still in common usage alongside the Liberian dollar)
- North Korea (alongside the euro, North Korean won, and renminbi)[35]
- Panama (since 1904; also uses its own coins)[33]: 6
- Somalia (alongside the Somali shilling)
- Somaliland (alongside the Somaliland shilling)[49]
- Uruguay[50] (the main currency is the Uruguayan peso)
- Venezuela (alongside the Venezuelan bolívar; due to hyperinflation, USD is used for purchases such as buying electrical appliances, clothes, spare car parts, and food)[51][52]
- Vietnam (alongside the Vietnamese đồng)
- Zimbabwe (since 2020; alongside the South African rand, British pound, Botswana pula, Japanese yen, several other currencies and U.S. dollar-denominated bond coins and bond notes of the Real Time Gross Settlement (RTGS) dollar)
### Others
- **Algerian dinar**: Sahrawi Arab Democratic Republic (*de facto* independent state, recognized by 45 UN member states, but mostly occupied by Morocco; used in the Sahrawi refugee camps)
- **Botswana pula**: Zimbabwe (alongside several other currencies and U.S. dollar-denominated bond coins and bond notes of the Real Time Gross Settlement (RTGS) dollar)
- **Brunei dollar**: Singapore (alongside the Singapore dollar)
- **Canadian dollar**: Saint Pierre and Miquelon (alongside the euro)
- **Chinese renminbi**: Zimbabwe (alongside several other currencies and U.S. dollar-denominated bond coins and bond notes of the Real Time Gross Settlement (RTGS) dollar)
- **Colombian peso**: Venezuela (mainly in western states, alongside U.S. dollar)
- **Danish krone**:
  - Faroe Islands (also issues its own coins and some notes)
  - Greenland
- **Egyptian pound**: Palestine (Palestinian territories)
- **Hong Kong dollar**: Macao (alongside the Macanese pataca, pegged at $1.032)
- **Japanese yen**: Zimbabwe (alongside several other currencies and U.S. dollar-denominated bond coins and bond notes of the Real Time Gross Settlement (RTGS) dollar)
- **Jordanian dinar**: West Bank (alongside the New Israeli shekel)
- **Mauritanian ouguiya**: Sahrawi Arab Democratic Republic (*de facto* independent state, recognized by 45 UN member states, but mostly occupied by Morocco; used in the Sahrawi refugee camps)
- **Moroccan dirham**: Sahrawi Arab Democratic Republic (*de facto* independent state, recognized by 45 UN member states, but mostly occupied by Morocco; used in claimed areas under Moroccan control; issues the non-circulating Sahrawi peseta for collectors)
- **New Israeli shekel**: Palestine (Palestinian territories)
- **Russian ruble**:
  - Abkhazia (*de facto* independent state, but recognized as a part of Georgia internationally; issues non-circulating collector coins (Abkhazian apsar) pegged to the Russian ruble)[53]
  - South Ossetia (*de facto* independent state, but recognized as a part of Georgia internationally; issues non-circulating collector coins (South Ossetian zarin) pegged to the Russian ruble)[54]
- **Singapore dollar**: Brunei (alongside the Brunei dollar)
- **Swiss franc**:
  - Germany (uses in the exclave of Büsingen am Hochrhein, alongside the euro)
  - Italy (uses in the enclave of Campione d'Italia, alongside the euro)
  - Liechtenstein (also issues non-circulating collector coins)
- **Turkish lira**: Northern Cyprus (*de facto* independent state, but recognized as a part of Cyprus by all UN member states except Turkey)
## See also
- Currency union
- Currency board
- Dedollarisation
- Domestic liability dollarization
- Petrocurrency
- Bitcoin, a cryptocurrency
- World currency
## References
### Footnotes
**^**New estimates of U.S. currency abroad, the domestic money supply and the unreported Economy Edgar L. Feige September 2011.- ^
**a****b****c****d****e****f**Berg, Andrew; Borensztein, Eduardo (2000). "The Pros and Cons of Full Dollarization".**g***IMF Working Paper; Full Dollarization*(/50). IMF. Retrieved 13 October 2011. **^**Yeyati (2003) at 1.**^**Rochon, Louis-Philippe (2003).*Dollarization Lessons from Europe and the Americas*. London and New York: Routledge. pp. 1. ISBN 9780415298780.- ^
**a**Yeyati (2003) at 3.**b** - ^
**a**Savastano at 7.**b** **^**Yeyati (2003) at 5.**^**Balino; Berensztein (1999). "Monetary Policy in Dollarized Economies".*IMF Occasional Paper 171*.**^**Mundell, R. A. (1961). "A Theory of Optimum Currency Areas".*American Economic Review*.**51**(4): 657–665. JSTOR 1812792.- ^
**a**Bogetic (200). "Official Dollarization: Current Experiences and Issues".**b***Cato Journal*.**20**(2): 179–213. **^**Berkmen, S. Pelin; Cavallo, Eduardo (2010). "Exchange Rate Policy and Liability currency substitution: What Do the Data Reveal about Causality?".*Review of International Economics*.**18**(5): 781–795. doi:10.1111/j.1467-9396.2010.00890.x. S2CID 154678349.**^**Pinon, Marco (2008).*Macroeconomic Implications of Financial currency substitution The Case of Uruguay*. Washington DC: International Monetary Fund. p. 22.- ^
**a**Alesina, Alberto; Barro (2001). "Dollarization".**b***The American Economic Review*.**91**(2): 381–385. doi:10.1257/aer.91.2.381. JSTOR 2677793. - ^
**a**Yeyati (2003) at 22.**b** **^**Rose, Andrew (2000). "One Money, One Market: the effect of common currencies on trade".*Economic Policy*.**15**(30): 8–0. doi:10.1111/1468-0327.00056.**^**Honohan, Patrick (2007). "Dollarization and Exchange Rate Fluctuations" (PDF).*World Bank Policy Research Working Paper*. Policy Research Working Papers (4172). doi:10.1596/1813-9450-4172. hdl:10986/7252.**^**Alesina, Alberto; Barro (2001). "Dollarization".*The American Economic Review*.**91**(2): 382. doi:10.1257/aer.91.2.381. JSTOR 2677793.**^**Yeyati (2003) at 23.**^**Edwards, Sebastian; Magendzo, I. Igal (2003). "Dollarization And Economic Performance: What Do We Really Know?".*International Journal of Finance and Economics*.**8**(4): 351–363. CiteSeerX 10.1.1.557.6231. doi:10.1002/ijfe.217.**^**Broda, Levy Yeyati, Christian, Eduardo (2003). "Endogenous deposit dollarization". Federal Reserve Bank of New York.`{{cite journal}}`
: Cite journal requires`|journal=`
(help)CS1 maint: multiple names: authors list (link)**^**Moreno-Bird, Juan Carlos (Fall 1999). "Dollarization in Latin America: Is it desirable?".*ReVista: Harvard Review of Latin America*. Archived from the original on 11 August 2011. Retrieved 27 June 2012.**^**John, Kareken; Wallace (1981). "On the Indeterminacy of Equilibrium Exchange Rates".*Quarterly Journal of Economics*.**96**(2): 207–222. doi:10.2307/1882388. JSTOR 1882388.**^**Yeyati, Eduardo Levy (2008). "Liquidity Insurance in a Financially Dollarized Economy". In Edwards; Garcia (eds.).*Financial Markets Volatility and Performance in Emerging Markets*. University of Chicago Press. pp. 185–218. ISBN 978-0-226-18495-1.**^**Bencivenga, Valerie; Huybens, Smith (2001). "Dollarization and the Integration of International Capital Markets: a Contribution to the Theory of Optimal Currency Areas".*Journal of Money, Credit and Banking*.**33**(2, Part 2): 548–589. doi:10.2307/2673916. JSTOR 2673916.**^**Broda, Christian; Yeyati (2001). "Dollarization and the Lender of Last Resort".*Book: Dollarization*: 100–131.**^**Yeyati (2003) at 31.**^**Kutan, Rengifo, Ozsoz, Ali, Erick, Emre. "Evaluating the Effects of Deposit Dollarization in Bank Profitability" (PDF). Fordham University Economics Department.`{{cite web}}`
: CS1 maint: multiple names: authors list (link)[*permanent dead link*]**^**Yeyati (2003) at 34.**^**"Federal Reserve Bank of Atlanta,*Official Dollarization and the Banking System in Ecuador and El Salvador*, 2006" (PDF). Archived from the original (PDF) on 19 October 2012. Retrieved 14 September 2012.- ^
**a****b**Savastano.**c** - ^
**a**Sahay, Ratna; Vegh, Carlos (September 1995). "Dollarization in Transition Economies: Evidence and Policy Implications".**b***IMF Working Paper No. 95/96*. SSRN 883243. **^**Catão, Luis; Terrrones, Marco E. (August 2000). "Determinants of Dollarization: The Banking Side".*IMF Working Paper No. 00/146*: 5. SSRN 879949.- ^
**a****b****c****d****e****f****g****h****i**Edwards, Sebastian (May 2001). "Dollarization and Economic Performance: An Empirical Investigation".**j***NBER Working Paper No. 8274*. doi:10.3386/w8274. **^**Catalog of the coins of Nauru Numista (https://en.numista.com). Retrieved on 2023-02-17.- ^
**a**Ruwitch, John; Park, Ju-min (2 June 2013). "Insight: North Korean economy surrenders to foreign currency invasion".**b***Reuters*. Changbai, China/Seoul. Retrieved 11 January 2017. **^**Catalog of the coins of the British Antarctic Territory Numista (https://en.numista.com). Retrieved on 2023-01-17.**^**Catalog of the coins of the British Indian Ocean Territory Numista (https://en.numista.com). Retrieved on 2023-01-17.**^**Catalog of the coins of Saint Helena, Ascension and Tristan da Cunha Numista (https://en.numista.com). Retrieved on 2023-01-17.**^**Catalog of the coins of South Georgia and the South Sandwich Islands Numista (https://en.numista.com). Retrieved on 2023-01-17.**^**Catalog of the coins of Alderney Numista (https://en.numista.com). Retrieved on 2023-01-17.**^**Catalog of the coins of the British Virgin Islands Numista (https://en.numista.com). Retrieved on 2022-09-03.**^**Catalog of the coins of the Marshall Islands Numista (https://en.numista.com). Retrieved on 2022-09-03.**^**Catalog of the coins of Palau Numista (https://en.numista.com). Retrieved on 2022-07-22.**^**Catalog of the coins of the Turks and Caicos Islands Numista (https://en.numista.com). Retrieved on 2022-09-03.**^**"Money & Cost". lonelyplanet.com. Archived from the original on 16 April 2014. Retrieved 13 December 2013.**^**Pilling, David; Peel, Michael (28 July 2014). "Cambodia: Wave of discontent".*Financial Times*. Archived from the original on 10 December 2022. Retrieved 11 January 2017.Dollars account for 90 per cent of money in Cambodia's banking system and almost the same proportion of cash used in everyday transactions, according to official estimates.
**^**Kharpal, Arjun (9 June 2021). "El Salvador becomes first country to adopt bitcoin as legal tender after passing law".*CNBC*. Retrieved 9 June 2021.**^**Carter, Chris (8 May 2013). "Economy: The effect of dollarisation in Honduras".*Pulsamerica*. Archived from the original on 13 January 2017. Retrieved 11 January 2017.**^**Aden, AK Muktar. "Overcoming Challenges in an Unrecognized Economy" (PDF).*Ministry of Finance Development, Somaliland*.**^**Pinon, Marco; Gelos, Gaston (28 August 2008). "Uruguay's Monetary Policy Effective Despite Dollarization".*IMF Survey Magazine*. Retrieved 4 March 2012.**^**Zerpa, Fabiola (5 November 2019). "Venezuela Is Now More Than 50% Dollarized, Study Finds".*Bloomberg*. Retrieved 9 November 2019.**^**"Maduro says 'thank God' for dollarization in Venezuela".*Reuters*. 17 November 2019. Retrieved 18 November 2019.**^**Catalog of the coins of Abkhazia Numista (https://en.numista.com). Retrieved on 2022-10-14.**^**Catalog of the coins of South Ossetia Numista (https://en.numista.com). Retrieved on 2022-10-14. | true | true | true | null | 2024-10-12 00:00:00 | 2003-04-25 00:00:00 | website | wikipedia.org | Wikimedia Foundation, Inc. | null | null |
|
26,181,414 | https://www.uplabs.com/posts/pro-ui-kit-for-pro-designers | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
11,332,198 | http://recode.net/2016/03/19/steve-wozniak-and-palmer-luckey-virtual-reality-yes-augmented-reality-not-yet/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
21,024,237 | https://medium.com/@MindnStuff/work-less-to-improve-2b3c8eeffdf1 | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
8,794,641 | http://defensetech.org/2014/12/24/helicopter-drone-makes-first-flight-from-navy-destroyer/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
34,550,050 | https://ia601406.us.archive.org/2/items/gov.uscourts.deb.188450/gov.uscourts.deb.188450.574.0.pdf | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
3,216,149 | http://www.dailygalaxy.com/my_weblog/2011/11/more-proof-the-universe-fine-tuned-for-life-scientists-find-antarctica-meteorites-contain-sssential-.html | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
7,057,640 | http://www.eetimes.com/author.asp?section_id=69&doc_id=1320638&print=yes | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
26,180,419 | https://rickwierenga.com/blog/philosophy/true-false.html | null | null | null | true | false | false | null | null | null | null | null | null | null | null | null |
21,515,797 | https://www.zdnet.com/article/microsofts-rust-experiments-are-going-well-but-some-features-are-missing/ | Microsoft's Rust experiments are going well, but some features are missing | Catalin Cimpanu | # Microsoft's Rust experiments are going well, but some features are missing
Microsoft gave a status update today on its experiments on using the Rust programming language instead of C and C++ to write Windows components.
In short, the experiments have gone well: engineers described working with Rust as "generally positive," although some features are still missing, and the company says it is willing to help move the language forward.
### Microsoft's Rust experiments
Microsoft began experimenting with Rust over the summer. In a series of blog posts, the company announced that it would explore the idea of rewriting various products in Rust, a programming language that was built from the ground up with security in mind.
The Redmond-based software giant said it was interested in Rust because, over the past decade, more than 70% of the security patches it shipped out fixed memory-related bugs, an issue that Rust was created to address.
But while Microsoft didn't specifically say which products would be getting the Rust treatment, the company said it would keep users informed of how the experiments were going.
Today, almost four months later, we got the first feedback.
"I've been tasked with an experimental rewrite of a low-level system component of the Windows codebase (sorry, we can't say which one yet)," said Adam Burch, Software Engineer at the Microsoft Hyper-V team, in a blog post today.
"Though the project is not yet finished, I can say that my experience with Rust has been generally positive," Burch added.
"In general, new components or existing components with clean interfaces will be the easiest to port to Rust," the Microsoft engineer said.
### Features missing, but willing to help
However, not everything went smoothly; it would have been unrealistic to expect that it would. Burch cited the lack of safe transmutation, of safe support for C-style unions, and of fallible allocation, as well as missing support for the at-scale unit testing needed for Microsoft's sprawling code-testing infrastructure.
"I'm confident that we at Microsoft will be able to help in these endeavors to shape the future of the language to improve its usefulness in these scenarios," Burch said.
The Microsoft engineer said he sees a bright future for Rust in microcontrollers and low-level systems like kernels and hypervisors, where the language's security-first features will make it quite attractive once it matures.
Currently, efforts to bring Rust to feature parity with C are underway, started and supported by Intel.
If Microsoft does go forward with approving Rust rewrites for some of Windows' components, then it should hurry up if it wants to be the first OS maker that does so, as the Linux project is also looking into using Rust for some of its kernel drivers. | true | true | true | Microsoft rewrote a low-level Windows component in Rust. Calls the experience "generally positive." | 2024-10-12 00:00:00 | 2019-11-07 00:00:00 | article | zdnet.com | ZDNET | null | null |
|
19,369,352 | http://cr.yp.to/qmail/guarantee.html | The qmail security guarantee | null | My offer still stands. Nobody has found any security holes in qmail.
Of course, ``security hole *in* qmail'' does not include
problems *outside* of qmail:
for example,
NFS security problems, TCP/IP security problems, DNS security problems,
bugs in scripts run from .forward files,
and operating system bugs generally.
It's silly to blame a problem on qmail
if the system was already vulnerable before qmail was installed!
I also specifically disallowed denial-of-service attacks:
they are present in every MTA, widely documented,
and very hard to fix without a massive overhaul of several major protocols.
(UNIX does offer some tools to prevent *local* denial-of-service attacks;
see my
resource exhaustion page
for more information.
See also my page responding to
Wietse Venema's slander.)
A group of qmail users offered a $1000 prize for one year under similar rules. The prize was not claimed; the money was donated to the Free Software Foundation.
In May 2005, Georgi Guninski claimed that some potential 64-bit portability problems allowed a ``remote exploit in qmail-smtpd.'' This claim is denied. Nobody gives gigabytes of memory to each qmail-smtpd process, so there is no problem with qmail's assumption that allocated array lengths fit comfortably into 32 bits.
Every few months CERT announces Yet Another Security Hole In Sendmail---something that lets local or even remote users take complete control of the machine. I'm sure there are many more holes waiting to be discovered; sendmail's design means that any minor bug in 41000 lines of code is a major security risk. Other popular mailers, such as Smail, and even mailing-list managers, such as Majordomo, seem just as bad.

As it turned out, fourteen security holes were discovered in sendmail in 1996 and 1997.
I followed seven fundamental rules in the design and implementation of qmail:
sendmail treats programs and files as addresses. Obviously random people can't be allowed to execute arbitrary programs or write to arbitrary files, so sendmail goes through horrendous contortions trying to keep track of whether a local user was ``responsible'' for an address. This has proven to be an unmitigated disaster.
In qmail, programs and files are not addresses.
The local delivery agent, qmail-local,
can run programs or write to files as directed by
`~user/.qmail`, but it's always running as that user.
(The notion of ``user'' is configurable, but root is never a user.
To prevent silly mistakes,
qmail-local makes sure that
neither `~user` nor `~user/.qmail` is world-writable.)
Security impact:
`.qmail`,
like `.cshrc` and `.exrc` and various other files,
means that anyone who can write arbitrary files as a user can execute
arbitrary programs as that user. That's it.
A setuid program must operate in a very dangerous environment: a user is under complete control of its fds, args, environ, cwd, tty, rlimits, timers, signals, and more. Even worse, the list of controlled items varies from one vendor's UNIX to the next, so it is very difficult to write portable code that cleans up everything.
Of the twenty most recent sendmail security holes, eleven worked only because the entire sendmail system is setuid.
Only one qmail program is setuid: qmail-queue. Its only purpose is to add a new mail message to the outgoing queue.
The entire sendmail system runs as root, so there's no way that its mistakes can be caught by the operating system's built-in protections. In contrast, only two qmail programs, qmail-start and qmail-lspawn, run as root.
Even if qmail-smtpd, qmail-send, qmail-rspawn, and qmail-remote are completely compromised, so that an intruder has control over the qmaild, qmails, and qmailr accounts and the mail queue, he still can't take over your system. None of the other programs trust the results from these four.
In fact, these programs don't even trust each other. They are in three groups: qmail-smtpd, which runs as qmaild; qmail-rspawn and qmail-remote, which run as qmailr; and qmail-send, the queue manager, which runs as qmails. Each group is immune from attacks by the others.
(From root's point of view, as long as root doesn't send any mail, only qmail-start and qmail-lspawn are security-critical. They don't write any files or start any other programs as root.)
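To make the least-privilege pattern concrete, here is a minimal sketch, in Python rather than C and not taken from the qmail source, of a root launcher that drops all privileges before doing any real work. The account name and the worker function are invented for the example.

```python
# Illustrative sketch only (Python, not qmail code): drop root privileges to an
# unprivileged account before running the real worker, so any bug in the worker
# is confined to that one account.
import os
import pwd

def run_as(username, worker):
    entry = pwd.getpwnam(username)
    os.setgroups([])               # shed root's supplementary groups
    os.setgid(entry.pw_gid)        # drop the group id before the user id
    os.setuid(entry.pw_uid)        # from here on, root privileges are gone for good
    worker()

if os.getuid() == 0:
    run_as("qmaild", lambda: print("working as uid", os.getuid()))
```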
I have discovered that there are two types of command interfaces in the world of computing: good interfaces and user interfaces.
The essence of user interfaces is *parsing*:
converting an unstructured sequence of commands,
in a format usually determined more by psychology than by solid engineering,
into structured data.
When another programmer wants to talk to a user interface,
he has to *quote*:
convert his structured data into an unstructured sequence of commands
that the parser will, he hopes,
convert back into the original structured data.
This situation is a recipe for disaster. The parser often has bugs: it fails to handle some inputs according to the documented interface. The quoter often has bugs: it produces outputs that do not have the right meaning. Only on rare joyous occasions does it happen that the parser and the quoter both misinterpret the interface in the same way.
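A tiny illustration of the mismatch, in Python and with made-up field values (this is not qmail code): the quoter below joins fields with spaces, the parser splits on whitespace, and they stop agreeing the moment a field contains the separator.

```python
# A quoter and parser that only appear to be inverses of each other.
fields = ["deliver", "alice smith", "/home/alice/Maildir"]   # invented example data

wire = " ".join(fields)     # quoter: structured data -> flat command string
parsed = wire.split()       # parser: flat string -> structured data

print(parsed == fields)     # False: "alice smith" came back as two separate fields
```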
When the original data is controlled by a malicious user,
many of these bugs translate into security holes.
Some examples:
the Linux `login -froot` security hole;
the classic `find | xargs rm` security hole;
the Majordomo injection security hole.
Even a simple parser like getopt
is complicated enough for people to screw up the quoting.
In qmail, all the internal file structures are incredibly simple: text0 lines beginning with single-character commands. (text0 format means that lines are separated by a 0 byte instead of line feed.) The program-level interfaces don't take options.
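As a rough sketch of what reading such a file looks like (Python for brevity, not the actual qmail source; the file name and command letters here are invented), note that no quoting or escaping layer is needed at all, because a 0 byte can never occur inside the data:

```python
# Reading a text0-style file: records separated by a 0 byte, each beginning
# with a single-character command.  Illustrative only; qmail's real queue
# files define their own commands and are read by C code.
def read_text0(path):
    with open(path, "rb") as f:
        data = f.read()
    records = []
    for chunk in data.split(b"\0"):
        if not chunk:                      # the trailing 0 byte leaves an empty chunk
            continue
        command, argument = chunk[:1], chunk[1:]
        records.append((command.decode("ascii"), argument.decode()))
    return records

# The argument may contain spaces, quotes, or even newlines without any
# escaping, since 0 is the one byte that cannot appear inside it.
for command, argument in read_text0("example-queue-file"):
    print(command, repr(argument))
```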
All the complexity of parsing RFC 822 address lists and rewriting headers is in the qmail-inject program, which runs without privileges and is essentially part of the UA.
See BLURB in the qmail package for some of the reasons that qmail is so much smaller than sendmail. There's nothing inherently complicated about writing a mailer. (Except RFC 822 support; but that's only in qmail-inject.) Security holes can't show up in features that don't exist.
I've mostly given up on the standard C library. Many of its facilities, particularly stdio, seem designed to encourage bugs. A big chunk of qmail is stolen from a basic C library that I've been developing for several years for a variety of applications. The stralloc concept and getln() make it very easy to avoid buffer overruns, memory leaks, and artificial line length limits. | true | true | true | null | 2024-10-12 00:00:00 | 1995-01-01 00:00:00 | null | null | null | null | null | null |
15,721,332 | http://www.bbc.co.uk/bbcthree/item/a1fe31a9-46d9-45a6-9322-1857476b60b5 | Keanu's motorbikes and other random celeb side-hustles | Declan Cashin | # Keanu's motorbikes and other random celeb side-hustles
**Everyone's a double-jobbing 'slasher' these days - be it an accountant-slash-DJ, doctor-slash-florist, or bricklayer-slash-photographer.**
It doesn't just make sense in our insecure economic climate: it also keeps things fresh to have a bit of variety in our working lives.
Turns out A-list celebrities are no different. They've been getting in on the ever-important side hustle too. Just look at Keanu Reeves, who last week was in Milan to unveil three motorbikes that he designed and built for a company that he co-founded in 2007.
The Speed actor's business partner, Gard Hollinger, builds the bikes, while Keanu test drives them. And the mantra that underpins their business? "It has to make you giggle when you ride it," Hollinger has said.
Keanu's side gig seems more like a passion project, born out of his interest in riding - though one of the star's motorcycles will set you back at least £60,000 ($78,000).
Here are some of the other celebs with side-hustles you might not expect...
**Beyonce and Jay-Z**
The most common celeb side-hustles are usually in the food and drink industry. One of the many businesses that Beyonce is involved with - in her free time from performing a world tour that made over $250 million - is a vegan meal delivery service.
Bey was moved to get involved in the company after she and her equally entrepreneurial husband, Jay-Z, took part in a vegan cleanse in December 2013, a move that inspired many, many copycat lifestyle feature articles. The 22-day plan costs about £455 ($600).
Elsewhere, Jay-Z has his fingers in a lot of pies - so much so that he's earned a crust (sorry) of £618 million ($810 million). One of the most random start-ups that Jay-Z has backed is a company that allows the mega-rich to book a private jet, a club not unlike taxi-hailing apps or online streaming services.
Its business model is based on research that finds the average private jet is only flying about 200 hours a year, when it could be in the air for up to 1,500 hours annually.
'Entry level' membership of the private jet club costs $15,000 for a year's service.
Kim Kardashian is a big fan of the service, though there have been reports that some users are questioning the standard of service on offer, among them the difficulty of booking an entire jet for a private flight. One member explained: "As my wife says, 'Well if I have to share the plane, it's not really private, is it?'"
**P Diddy**
Similarly, P Diddy - or, as he was recently temporarily known, 'Brother Love' - has made a mint outside of music, pipping Jay-Z to the title of World's Richest Hip Hop Star, with a fortune of $820 million.
One of the quirkier features in Diddy's profitable portfolio is an alkaline water product, where "water is purified, infused with electrolytes and raised to a pH in excess of 9". This purports to have added health benefits, but some are not convinced that it's superior to plain old water.
**Ashton Kutcher**
Actor Ashton Kutcher has been a big investor in tech start-ups for the past few years, getting in on the ground floor of what were to become some of the biggest tech businesses in the world.
So successful has his investing streak been that Kutcher is believed to have a shared business portfolio worth around £190 million ($250 million). Not bad going for a guy who was already one of the highest paid actors on TV, pocketing £572,000 ($750,000) per episode for his role on sitcom Two And A Half Men - working out at around £26,000 ($34,000) per minute of screen time per episode.
Among Kutcher's investments is an online company that sells funky spectacles - but for every pair it sells, the firm then makes a donation to a non-profit to produce a pair of glasses for someone in a country in the developing world.
**George Clooney**
Then there's George Clooney, who is the latest to make a killing from his part-time gig.
In the type of slick, casual move you'd expect of Clooney, the actor co-founded a tequila brand "by accident" with two mates back in 2013. Basically, they had been hanging out in Mexico when the idea struck to concoct their own drink.
"[It] really started by accident as far as a company," Clooney's business partner, Rande Gerber, said earlier this year, external.
"As you do in Mexico, we would drink a lot of tequila. We'd go out to bars and restaurants and bartenders would recommend them. Some were good, some not so good, and some expensive. There came a point where George turned to me and said, 'Why don't we create one that's perfect for us?'"
Why indeed. Cut to four years later, when Clooney and the guys sold their tequila business for a figure believed to be up to $1 billion. But before you get too envious, take solace in the fact that Gorgeous George isn't technically a billionaire now. He looks set to bank 'only' around $140 million (after tax) - from a $600,000 investment.
**Jessica Alba**
Similarly, Jessica Alba - an actress best known for Fantastic Four and Sin City - was inspired by the birth of her first child to found an ethical clothing and household product business in 2011.
In just five years, Alba has built it into a massive venture that last year was valued at $1.7 billion. Two years ago, Forbes magazine proclaimed Alba to be "America's richest self-made woman".
**James Franco**
Of course, not all celebs are in it for the cash when it comes to their extracurriculars. Some of them just want to embrace their nerdy side and feed the soul.
Actor James Franco is one star who seems to maintain a multi-hyphenate career portfolio just because it interests him and challenges him.
Some of his more out-there side projects include art exhibitions inspired by the movie Psycho (with Franco being pictured as the Janet Leigh shower scene victim) - as well as ones celebrating sewer pipe art, and a deer orgy.
Franco has also taught classes on film in New York University, UCLA, and California Institute of Art.
You can even read his students' reviews on a US professor-rating website, featuring such gems as this assessment: "...Every time we met to discuss my assignments he would just cry for the duration of our meeting, and then tell me I could let myself out? Like, every time. Maybe it was a performance art piece..."
Now *that's* a random side hustle.
**Read more:**
The cast-mates who stayed mates - and a few who really didn't
Nine iconic TV characters that were almost played by someone else
Seven actors who look completely different to their characters | true | true | true | Keanu's motorbikes and other random celeb side-hustles | 2024-10-12 00:00:00 | 2017-11-15 00:00:00 | article | bbc.com | BBC Three | null | null |
|
29,421,898 | https://gigablast.com/about.html | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
29,059,553 | https://www.businessinsider.com/huobi-global-warrant-seeks-102-bitcoin-coinbase-theft-doj-2021-10 | A Coinbase user lost $11.6 million in under 10 minutes after falling for a fake-notification scam, the US attorney's office said | Kevin Shalvey | - Federal investigators filed a warrant for 10.2 bitcoin held in a Huobi Global wallet.
- The cryptocurrency was stolen from a Coinbase account in an $11.6 million heist, officials said.
- Investigators said an unknown person sent a notification to a Coinbase user after a 200 bitcoin buy.
A federal judge this month approved a warrant to claw back more than $600,000 in bitcoin from a Huobi Global wallet, after federal investigators said it was part of an $11.6 million haul stolen from a Coinbase account.
In April, after a Coinbase user bought 200 bitcoin, a notification popped up, alerting them that their account had been locked, said a complaint filed by the US attorney's office in Los Angeles. Although the notification appeared to be from Coinbase, it wasn't.
Instead, the notification was the first step in an alleged fraud. In the moments that followed, almost $11.6 million in crypto, about 206 bitcoin, was removed from the user's account, investigators said.
It's unclear how the alleged fraudster knew about the Coinbase transaction and whether the online notification mentioned in the warrant appeared on a phone or computer. Coinbase declined to comment.
The Coinbase user, who was identified in court documents only as G.R., called a phone number on the notification, thinking it would connect to a Coinbase customer-service representative, said a federal complaint filed by investigators last month.
An "unidentified individual 1," or "UI-1," answered the call and asked G.R. to make a series of changes to the account, the complaint said.
Those changes included allowing remote access to the account, the complaint added.
"Once granted access to the Victim Account, UI-1 increased the daily transaction limit and also attempted to deactivate certain notifications and alert settings on the Victim Account," Dan G. Boyle, an assistant US attorney, wrote in a document filed in US District Court for the Central District of California in September.
Within moments, millions in bitcoin and XLM were removed from G.R.'s Coinbase account, investigators said.
"The total value of virtual currency transferred out of the Victim Account between 2:02:40 PST and 2:12:41 PST on or about April 20, 2021, without G.R.'s authorization was approximately $11,570,138," they wrote in their complaint.
The money was then moved by an unknown person through a series of transactions between several accounts. About 10.2 bitcoin ended up in an account with Huobi Global, one of the world's largest exchanges, investigators said.
The investigators filed a forfeiture notification, seeking to reclaim those bitcoin. Huobi Global didn't respond to a request for additional information.
In early October, Judge Dolly M. Gee approved the warrant request, and a notice was posted in case anyone other than G.R. wanted to claim ownership of the 10.2 bitcoin.
"Huobi has agreed to maintain a freeze on the funds pending resolution of the forfeiture action," Thom Mrozek, the director of media relations for the US attorney' office in Los Angeles, told Insider via email. "No one has been arrested or charged, but our investigation is ongoing."
**Check out** Business Insider's picks for best cryptocurrency exchanges | true | true | true | A Coinbase user bought 200 bitcoin, got a fake notification from a scammer, and lost $11.6 million, the US attorney's office in Los Angeles said. | 2024-10-12 00:00:00 | 2021-10-31 00:00:00 | https://i.insider.com/617e77be23745d0018245cea?width=1200&format=jpeg | article | businessinsider.com | Insider | null | null |
1,884,235 | http://jfarcand.wordpress.com/2010/11/08/using-jquery-xmpp-and-atmosphere-to-cluster-your-websocketcomet-application/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
6,669,402 | http://venturefizz.com/blog/favecast-creating-next-generation-recommendation-platform-through-video#.UnfJcYE9JiE.hackernews | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
7,186,742 | http://techcrunch.com/2014/02/05/switchcam-agent/ | Switchcam Pivots To Provide Analytics And Gallery Curation Tools For Musicians And Their Agents | TechCrunch | Ryan Lawler | Once upon a time, Switchcam was built to find, curate, and stitch together all the best moments from live, public performances that were posted on YouTube. Now it’s looking to provide a way for artists to determine which moments at live events most resonated with their fans, and to curate galleries of images from those events.
With its proprietary technology, Switchcam could scan YouTube videos posted by users and piece them together based on a mix of audio recognition and timestamps. Once Switchcam had accomplished that, users could watch a stream of videos that event goers had shot, using algorithms to try and isolate the best quality video and audio from all of its sources.
Somewhere along the line, the Switchcam team realized that the data it was collecting would be valuable for another reason — it could give artists, promoters, and agents analytics about their live performances. Moreover, it could help them pinpoint the most interesting moments, and to create image galleries out of them.
Switchcam takes data that it was already collecting from videos on YouTube and photos on Instagram and, based on what users are sharing, figures out the highlights from those events. It also helps artists and their managers figure out who their biggest fans and biggest influencers are.
Considering that many artists now make most of their money from touring rather than from record sales or royalties, knowing when fans are sharing and who is doing the sharing can be valuable when trying to connect with fans and grow an audience.
Pricing for Switchcam’s new analytics platform is based on how popular an artist is on Facebook, which basically correlates to how much data needs to be tracked:
- Analytics for artists with less than 100,000 “likes” on Facebook is $9 per month
- Analytics for artists with less than 1 million “likes” is $39 per month
- Analytics for artists with more than 1 million “likes” is $79 per month
Switchcam believes its technology can also be used by brands. But hey, let’s see what happens with artists first. The company had raised raising $1.2 million from Mark Cuban, 500 Startups, Turner MediaCamp, Vikas Gupta, David Beyer, Jeffrey Schox, Niket Desai, and Reed Morse. | true | true | true | Once upon a time, Switchcam was built to find, curate, and stitch together all the best moments from live, public performances that were posted on YouTube. Now it's looking to provide a way for artists to determine which moments at live events most resonated with their fans, and to curate galleries of images from those events. | 2024-10-12 00:00:00 | 2014-02-05 00:00:00 | article | techcrunch.com | TechCrunch | null | null |
|
1,157,004 | http://www.mobilesider.com/topic/impressive-asphalt-5-3d-game-by-gameloft-on-bad | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
5,432,345 | http://www.t-gaap.com/2013/3/22/apples-big-secret-the-ios-laptop | Two Guys About Awesome Podcasts | Kathryn Sims | You work hard every day, and you deserve to relax and give your mind a break. One of the best things you can do is listen to a podcast. Wherever you are and whatever you are doing, you can put on a podcast that makes you feel better. You may not notice it, but podcasts also let you learn things you can apply later on. Best of all, they come in many different categories, so you can choose one that fits your specific needs.
There are lots of podcasts in various categories. To make your search easy, we offer you a list of the best podcasts of 2020.
**Happier with Gretchen Rubin**
Happier with Gretchen Rubin is a self-help podcast. Gretchen Rubin is a bestselling author who offers manageable, practical advice about good habits and happiness. It is a lively, thought-provoking podcast that you can surely enjoy. If you are tired and feeling down or hopeless, this podcast can genuinely help, leaving you motivated to find happiness again.
**The Tony Robins Podcast with Tony Robbins**
Naturally, all of us want to attain success. However, there are some instances when you feel lonely, stressed, and hopeless due to the challenges and problems you may encounter. Well, there is a great solution to that. This podcast can be your essential tool to overcome your problems. You can learn about Tony’s effective strategies to achieving success. You can also apply these strategies to achieve your life goals for your satisfaction.
**Muscle for Life with Mike Matthews**
The Muscle for Life podcast offers practical techniques for making the lifestyle changes that lead to the greatest results in both fitness and relationships. The podcast was featured in Men's Inquirer as the best upcoming podcast for men. With it, you too can turn negative circumstances around and begin to enjoy the life you want: a successful and happy one.
**The School of Greatness with Lewis Howes**
The School of Greatness with Lewis Howes can be your partner to unlock your true potential towards a more successful and fulfilling life. Lewis Howes is a bestselling author that shares inspirational stories from the world’s most brilliant business minds. With their inspiring stories, you can discover what makes great people great.
**The Rich Roll Podcast with Rich Roll**
The Rich Roll Podcast with Rich Roll digs deep with the planet's most thought-provoking thought leaders to give you a high level of inspiration, education, and empowerment to unleash your most authentic and best self. With that, you can apply your skills and creativity to make the best of your life, and it can help you stay motivated in pursuing your dreams.
There you go. With these best podcasts of 2020, you can improve the status of your life through the application of ideas you have learned from each podcast. Listening to these podcasts can help you to feel good and look good. It can increase your motivation to keep moving ahead in your life. | true | true | true | null | 2024-10-12 00:00:00 | 2020-03-13 00:00:00 | null | null | null | null | null | null |
4,413,584 | http://dmitryivanov.net/personal-kanban-app/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
15,413,456 | https://www.scottaaronson.com/blog/?p=3482 | Because you asked: the Simulation Hypothesis has not been falsified; remains unfalsifiable | null | ## Because you asked: the Simulation Hypothesis has not been falsified; remains unfalsifiable
By email, by Twitter, even as the world was convulsed by tragedy, the inquiries poured in yesterday about a different topic entirely: **Scott, did physicists really just prove that the universe is not a computer simulation—that we can’t be living in the Matrix?**
What prompted this was a rash of popular articles like this one (“Researchers claim to have found proof we are NOT living in a simulation”). The articles were all spurred by a recent paper in *Science Advances*: Quantized gravitational responses, the sign problem, and quantum complexity, by Zohar Ringel of Hebrew University and Dmitry L. Kovrizhin of Oxford.
I’ll tell you what: before I comment, why don’t I just paste the paper’s abstract here. I invite you to read it—not the whole paper, just the abstract, but paying special attention to the sentences—and then make up your own mind about whether it supports the interpretation that all the popular articles put on it.
Ready? Set?
*Abstract:* It is believed that not all quantum systems can be simulated efficiently using classical computational resources. This notion is supported by the fact that it is not known how to express the partition function in a sign-free manner in quantum Monte Carlo (QMC) simulations for a large number of important problems. The answer to the question—whether there is a fundamental obstruction to such a sign-free representation in generic quantum systems—remains unclear. Focusing on systems with bosonic degrees of freedom, we show that quantized gravitational responses appear as obstructions to local sign-free QMC. In condensed matter physics settings, these responses, such as thermal Hall conductance, are associated with fractional quantum Hall effects. We show that similar arguments also hold in the case of spontaneously broken time-reversal (TR) symmetry such as in the chiral phase of a perturbed quantum Kagome antiferromagnet. The connection between quantized gravitational responses and the sign problem is also manifested in certain vertex models, where TR symmetry is preserved.
For those tuning in from home, the “sign problem” is an issue that arises when, for example, you’re trying to use the clever trick known as Quantum Monte Carlo (QMC) to learn about the ground state of a quantum system using a classical computer—but where you needed probabilities, which are real numbers from 0 to 1, your procedure instead spits out numbers some of which are negative, and which you can therefore no longer use to define a sensible sampling process. (In some sense, it’s no surprise that this would happen when you’re trying to *simulate quantum mechanics*, which of course is all about generalizing the rules of probability in a way that involves negative and even complex numbers! The surprise, rather, is that QMC lets you *avoid* the sign problem as often as it does.)
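If you want to see the issue in the most cartoonish possible setting, here's a little toy calculation (mine, not the paper's; the 3x3 matrices below are just made-up examples, and a real QMC code looks nothing like this). It expands Tr exp(-βH) as a sum over closed paths of basis states and checks whether every path weight can be read as a nonnegative probability; it needs only Python with NumPy.

```python
import itertools
import math
import numpy as np

# Toy illustration of the sign problem (not from the Ringel-Kovrizhin paper).
# Write Z = Tr exp(-beta*H) = exp(-beta*L) * Tr exp(beta*G) with G = L*I - H,
# and expand Tr exp(beta*G) as a sum over closed paths of basis states:
#     w(path) = beta^n / n! * G[i0,i1] * G[i1,i2] * ... * G[i_{n-1},i0].
# If every entry of G is >= 0 (a "stoquastic" H in this basis), every w is a
# nonnegative number that a sampler can treat as a probability; otherwise some
# w are negative, and all you can do is sample |w| and track the average sign.

def path_weights(H, beta, order=6):
    H = np.asarray(H, dtype=float)
    d = H.shape[0]
    G = H.diagonal().max() * np.eye(d) - H      # shift so diag(G) >= 0
    weights = [1.0] * d                         # n = 0 term: one per basis state
    for n in range(1, order + 1):
        pref = beta**n / math.factorial(n)
        for path in itertools.product(range(d), repeat=n):
            w = pref
            for a, b in zip(path, path[1:] + (path[0],)):
                w *= G[a, b]
            weights.append(w)
    return np.array(weights)

# Three basis states on a triangle (numbers invented for the example).  With
# off-diagonal elements -0.5 every path weight comes out nonnegative; with
# +0.5 the triangle is "frustrated": odd loops pick up a minus sign, and no
# local sign flip of individual basis states can fix all three bonds at once.
H_easy = np.array([[1.0, -0.5, -0.5], [-0.5, 1.0, -0.5], [-0.5, -0.5, 1.0]])
H_hard = np.array([[1.0, +0.5, +0.5], [+0.5, 1.0, +0.5], [+0.5, +0.5, 1.0]])

for name, H in [("sign-free", H_easy), ("sign problem", H_hard)]:
    w = path_weights(H, beta=1.0)
    print(name, "negative-weight paths:", int((w < 0).sum()),
          "average sign:", round(float(w.sum() / np.abs(w).sum()), 3))
```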
Anyway, this is all somewhat far from my expertise, but insofar as I understand the paper, it looks like a serious contribution to our understanding of the sign problem, and why local changes of basis can fail to get rid of it when QMC is used to simulate certain bosonic systems. It will surely interest QMC experts.
OK, but does any of this *prove that the universe isn’t a computer simulation*, as the popular articles claim (and as the original paper does not)?
It seems to me that, to get from here to there, you’d need to overcome **four huge difficulties**, any one of which would be fatal by itself, and which are logically independent of each other.
1. As a computer scientist, one thing that leapt out at me is that Ringel and Kovrizhin's paper is fundamentally about computational complexity—specifically, about which quantum systems can and can't be simulated in polynomial time on a classical computer—yet it's entirely innocent of the *language* and *tools* of complexity theory. There's no BQP, no QMA, no reduction-based hardness argument anywhere in sight, and no clearly-formulated request for one either. Instead, everything is phrased in terms of the failure of one specific algorithmic framework (namely QMC)—and within that framework, only "local" transformations of the physical degrees of freedom are considered, not nonlocal ones that could still be accessible to polynomial-time algorithms. Of course, one does whatever one needs to do to get a result. To their credit, the authors do seem aware that a language for discussing *all possible efficient algorithms* exists. They write, for example, of a "common understanding related to computational complexity classes" that some quantum systems are hard to simulate, and specifically of the existence of systems that support universal quantum computation. So rather than criticize the authors for this limitation of their work, I view their paper as a welcome invitation for closer collaboration between the quantum complexity theory and quantum Monte Carlo communities, which approach many of the same questions from extremely different angles. As official ambassador between the two communities, I nominate Matt Hastings.
2. OK, but even if the paper *did* address computational complexity head-on, about the most it could've said is that computer scientists generally *believe* that BPP≠BQP (i.e., that quantum computers can solve more decision problems in polynomial time than classical probabilistic ones); and that such separations are provable in the query complexity and communication complexity worlds; and that at any rate, quantum computers can solve *exact sampling problems* that are classically hard unless the polynomial hierarchy collapses (as pointed out in the BosonSampling paper, and independently by Bremner, Jozsa, Shepherd). Alas, until someone proves P≠PSPACE, there's no hope for an unconditional proof that quantum computers can't be efficiently simulated by classical ones. (Incidentally, the paper comments, "Establishing an obstruction to a classical simulation is a rather ill-defined task." I beg to differ: it's not ill-defined; it's just ridiculously hard!)
3. OK, but suppose it *were* proved that BPP≠BQP—and for good measure, suppose it were also experimentally demonstrated that scalable quantum computing is possible in our universe. Even then, one still wouldn't by any stretch have ruled out that the universe was a computer simulation! For as many of the people who emailed me asked themselves (but as the popular articles did not), why not just imagine that the universe is being simulated on a *quantum* computer? Like, duh?
4. Finally: even if, for some reason, we disallowed using a quantum computer to simulate the universe, that *still* wouldn't rule out the simulation hypothesis. For why couldn't God, using Her classical computer, spend a trillion years to simulate one second as subjectively perceived by us? After all, what is exponential time to She for whom all eternity is but an eyeblink?
Anyway, if it weren’t for **all four separate points above**, then sure, physicists would have now proved that we don’t live in the Matrix.
I do have a few questions of my own, for anyone who came here looking for my reaction to the ‘news’: *did you really need me to tell you all this?* How much of it would you have figured out on your own, just by comparing the headlines of the popular articles to the descriptions (however garbled) of what was actually done? How obvious does something need to be, before it no longer requires an ‘expert’ to certify it as such? If I write 500 posts like this one, will the 501st post basically just write itself?
Asking for a friend.
**Comment Policy:** I welcome discussion about the Ringel-Kovrizhin paper; what might've gone wrong with its popularization; QMC; the sign problem; the computational complexity of condensed-matter problems more generally; and the relevance or irrelevance of work on these topics to broader questions about the simulability of the universe. But as a little experiment in blog moderation, I *won't* allow comments that just philosophize in general about whether or not the universe is a simulation, without making further contact with the actual content of this post. We've already had the latter conversation here—probably, like, every week for the last decade—and I'm ready for something new.
Comment #1 October 3rd, 2017 at 4:32 pm
> did you really need me to tell you all this?
No, but, it *is* great to have somewhere to point less-clueful friends/relatives/popsci enthusiasts. I think posts like this are a great public service, so thanks for writing them!
I can understand getting tired of it, so I won’t fault you for giving up at some point, but until then I plan to take full advantage of this kind of summary.
Comment #2 October 3rd, 2017 at 4:58 pm
for what it’s worth, Dima is at Oxford, not Cambridge.
Comment #3 October 3rd, 2017 at 5:03 pm
Just #2: Thanks! Fixed. Google misled me: when you search for him, his Cambridge homepage shows up first.
Comment #4 October 3rd, 2017 at 7:06 pm
I wondered if the world was going to hassle you about this…
Concerning their *technical* conclusion: They are appealing to a popular formal analogy between thermal transport in chiral systems, and gravitational anomalies. But as I understand it, the chiral anomaly for quasiparticles is an artefact of a cutoff, it only appears in effective field theory above the cutoff scale. Does this affect the validity of their argument? (That question is to the world, not to Scott.)
Comment #5 October 3rd, 2017 at 7:12 pm
In answer to your specific survey question: I read the Cosmos article “Physicists find we’re not living in a computer simulation” and, despite knowing next to nothing about the effect described and interpreted, I came up with your difficulty 4. I didn’t need to know about the other difficulties, because difficulty 4 stands by itself. I know that Bounded Very Large Finite Numbers are very large indeed, and yet all of them are infinitely closer to zero than they are to omega. No matter how large BVLFNs are (bounded, say, by the total quantum information content of this visible universe), a finite classical computer can run a simulation using them in *some* imagined bounded universe.
And the required size of those BVLFNs can, if desired, quite possibly be contained in a classical computer in OUR universe (with your trillion-years-to-one-second time dilation), if all that is required is that reality need only be simulated, as you say, "as subjectively perceived by us"; the total (classical) information ever processed by all humans put together is really rather small.
Cheers, Paul
Comment #6 October 3rd, 2017 at 8:00 pm
So, it seems like my comment on this Slashdot thread https://slashdot.org/comments.pl?sid=11182851&cid=55293399 managed to channel you very effectively; I wrote it essentially imagining what you would have said. Hopefully I didn’t do such a terrible job (I certainly didn’t make point 4 you made).
Comment #7 October 3rd, 2017 at 9:29 pm
Mildly relevant, remember the very popular https://xkcd.com/505/
Are there any obstructions to that approach?
Comment #8 October 3rd, 2017 at 10:55 pm
i wonder if the general issue here (wrt popularization) is pervasive equivocation on what a ‘computer’ is. like when it comes to the possibility of AGI you get people talking past one-another about whether they’re referring to a prospective intelligence as being ‘a computer’, which is to say something essentially similar to a concrete kind of object they experience every day, as opposed to some realization of computation in the abstract sense.
when it comes to the simulation hypothesis we see this in popular discussions where people (non-experts) bring up quantum discreteness as a purported analogue for visual aliasing artifacts in video games. this is paradigmatic of the way these connections usually go: an incidental feature of computers-in-practice is equated to a (misunderstanding of) quantum-systems-in-principle.
in this frame, it’s obvious that (mis)taking the paper to mean “quantum systems can’t be simulated on quantum computers” is going to translate to “the universe can’t run a computer”, because a quantum computer is, in-frame, “quantum” first and a “computer” hardly at all. the “quantum” prefix obviates whatever follows, because a computer is something concrete, and “quantum” is a proxy for irreducible otherness.
i know at this point this probably just reads as lay-bashing, but, on the AGI side, i find it personally hard to read Penrose’s Orch-OR as anything but an extremely refined form of the same essential blockage.
Comment #9 October 3rd, 2017 at 11:29 pm
To your points 1-4, Scott, I’d add the following #5, which has come up here before: Even somehow finding that the Church-Turing thesis is false wouldn’t disprove the simulation hypothesis; it’d just mean that the universe simulating us had physics capable of doing uncomputable things as well.
Comment #10 October 4th, 2017 at 12:32 am
Scott, the answer is absolutely yes, people do need you to tell them these things, because the act of you doing so creates common knowledge about them in a way that counters the common knowledge created by fake news. You wrote the blog post on this and everything!
Comment #11 October 4th, 2017 at 4:41 am
Qiaochu #10: Aha, you’ve solved the mystery! And indeed, I feel like an idiot for not thinking of it.
Though maybe I repressed that solution from my consciousness, because of course it implies that I’ll keep needing to write this sort of post forever.
Comment #12 October 4th, 2017 at 5:29 am
In response to Qiaochu #10 and Scott #11: that suggests a way to reduce Scott’s workload:
Scott, you can just have someone else write the standard rebuttal like Joshua Zelinsky’s #6, and publish as a guest post on your blog. Less work and still almost the same common knowledge effect.
Comment #13 October 4th, 2017 at 5:36 am
I appreciate that that popular article explicitly links to the research article. (Ideally they should also give more details than just one link that might quickly become stale, but still, this is a good first step.) Some popular articles I’ve seen are much worse, as they refer to scientific articles only in ways that are difficult to track down.
Comment #14 October 4th, 2017 at 7:39 am
What might have gone wrong with this article’s popularization in The Daily Mail is best summarized by this YouTube video:
Comment #15 October 4th, 2017 at 8:40 am
Hadn’t heard of this, but read the abstract. It is clear that it does not support the popular interpretation. If we discover some manifestly uncomputable physical behavior of the universe I’d regard that as satisfying the “universe decidedly NOT a simulation” position.
On another subject, Scott did you hear of Vladimir Voevodsky’s passing? I’ve long been fascinated by HoTT and wonder what you think of the endeavor to change how mathematics is conducted to be more like programming with automated proof checkers? People are right now basically coding up famous mathematical proofs and assembling libraries of mathematical knowledge all checked with computer aided proofs. I would think this is right up your alley as it involves the combination of math and comp sci…
Comment #16 October 4th, 2017 at 8:46 am
Sniffoy #9: “the universe simulating us had physics capable of doing uncomputable things as well”
I think in that case our commonly understood definitions of “computer” and “simulation” no longer apply. Something decidedly *other* would be going on.
Comment #17 October 4th, 2017 at 9:04 am
Atreat: Yes, I was saddened to hear of Voevodsky’s passing. I never knew him, though I knew others who did. I don’t know his work, or indeed type theory in general, well enough to say anything intelligent.
My perspective is more like Sniffnoy’s: in some sense, the entire program of theoretical physics could be summed up as, “supposing the universe to be a simulation, what can we learn about the nature of the Simulator by examining it?” (The way Einstein put it was something like, “I want to know the Old One’s thoughts, whether He had any choice in creating the world.”)
Modern notions like “It from Bit,” and directly studying the universe’s computational and information storage capacities, just make this particularly clear and explicit, without changing it fundamentally.
Even if the physical Church-Turing Thesis turned out to be false, that would simply be another (especially weird and surprising) facet of the putative Simulator.
It’s ironic, of course, that asking whether or not the world “is” a simulation is such a scientifically empty question, whereas understanding what it would take to simulate the world is (to my mind) the highest aspiration of physics.
Comment #18 October 4th, 2017 at 9:51 am
Well, if we permit quantum simulations, then perhaps our universe is running the simulation of itself.
Comment #19 October 4th, 2017 at 11:05 am
If it is “unfalsifiable”, then it also falls short of a scientific hypothesis according to the approximate but flawed Popperian theory of science. However, positivism is IMO the correct foundation of science, and simulation argument does not contain a scientific hypothesis for entirely other reasons I tried to explain here:
https://examachine.net/blog/simulation-argument-does-not-contain-a-scientific-hypothesis/
The short summary is, I suppose, that whenever you talk about such mythological stuff with no evidence, when not even the slightest reason/observation exists for such an explanation, the explanation is bogus. There is no problem, hence no “solution” is needed.
I (strongly) refuted the simulation argument in this blog post written several years ago, which does contain the argument from quantum simulation time complexity (iterated in this paper, though it was not needed as Scott Aaronson already explained), but the really strong rejection is the argument from a priori probability, both of which you may find here.
https://examachine.net/blog/why-the-simulation-argument-is-invalid/
I am preparing a journal version of these essays (hitherto unpublished except for my blog), and I would welcome any scientific criticism.
Best Wishes,
Eray
Comment #20 October 4th, 2017 at 11:11 am
I don’t think any of us are in real disagreement. Should someone find the universe violating physical Church-Turing I think that would be the discovery of the millennia! I’d be awe struck and not at all interested in wasting time redefining an ultimately vacuous question.
Comment #21 October 4th, 2017 at 12:12 pm
>did you really need me to tell you all this?
Yes. (Well, not me, I looked at the headline, thought “Bullshit.”, and moved on. But for collective-you? Yes.) Two reasons:
1) There are always more idiots.
2) Even for people who have seen enough to draw their own (accurate) conclusions, their word won’t carry much weight with their excitable friends. Yours will. An expert endorsement of “Yeah this is dumb.” will be much more effective at discouraging irrational exuberance.
Comment #22 October 4th, 2017 at 12:38 pm
Love the summary – simulation of universe by quantum computing and computing technologies beyond it, an interesting endeavour on which the worldwide scientific community is already working so hard, is music for all who cares for vexing issue/s. Regards.
Comment #23 October 4th, 2017 at 1:28 pm
Scott, I think you’re overthinking it. To get “from here to there”, I present to you the following simple steps.
1) The first sentence of the abstract says: “It is believed that not all quantum systems can be simulated efficiently using classical computational resources.”
2) The world is quantum, right?
3) Then the universe is not a computer simulation!
QED
p.s.: Of course, I *did* enjoy your four-step breakdown.
Comment #24 October 4th, 2017 at 1:50 pm
TvD #23: Yeah, well, one person’s “overthinking it” is another’s “trying not to criminally underthink it.” 😉
Comment #25 October 4th, 2017 at 1:56 pm
Maris #14: That song is great; thanks! But alas, many more “reputable” outlets also ran this story.
Comment #26 October 4th, 2017 at 2:13 pm
Comment #7 – Turns out that the initial placement of all those rocks was …
Comment #27 October 4th, 2017 at 7:22 pm
Shmi #7: No, I see no obstructions whatsoever to the xkcd approach.
Zach #26: …did you mean to write more?
Comment #28 October 4th, 2017 at 7:27 pm
I’ve already got several comments in the moderation queue that just pontificate in general about the simulation hypothesis, in direct violation of my comment policy … please don’t!
Comment #29 October 4th, 2017 at 9:04 pm
One thought: most “serious” versions of the simulation hypothesis are things like Bostrom’s discussions of ancestor simulations, where a species might be interested in simulating their ancestors. In that context, if one did have sufficiently powerful complexity results, couldn’t one rule out some forms of ancestor simulations, at least? For example, if BQP really does take exponential time on a classical computer then we could at least rule out an ancestor simulation that was purely classical (since an ancestor simulation would be occurring in a universe with our own base physics). Similar remarks would apply for space-bounded resources.
Comment #30 October 4th, 2017 at 9:39 pm
Joshua #29: In Greg Egan’s Permutation City, and probably in various other SF works, people live in simulations running on classical computers that are perfectly good enough to render their everyday experience, and that simulate quantum mechanics (if they do) merely in a hacky, approximate, and as-needed basis. You could argue that one of the main scientific motivations for building scalable quantum computers is to test and (presumably…) refute the hypothesis that that’s the kind of world we’re living in!
You might say that that’s too wacky a motivation to appeal to anyone who actually funds experimental QC work, but I wouldn’t be completely sure. When I met Larry Page, it’s the first motivation he wanted to talk about.
Of course, even building scalable QCs will do nothing to rule out the possibility that our remote descendants are conjuring us (for some definition of “us”) into our present existence by simulating us on quantum computers … which, well, why wouldn’t they?
Comment #31 October 4th, 2017 at 9:51 pm
Scott #30,
“In Greg Egan’s Permutation City, and probably in various other SF works, people live in simulations running on classical computers that are perfectly good enough to render their everyday experience, and that simulate quantum mechanics (if they do) merely in a hacky, approximate, and as-needed basis. You could argue that one of the main scientific motivations for building scalable quantum computers is to test and (presumably…) refute the hypothesis that that’s the kind of world we’re living in!”
Great, so if they have systems to do more detailed computations when the humans are paying attention, we may end up using way too many resources and crash the system. Heck, that could be the Great Filter; if any given section of the universe develops intelligent life that uses too many resources, it just resets that portion of the universe to some simple initial state.
Comment #32 October 4th, 2017 at 10:15 pm
No, I didn’t need you to tell me the paper was misconstrued, but I enjoyed the post anyway. (As usual.)
Although not a falsification, it does seem to me it makes the possibility that we are in a simulation, which was infinitesimal already in my estimation, just a tad smaller.
Comment #33 October 4th, 2017 at 10:42 pm
JimV #32: The issue is, we already had general arguments for why quantum mechanics should be hard to simulate classically, and this paper doesn’t really add to them. Instead it explains why particular quantum systems (bosonic systems in their ground state with such-and-such additional properties) are hard to simulate using a particular method (QMC with local transformations). That’s what makes it so weird and arbitrary to latch on to this one particular paper when discussing the simulation hypothesis: if the hardness of classically simulating quantum systems is considered relevant at all, then why not take a much, much broader view of what we know about that?
Comment #34 October 5th, 2017 at 5:04 am
re comment policy: some questions you might ask: Is it substantive? Is it challenging? Is it new or not said enough?
Comment #35 October 5th, 2017 at 10:25 am
Scott and PDV #21,
Yes, it’s good to say this. This, in my mind, is perhaps the main function of prominent (usually senior) scientists in a field. Given any suggestion, good or inane, they reduce it to the core problem, then recap the history – “yes, researchers A, B, and C have looked at this, this is what they found, check these papers”, and summarize the conclusions. Or if you are lucky, “That’s an interesting suggestion – I don’t think anyone has looked at that.” The combination of thorough knowledge of a field, and the property that the answers can be trusted, makes everyone else much more productive.
I can’t count the number of times that a short answer from an expert, with the underlying reasoning, has saved me hours to days of puzzling over what would have either been a dead end or replication of known results.
Comment #36 October 5th, 2017 at 1:42 pm
On the somewhat more serious quantum computation front; have you seen this paper about better classical algorithms for BosonSampling?
http://www.nature.com/nphys/journal/vaop/ncurrent/full/nphys4270.html
Comment #37 October 5th, 2017 at 1:55 pm
I must be missing something, because the problem seems simpler. Isn’t the falsification claim a form of question begging?
Aren’t they doing something a little like:
Simulation Hypothesis: the perceived laws of physics might be part of a simulation, governing virtual objects and without necessary causal relationships to real physics.
Skepticism-skeptic: but the simulation’s laws of physics would never allow that!
Comment #38 October 5th, 2017 at 2:00 pm
This particular simulation hypothesis should be called ‘accurate simulation hypothesis’.
I am proposing alternate simulation hypothesis. We may be living in simulated reality R1 that is attempt to simulate underlying fundamental reality R0 but that simulation many not be accurate. Some of our physical laws may be simulation artifacts or even optimization shortcuts. The question is, can we differentiate between artifacts and the intended laws of physics and discover true physical laws of R0?
Comment #39 October 5th, 2017 at 2:23 pm
Scott, thank you for being a true scientist/logician.
I know you aren’t a big fan of classically-computed simulation theories, so I appreciate that when misinformed popular press articles try to close the door on these theories, you point out their flawed reasoning. Your defense seems almost ironic, but since you’re a man of truth and reason, I guess it makes perfect sense after all.
Viva la “Digital Physics” (the movie and the theories) ! 🙂
Questions: Do you think any continuous or infinite processes are observable in our universe? Do you think any continuous or infinite processes exist in our universe? Do you think the amount of information needed to describe the history of our universe is finite? Is it important to consider historical quantum entanglement if it has already “collapsed”? Thoughts on retro-causality? Thoughts on a block universe?
Comment #40 October 5th, 2017 at 8:17 pm
Craig #36: Yes, of course I’ve seen it, and I believe we’ve even discussed it previously on this blog. It’s a really nice result, and I’m thrilled they did it.
But it’s important to understand that, far from overturning standard wisdom about BosonSampling or whatever, it confirms a conjecture that Alex Arkhipov and I made since 2013 or so: namely, that the complexity of BosonSampling should increase exponentially, but “only” with the number n of photons, and not also with the number m of optical modes. We knew that that was true, at any rate, in the Haar-random case under probabilistic assumptions (because of rejection sampling), so that any counterexample would have to be really weird. The new work shows that it’s true in the general case under no assumptions.
Many of our experimental friends kept optimistically thinking that they could get more computational hardness merely by upping the number of modes, without having to deal with more photons simultaneously. So in conversations with them, we kept having to push back on that, and saying, “no, we have no evidence whatsoever that that’s true, even though we lack a rigorous proof that it’s not true. So please continue to concentrate on getting more photons!”
It’s nice to now have a rigorous proof to back us up. 🙂
Comment #41 October 5th, 2017 at 8:28 pm
Honest Annie, when you try to make a simulation hypothesis scientific, you will inadvertently have to invent a plausible multiverse hypothesis of the type where our universe is a kind of a subspace of a larger space though that’s still a speculative concept. IOW, the complexity would be decreased to exclude a Boltzmann-brain like posit. In its place, we would have to posit an evolutionary mechanism, which we might call “multiversal evolution”. I liken that scenario to a Virtual Machine. Virtual Machines are likely to evolve in an open-ended physical system with rich dynamics, because they will have self-consistent rules that preserve their bubbles, it turns out that is not a hypothesis that assumes itself, the general hypothesis is called “self organization” by some authors, though there are many names for it in the literature across several fields. Note that there is no generally agreed theory of multiverse in physics, some physicists would point to MWI, some others would talk about each universe having its own set of laws, and some would say they exist in a different temporal direction, and/or distant region of cosmos. Interestingly, this theory was pondered in the great science fiction classic Star Maker (which had an unnecessary theological bent), but the basic idea was that our universe evolved from lower dimensional manifolds. A hypothesis to ponder? Surely, computability was inherent to the very first manifold whatever it was, but it is entirely plausible that there are regions of the cosmos with different physical regimes. What’s more important is that any such scenario is much more plausible than an intelligent designer, which is shaven by Occam’s razor. Could it still be true? Sure, but you’d have to show some strong evidence for it, such evidence doesn’t exist, though I’d say that we have almost strong evidence for a (quantum) computable universe. That would be computational physics, not theology.
I have an answer to your question, though, I think that with enough data we would be able to ascertain the laws of the root manifold (the very first quantum vacuum, or whatever that is). That question is equivalent to asking whether a theory of everything is possible. “I can’t see strings, can I still find the correct theory of strings, or disprove them?”. It’s the very same question. There is nothing fundamental in the science of epistemology (AI) that tells us we cannot infer the hardest and most complete theory in science. No foundational barrier. We should be able to find it.
Comment #42 October 5th, 2017 at 8:38 pm
Eray #40, and Annie: Apologies, but this is getting too close to “general philosophizing about the simulation hypothesis, which could’ve just as well gone in any post on the topic rather than this one.” So if you want to continue in this vein, please do so elsewhere (I’m fine for you to post a link).
Comment #43 October 5th, 2017 at 10:49 pm
Just for the record, I did no philosophizing in my unpublished post. I gave a compliment, made a little movie plug, and asked a few interesting questions. Oh well…
Comment #44 October 5th, 2017 at 11:53 pm
Jon K.: Sorry, your comment got caught in my spam filter for some reason. It’s now published.
Do you think any continuous or infinite processes are observable in our universe? Do you think any continuous or infinite processes exist in our universe?
That depends entirely on what you mean by terms like “exist” and “are observable.” My best guess is that the history of the observable universe is well-described by quantum mechanics in a finite-dimensional Hilbert space (specifically, about exp(10^122) dimensions). If so, then the outcome of any measurement would be discrete; you’d never directly observe any quantity that was fundamentally continuous. But the amplitudes, which you’d need to calculate the probabilities of one measurement outcome versus another one, would be complex numbers with nothing to discretize them.
Do you think the amount of information needed to describe the history of our universe is finite?
Again, on the view above, the amount of information needed to describe any given observer’s experience is finite (at most ~10^122 bits). And the amount of quantum information contained in the state of the universe (i.e., the number of qubits) is also finite. But the amount of classical information needed to describe the quantum state of the universe (something that no one directly observes) could well be infinite.
Is it important to consider historical quantum entanglement if it has already “collapsed”?
Is that question any different from just asking whether someone believes the Many-Worlds Interpretation? If not, then see my previous posts on the subject.
Thoughts on retro-causality? Thoughts on a block universe?
A lot of my thoughts about such matters are in my Ghost in the Quantum Turing Machine essay. I guess some people might call the freebit picture “retrocausal,” in a certain sense, although it denies the possibility of cycles in the causal graph of the world.
In any case, the usual motivations for retrocausality—namely, to get rid of the need for quantum entanglement, and to restore the “time symmetry” that’s broken by the special initial state at the Big Bang—I regard as complete, total red herrings. Retrocausality doesn’t help anyway in explaining quantum phenomena (what’s a “retrocausal explanation” for how Shor’s algorithm works, that adds the tiniest sliver of insight to the usual explanation, and that wouldn’t if true imply that quantum computers can do much more than they actually can?). And I’ve never seen any reason why our universe shouldn’t have a special initial state but no special final state. Life is all about broken symmetries; a maximally symmetric world is also a maximally boring one.
Comment #45 October 6th, 2017 at 12:57 am
Scott’s Point #4 (or perhaps an extra point beyond #4) goes much further and does not really require any reference to computation. If the laws of physics, and the laws of computer science and mathematics, are not really the governing rules of the universe but just the rules of the computer simulation (or dream or whatever) that we live in, then we cannot apply scientific reasoning to refute living in a simulation, any more than we can for other, older supernatural claims.
A point that I find interesting (related to Scott’s #3) regarding a matrix-type simulations of humans (even a single human) is that it requires quantum fault-tolerance. The same applies to even more “mundane” tasks regarding predictability of humans and teleportation of humans. All these tasks seem to require quantum fault-tolerance.
(Three more points: the need for quantum fault tolerance to emulate complex quantum systems does not require that these systems demonstrate superior (beyond P) computation. The task of matrix-type simulations of individuals, as well as predicting and teleporting them would be extremely difficult even with quantum fault-tolerance. And, finally, there is nothing special here about “humans” and the same applies to sufficiently interesting small quantum systems based on non-interacting bosons or qubits.)
Comment #46 October 6th, 2017 at 1:29 am
Gil #45: Yes, I tried to order the points from “specific to this paper,” to “specific to our current state of knowledge where we can’t prove P≠PSPACE,” all the way down to “general-purpose refutation that logically applies forever to any purported disproof of the simulation hypothesis.”
Comment #47 October 7th, 2017 at 10:02 am
[…] Segundo Scott Aaronson, um cientista da computação que trabalha com computação quântica, o que os cientistas descobriram não tem relação com a hipótese do universo como uma simulação …. Em conclusão, a hipótese da simulação não foi falseada e permanece […]
Comment #48 October 10th, 2017 at 8:14 am
Sniffoy #9: “the universe simulating us had physics capable of doing uncomputable things as well”
Or maybe there’s no computation going on and the simulation is just a big stack of sheets of paper, each with a really long integer written on it, representing one possible state of the universe, and related states/sheets just “know” each other (forming consistent spacetimes).
Comment #49 October 11th, 2017 at 10:22 am
Scott: I’m mostly in agreement with your four points, but I think you’ve missed a technical detail of the simulationist argument that makes lower-bound complexity results connect with their arguments in a way that prevents them from escaping e.g. by appealing to god-like entities.
Nick Bostrom’s simulation argument is perhaps the most popular form of that sort of thing right now. Bostrom uses a funny argument to try to establish that there are more simulated people like us than non-simulated people like us, with the idea that if we accept the premises that establish that, we should guess that we’re among the simulated people. The argument relies on the assumption that the simulations in question are “ancestor simulations” which is to say that each universe being simulated is similar (in physics and scale) to the universe that contains it, and the folks running the simulation are much like us.
So Bostrom-style simulationists can’t appeal to nigh-infinite godlike entities. Curiously, the “ancestor simulation” constraint makes that form of the hypothesis effectively falsifiable – all we have to do is prove that you can’t simulate a universe like our own (w.r.t. physics and scale) from within a universe like our own and the argument fails.
So one could take a sufficiently strong lower bound on the complexity of simulating certain physical interactions as a strike against the Bostrom-style simulation argument. If we’d need a universe worth of memory to effectively simulate a few handfuls of electrons, then the universe simulations don’t scale at the rate they’d need to in order to establish that there are likely more simulated humans than non-simulated humans. Ergo, this general sort of result does potentially make problems for certain forms of the simulation argument.
Comment #50 October 11th, 2017 at 11:56 am
There is no evidence for simulation other than arguments people have invented and their greater or lesser appeal to our minds. The inability of classical systems to simulate quantum systems is irrelevant to the Bostrom simulation hypothesis, which assumes the world is just CGI jiggered up in whatever is the most efficient way to fill in our experience as ancestors of the sysadmins. The whole point of this argument is that this requires many orders of magnitude less compute than a full simulation of the physics would. Enough that anyone would bother with ancestor simulations (not as obvious a pursuit to me as it must be to Bostrom) and still have panoplies of crunch for every other purpose, assuming gargantuan 3D nanotech computers are possible. And no, you wouldn’t need quantum computing; all you’d have to do is make sure any experiments the human sims set up yield the right results. Your software might miss this sometimes, but then, if a QC-inconsistent result were reported, you could make sure it never happens again, and nobody would believe the original claimant.
But seriously, this is all just bar talk. It’s very anthropocentric for a hypothesis about the fundamental nature of reality. The arguments invoke and wield concepts that we know we don’t entirely grasp or aren’t entirely sure make sense. It’s terribly suspicious that we’re discussing this at a time when people are increasingly immersed in the “virtual reality” of video games, the internet and apps for all things, when the computer almost does loom as a deity over our lives, increasingly so, and we have all this apocalyptic mythology about the Singularity and so on. And it’s just another version of It’s All Fake (or it could be, anyway), an argument that subsumes Descartes Demon, conspiracy theory and paranoia of every description, which feeds on itself because of its psychological, emotional, symbolic significance in the absence of any decisive refutation.
What is the appeal of thinking, “It could be that the whole world is a big computer game”? That the game world is as real as the real world? Is this appealing to some people? Or does this sort of contemplation just distance one from everyday concerns and the demands of other people? In the case of Bostrom, it’s pretty clear that this comes out of the transhumanist idea of uploading to the cloud and becoming a big fat brain. And then there’s that clever argument.
Comment #51 October 23rd, 2017 at 8:16 pm
Not sure if this passes your comment legitimacy threshold (feel free to delete if not), but a few days ago I posted a question on Physics Stack Exchange about the “real-world” complexity/difficulty of quantum equilibration, which may have been inspired by this post (I frankly don’t remember): https://physics.stackexchange.com/questions/363653/do-systems-of-fermions-take-longer-to-equilibrate-than-systems-of-bosons-for-com
Comment #52 April 21st, 2019 at 1:33 am
[…] information on Van Vu’s series of lectures. Van Vu’s home page; Related posts: did physicists really just prove that the universe is not a computer simulation—that we can’t be… (Shtetl-Optimized); A related 2012 post on What’s […]
Comment #53 April 30th, 2020 at 10:20 pm
[…] da computação que trabalha com computação quântica, que afirmou em seu blog que “o que os cientistas descobriram não tem relação com a hipótese do universo como uma simulação …“. Em conclusão, “a hipótese da simulação não foi falseada e permanece […] | true | true | true | By email, by Twitter, even as the world was convulsed by tragedy, the inquiries poured in yesterday about a different topic entirely: Scott, did physicists really just prove that the universe is no… | 2024-10-12 00:00:00 | 2017-10-03 00:00:00 | article | scottaaronson.blog | Shtetl-Optimized | null | null |
|
14,664,244 | http://www.querybag.com | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
9,868,860 | http://stoddsblog.com/2015/07/10/thank-you-microsoft-and-goodbye/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
17,322,745 | https://medium.com/@woj_ciech/betabot-still-alive-with-multi-stage-packing-fbe8ef211d39 | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
28,843,310 | https://mathvault.ca/hub/higher-math/math-symbols/logic-symbols/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
9,260,123 | http://boingboing.net/2015/03/24/tie-fighter-80s-anime-style.html | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
18,975,092 | https://illinoistrafficstops.com/ | Racial Disparities in Illinois Traffic Stops | null | ## Racial Disparities in Illinois Traffic Stops
#### Across the state of Illinois, Latinx and black drivers are searched and subjected to dog sniff tests at significantly higher rates than white drivers when stopped by Illinois law enforcement. Yet when officers search cars, minority drivers are no more likely to actually have contraband than white drivers.
(Note: statewide, Latinx drivers are subjected to dog sniff tests at a lower rate than white drivers. However, in the agencies with statistically significant differences between white and Latinx dog sniff rates, dogs are used in stops involving Latinx drivers at higher rates. This is explained in more detail below.)
Clearly, there is a problem. We hope this report will serve as a resource to help law enforcement agencies make informed improvements around racial disparities for the good of their officers and the people they serve.
Since 2004, the Illinois Traffic and Pedestrian Stop Statistical Study Act has required Illinois law enforcement to document and report traffic stops to the Illinois Department of Transportation. The data reveal the effectiveness and unintended consequences of law enforcement tactics and allow agencies to compare themselves to each other.
## Traffic Stops
In 2017, over two million traffic stops occurred in Illinois. (Some law enforcement agencies did not submit data to the Illinois Department of Transportation; only agencies that submitted data are included below.) After each stop, a law enforcement officer filled out this form, recording the details of the traffic stop, including:
- The driver's race
- The reason the driver was stopped
- Whether or not the officer conducted a search
- Whether or not contraband (illegal drugs, weapons, etc.) was found during the search
- What action resulted (e.g. a citation/ticket, written warning, verbal warning)
**[Interactive chart of traffic stops in 2017. Shading changes in this plot do not indicate significance, but rather highlight a subset of the data and put the metric into the context of the whole population. Each square represents ~ stops.]**
*Filter charts and text by selecting an agency.*
**[Selected agency] conducted traffic stops involving black, Latinx, Asian, or white drivers in 2017.** (Note: American Indian / Alaska Native, Native Hawaiian, and Pacific Islander are also listed on the traffic stops form. These races were left out here, not because they were seen as less important, but because the counts reported for these races were mostly too small to check for any sort of significance.) **Of these stops:**
- () involved a black driver,
- () involved a Latinx driver,
- () involved an Asian driver, and
- () involved a white driver.
People often want to compare the demographics of those stopped by law enforcement to the demographics of the residential population to calculate a “stop rate” (that is, the metric calculated by dividing a race’s stopped population by its driving population, assuming the driving population were known). Assuming the driving population mirrors the residential population, an unbiased department should stop people of each race at about the same rate they appear in the population. However, this approach is not as straightforward as it may seem. First, the actual driving population, as well as the associated demographics, are unknown. Second, race is recorded differently on the traffic stop form than on the census, making comparisons difficult. In short, a “stop rate” is impossible to calculate accurately.
| | Black | Latinx | Asian | White |
|---|---|---|---|---|
| Stop Count | | | | |
| Stop Percentage | | | | |
(Note: stop percentage refers to the percentage of the total stops that involved a driver of this race.)
In order to focus on metrics that are directly measurable—rather than guessing at demographics tracked using different systems and criteria—**we compare the outcomes by race for people who have already been stopped**. By choosing this focus, we do not mean to suggest that there are no racial disparities in who gets stopped. Agencies, especially those with high stop counts, should still consider their residential population when examining their data for bias.
Every traffic stop can produce several possible outcomes. For example, the officer may or may not search the driver. Officers may record different reasons to justify searches. They may receive **consent** from the driver to conduct a search, or they may establish probable cause, including through a **dog sniff alert**. During a search, the officer may or may not find contraband. The officer could also give a verbal warning, a written warning, or a citation.
We explore some of these outcomes below. *Select an agency above and follow along*.
## Consent Searches
During a traffic stop, an officer may ask permission to search your car. If you agree, this is called a "consent search." Unlike other searches that require officers to identify some suspicion of a crime, the decision to conduct a consent search is left to the subjective judgment of the officer. Supervisors and courts do not review those decisions. These subjective, unreviewed decisions raise concerns about racial bias, whether conscious or unconscious.
Some argue that the rate at which drivers are searched (or asked to be searched) is not necessarily an indicator of discrimination because officers may base their decisions on evidence that the driver has contraband, and individuals in different racial groups may carry contraband at different rates. So let's compare the rates at which contraband was found (from now on referred to as "contraband hit rates") by Illinois law enforcement agencies during consent searches.
**First, are minority drivers any more likely to be found with contraband?** Darker bubbles represent agencies where the differences in hit rates are statistically significant. (Find an explanation of statistical significance and how we’ve chosen to display it here.) As you can see below, the minority contraband hit rates of most agencies do not significantly differ from white contraband hit rates.
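For readers who want to see what a test of this kind can look like, here is a minimal Python sketch of a two-proportion z-test on contraband hit rates. The counts are made up for illustration, and this is not necessarily the exact methodology behind the chart described above.

```python
# Illustrative only: a two-proportion z-test comparing contraband hit rates
# between two groups of searched drivers. The counts are made-up placeholders,
# not data from any real agency, and not necessarily the test used by this site.
import math

def two_proportion_z(hits_a, searches_a, hits_b, searches_b):
    p_a = hits_a / searches_a                                 # hit rate, group A
    p_b = hits_b / searches_b                                 # hit rate, group B
    p_pool = (hits_a + hits_b) / (searches_a + searches_b)    # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / searches_a + 1 / searches_b))
    return (p_a - p_b) / se

# Hypothetical example: 40 hits in 200 searches vs. 55 hits in 220 searches.
z = two_proportion_z(40, 200, 55, 220)
print(round(z, 2))  # roughly, |z| > 1.96 corresponds to significance at the 5% level
```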
The contraband hit rate data provides **no evidence that any agency should search minority drivers more than white drivers**. In fact, when minority hit rates do differ significantly from white rates, they are lower. In other words, **for departments with statistically significant differences, officers are more likely to find contraband on white drivers than black or Latinx drivers**.
Next, are there racial disparities in officers' requests to search drivers?
In 2017, black and Latinx drivers across Illinois were asked to consent to a search **1.5x** and **1.2x** more often, respectively, than white drivers. **Some agencies requested consent to search black and Latinx drivers 2x, 3x, and upwards of 11x more often than white drivers.**
**Across Illinois, officers are more likely to ask black and Latinx drivers to consent to a search, even though they have not been shown to be more likely to possess contraband.**
## Dog Sniffs
The next outcome we explore is whether there are there racial disparities in officers' decisions to use a dog to sniff a vehicle during a traffic stop.
An officer's decision to use a dog during a traffic stop is another subjective decision and, as with consent searches, **officers are more likely to use a dog during stops of black drivers**. Across the state, Latinx drivers are subjected to dog sniff tests at a lower overall rate than white drivers, but **in the agencies with statistically significant differences between white and Latinx dog sniff rates, dog sniffs are used in stops involving Latinx drivers at higher rates**.
**These patterns in the use of consent searches and dog sniff tests illustrate the disparate impact of policing on minority communities.**
## Citations
A traffic stop generally results in one of three outcomes:
- Citation / ticket
- Written Warning
- Verbal Warning
In an ideal world, officers would issue tickets or warnings at approximately equal rates among racial groups of people who are stopped. However, interpreting discrepancies in these rates requires nuance. In the aggregate, patterns of issuing tickets to black and Latinx drivers, but not white drivers, may indicate racial discrimination. On the other hand, patterns of officers frequently stopping minority drivers but finding nothing to ticket may indicate racial profiling.
Drivers themselves would likely prefer that an officer give them a warning rather than a ticket. But high rates of officers only issuing warnings could be an indication that officers are using potential violations of traffic laws as pretext to pull people over. These stops result in fewer tickets because the real motivation behind the stop is to look in the car or conduct a search. Read more on this practice here.
There are often stark racial differences in the decision to issue a ticket or a warning. These disparate outcomes—whether more tickets are issued to white drivers compared to black and Latinx drivers—warrant further investigation.
This website is an ongoing project. We are continuing to analyze this data, and will soon update this site with other metrics.
## Key Takeaways
Across Illinois:
- When law enforcement officers search a car, they are typically no more likely to find contraband on a minority driver than a white driver.
- Yet, Latinx and black drivers are searched at higher rates than white drivers.
- Black drivers are also more likely to be subjected to dog sniff tests than white drivers.
- In the agencies with statistically significant differences between white and Latinx dog sniff rates, dog sniffs are used in stops involving Latinx drivers at higher rates.
- Across Illinois, citation rates are mixed. There are patterns of both very low and very high citation rates for black and Latinx drivers, with a large number of law enforcement agencies citing minority drivers at significantly higher rates than white drivers.
## Further Analysis and Data Collection
**The Illinois Traffic and Pedestrian Stop Statistical Study Act is set to expire in July of 2019**. This session there will be a bill introduced to continue collection of this data. We urge you to support the continuation of this data collection by calling your representatives. **UPDATE:** We're very happy to announce that a new bill that will require the permanent continued collection of this data by law enforcement was signed into law by Governor Pritzker in June 2019!
This website does not explore all the information packed into this data, much of which is quite nuanced. We hope to continue to expand this website with new, explorable plots as the latest data becomes available.
If you'd like to explore the full data set, you can find it here. Please feel free to reach out to us if you find anything interesting. | true | true | true | Across the state of Illinois, Latinx and black drivers are searched, subjected to dog-sniff tests, and given citations at significantly higher rates than white drivers when stopped by law enforcement. Yet when officers search cars, minority drivers are no more likely to actually have contraband than white drivers. | 2024-10-12 00:00:00 | 2015-01-01 00:00:00 | website | justds.org | justds.org | null | null |
29,379,926 | https://filipnikolovski.com/posts/thoughts-on-microservices/ | Some thoughts on Microservices | null | # Some thoughts on Microservices
I know that the topic of microservices has been discussed over and over again; I just want to add my two cents to the pile, based on my experience with this approach to designing web apps:
- A lot of people believe that the microservices architecture solves software problems of a scaling and performance nature. But the most important problem that it solves is an organizational one.
- Conway’s law is always in play. When you think about how the software that you build should look, you need to think about how your organizational structure should look. They always go hand in hand.
- If you are a single team, then a design involving multiple moving pieces does not make a lot of sense from that perspective. Who should take ownership over each component? How can the services be truly independent from one another? You will inevitably entangle them just because it’s more convenient that way, and end up with something that resembles a microservice architecture but in reality is more of a “distributed monolith”. Starting with a microservices architecture is the wrong play that a lot of people seem to make. The structure of the software always ends up looking like the structure of the organization. It is inevitable.
- Never start with a microservice architecture if you have a single team.
- As the organization scales, though, and more people are added to the team, it becomes the right time to question the current design of the architecture.
- As the team grows, it will become more and more difficult to manage it. Breaking this big team apart into multiple, smaller, independent teams is a natural step going forward. Then the question that we should ask is: how can those teams take full ownership over the parts of the product that they are responsible for? How can we make the teams take control over their own “destinies”?
- For a team to be truly independent, it needs full decision-making power in every layer of the stack: from the UI/UX, to the APIs that the backend is going to expose, all the way down to the infrastructure that will power the whole thing. Doing this as a single monolithic application is certainly feasible, but then the teams will need to be synchronized in their development process, especially in the deployment phase. They will step on each other’s toes constantly. Thus, a need will arise to create an architecture that will mirror the organizational one. Microservices solve this exact problem - scaling the teams.
- The services need to be composable, and play well with each other, just as you would create composable modules in a monolith. Breaking it apart and simply sticking a web server in front of it won’t save you.
- For features that span multiple domains, a clear ownership of the data, as well as clear and consistent APIs are a must, otherwise you risk complicating the relationship between the services that are involved. Defining these boundaries is the responsibility of the teams that will develop this feature. The communication between the services should reflect the communication of the teams. | true | true | true | null | 2024-10-12 00:00:00 | 2021-11-29 00:00:00 | null | null | null | null | null | null |
33,037,575 | https://jjy.luxferre.top/ | Here you can sync your radio-controlled watch or clock. For free. | null | # Here you can sync your radio-controlled watch or clock. For free.
Welcome to Fukushima. Well, kinda… **Press any key or click anywhere to start the transmission.**
## What is that high-frequency noise in the background?
This is the JJY time signal being transmitted through your speakers at its third harmonic (because 40 KHz is impossible to emit directly). Just plug in the headphones, make sure the clock on your device is in sync, load this page and start reception on the watch. Place the watch near the speakers/headphones and you should be good.
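For the curious, the keying scheme itself can be sketched as a per-second amplitude envelope. The pulse widths below are the standard JJY ones as far as I can tell, and the reduced amplitude level is only an assumption; none of this is taken from JJY.js itself, which is JavaScript (the sketch here is Python, purely for illustration).

```python
# Sketch of JJY amplitude keying for a single second, assuming the standard
# pulse widths (0.8 s of full carrier for a "0" bit, 0.5 s for a "1", 0.2 s for
# a marker) and an assumed reduced level of 10% for the rest of the second.
HIGH, LOW = 1.0, 0.1  # relative carrier amplitudes (LOW is an assumption here)

def envelope(symbol: str, t: float) -> float:
    """Carrier amplitude at time t seconds into the current second."""
    high_duration = {"0": 0.8, "1": 0.5, "M": 0.2}[symbol]
    return HIGH if t < high_duration else LOW

# Amplitude of a "1" bit sampled every 0.1 s across one second
print([envelope("1", t / 10) for t in range(10)])
```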
The page is called Fukushima because the station that transmits original JJY signal at 40 KHz frequency is located there.
## Why would I need this?
This was mostly made for fun and experimental purposes but if you don't want to manually sync your watch for some reason and are living too far away from any longwave time transmitters, this might really come in handy.
## How do I actually start the reception?
Well, refer to the manual of your watch. On most watches, you'll have to change your home city to Tokyo. However, on most Casio Waveceptor/G-Shock branded watches you just need to enter the (undocumented) reception test menu (press Mode+Light+Receive simultaneously), select "J 40" reception mode with lower right button and start it with the upper right button. This way you don't need to fiddle around with your home timezone and will still get the emulated signal.
## Wouldn't this mess up my local time?
As long as the time on your device/browser is set correctly (and synced recently - via NTP, for example), it shouldn’t. Moreover, this particular page (as well as its underlying library) was created just because all existing JJY emulation solutions **do** mess up your local time if your device timezone isn’t Japanese. In contrast to them, regardless of which timezone you’re in, this service **always** transcodes local time into the correct Japanese time and transmits it accordingly.
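To make that transcoding step concrete, here is a minimal sketch of the idea (JJY.js itself is JavaScript; this Python snippet is purely illustrative): take the device's correctly set local time and re-express it as Japan Standard Time before encoding it into the timecode.

```python
# Illustrative sketch (not JJY.js code): convert the device's local time to
# Japan Standard Time, which is what actually gets encoded into the JJY timecode.
from datetime import datetime
from zoneinfo import ZoneInfo

def to_jst(local_dt: datetime) -> datetime:
    """Re-express an aware local datetime as Asia/Tokyo time."""
    return local_dt.astimezone(ZoneInfo("Asia/Tokyo"))

now_jst = to_jst(datetime.now().astimezone())  # aware "now" in the device's zone
print(now_jst.strftime("%Y-%m-%d %H:%M:%S %Z"))
```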
## What are the dependencies?
This service is completely client-side and relies on the JJY.js library. Of course, Web Audio API support is required and Performance API support is highly recommended (for more or less precise clocking). Major modern desktop and smartphone browsers support both APIs, so you don’t have to worry about it. Moreover, the entire library is implemented in the ECMAScript 5 standard, so you don’t have to worry about ES6 support either.
## I see that you're emitting sine waves. How is this ever expected to work? And why don't you use built-in sine (square, triangle etc) oscillator?
**TL;DR: this is not a pure sine wave. Pure sine (square, triangle etc) doesn't work. This is an artificially emulated waveform similar to the 16-bit signed PCM sine wave which was used in original Japanese simulator apps long ago.**
In fact, these are very good questions that don’t have simple answers. This may seem to contradict any known laws of physics, but here are your facts. The original JJY simulator apps (for Windows, Android and iOS) were all aimed at 16-bit signed PCM output. Their radio emission principle was based on this research (originally in Japanese). In short, all these apps were modulating the signal on the assumption that overdriven sine wave distortion would create enough power for its 3rd harmonic to be clearly transmitted via sound hardware. The Web Audio API, however, operates solely on 32-bit float output, allowing us to build waveforms pure enough for any general-purpose usage. And it turned out that an overdriven pure float sine simply does not produce the desired distortion effect. The same goes for either a normal or an overdriven square: the power of its 3rd harmonic is just not enough to be clearly received by Casio watches. So the conclusion was shocking: **it’s the imperfection of the sine wave that actually causes the necessary signal distortion when ramping up the gain!** Since the Japanese researchers didn’t seem to have any other output options at the time, they might easily have missed this fact.
Nevertheless, the simplest option as of now was to emulate the original 16-bit signed PCM signal from float sine values by multiplying them by 32767, flooring and dividing back by 32767. All of a sudden, it started working after this. Finding an even more optimal distortion level (substituting 32767 with something else; e.g. 19683 was found to be even better when using an external USB speaker) is an entirely new field of research (since we’re not limited by integer “bitness” these days and are operating on a float range), but for now the quantization level of ⅓ of the AudioContext sampling rate was chosen. I.e. if your Web Audio API context sampling rate is 44100, the distorter parameter value will be equal to 14700.
However, if you’re brave enough to have read down to here, here’s my reward for you. The recent JJY.js version exposes a `distorter` parameter, and you can experiment with it yourself. Just append `#number` to this page’s URL. Example with 19683. Feel free to find the distorter parameter that syncs your watch the fastest. The distorter value must be an integer or float greater than or equal to 2. If the constraint fails, the default of ⅓ of the sampling rate applies.
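As a rough illustration of the arithmetic described above (again, JJY.js is JavaScript; this Python sketch only mimics the quantization step), a float sine at one third of 40 KHz can be coarsened by multiplying by the distorter value, flooring, and dividing back:

```python
# Illustrative sketch of the quantization trick described above (not JJY.js code).
# A float sine at ~13.33 kHz (one third of the 40 kHz JJY carrier) is coarsened
# by multiplying by the "distorter" value, flooring, and dividing back, which
# emulates the old 16-bit signed PCM output and adds the needed harmonic content.
import math

SAMPLE_RATE = 44100
DISTORTER = SAMPLE_RATE / 3      # default mentioned above: 14700 at a 44100 Hz rate
FREQ = 40000 / 3                 # audible third of the 40 kHz carrier

def quantized_sine(n_samples, gain=1.0):
    out = []
    for i in range(n_samples):
        s = gain * math.sin(2 * math.pi * FREQ * i / SAMPLE_RATE)
        s = max(-1.0, min(1.0, s))                     # clip if the gain overdrives
        out.append(math.floor(s * DISTORTER) / DISTORTER)
    return out

print(quantized_sine(5))
```

Swapping the default of 14700 for 32767 or 19683 reproduces the other quantization levels mentioned above.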
## Is there any offline app for any platform utilizing your findings?
Nope, and there are no plans to make one. However, there are some plans to make a more “app-like” version of this page that would not allow any experimental customizations but would include highly-optimized code and leverage the HTML5 offline application cache. So when you visit that version once in non-incognito mode, you’ll be able to return to it even when you’re offline. Stay tuned!
## Why isn't my watch/clock picking up the signal, regardless how I place it or what speakers/headphones I use?
It's not recommended to switch tabs while syncing because it can and will affect the precision timer code. Unfortunately, browser runtimes are very far from realtime systems. So, while all possible effort is made to emit the signal precisely each second (calculating setTimeout deltas, signal prebuffering etc), user's help is still needed to maintain this precision. Which is, in this case, as simple as keeping this tab open.
But if you're constantly here and are sure that all the reception is set up correctly but are not getting even stable L2 signal, then you're probably looking at the "beta" version of the service right now. This page is usually reuploaded whenever a new JJY.js version is tested. But if you found that this page is not capable of emitting sufficient sound whereas underlying library (JJY.js) is, feel free to contact the author on GitHub.
## Are there any plans of making JJY60/WWVB/MSF/DCF77 emulators?
Let's break it down this way:
JJY60 — definitely no. All JJY-enabled watches support both JJY40 and JJY60, so I see no need in duplicating the effort by finding a different modulation method.
DCF77 — I really wish I could but most probably no. 77.5 KHz signal cannot be easily emulated under 2nd or 3rd harmonics within audible range.
MSF — yes, that's probably the best candidate which could be worked on next. It's as relevant for me as DCF77 but is much more realistic to implement because of 60 KHz transmission. It also has a more straightforward-encodable timecode information. It's going to be fun and more practical since all Casio watches capable of DCF77 are also capable of MSF reception for the same set of time zones.
WWVB — probably yes but only after MSF emulator is complete. By the way, its timecode is very similar to JJY. | true | true | true | null | 2024-10-12 00:00:00 | null | null | null | null | null | null | null |
15,421,514 | https://www.wsj.com/articles/where-amazon-is-failing-to-dominate-hollywood-1507282205 | Where Amazon Is Failing to Dominate: Hollywood | Ben Fritz; Joe Flint | ET
On the Monday morning after Amazon.com Inc. failed to win a single prize at the Primetime Emmy Awards, one of its senior television executives gathered his dejected staff at their Los Angeles-area office for a pep talk.
While Amazon Studios went home empty-handed, its streaming rivals Hulu and Netflix Inc. won multiple awards. Additionally, Amazon had earlier passed on opportunities to bid on “The Handmaid’s Tale” and “Big Little Lies,” which won top awards that night, said people with knowledge of the sales process.
Copyright ©2024 Dow Jones & Company, Inc. All Rights Reserved. 87990cbe856818d5eddac44c7b1cdeb8 | true | true | true | Amazon Studios has been stumbling when it comes to producing content that attracts audiences and buzz. It has alienated top producers and is struggling to redefine its strategy. | 2024-10-12 00:00:00 | 2017-10-06 00:00:00 | article | wsj.com | The Wall Street Journal | null | null |
|
40,825,407 | https://www.thecity.nyc/2024/06/19/basketball-hoops-blacksmiths-welding/ | The Hands Behind New York City’s Hoop Dreams | Haidee Chu | *This story is part of Summer & THE CITY, our weekly newsletter made to help you enjoy — and survive — the hottest time in the five boroughs.** Sign up here**.*
It’s orange, round, and 18 inches across. It’s in every New York City public basketball court, yet most aren’t exactly the same.
At dozens of courts across the five boroughs, countless players have chucked millions of balls at, into and sometimes through hundreds of steel hoops handmade by a handful of blacksmiths in a Parks Department workshop.
“The circle doesn’t just form itself — we form it,” said 36-year-old blacksmith Giovanni Romano. “Is there a machine that can probably make it? I mean, yeah … But this is just the way it’s been done.”
Inside a red, metal-lined shack hidden from plain sight along the Flushing Meadows Corona Park rim, a large welding table anchors the workshop where Romano and three other Parks blacksmiths fabricate basketball hoops for city parks the old-fashioned way.
Nearby, a large American flag adorns the wall in the space where the four native New Yorkers also make barbecue grills and lifeguard chairs for public parks, beaches and pools.
“When I was a little kid just driving through the city with my parents and seeing construction sites and big cranes, I would go, ‘Oh wow, look at that,’” Romano said. “Then I became an ironworker, and I did that. And then I came here, and we’re doing rims, we’re doing pools.”
Romano and 43-year-old Rob Valenti had met as ironworkers on city buildings and bridges, while 40-year-old Andre Emilien helped put together the city’s gas pipes. Chris Claiborne, 44, worked on gratings and railings at NYCHA developments. But all of them managed to find their way to the shack, the birthplace of many city hoops.
The rims begin with a solid, cylindrical steel stock that’s cut to a length of 58.5 inches. Then it is fed and cranked through a roller, where indented wheels bend it into a crescent shape. The crescent then lands on an anvil mounted atop the trunk of a tree cut down in the Flushing park, and a blacksmith hammers it into a near-perfect hoop before transferring it over to a welding table.
Orange sparks danced off that table as a sharp blue light marked where Romano joined the hoop onto slabs of metal, which the blacksmiths had cut and shaped to form an anchor that would be bolted onto a backboard.
The process can burn your eyes without proper protective gear, Valenti warned. “It’s like an ultraviolet light,” Claiborne added. “It’s like the solar eclipse — a mini version of it.”
“It feels good thinking and knowing that they’re in the park, and kids are using it,” Valenti said of the handmade rims. “A lot of great basketball players that came out of New York played on these hoops, so that’s pretty cool.”
**An Endangered Craft**
In the nearly 140 years since the game of basketball was first introduced to New York City, it’s become the city’s quintessential sport — one that needs nothing more than a ball to bounce on flat ground along with a backboard and a rim.
But without the rim, there’s no game at all.
The handmade rims the city blacksmiths make are notoriously unforgiving, with their solid steel sending missed shots flying.
“I did not like playing basketball growing up,” said Emilien, who played in uptown Manhattan as a kid. “You don’t really put any thought into that unless you play basketball for real or you end up making them — and now I know why those balls can fly so far.”
But the rims he makes are becoming rarities as the Parks Department now puts up cheaper, factory-manufactured rings at new street courts and those receiving full renovations — turning to handmade hoops only when the old ones need replacing.
That means the shop no longer makes as many hoops as it once did. The blacksmiths had made about 30 of them so far this year, Valenti estimated, and those will withstand all kinds of use, like when New Yorkers use them as dead-hang or pull-up bars.
“The lifespan of a basketball hoop could be a hundred years,” said Romano. “The only way they come down is if someone takes them down, or if they’re repairing a park. It’s not breaking from just wear and tear.”
On a spring Tuesday, a crew of four maintenance workers installed eight new, pre-manufactured rims across four freshly painted basketball courts at Dr. Charles R. Drew Park in South Jamaica.
But where did the old rims go?
“The dumpster,” said Chris Gruver, a 54-year-old supervisor of mechanics. Another supervisor chimed in, chuckling: “Either that or the recycling plant.”
**‘They Were Terrible’**
At the West Fourth Street Courts in Greenwich Village, the rims are now machine jobs. But some players and spectators still carry with them memories of the blacksmiths’ handmade hoops — even the famously challenging double-rimmed ones the Parks Department discontinued decades ago.
That includes 70-year-old Vincent Matos, one of the “old heads” who remains a near-daily presence at the court that’s commonly known as the Cage.
Matos had spent much of his youth sharpening his shot on handmade hoops across the city, he said — from Co-Op City and Webster Avenue in The Bronx to Spanish Harlem and as far down as Battery Park in Manhattan.
These days, Matos runs a basketball team — for “young inner city teenagers committed to rising above their adversities” — called the New York Sky Risers.
“Basketball was a tool,” to relieve pressure and build community, said Matos. “And now it’s a way for me to communicate with the youth — to save lives, even.”
Across the court, 24-year-old Travis Elmore, a recently graduated Division-2 college player, awaited a worthy match.
The six-foot-six-inch power forward grew up playing on the courts of Harlem, he said, brushing up his skills on the handmade, no-net, double-rimmed hoops that hurled failed shots aggressively in unpredictable directions.
“Oh my gosh, they were terrible. I hate playing on the double rims,” Elmore recalled. But knowing those had been hand-made, he said, “makes me want to take care of the rims a lot more, you know?”
As for Matos, he recalled learning to play ball at eight years old while growing up in an orphanage in Rockland County and then turning to the city courts for an “escape from reality” as he grew up in a group home in The Bronx.
“This is an escape for a lot of people,” he said. “You can come here and be whoever you want. It doesn’t matter, because you get treated the same. Here, you are all equal.” | true | true | true | These blacksmiths put the baskets in basketball, creating the notoriously unforgiving steel rims that send missed shots flying in city parks. | 2024-10-12 00:00:00 | 2024-06-19 00:00:00 | article | thecity.nyc | THE CITY - NYC News | null | null |
|
21,400,224 | https://mitxela.com/projects/concepts | concepts - projects | null |
Talking Tines
Progress: Concept
Tuning forks that talk
29 Jun 2024
Simsim Pendant
Progress: Concept
Mercurial Medallion
5 Mar 2024
Hand-cranked Flux Capacitor
Progress: Concept
Control the flow of time
19 Dec 2023
Analemmagraph
Progress: Concept
Video of a day, one year in every frame
26 Sep 2023
Ethereal Ball of String
Progress: Concept
Another entanglement
20 Jun 2023
Endless Hourglass
Progress: Concept
A timeless illusion
13 Feb 2023
Digital Hit Camera
Progress: Concept
Dreams of a tiny toy
11 Feb 2023
Jack Sparrow's Compass
Progress: Design Stages
To show me where the crow flies.
19 Aug 2022
Thor's Piano
Progress: Concept
For electrifying music
11 Aug 2022
Lest Lint be Lost
Progress: Concept
Another episode of Laundry Zen
18 Feb 2022
Custard Die
Progress: Design Stages
Settable loaded dice
3 Feb 2022
Many Wings Make Flight Work
Progress: Concept
Flock of mechanical birds
2 Oct 2021
Shamir's Password Store
Progress: Concept
Distributed password manager
20 Mar 2021
Electric Accordion
Progress: Design Stages
Folding instrument, scissors style
19 Mar 2021
Alpha Pendant
Progress: Concept
An eternal heartbeat
6 Jul 2020
Joule's Domestic Hot Water Supply
Progress: Concept
Spiritual successor to Joule's Kettle
29 Apr 2020
Human-readable 2D Barcode
Progress: Concept
QR-code-like symbol, readable by both humans and machines
20 Mar 2020
Entanglograph
Progress: Concept
Entangled graphs, oh!
5 Mar 2020
Geiger Chimes
Progress: Concept
A radioactive jingle
11 Feb 2020
Clockwork Etch-a-Sketch
Progress: Concept
Mechanical drawing toy
8 Feb 2020
Swiss Roll Nuclear Reactor
Progress: Concept
Yummy
2 Sep 2019
Intracardiac Adrenaline Injection Alarm Clock
Progress: Concept
For those who really, really struggle to get out of bed
16 May 2019
Bubblewrap Popping Assistance Tool
Progress: Concept
Tool for assistance in the popping of bubblewrap
16 May 2019
Socks Machine
Progress: Concept
It's a Socks Machine
31 Jan 2019
Double Helix Fountain
Progress: Concept
Moist
31 Jan 2019
Heatshrink Socks
Progress: Concept
Sock Woes Begone
31 Jan 2019
Van der Bed
Progress: Concept
An Electrostatic Night's Sleep
31 Jan 2019
Hot Server
Progress: Concept
Computers, that are hot, on purpose
31 Jan 2019
Automatic Thermostatic Imbiber
Progress: Concept
A drinking straw. May be difficult to clean.
27 Jan 2019
Autotune Slide Whistle
Progress: Concept
Finally, the war on tonality has ended
28 Sep 2018
Ramset Bikelock
Progress: Concept
I think this might be my best bike-lock idea yet.
23 Jun 2018
Penrose Pixels
Progress: Demo
Aperiodic arrangement of image elements
28 May 2018
One-Way Glass
Progress: Concept
Things can be seen through it one way, but not the other.
16 May 2018
Personal Globus
Progress: Concept
Model Earth
27 Feb 2018
Homing Hubcaps
Progress: Concept
Method for mitigating missing wheel rims
7 Nov 2017
AsciiCam
Progress: Demo
Terminal Vision
1 Nov 2017
Bowtie Headphones
Progress: Concept
Triangulastic
11 Sep 2017
Pothole Fingerprints
Progress: Concept
GPS based on road texture
8 Jan 2017
Hourglass Mechanical Clock
Progress: Animated
Mechanically measures the moment the sand slide stops.
22 Feb 2016
Geocentric Orrery Sundial Watch
Progress: Concept
It's always useful to keep a tiny orrery in your pocket.
23 Jan 2016
Flying Car
Progress: Concept
"It is not the car that flies, it is only yourself" ~ bald kid from the matrix.
23 Jan 2016
Shoe Horn
Progress: Concept
Foot-Mounted Stethoscope
23 Jan 2016
Mitre Saw Nut Cracker
Progress: Concept
For kernel extraction.
25 Nov 2015
Polar Coordinate Etch-a-Sketch
Progress: Demo
Escape the Cartesian system!
19 Nov 2015
Barometric Toaster
Progress: Concept
Hyperbaric Toasting Apparatus
16 Nov 2015
Slinky Roomba
Progress: Concept
To boldly go where no roomba has gone before.
28 Oct 2015
Steampunk Mug
Progress: Concept
But not a pillock with gears on his hat.
25 Oct 2015
Wind-up Boots
Progress: Concept
Toasty toes
18 Oct 2015
Biometric Toaster
Progress: Concept
Advanced toaster with fingerprint scanner.
6 Oct 2015
Louvre Timer
Progress: Concept
Colour-changing kitchen timer
5 Oct 2015
Solid Fuel Lighter II
Progress: Concept
Semiautomatic
23 Sep 2015
Electrostatic Boots
Progress: Concept
High Voltage Footwear
10 Sep 2015
Giant Loppers
Progress: Concept
For hardcore lopping.
19 Jul 2015
Tube Amp Camera
Progress: Concept
Non-linear.
1 May 2015
Clockwork Battery
Progress: Concept
Retro Device Retrofit
27 Apr 2015
Inert Gas Kitchen Tap
Progress: Concept
Argon/CO2 mix from a flexible hose, located somewhere between the stove and the fridge.
14 Apr 2015
Variocentric Orrery
Progress: Demo
'Jeeves, set my orrery to the Tychonic system, post-haste!'
30 Dec 2014
Decent Electric Hob
Progress: Concept
A simple addition that would make the bloody things usable.
11 Oct 2014
Total Balance
Progress: Concept
Full Solid Angle WB Meter
24 Aug 2014
The Physical Jpeg
Progress: Animated
Interactive art installation
30 Jun 2014
Joule's Kettle
Progress: Concept
Splash.
29 Jun 2014
Dambusters Play Set
Progress: Concept
Re-enact the bouncing-bomb raids in miniature
23 Jun 2014
Mortar Conveyance Tubes
Progress: Concept
"Pneumatic" delivery system.
26 May 2014
Engine Chorus
Progress: Concept
Growling duality.
7 May 2014
Whisk Sharpening Kit
Progress: Concept
New, from the Acme Utensil-Conversion Equipment Co.
29 Mar 2014
Resonance Time
Progress: Concept
Hear the Hour at Night
25 Mar 2014
The Megalomaniac's Melodica
Progress: Concept
Dynamics!
26 Feb 2014
South Facing Ear Trumpet
Progress: Concept
South Facing Ear Trumpet Hat
23 Feb 2014
Steam Calculator
Progress: Concept
Digital!
22 Feb 2014
Bernoulli's Barbecue Tongs
Progress: Concept
Bafflement and Bodgery Begone!
18 Feb 2014
Emergency Autorotating Escape Hat
Progress: Concept
Positively Ludicrous, Inspired by Nature
15 Feb 2014
Russian Roulette on the Putting Green
Progress: Concept
Putt putt putt putt BANG
9 Feb 2014
Pencil Lathe
Progress: Concept
Miniature machine exclusively for sharpening stationery.
1 Feb 2014
Let Water Flow Uphill
Progress: Concept
An exciting experiment in hypothetical hydrodynamics
25 Jan 2014
Self-heating Zipper
Progress: Concept
Fed up with your sleeping bag's cold metal zip as you get in? The solution is here...
3 Jan 2014
Monster Induction Metal-breathing Jet Engine Ion Drive
Progress: Concept
Proper Spaceship Propulsion
16 Oct 2013
Exercise Mower
Progress: Concept
For those proclived to exercising and mowing.
11 Aug 2013
Velodrome in the Sky
Progress: Concept
Pedal power a plane
25 Jun 2013
Wifi Shredder
Progress: Concept
To be left inconspicuously around the office
23 May 2013
Inductive Automatic Watch Winder
Progress: Concept
Wirelessly wind an automatic watch.
25 Apr 2013
Old Timey Self-Checkout
Progress: Concept
A moment of calm amidst the incessant ramming of targeted advertisements into our brains.
22 Apr 2013
Viola Bridge Transducer
Progress: Experimenting
New sounds resonating your way soon.
16 Jun 2012
Time Travelling Computer
Progress: Concept
Not as useless as you'd think.
14 Jun 2012
Sophisticated Sundial
Progress: Concept
Accuracy, clarity, and maybe mobility
2 Feb 2012
A Self Pumping Fountain
Progress: Concept
Of purely mechanical means.
28 Sep 2011
Tiltrotor Autogyro Quadcopter Hybrid
Progress: Concept
A new kind of aircraft.
17 Feb 2011
Water Elevator
Progress: Concept
Redefining luxury.
17 Feb 2011
Hoover MP3 Player
Progress: Concept
Practicality and style
5 Feb 2011
Executive Document Fastener
Progress: Concept
Staples are so plebeian.
20 Nov 2010
Moore's Pocket Calculator
Progress: Concept
Celebrate technological advancements with your own calculator collection
26 May 2010
Frame Lock
Progress: Concept
Bicycle anti-theft design
20 Apr 2010
Intravenous Coffee Alarm
Progress: Concept
A refreshing way to wake up.
18 Feb 2010
Solid Fuel Lighter
Progress: Concept
Candle and flint in a zippo-style case
6 Jan 2010
KiloVolt Toilet
Progress: Concept
For men with awful aim
28 Aug 2009
Shearing pin tumbler lock
Progress: Design Stages
Segmented pins provide added annoyance for lock pickers.
11 Jun 2009
Mirror Blinds
Progress: Concept
Cheap, manual solar thermostat
9 Jun 2009
Nanoworld Domination
Progress: Concept
Doom and gloom for the nanopixies.
13 May 2009
Lawn Knitter
Progress: Concept
Making it easier to be green.
4 May 2009
Motherboard Irrigation
Progress: Concept
The lazy man's way to clean a PC.
6 Apr 2009
PeopleWheel
Progress: Concept
Power from people the easy way
1 Apr 2009
Thermochromic House Paint
Progress: Concept
For temperature control.
24 Mar 2009
Single Room Lighting
Progress: Concept
Energy saving without the effort.
20 Mar 2009
Lenticular Traffic Lights
Progress: Concept
To reduce confusion
12 Jan 2009
Portable Personal Private Planetarium
Progress: Concept
The perfect pocket peripheral.
19 Nov 2008
Pot Luck
Progress: Half Baked.
Who needs Pot Noodle, when you can have Pot Luck? Just add water.
29 Oct 2008
Nitinol Tent Poles
Progress: Concept
Memory wire self erecting tent.
19 Oct 2008
Junkyard Mech Walker
Progress: Concept
Overkill cure for paranoia.
30 Aug 2008
Shuttle launch from a blimp
Progress: Concept
A way to save fuel?
9 Aug 2008
Rowboat Wars
Progress: Concept
Water borne bumper cars.
14 Jul 2008
Stirling engine in the cooling system of a car
Progress: Concept
A way of dramatically increasing engine efficiency.
5 Jul 2008
The Hydroelectric Calculator
Progress: Concept
Pioneering in renewable energy, I present the latest advancement that will save the planet.
5 Jul 2008
The Mountain Railway Morning Brew
Progress: Design Stages
Finally, an excuse to permanently lay model railway track all through my house.
5 Jul 2008
Thought experiment: the light-second rod.
Progress: Solved
Questioning our perception of movement.
5 Jul 2008
The Offshore Golf Course
Progress: Concept
The ultimate way to waste $100 million.
1 Jul 2008
All Projects | true | true | true | null | 2024-10-12 00:00:00 | 2024-06-29 00:00:00 | null | null | null | mitxela.com | null | null |
37,132,915 | https://www.wired.com/story/prosecraft-backlash-writers-ai/ | Why the Great AI Backlash Came for a Tiny Startup You’ve Probably Never Heard Of | Kate Knibbs | Hari Kunzru wasn’t looking for a fight. On August 7, the Brooklyn-based writer sat on the subway, scrolling through social media. He noticed several authors grumbling about a linguistic analysis site called Prosecraft. It provided breakdowns of writing and narrative styles for more than 25,000 titles, offering linguistic statistics like adverb count and ranking word choices according to how “vivid” or “passive” they appeared. Kunzru pulled up the Prosecraft website and checked to see whether any of his work appeared. Yep. There it was. *White Tears*, 2017. According to Prosecraft, in the 61st percentile for “vividness.”
Kunzru was irked enough to add his own voice to the rising Prosecraft protest. He wasn’t mad about the analysis itself. But he strongly suspected that the founder, Benji Smith, had obtained his catalog without paying for it. “It seemed very clear to me that he couldn’t have assembled this database in any legal way,” he says. (And Kunzru is no stranger to thinking about these issues; in addition to his successful career as a novelist, he has a past life as a WIRED writer.)
“This company Prosecraft appears to have stolen a lot of books, trained an AI, and are now offering a service based on that data,” Kunzru tweeted. “I did not consent to this use of my work.”
His message went viral. So did a plea from horror writer Zachary Rosenberg, who addressed Benji Smith directly, demanding that his work be removed from the site. Like Kunzru, he’d heard about Prosecraft and found himself upset when he discovered his work analyzed on it. “It felt rather violating,” Rosenberg says.
Hundreds of other authors chimed in. Some had harsh words for Smith: “Entitled techbro.” “Soulless troll.” “Scavenger.” “Shitstain.” “Bloody hemorrhoid.”
Others pondered legal action. The Author’s Guild was inundated with requests for assistance. “The emails just kept coming in,” says Mary Rasenberger, its CEO. “People reacted really strongly.” Prosecraft received hundreds of cease-and-desist letters within 24 hours.
By the end of the day, Prosecraft was kaput. (Smith deleted everything and apologized.) But the intense reaction it provoked is telling: The great AI backlash is in full swing.
Prosecraft’s founder didn’t see the controversy coming.
On Monday, Benji Smith had recently returned to his home in a small town just outside of Portland, Oregon.
He’d spent the weekend at a gratitude meditation conference, and he was excited to return to work. Until this past May, Smith had held a full-time job as a software engineer, but he’d quit to focus on his startup, a desktop word processor aimed at literary types, called Shaxpir. (Yes, pronounced “Shakespeare.”) Shaxpir doesn’t make much money—not enough to cover its cloud expenses yet, Smith says, less than $10,000 annually—but he’d been feeling optimistic about it.
Prosecraft, which Smith launched in 2017, was a side hustle within a side hustle. As a stand-alone website it offered linguistic analysis on novels for free. Smith also used the Prosecraft database for tools within the paid version of Shaxpir, so it did have a commercial purpose.
Although he was anointed the ur-tech bro of the week, Smith doesn’t have much VC slickness. He’s a walking Portlandia stereotype, with piercings and bird tattoos and stubble; he talks effusively about the art of storytelling, like he’s auditioning for the role of a superfan of *The Moth*. A self-described theater kid, Smith dabbled in playwriting before getting his first tech gig at a computational linguistics company.
The idea for Prosecraft, he says, came from his habit of counting the words in books he admired while he was working on a memoir about surviving the 2012 *Costa Concordia* shipwreck. (“*Eat Pray Love* is 110,000 words,” he says.) He thought other authors might find this type of analysis helpful, and he developed some algorithms using his computational linguistics training. He created a submissions process so writers could add their own work to his database; he hoped it would someday make up the bulk of his library. (All in all, around a hundred authors submitted to Prosecraft over the years.) It did not occur to Smith that Prosecraft would end up enraging many of the very people he wanted to impress.
Prosecraft did not train off any large language models. It was not a generative AI product at all, but something much simpler. More than anything else, it resembled the kind of tool an especially devoted and slightly corny computational linguistics graduate student might whip up as an A+ final project. But it appears to share something crucial with most of the AI projects making headlines these days: It trained on a massive set of data scraped from the internet without regard to possible copyright infringement issues.
Smith saw this as a grimy means to a justifiable end. He doesn’t defend his behavior now—“I understand why everyone is upset”—but wants to explain how he defended it to himself at the time. “What I believed would happen in the long run is that, if I could show people this thing, that people would say, ‘Wow, that's so cool and it's never been done before. And it's so fun and useful and interesting.’ And then people would submit their manuscripts willfully and generously, and publishers would want to have their books on Prosecraft,” he says. “But there was no way to convey what this thing could be without building it first. So I went about getting the data the only way that I knew how—which was, it's all there on the internet.”
Smith didn’t buy the books he analyzed. He got most of them from book-pirating websites. It’s something he alluded to in the apology note he posted when he took Prosecraft down, and it’s something he’ll admit if you ask, although he seems bewildered about how mad people are about it. (“Would people be less angry with me if I bought a copy of each of these books?” Smith wonders out loud as we talk over Zoom. “Yes,” I say.) The practice of using shadow libraries to conduct scholarly work has been debated for years, with projects like Sci-Hub and Libgen disseminating academic papers and books to the applause of many researchers who believe, as the old adage goes, that information wants to be free.
Many of the authors who chastised Smith, like Kunzru, disapprove primarily of this pirated database. Or, more specifically, they hate the idea of trying to make money off work derived from a pirated library as opposed to simply conducting research. “I’m not against all data scraping,” Devin Madson says. “I know a lot of academics in digital humanities, and they do scrape a lot of data.” Madson was one of the first people to contact Smith to complain about Prosecraft last week. What rubbed her the wrong way was the attempt to profit from the analytical tools developed with scraped data. (Madson also more broadly disapproves of AI writing tools, including Grammarly, for, as she sees it, encouraging the homogenization of literary style.)
Not every author opposed Prosecraft, despite how it appeared on social media. MJ Javani was delighted when he saw that Prosecraft had a page about his first novel. “As a matter of fact, I dare say, I may have paid for this analysis if it had not been provided for free by Prosecraft,” he says. He does not agree with the decision to take the site down. “I think it was a great idea,” Daniela Zamudio, a writer who submitted her work, says.
Even supporters have caveats about that pirated library, though. Zamudio, for instance, understands why people are upset about the piracy but hopes the site will come back using a submissions-based database.
The moral case against Prosecraft is clear-cut: The books were pirated. Authors who oppose book pirating have a straightforward argument against Smith’s project.
But did Smith deserve all that blowback? “I think he needed to be called out,” Kunzru says. “He maybe didn't fully understand the sensitivity right now, you know, in the context of the WGA strike and the focus on large language models and various other forms of machine learning.”
Others aren’t so sure. Publishing industry analyst Thad McIlroy doesn’t approve of data scraping, either. “Pirate libraries are not a good thing,” he says. But he sees the backlash against Prosecraft as majorly misguided. His term? “Shrieking hysteria.”
And some copyright experts have watched the furor with their jaws near the ground. While the argument against piracy is simple to follow, they are skeptical that Prosecraft could’ve been taken to court successfully.
Matthew Sag, a law professor at Emory University, thinks Smith could’ve mounted a successful defense of his project by invoking fair use, a doctrine allowing use of copyrighted materials without permission under certain circumstances, like parody or writing a book review. Fair use is a common defense against claims of copyright infringement within the US, and it’s been embraced by tech companies. It’s a “murky and ill-defined” area of the law, says intellectual property lawyer Bhamati Viswanathan, who wrote a book on copyright and creative arts. Which makes questions of what does or does not constitute fair use equally murky and ill-defined, even if it’s derived from pirated sources.
Sag, along with several other experts I spoke with, pointed to the Google Books and HathiTrust cases as precedent—two examples of the courts ruling in favor of projects that uploaded snippets of books online without obtaining the copyright holders’ permission, determining that they constituted fair use. “I think that the reasons that people are upset really don't have anything to do with this poor guy,” says Sag. “I think it has to do with everything else that’s going on.”
Earlier this summer, a number of celebrities joined a high-profile class action against OpenAI, a suit that alleges that the generative AI company trained its large language model on shadow libraries. Sarah Silverman, one of the plaintiffs, alleges OpenAI scraped her memoir *Bedwetter* in this way. While the emotional appeal behind the lawsuit is considerable, its legal merits are a matter of debate within the copyright community. It’s not widely viewed as a slam dunk by any means. It’s not even clear a court will find that the source of the books is relevant to the fair-use question, in the same way that you couldn’t sue a writer for copying your plot on the grounds that they shoplifted a copy of your book.
Rasenberger strongly supports enforcing copyright protections for authors. “If we don't start putting guardrails up, then we will diminish the entire publishing ecosystem,” she says. Rasenberger cites the recent US Supreme Court decision on whether some of Andy Warhol’s artwork infringed on copyright as evidence that the legal system may be reining in its interpretation of fair use. Still, she sees the legal question as unsettled. “What feels fair to an author isn't always going to align with the current fair-use law,” Rasenberger says.
“Prosecraft is a little guy who got swept up in a much bigger thing—he’s collateral damage,” says Bill Rosenblatt, a technologist who studies copyright.
Rosenblatt is fascinated by how far public opinion on copyright and data has shifted since the days of Napster. “Twenty years ago, Big Tech positioned this as ‘it's us against the big evil book publishers, movie studios, record labels,’” Rosenblatt says. Now the dynamic is strikingly different—the tech companies are the Goliaths of business, with artists, musicians, and writers attempting to rein them in. While Prosecraft might’ve been viewed more sympathetically in an earlier era, today it is seen as ideologically aligned with Big Tech, no matter how small it actually is.
Smith offered the same service for five years without issue—but at a moment when writers and artists are deeply wary of artificial intelligence, Prosecraft suddenly looked suspicious in this new context. An AI company only in the loosest sense of the term, Prosecraft wasn’t so much low-hanging fruit as it was a random cucumber on the ground *near* the fruit tree. Was there something rotten about it? Yes, sure. But describing it as collateral damage isn’t inaccurate. The real targets of the AI backlash that swept Prosecraft away are the generative AI companies that are currently the toast of Silicon Valley, as well as the corporations planning to use those generative AI tools to replace human creative work.
A year from now, it’s unlikely people will remember this particular social-media-fueled controversy. Smith acquiesced to his critics quickly, and a little-used, small-potatoes analytics tool is now defunct. But this incident is illustrative of a larger cultural turn against the unauthorized use of creative work in training models. In this specific case, writers scored an easy victory against one dude in Oregon with a shaky grasp on the concept of passive voice.
I suspect the reason so many prominent voices celebrated so loudly is because the larger ongoing fights will be much longer, and much harder to win. The Hollywood writer’s strike, with the Writers Guild of America demanding that studios negotiate over the use of AI, is the longest strike of its kind since 1988. The OpenAI lawsuit is another attempt to wrest back control; as mentioned, it is likely to be a far harder fight to win considering fair-use precedence.
In the meantime, writers are also moving to create their own individual guardrails for how generative AI can use their work. Kunzru, for example, recently negotiated a publishing contract and asked to add a clause specifying that his work not be used to train large language models. His publisher cooperated.
Kunzru is far from the only author interested in gaining control over how LLMs train on his work. Many writers negotiating contracts are asking to include AI clauses. Some aren’t having the smoothest experiences. “There's been a huge amount of pushback against AI clauses in contracts,” Madson says.
Literary agent Anne Tibbets has seen a surge in interest from writers in recent months, with many clients in contract negotiations asking to include an AI clause. Some publishers tend to be slow to respond, debating the most appropriate language.
Others aren’t interested in any form of compromise for this potential new revenue stream: “There are some publishers who are flat-out refusing to include language at all,” Tibbets says. Meanwhile, agencies are already hiring consultants specifically to guide their AI policies—a sign that they are well-aware that this conflict isn’t going away. | true | true | true | A literary analytics project called Prosecraft has shuttered after backlash from the writing community. It's a harbinger of a bigger cultural tide shift. | 2024-10-12 00:00:00 | 2023-08-14 00:00:00 | article | wired.com | WIRED | null | null |
|
3,471,405 | http://gun.io/blog/dirtyshare-pure-javascript-peer-to-peer-p2p-file-sharing-nodejs-socketio/ | Hire Dedicated JavaScript Developers | Hire JavaScript Programmers | Gun.io | null | # Hire dedicated JavaScript developers for full-time or part-time jobs
At Gun.io, we know that hiring dedicated JavaScript developers can be a challenge. The process can be long and painstaking. Typically, it also requires a developer from your team to pause their work and manage the vetting.
Some good news—we can help. By tapping into our network of JavaScript experts, we help companies like yours hire dedicated JavaScript developers for full-time and part-time positions. Instead of algorithms and non-technical recruiters, Gun.io’s vetting and matching are run by a team of senior developers, so you can trust the JavaScript programmers you hire.
## Why Choose Gun.io
### Hire JavaScript programmers vetted by our team of senior engineers
Developers on our platform go through a rigorous screening process. This screening includes an algorithmic screening, a background check, and a live technical interview with one of our senior engineers. As a result, approximately 100 developers get to work with Gun.io clients each month out of 1,000 who apply.
### Hire fast
Hire dedicated JavaScript developers in 13 days or less (our average time-to-hire).
### Hire from a trusted pool of talent
Hundreds of companies – from startups to Fortune 500s – have been served through our platform. What’s more, 70% of currently engaged JavaScript developers have 10+ years of experience.
## Hire JavaScript Experts
## How to Hire Dedicated JavaScript Developers with Gun.io
### 1. Build your ideal candidate
Tell us the skills you require and chat with our talent team.
### 2. Receive your candidates
In 3-5 business days, you’ll be sent a shortlist of top developers for your role. There are no job posts or stacks of resumes to review—just a shortlist of great matches ready to work.
You can then decide which candidates you’d like to chat with, and we’ll arrange the intro calls. One of our team will also sit in to help answer any questions.
### 3. Get started
Once contracts are in place and you hire a JavaScript programmer, we’ll connect you. You can then work together as you see fit. We also support both parties as needed and handle hours tracking, billing, and payments.
At Gun.io, we want you to be 100% satisfied, so we’ll help you hire another freelance JavaScript developer if you have any problems.
## Hire top Javascript developers
### What is JavaScript?
JavaScript is a programming language that helps developers create dynamic content on websites. For example, JavaScript lets developers show or hide additional information at the click of a button, play audio and video on a webpage, and display animations.
JavaScript also lets developers use open-source libraries like jQuery and React and work with popular frameworks like Vue.js, Express.js, and AngularJS. This ease of use means JavaScript has many advantages over other languages, such as greater flexibility.
For these reasons, JavaScript is one of the most popular programming languages in the world. What’s more, it’s still used by companies like Microsoft and Google, despite its invention almost three decades ago.
### What do JavaScript developers do?
JavaScript developers are responsible for designing and creating applications that run on JavaScript. You can think of them as the interior designers of web development. They layer and build JavaScript on top of existing HTML and CSS code to allow for interactions on sites.
Typically, front-end web developers use JavaScript. However, since the release of Node.js, developers can also do back-end programming.
The role of a JavaScript developer will vary across organizations. For example, a developer may be responsible for the functionality of a whole website, or they may focus on one or two pages.
Either way, it’s challenging work, and JavaScript developers need a whole set of skills to be effective.
### What skills should a JavaScript developer have?
Starting a project can be daunting. However, figuring out what type of JavaScript developer you need can make the process a whole lot smoother.
If you’ve already started a project, you know what technologies and frameworks you need. You can list those requirements to hire a top programmer. However, if you’re starting from scratch, you’ll need to determine which languages, libraries, and frameworks will work best for your project.
There are three types of JavaScript developers: front-end, back-end, and full-stack. Front-end developers create user interfaces, while back-end developers handle server-side components. Full-stack developers can do both.
#### When to hire a front-end JavaScript developer:
If you want your site to look and function as desired, you need a front-end developer. They use CSS, HTML, and JavaScript to create user interfaces. They can also handle everything from static sites to fully interactive ones.
In addition, front-end developers can work with back-end developers to build RESTful APIs and adaptive HTML5/CSS3 layouts.
#### When to hire a back-end JavaScript developer:
A back-end developer can connect the data you receive to your user interface. They’re responsible for databases and APIs and focus on server-side functionality.
In addition, back-end programmers develop business logic, work with databases, and test and debug application components.
#### When to hire a full-stack developer:
If you’re building an app or website from scratch, a full-stack developer can do both front-end and back-end development. However, while they know both areas, that doesn’t necessarily mean they’re experts.
### How much does it cost to hire dedicated JavaScript developers with Gun.io?
We get a lot of questions about how much it’ll cost to hire a dedicated JavaScript developer (or three) for your team. The short answer is this: If you have a budget, we can design a solution to suit your needs. But planning for a future headcount is much easier when you have numbers and data.
Consider this example. You have an existing app. You just need to add new features, update software, and/or migrate code to a new platform. You likely know what technologies and developers you need.
However, if you’re starting from scratch, you need to figure out which platform to use. You must also know which technologies can bring your project to life.
With this in mind, we don’t have standard prices on Gun.io. Instead, we offer flexible retainers based on your needs, the project, and the developers’ salary expectations. These retainers include Gun.io’s fee, and you’ll see the total price upfront on developers’ profiles. There are no extra fees.
We charge full-time placements a 20% fee of their negotiated first-year salary at your company. But you only pay when you start working with a candidate you love.
To help you move forward in the hiring process, you’ll find below some data we’ve gathered from recent hires on the Gun.io platform. These numbers represent the averages and ranges for developer costs within their respective experience and geography bands.
This data is a great place to start understanding how companies competing with you for talent think about their investment with each hire.
When looking at the data, you’ll notice that average rates aren’t linear. This is because urgency, the number of developers with the skillset you’re looking for, and other hire-specific factors can swing these averages.
We recommend using this data as a guide. You’ll still need to consider other factors to determine what will best meet your needs.
### Why is Gun.io the best choice for hiring JavaScript experts?
Here at Gun.io, we’ve served hundreds of companies around the world. We’ve also matched thousands of top JavaScript developers with full- and part-time jobs.
Our rigorous vetting process ensures companies get matched with the best talent. This process includes an algorithmic screening and a live technical interview with one of our senior engineers.
## FAQs
### How is Gun.io different from other JavaScript developer hiring platforms?
At Gun.io, we make hiring JavaScript developers effortless by streamlining the entire process. Our clients appreciate our quick and easy hiring experience, with an average time-to-hire of 13 days. In addition, we’ve implemented a strict vetting process and built a network of top developers. Our developers average 8+ years of experience.
### Can I hire a JavaScript developer part-time?
Yes, you can! At Gun.io, you can hire dedicated JavaScript developers full-time, part-time, or on a contract-to-hire basis. Of the three, contract-to-hire is a favorite among our clients since it lets you see if a developer is a good fit before hiring full-time.
### Does Gun.io only provide hiring services, or can I get project management support for my JavaScript project too?
Yes, you can (again!) Gun.io can help you hire a dedicated JavaScript developer on a full-time, part-time, or contract-to-hire basis *and* provide project management support. In addition, we facilitate intro calls and support both parties throughout the engagement. We also handle hours tracking, billing, and payments.
### What can our JavaScript developers do for you?
Our JavaScript developers can help you with:
- Web design and development
- Server-side applications
- UI/UX
- Mobile apps
- Maintenance and support
## Meet available, vetted talent today
So if you’re looking to hire dedicated JavaScript developers, we’ve got you covered! | true | true | true | Looking to hire dedicated JavaScript developers? We’ve got you covered with our ultimate hiring guide. Let us help you today! | 2024-10-12 00:00:00 | 2024-06-04 00:00:00 | article | gun.io | Gun.io | null | null |
|
1,591,304 | http://www.schneier.com/blog/archives/2010/08/a_taxonomy_of_s_1.html | Schneier on Security | Name | ## A Revised Taxonomy of Social Networking Data
Lately I’ve been reading about user security and privacy—control, really—on social networking sites. The issues are hard and the solutions harder, but I’m seeing a lot of confusion in even forming the questions. Social networking sites deal with several different types of user data, and it’s essential to separate them.
Below is my taxonomy of social networking data, which I first presented at the Internet Governance Forum meeting last November, and again—revised—at an OECD workshop on the role of Internet intermediaries in June.
- Service data is the data you give to a social networking site in order to use it. Such data might include your legal name, your age, and your credit-card number.
- Disclosed data is what you post on your own pages: blog entries, photographs, messages, comments, and so on.
- Entrusted data is what you post on other people’s pages. It’s basically the same stuff as disclosed data, but the difference is that you don’t have control over the data once you post it—another user does.
- Incidental data is what other people post about you: a paragraph about you that someone else writes, a picture of you that someone else takes and posts. Again, it’s basically the same stuff as disclosed data, but the difference is that you don’t have control over it, and you didn’t create it in the first place.
- Behavioral data is data the site collects about your habits by recording what you do and who you do it with. It might include games you play, topics you write about, news articles you access (and what that says about your political leanings), and so on.
- Derived data is data about you that is derived from all the other data. For example, if 80 percent of your friends self-identify as gay, you’re likely gay yourself.
There are other ways to look at user data. Some of it you give to the social networking site in confidence, expecting the site to safeguard the data. Some of it you publish openly and others use it to find you. And some of it you share only within an enumerated circle of other users. At the receiving end, social networking sites can monetize all of it: generally by selling targeted advertising.
Different social networking sites give users different rights for each data type. Some are always private, some can be made private, and some are always public. Some can be edited or deleted—I know one site that allows entrusted data to be edited or deleted within a 24-hour period—and some cannot. Some can be viewed and some cannot.
It’s also clear that users should have different rights with respect to each data type. We should be allowed to export, change, and delete disclosed data, even if the social networking sites don’t want us to. It’s less clear what rights we have for entrusted data—and far less clear for incidental data. If you post pictures from a party with me in them, can I demand you remove those pictures—or at least blur out my face? (Go look up the conviction of three Google executives in Italian court over a YouTube video.) And what about behavioral data? It’s frequently a critical part of a social networking site’s business model. We often don’t mind if a site uses it to target advertisements, but are less sanguine when it sells data to third parties.
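None of the code below comes from the essay; it is a minimal sketch of how a site might encode the taxonomy above and attach per-category rights to it. The flag values are purely illustrative stand-ins for the distinctions drawn here — who created the data and who controls it once it exists:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Category:
    name: str
    created_by_subject: bool     # did the person the data is about create it?
    controlled_by_subject: bool  # do they control it once it exists?

# The six categories from the taxonomy above; flag values are illustrative.
TAXONOMY = [
    Category("service",    created_by_subject=True,  controlled_by_subject=False),
    Category("disclosed",  created_by_subject=True,  controlled_by_subject=True),
    Category("entrusted",  created_by_subject=True,  controlled_by_subject=False),
    Category("incidental", created_by_subject=False, controlled_by_subject=False),
    Category("behavioral", created_by_subject=False, controlled_by_subject=False),
    Category("derived",    created_by_subject=False, controlled_by_subject=False),
]

# One possible policy: export/change/delete rights for whatever the subject both
# created and controls -- which singles out disclosed data, as argued above.
full_rights = [c.name for c in TAXONOMY if c.created_by_subject and c.controlled_by_subject]
print(full_rights)  # ['disclosed']
```

Entrusted and incidental data fall out of that rule for different reasons — another user controls the former, and the subject never created the latter — which mirrors the ambiguity described above.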
As we continue our conversations about what sorts of fundamental rights people have with respect to their data, and more countries contemplate regulation on social networking sites and user data, it will be important to keep this taxonomy in mind. The sorts of things that would be suitable for one type of data might be completely unworkable and inappropriate for another.
This essay previously appeared in *IEEE Security & Privacy*.
Edited to add: this post has been translated into Portuguese.
peri • August 10, 2010 7:30 AM
I have a tiny suggestion about the presentation.
If I understand this correctly then the entrusted data part could use an addition that makes it clear that I not only entrust data to others but that they can also entrust data to me. I am sure that was implied but it took me a moment to work it out. | true | true | true | null | 2024-10-12 00:00:00 | 2010-08-10 00:00:00 | null | null | schneier.com | schneier.com | null | null |
16,944,304 | https://www.bloomberg.com/news/articles/2018-04-27/abba-reunites-for-new-album-and-takes-a-chance-on-hologram-tour | Bloomberg | null |
| true | true | true | null | 2024-10-12 00:00:00 | null | null | null | null | null | null | null
19,545,667 | https://www.nytimes.com/2019/03/30/us/politics/dea-money-counter-records.html | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
32,081,946 | https://www.science.org/content/article/rice-so-nice-it-was-domesticated-thrice | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
10,898,401 | https://github.com/automl/auto-sklearn | GitHub - automl/auto-sklearn: Automated Machine Learning with scikit-learn | Automl | **auto-sklearn** is an automated machine learning toolkit and a drop-in replacement for a scikit-learn estimator.
Find the documentation **here**.
```
import sklearn.datasets
import sklearn.model_selection
import autosklearn.classification
X, y = sklearn.datasets.load_digits(return_X_y=True)  # example data so the snippet runs end-to-end
X_train, X_test, y_train, y_test = sklearn.model_selection.train_test_split(X, y, random_state=1)
cls = autosklearn.classification.AutoSklearnClassifier()
cls.fit(X_train, y_train)
predictions = cls.predict(X_test)
```
If you use auto-sklearn in scientific publications, we would appreciate citations.
**Efficient and Robust Automated Machine Learning**
*Matthias Feurer, Aaron Klein, Katharina Eggensperger, Jost Springenberg, Manuel Blum and Frank Hutter*
Advances in Neural Information Processing Systems 28 (2015)
Link to publication.
```
@inproceedings{feurer-neurips15a,
title = {Efficient and Robust Automated Machine Learning},
author = {Feurer, Matthias and Klein, Aaron and Eggensperger, Katharina and Springenberg, Jost and Blum, Manuel and Hutter, Frank},
booktitle = {Advances in Neural Information Processing Systems 28 (2015)},
pages = {2962--2970},
year = {2015}
}
```
**Auto-Sklearn 2.0: The Next Generation**
*Matthias Feurer, Katharina Eggensperger, Stefan Falkner, Marius Lindauer and Frank Hutter*
arXiv:2007.04074 [cs.LG], 2020
Link to publication.
```
@article{feurer-arxiv20a,
title = {Auto-Sklearn 2.0: Hands-free AutoML via Meta-Learning},
author = {Feurer, Matthias and Eggensperger, Katharina and Falkner, Stefan and Lindauer, Marius and Hutter, Frank},
booktitle = {arXiv:2007.04074 [cs.LG]},
year = {2020}
}
```
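Not part of the original README: recent auto-sklearn releases expose the 2.0 system described in this paper as an experimental estimator. A minimal usage sketch — the import path and time budget below reflect current releases and may change:

```python
from autosklearn.experimental.askl2 import AutoSklearn2Classifier

cls = AutoSklearn2Classifier(time_left_for_this_task=300)  # overall search budget in seconds
cls.fit(X_train, y_train)          # same X_train / y_train as in the snippet above
predictions = cls.predict(X_test)
```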
Also, have a look at the blog on automl.org where we regularly release blogposts. | true | true | true | Automated Machine Learning with scikit-learn. Contribute to automl/auto-sklearn development by creating an account on GitHub. | 2024-10-12 00:00:00 | 2015-07-02 00:00:00 | https://opengraph.githubassets.com/fd60ccdce8ca1065f44a0bbfe1ad2ae2d11b892b9b85133ff42b19c1a26e37cf/automl/auto-sklearn | object | github.com | GitHub | null | null |
21,525,965 | https://www.theatlantic.com/magazine/archive/2019/12/adam-serwer-civility/600784/ | Civility Is Overrated | Adam Serwer | # Civility Is Overrated
The gravest danger to American democracy isn’t an excess of vitriol—it’s the false promise of civility.
*Image above:* William Howard Taft and a succession of other Republican presidents privileged restoring relations with the South over protecting black Americans’ rights.
Joe Biden has fond memories of negotiating with James Eastland, the senator from Mississippi who once declared, “I am of the opinion that we should have segregation in all the States of the United States by law. What the people of this country must realize is that the white race is a superior race, and the Negro race is an inferior race.”
Recalling in June his debates with segregationists like Eastland, Biden lamented, “At least there was some civility,” compared with today. “We got things done. We didn’t agree on much of anything. We got things done. We got it finished. But today, you look at the other side and you’re the enemy. Not the opposition; the enemy. We don’t talk to each other anymore.”
Biden later apologized for his wistfulness. But yearning for an ostensibly more genteel era of American politics wasn’t a gaffe. Such nostalgia is central to Biden’s appeal as an antidote to the vitriol that has marked the presidency of Donald Trump.
Nor is Biden alone in selling the idea that rancor threatens the American republic. This September, Supreme Court Justice Neil Gorsuch, who owes his seat to Senate Republicans depriving a Democratic president of his authority to fill a vacancy on the high court, published a book that argued, “In a very real way, self-governance turns on our treating each other as equals—as persons, with the courtesy and respect each person deserves—even when we vigorously disagree.”
Trump himself, a man whose rallies regularly descend into ritual denunciations of his enemies, declared in October 2018, as Americans were preparing to vote in the midterm elections, that “everyone will benefit if we can end the politics of personal destruction.” The president helpfully explained exactly what he meant: “Constant unfair coverage, deep hostility, and negative attacks … only serve to drive people apart and to undermine healthy debate.” Civility, in other words, is treating Trump how Trump wants to be treated, while he treats you however he pleases. It was a more honest description of how the concept of civility is applied today than either Biden or Gorsuch offered.
There are two definitions of civility. The first is not being an asshole. The second is “I can do what I want and you can shut up.” The latter definition currently dominates American political discourse.
The country is indeed divided today, and there is nothing wrong with wishing that Americans could all get along. But while nonviolence is essential to democracy, civility is optional, and today’s preoccupation with politesse both exaggerates the country’s divisions and papers over the fundamental issues that are causing the divisions in the first place. The idea that we’re currently experiencing something like the nadir of American civility ignores the turmoil that has traditionally characterized the nation’s politics, and the comparatively low level of political violence today despite the animosity of the moment.
Paeans to a more civil past also ignore the price of that civility. It’s not an unfortunate coincidence that the men Joe Biden worked with so amicably were segregationists. The civility he longs for was *the result of* excluding historically marginalized groups from the polity, which allowed men like James Eastland to wield tremendous power in Congress without regard for the rights or dignity of their disenfranchised constituents.
The true cause of American political discord is the lingering resistance of those who have traditionally held power to sharing it with those who until recently have only experienced its serrated edge. And the resistance does linger. Just this fall, a current Democratic senator from Delaware, Chris Coons, told a panel at the University of Notre Dame Law School that he hoped “a more diverse Senate that includes women’s voices, and voices of people of color, and voices of people who were not professionals but, you know, who grew up working-class” would not produce “irreconcilable discord.”
In his “Letter From Birmingham Jail,” Martin Luther King Jr. famously lamented the “white moderate” who “prefers a negative peace which is the absence of tension to a positive peace which is the presence of justice.” He also acknowledged the importance of tension to achieving justice. “I have earnestly opposed violent tension,” King wrote, “but there is a type of constructive, nonviolent tension which is necessary for growth.” Americans should not fear that form of tension. They should fear its absence.
At their most frenzied, calls for civility stoke the fear that the United States might be on the precipice of armed conflict. Once confined to right-wing fever swamps, where radicals wrote fan fiction about taking up arms in response to “liberal tyranny,” the notion has gained currency in conservative media in the Trump era. In response to calls for gun-buyback programs, Tucker Carlson said on Fox News, “What you are calling for is civil war.” The president himself has warned that removing him from office, through the constitutionally provided-for mechanism of impeachment, might lead to civil war.
Civil war is not an imminent prospect. The impulse to conjure its specter overlooks how bitter and fierce American politics has often been. In the early days of the republic, as Richard Hofstadter and Michael Wallace wrote in their 1970 book, *American Violence*, the country witnessed Election Day riots, in which “one faction often tried violently to prevent another from voting.” In the 1850s, the nativist Know-Nothings fielded gangs to intimidate immigrant voters. Abolitionists urged defiance of the Fugitive Slave Act, and lived by their words, running slave catchers out of town and breaking captured black people out of custody. Frederick Douglass said that the best way to make the act a “dead letter” was “to make half a dozen or more dead kidnappers.”
During the Gilded Age, state militias turned guns on striking workers. From 1882 to 1968, nearly 5,000 people, mostly black Americans, were lynched nationwide. From January 1969 to April 1970, more than 4,000 bombings occurred across the country, according to a Senate investigation. As Hofstadter wrote, “Violence has been used repeatedly in our past, often quite purposefully, and a full reckoning with the fact is a necessary ingredient in any realistic national self-image.”
The absence of this realistic national self-image has contributed to the sense of despair that characterizes American politics today. The reality, however, is that political violence is less common in the present than it has been at many points in American history, despite the ancient plague of white supremacy, the lingering scourge of jihadism, and the influence of a president who revels in winking justifications of violence against his political opponents and immigrants. Many Americans can’t stand one another right now. But apart from a few deranged fanatics, they do not want to slaughter one another en masse.
The more pertinent historical analogue is not the fractious antebellum period right-wing partisans seem so eager to relive but the tragic failures of Reconstruction, when the comforts of comity were privileged over the difficult work of building a multiracial democracy. The danger of our own political moment is not that Americans will again descend into a bloody conflagration. It is that the fundamental rights of marginalized people will again become bargaining chips political leaders trade for an empty reconciliation.
The Reconstruction amendments to the Constitution should have settled once and for all the question of whether America was a white man’s country or a nation for all its citizens. The Thirteenth Amendment abolished slavery, the Fourteenth Amendment established that anyone could be a citizen regardless of race, and the Fifteenth Amendment barred racial discrimination in voting. But by 1876, Republicans had paid a high political price for their advocacy of rights for black people, losing control of the House and nearly losing the presidency to the party associated with a violent rebellion in defense of slavery. Democrats agreed to hand Rutherford B. Hayes the presidency in exchange for the withdrawal of federal troops in the South, effectively ending the region’s brief experiment in multiracial governance. Witnessing the first stirrings of reunion, Douglass, the great abolitionist, wondered aloud, “In what position will this stupendous reconciliation leave the colored people?” He was right to worry.
One state government after another fell to campaigns of murder and terror carried out by Democratic paramilitaries. With its black constituency in the South disempowered, the Republican Party grew reliant on its corporate patrons, and adjusted its approach to maximize support from white voters. As for those emancipated after a devastating war, the party of abolition abandoned them to the despotism of their former masters. Writing in 1902, the political scientist and white supremacist John W. Burgess observed, “The white men of the South need now have no further fear that the Republican party, or Republican Administrations, will ever again give themselves over to the vain imagination of the political equality of man.”
The capitulation of Republicans restored civility between the major parties, but the political truce masked a horrendous spike in violence against freedmen. “While the parties clearly move back from confrontation with each other, you have the unleashing of massive white-supremacist violence in the South against African Americans and a systematic campaign to disenfranchise, a systematic campaign of racial terror in the South,” Manisha Sinha, a history professor at the University of Connecticut and the author of *The Slave’s Cause: A History of Abolition*, told me. “This is an era when white supremacy becomes virtually a national ideology.”
This was the fruit of prizing reconciliation over justice, order over equality, civility over truth. Republicans’ acquiescence laid the foundation for the reimposition of forced labor on the emancipated, the establishment of the Jim Crow system, and the state and extrajudicial terror that preserved white supremacy in the South for another century.
The day William Howard Taft was inaugurated, in March 1909, was frigid—a storm dropped more than half a foot of snow on Washington, D.C. But Taft’s inaugural address was filled with warm feeling, particularly about the reconciliation of North and South, and the full and just resolution of what was then known as the “Negro problem.”
“I look forward,” the party of Lincoln’s latest president said, “to an increased feeling on the part of all the people in the South that this Government is their Government, and that its officers in their states are their officers.” He assured Americans, “I have not the slightest race prejudice or feeling, and recognition of its existence only awakens in my heart a deeper sympathy for those who have to bear it or suffer from it, and I question the wisdom of a policy which is likely to increase it.”
To that end, he explained, black people should abandon their ambitions toward enfranchisement. In fact, Taft praised the various measures white Southerners had devised to exclude poor white and black Americans—“an ignorant, irresponsible element”—from the polity.
Writing in *The Crisis* two years later, W. E. B. Du Bois bitterly described Taft’s betrayal of black Americans. “In the face of a record of murder, lynching and burning in this country which has appalled the civilized world and loosened the tongue of many a man long since dumb on the race problem, in spite of this, Mr. Taft has blandly informed a deputation of colored men that any action on his part is quite outside his power, if not his interest.”
The first volume of David Levering Lewis’s biography of Du Bois shows him in particular anguish over what he called the “Taft Doctrine” of acquiescence to Jim Crow, which, in Lewis’s words, “had virtually nullified what remained of Republican Party interest in civil rights.” Taft’s Republican successors generally followed suit, culminating in Herbert Hoover, who in 1928 “accelerated the policy of whitening the GOP below the Mason-Dixon Line in order to bring about a major political realignment,” as Lewis put it in the second volume of his Du Bois biography. Taft, who was now the chief justice of the Supreme Court, described the strategy as an attempt “to break up the solid South and to drive the Negroes out of Republican politics.”
Taft couldn’t have predicted exactly how this realignment would take place, but he was right about the result. Despite the best efforts of Southern Democrats to segregate the benefits of the New Deal, the policies devised by Franklin D. Roosevelt to lift America out of the Great Depression alleviated black poverty, reinvigorating black participation in politics and helping transform the Democratic Party. “Government became immediate, its impact tangible, its activities relevant,” wrote the historian Nancy Weiss Malkiel in her 1983 book, *Farewell to the Party of Lincoln*. “As a result, blacks, like other Americans, found themselves drawn into the political process.”
The New Deal’s modest, albeit inadvertent, erosion of racial apartheid turned Southern Democrats against it. Thus began a period of ideological heterodoxy within the parties born of the unresolved race question. Whatever their other differences, significant factions in both parties could agree on the imperative to further marginalize black Americans.
Some of the worst violence in American history occurred during the period of low partisan polarization stretching from the late Progressive era to the late 1970s—the moment for which Joe Biden waxed nostalgic. In Ivy League debate rooms and the Senate cloakroom, white men could discuss the most divisive issues of the day with all the politeness befitting what was for them a low-stakes conflict. Outside, the people whose rights were actually at stake were fighting and dying to have those rights recognized.
In 1955, the lynching of Emmett Till—and the sight of his mutilated body in his casket—helped spark the modern civil-rights movement. Lionized today for their disciplined, nonviolent protest, civil-rights demonstrators were seen by American political elites as unruly and impolite. In April 1965, about a month after police attacked civil-rights marchers in Selma, Alabama, with billy clubs and tear gas, *National Review* published a cover story opposing the Voting Rights Act. Titled “Must We Repeal the Constitution to Give the Negro the Vote?,” the article, written by James Jackson Kilpatrick, began by lamenting the uncompromising meanness of the law’s supporters. Opposing the enfranchisement of black people, Kilpatrick complained, meant being dismissed as “a bigot, a racist, a violator of the rights of man, a mute accomplice to the murder of a mother-of-five.”
The fact that *National Review*’s founder, William F. Buckley Jr., had editorialized that “the White community in the South is entitled to take such measures as are necessary to prevail, politically and culturally, in areas in which it does not predominate numerically” went unmentioned by Kilpatrick. Both civility and democracy were marred by the inclusion of black people in politics because in the view of Kilpatrick, Buckley, and many of their contemporaries, black people had no business participating in the first place.
Since the 1970s, American politics has grown more polarized, as the realignment Taft foresaw moved toward its conclusion and the parties became more ideologically distinct. In recent years, the differences between Republicans and Democrats have come to be defined as much by identity as by ideology. If you are white and Christian, you are very likely to be a Republican; if you are not, you are more likely to be a Democrat. At the same time, Americans have now sorted themselves geographically and socially such that they rarely encounter people who hold opposing views.
It’s a recipe for acrimony. As the parties become more homogeneous and more alien to each other, “we are more capable of dehumanizing the other side or distancing ourselves from them on a moral basis,” Lilliana Mason, a political scientist and the author of *Uncivil Agreement*, told me. “So it becomes easier for us to say things like ‘People on the other side are not just wrong; they’re evil’ or ‘People on the other side, they should be treated like animals.’ ”
Ideological and demographic uniformity has not been realized equally in both parties, however. The Democratic Party remains a heterogeneous entity, full of believers and atheists, nurses and college professors, black people and white people. This has made the party more hospitable to multiracial democracy.
The Republican Party, by contrast, has grown more racially and religiously homogeneous, and its politics more dependent on manufacturing threats to the status of white Christians. This is why Trump frequently and falsely implies that Americans were afraid to say “Merry Christmas” before he was elected, and why Tucker Carlson and Laura Ingraham warn Fox News viewers that nonwhite immigrants are stealing America. For both the Republican Party and conservative media, wielding power and influence depends on making white Americans feel threatened by the growing political influence of those who are different from them.
In stoking such fears, anger is a powerful weapon. In his book *Anger and Racial Politics*, the University of Maryland professor Antoine J. Banks argues that “anger is the dominant emotional underpinning of contemporary racism.” Anger and racism are so linked, in fact, that politicians need not use overtly racist language to provoke racial resentment. Anger alone, Banks writes, can activate prejudiced views, even when a given issue would seem to have little to do with race: “Anger operates as a switch that amplifies (or turns on) racist thinking—exacerbating America’s racial problem. It pushes prejudiced whites to oppose policies and candidates perceived as alleviating racial inequality.” This is true for politicians on both sides of the political divide—but the right has far more to gain from sowing discord than from mending fences.
Trumpists lamenting civility’s decline do not fear fractiousness; on the contrary, they happily practice it to their own ends. What they really fear is the cultural, political, and economic shifts that occur when historically marginalized groups begin to exert power in a system that was once defined by their exclusion. Social mores that had been acceptable become offensive; attitudes that had been widely held are condemned.
Societies are constantly renegotiating the boundaries of respect and decency. This process can be disorienting; to the once dominant group, it can even feel like oppression. (It is not.) Many of the same people who extol the sanctity of civility when their prerogatives are questioned are prone to convulsions over the possibility of respecting those they consider beneath them, a form of civility they deride as “political correctness.”
In a different political system, the tide would pull the Republican Party toward the center. But the GOP’s structural advantage in the Electoral College and the Senate, and its success in gerrymandering congressional and state legislative districts all over the country, allow it to wield power while continuing to appeal solely to a diminishing conservative minority encouraged to regard its fellow Americans as an existential threat.
The Trump administration’s attempt to use the census to enhance the power of white voters was foiled by a single vote on the Supreme Court on the basis of a technicality; it will not be the last time this incarnation of the Republican Party seeks to rig democracy to its advantage on racial terms. Even before Trump, the party was focused not only on maximizing the influence of white voters but on disenfranchising minority voters, barely bothering to update its rationale since Taft praised Jim Crow–era voting restrictions for banishing the “ignorant” from the polity.
The end of polarization in America matters less than the terms on which it ends. It is possible that, in the aftermath of a Trump defeat in 2020, Republicans will move to the political center. But it is also possible that Trump will win a second term, and the devastation of the defeat will lead the Democrats to court conservative white people, whose geographic distribution grants them a disproportionate influence over American politics. Like the Republicans during Reconstruction, the Democrats may bargain away the rights of their other constituencies in the process.
The true threat to America is not an excess of vitriol, but that elites will come together in a consensus that cripples democracy and acquiesces to the dictatorship of a shrinking number of Americans who treat this nation as their exclusive birthright because of their race and religion. This is the false peace of dominance, not the true peace of justice. Until Americans’ current dispute over the nature of our republic is settled in favor of the latter, the dispute must continue.
In the aftermath of a terrible war, Americans once purchased an illusion of reconciliation, peace, and civility through a restoration of white rule. They should never again make such a bargain.
*Support for this article was provided by a fellowship from Columbia University’s Ira A. Lipman Center for Journalism and Civil and Human Rights. It appears in the December 2019 print edition with the headline “Against Reconciliation.”*
| true | true | true | The gravest danger to American democracy isn’t an excess of vitriol—it’s the false promise of civility. | 2024-10-12 00:00:00 | 2019-11-12 00:00:00 | article | theatlantic.com | The Atlantic | null | null |
|
12,130,943 | http://emerge.me | Insurance for Medical Emergencies | null | Figure out what your current health insurance doesn’t cover
Choose a recommended supplemental policy to cover the gap
Purchase instantly and get covered
Figure out in seconds what your financial risk is for an unexpected medical emergency such as an illness or accident.
No more waiting for weeks to get reimbursed for your medical bills. Our insurance providers send your payment in as little as 24 hours.
Use our guided wizard to learn what your options are and how you can get protected in minutes and not days.
Your policy helps cover your insurance deductible and other out-of-pocket expenses such as lost wages, surgery costs, travel and lodging expenses for out-of-town care, and home care.
End-to-end coverage options for you, your spouse or domestic partner, and your children.
Answer a few questions and we’ll automatically weed through multiple policies and match you with a recommended plan based on your unique needs and budget. | true | true | true | Health insurance alone may not protect you from the devastating financial impact of out-of-pocket expenses you may face in the event of a critical illness or accident. We close the gap. | 2024-10-12 00:00:00 | null | null | null | null | Emerge.me | null | null |
1,391,293 | http://manifesto.org/software/why-mobile-apps-arent-going-away/ | null | null | null | true | false | false | null | null | null | null | null | null | null | null | null |
34,809,979 | https://www.cnbc.com/2023/02/15/founder-who-sold-his-startup-to-google-says-the-company-has-lost-focus.html | Founder who sold his startup to Google says the company has lost its mission, is mismanaged and has no sense of urgency | Ashley Capoot | A former Google employee said the company has lost its way, writing in a recent blog post that Google is inefficient, plagued by mismanagement and paralyzed by risk.
Praveen Seshadri joined the Alphabet-owned company at the start of 2020 when Google Cloud acquired AppSheet, which Seshadri co-founded. He said in the blog post Monday that though he was welcomed and treated well, he left Google with an understanding that the "once-great company has slowly ceased to function." He left in January, according to his LinkedIn profile.
Seshadri argued it's a "fragile moment" for Google, particularly because of the recent pressures it is facing to compete with Microsoft's artificial intelligence initiatives. Seshadri said Google's problems are not rooted in its technology, but in its culture.
"The way I see it, Google has four core cultural problems," Seshadri said. "They are all the natural consequences of having a money-printing machine called 'Ads' that has kept growing relentlessly every year, hiding all other sins. (1) no mission, (2) no urgency, (3) delusions of exceptionalism, (4) mismanagement."
Google did not immediately respond to a request for comment.
Instead of working to serve customers, Seshadri argued most employees ultimately serve other Google employees. He described the company as a "closed world" where working extra hard isn't necessarily rewarded. Seshadri said feedback is "based on what your colleagues and managers think of your work."
Seshadri said Google is hyper-focused on risk and that "risk mitigation trumps everything else." Every line of code, every launch, nonobvious decisions, changes from protocol and disagreements are all risks that Googlers have to approach with caution, Seshadri wrote.
He added that employees are also "trapped" in a long line of approvals, legal reviews, performance reviews and meetings that leave little room for creativity or true innovation.
In its last employee-wide survey, which CNBC reported on last March, workers gave the company poor marks in being able to execute, which they said contributed to the bureaucracy that bogged down the company's ability to be innovative.
"Overall, it is a soft peacetime culture where nothing is worth fighting for," Seshadri wrote "The people who are inclined to fight on behalf of customers or new ideas or creativity soon learn the downside of doing so."
Seshadri said Google has also been hiring at a rapid pace, which makes it difficult to nurture talent and leads to "bad hires." Many employees also believe the company is "truly exceptional," Seshadri said, which means that a lot of antiquated internal processes continue to exist because "that's the way we do it at Google."
Seshadri said Google has a chance to turn things around, but he doesn't think the company can continue to succeed by merely avoiding risk. He argues that Google needs to "lead with commitment to a mission," reward people who fight for "ambitious causes" and trim the layers of middle management.
"There is hope for Google and for my friends who work there, but it will require an intervention," he wrote. | true | true | true | A former Google employee said the technology giant is inefficient, plagued by mismanagement and paralyzed by risk. | 2024-10-12 00:00:00 | 2023-02-15 00:00:00 | article | cnbc.com | CNBC | null | null |
|
8,152,854 | http://www.economist.com/blogs/schumpeter/2014/08/dress-codes | Suitable disruption | null |
| true | true | true | Suitable disruption | 2024-10-12 00:00:00 | 2014-08-04 00:00:00 | /engassets/og-fallback-image.png | Article | economist.com | The Economist | null | null |
41,089,965 | https://sourcegraph.com/blog/chat-oriented-programming-in-action | Chat-oriented programming (CHOP) in action | null | ## Chat-oriented programming (CHOP) in action
What we now call "programming" was once called "high-level programming" in the era of Fortran and mainframes programmed mostly in low-level assembly. The dawn of "high-level" programming languages like C (which today many would consider "low-level") and then interpreted languages like Python and JavaScript changed the very foundation of what it meant to be a programmer.
With the advent of LLMs, it seems we are on the cusp of another foundational shift. The new way of programming is something that Steve Yegge has called CHOP, or chat-oriented programming, and can be classified as *coding via iterative prompt refinement*. We are currently in an in-between state, where the classical "line-smithing" way of coding is still highly relevant and important. But there seems to be an emerging set of behaviors and patterns that seem to fall into a new paradigm. We'll cover some of these new behaviors and how they relate to the general areas of the software creation lifecycle.
I have been writing code professionally for over 15 years. I have seen many trends come and go, coding methodologies take off and be replaced, and coding paradigms evolve. In this post, I want to share my take on chat-oriented programming, compare and contrast it to how I wrote code in the past, and discuss how I use AI coding assistants like Cody to help me write, understand, and ship code faster.
## Chat-oriented programming (CHOP)
Before AI tools like ChatGPT and specialized coding assistants like Cody hit the scene, writing code required a familiar workflow that looked something like this:
- **Understand the issue**: read the GitHub, Jira, or Linear issue, requirements doc, or other information to understand what change you need to make to a codebase.
- **Understand the codebase**: set up your working environment, review the documentation, look through relevant files in the codebase, and familiarize yourself with the files you'll be working on.
- **Research a solution**: there are times when you know exactly what you need to do and can jump into a file and make the changes, but more often than not you'll hit up Google or Stack Overflow to research libraries, docs, and discussions relevant to the issue at hand. You'll then piece together this information and make it actionable for the problem you are trying to solve.
- **Write and test the code**: finally you get to actually write some code. Open up the relevant files and make edits or create new ones and implement the functionality. Test the functionality to make sure it behaves the way you expect it. Once the main functionality is in, write additional code to handle errors and edge cases, write unit tests, and integrate with the greater system.
- **Code review**: make the PR and get it reviewed. At this stage, your manager or peer will review the code and make sure it aligns with the rest of the codebase conventions, functions as expected, and is high quality. At this stage, you may refactor the code based on feedback and finally you're ready to merge.
- **Ship it**: once all the checks, both automated and manual, are good to go, squash and merge that PR, ship the feature and repeat the process for the next one.
In the chat-oriented programming era, this coding workflow is changing dramatically, both inside and outside the editor. One interesting thing about the above workflow is that only one of the six steps is inside the code editor, even though we often think of editors as the primary application for developing software.
Let's see how CHOP changes things both inside and outside the editor across various programming scenarios.
## Understanding a codebase
As a professional developer, you spend as much (if not more) time understanding the codebase, gathering relevant context, and understanding requirements as you do actually writing code. This is unavoidable, but with chat-oriented programming, it can happen much more effectively.
Understanding a new or even a familiar codebase without the assistance of a coding assistant is like wandering through a dense forest with only a tiny flashlight. You spend a ton of time reading documentation, which may or may not be up to date, tracing function calls, and adding in plenty of logging statements to understand how data flows through the system, and trying to piece together the big picture. A change that seems trivial to make on the surface can have a multitude of unintended side effects that will make you question your own sanity. Dev productivity tools like IDEs and code search gave you a brighter lantern and a map you could consult, but you still had to explore different paths on your own, one at a time.
In the world of chat-oriented programming, understanding a codebase is more akin to having a robotic travel guide who can not only guide you through the forest, but highlight points of interest, explain the local fauna, and speed-run the trails to help you find the quickest path. To bring it back to coding, your AI coding assistant can summarize modules, explain obscure functions, and provide clear and concise answers to your questions. Instead of digging through a directory, you simply ask your questions in natural language, and the AI coding assistant goes and finds all the relevant files for that query, uses those files as context, and gives you a tailored answer then and there.
Let's see this in practice. Recently, Anthropic announced the release of Claude 3.5 Sonnet, a powerful new model that we wanted Cody users to have access to. Our engineering team had already done the work to integrate this model on the backend but our clients team was tasked with adding support for this model in the Cody VS Code extensions.
Pre-CHOP, my workflow would result in a ton of manual searching in the codebase to find where we define and add new models. Eventually, I would land on this page where we define our models:
From here, I would need to dive deeper into the individual options and understand what `ModelUsage.Chat`
meant or whether I should add the `expandedContextWindow`
for the new model, and so on. I could spend hours navigating through the codebase and reaching out to my colleagues on Slack for further context before feeling confident enough to write a single line of code.
Let's see how we can accomplish this with Cody and chat-oriented programming. With the Cody extension installed in your IDE of choice, you can chat with your codebase. When you ask a question, Cody fetches the most relevant files and provides them as context alongside your query to your LLM of choice.
We can ask general questions like "what is this codebase about?" or "how do I set up my working environment for this codebase" and Cody will generally give you a really good answer. But this information can usually be found in the `README.md`
file of a project. Let's ask something a little more complex - such as how would I add a new LLM model to our Cody VS Code extension.
Now you may be asking yourself "how did Cody know all these things?" The answer is context. Each time we ask Cody a question it looks through relevant code files and includes them with our query to the LLM. We can see which files and lines were used for context by clicking the Context dropdown.
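Under the hood, this is the familiar retrieval-augmented pattern. Here is a rough sketch of the general idea (my own illustration, not Cody's actual implementation; the `searchIndex` and `llm` interfaces are hypothetical stand-ins):

```
// Generic sketch of retrieval-augmented chat over a codebase. The searchIndex
// and llm objects are hypothetical stand-ins, not Cody's real APIs.
async function askCodebase(question, { searchIndex, llm }) {
  // 1. Find the files and snippets most relevant to the question.
  const contextFiles = await searchIndex.topMatches(question, { limit: 10 });

  // 2. Pack them into the prompt alongside the question itself.
  const prompt = [
    "Answer the question using only the context below.",
    ...contextFiles.map((f) => `// ${f.path}\n${f.snippet}`),
    `Question: ${question}`,
  ].join("\n\n");

  // 3. Ask the model, and hand back the context too so the user can inspect
  //    it (the same idea as the Context dropdown described above).
  const answer = await llm.complete(prompt);
  return { answer, contextFiles };
}
```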
Any questions we have about our codebase we can just ask Cody and get personalized answers based on our codebase. We can easily ask follow-up questions to learn more. For example, I noticed that the usage property is an array that supports Chat and Edit, but I want to know what this actually means. I can just ask a follow-up question to find out:
This comes in handy when we want to really develop a deep understanding of our codebase before making changes. Instead of reading the actual code or bothering our engineering counterparts, we can just ask Cody.
Within the chat-oriented programming paradigm, understanding of our codebase becomes as simple as asking questions. You can ask natural language questions in the same way that you would ask your team lead or teammate and get high-quality answers in seconds. In the pre-CHOP world, you would spend a ton of time looking at the code, Googling what each library does, and then building a mental model of how it all works. With CHOP, the hard work is done for you, you just have to ask the right questions.
### Sidebar: Chat with Open Source Repositories
Chatting with your codebase in your IDE is powerful, but what if you could chat with any open source repository in the world? You could learn how and where Next.js implements its file-based routing:
How Eloquent ORM manages SQL relationships:
And anything else you desire. Cody on the Web behaves much like the IDE extension but allows you to ask questions about any open source repository that Sourcegraph has indexed on its public search instance located at https://sourcegraph.com/search. Give it a try today.
## Writing Code
In the pre-CHOP era, writing code often felt like starting with a blank canvas and a vague idea of what you wanted to paint. You'd spend hours brainstorming, sketching out structures, and piecing together snippets of existing code, samples from Stack Overflow, or from your own memory.
With CHOP, writing code is more like having a collaborative painting session with a skilled artist. You commission a painting (your initial prompt), and the AI coding assistant fills in the details, suggests improvements, and even offers alternative approaches. You can ask for specific code snippets to be generated, refactors to existing code, or even generate entire functions by clearly stating what your desired end goal is.
Test-Driven Development (TDD) is a popular software development approach where test cases are created before code is written. Chat-oriented programming can greatly improve TDD by allowing developers to define both the test cases and implementation logic in natural language. Rather than focusing on line-by-line logic, developers can focus on the end result and let the AI coding assistant handle writing the code. The tests then act as a necessary "contract of correctness" for the AI-generated code.
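As a sketch of what that "contract of correctness" can look like in practice (a made-up example, not code from the Cody repository): you write the test first, then prompt the assistant to produce an implementation that makes it pass.

```
// A hypothetical Jest test written before any implementation exists. The
// assistant is then asked to write formatDelay so that this contract passes.
const { formatDelay } = require("./formatDelay");

test("formats a delay in minutes as hours and minutes", () => {
  expect(formatDelay(0)).toBe("on time");
  expect(formatDelay(45)).toBe("45m late");
  expect(formatDelay(130)).toBe("2h 10m late");
});
```

If the generated code passes, you accept it; if not, the failing assertions become the next prompt.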
Anthropic recently released Claude 3.5 Sonnet and with it a feature called Artifacts that can serve as a great introduction to chat-oriented programming. Cody handles CHOP in multiple ways. We saw in the previous section on understanding a codebase that the Cody chat box can generate snippets of code. Some developers prefer this approach: ask Cody a question in the chat box, review the output, and, if you're happy with it, insert it into your code files.
But Cody has another way to do CHOP. With the Edit Code command (CMD+K) a developer can insert new code or edit existing code directly in the file they are working in. Let's go back to our `sourcegraph/cody`
repo and add Claude 3.5 sonnet to our models list.
This type of CHOP workflow allows you to focus on the **what** instead of figuring out the **how**. You ask the AI coding assistant to complete a task, it does it, and you review the changes and accept them, or, if the generated code isn't what you want, ask for a revision. In this case you are essentially acting as the code reviewer while the AI coding assistant acts as the programmer.
I am personally a huge fan of this approach to coding, and it has become my default way of writing code. It allows me to focus on the end result rather than the line-by-line implementation logic. With Cody's edit command, you can insert brand-new code or make edits to existing functions and files.
Chat-oriented programming is a powerful tool for experienced developers. That's not to say newer programmers cannot benefit from it, but in my experience, being familiar with the generated code helps a ton in understanding whether the generated code meets the requirements or needs further refinement. On the flip side, Cody can also help you become familiar with a codebase really quickly if you are new to it.
## Debugging code issues
If writing code is one side of the coin, debugging code is the other. No matter how skilled you are or how much thought you put into the code you write, the odds of getting it perfect from the get-go are slim. Debugging code can be a daunting task. Sometimes, the bug is super obvious, and other times, while it may be a single line of code that needs to be changed, finding it can take days. Debugging code often involves a ton of trial and error, logging statements all over the place, and patience.
In the traditional way of writing code, rubber duck debugging or rubberducking is a popular method of debugging your code in which you basically talk through the issue and explain your code to an inanimate object (usually a rubber duck). As you are explaining the code line-by-line you'll eventually find the snag in your logic and thus pinpoint where the issue lies. This method has worked for me many times but is not infallible.
With chat-oriented programming, the AI coding assistant is your rubber duck. Instead of being an inanimate object that only listens to your woes, it can provide valuable feedback. Cody is an expert on your codebase and can analyze error messages, suggest potential causes all the way down to the line number, and in many cases even generate the fix for you.
In the pre-CHOP era of coding when you would get a build or compile time error you would manually review the output, navigate to the troublesome line of code, and identify what the issue was. Some languages and frameworks, like Rust, for example, have really detailed error logs, while others leave a cryptic trail of breadcrumbs for you to piece together making debugging all that much harder.
With Cody, you can side-step a lot of the manual debugging by highlighting the output and asking Cody to explain what the issue is in plain English. Let's see it in action. Going back to our example of adding a new model, let's say I omitted the additional required properties for the model and then I tried to run the application. I would get the following error message:
Pre-CHOP, I would likely try to navigate the problematic file and try to debug it myself. If I couldn't find the solution, I'd copy and paste this error message into Google, find a StackOverflow discussion on a similar error, and retrace my steps to debug it. With CHOP, I can just ask Cody to explain the issue at hand to me. Once I have the explanation, I can ask follow-up questions to better understand the problem or even ask Cody to fix the issue for me.
## Maintaining a high-quality codebase
Maintaining large and complex codebases has always been a challenging task. It requires a constant investment in documentation, adherence to best practices and codebase guidelines, manual code reviews, and more. Sometimes following all the best practices is just not feasible. Tight deadlines mean that corners get cut and sometimes getting a working solution out the door is more important than having it be perfect. But if you are constantly cutting corners and only focusing on shipping, you are no doubt accumulating a ton of tech debt that will eventually need to be paid.
Pre-CHOP, developers often dedicated a portion of their development sprints to code clean-up and refactoring to make code more maintainable. Companies that don't invest in maintaining their codebases will eventually suffer the consequences of slower development cycles, more security issues, worsening performance, and more bugs.
With chat-oriented programming, you have a powerful tool that can help keep you on track and reduce tech debt, but it will not do all the work for you. You can use Cody to identify potential problems, suggest improvements, write unit tests, and refactor code to match your guidelines, but at the end of the day, the onus is on you to be vigilant in maintaining the codebase.
Let's see how we can accomplish this with Cody. In the example below, I will show how I can use the Find Code Smells command on a file to identify potential improvements. Even a great programmer like myself can improve ;).
Overall, it's not bad, but there's definitely some room for improvement. Let's tackle issue number one. I will simply follow up in the Cody chat window and ask Cody how I can improve this code to tackle the highlighted issue.
With this improvement, my codebase became that much more maintainable!
Once we've made the changes to our codebase, we'll want our manager or peers to review them. Writing git commit messages is another one of those tasks that needs to be done but is not the most exciting thing in the world to do. This makes it prime real estate for our AI coding assistant to take the reins. Cody can generate the git commit message for you and include all relevant changes automatically. See it in action below for the changes we made in this article.
### Sidebar: Cody Code Guardrails
Since the LLMs used by Cody are trained using broad corpuses of data, it's rare but possible for Cody to return code that closely matches public code. OSS attribution guardrails is a new beta feature designed to reduce a team's exposure to introducing copyrighted code into their codebase.
Guardrails works by verifying code suggested by Cody against a large corpus of public code. Cody runs this verification check any time it generates a code snippet of 10 lines or more. This impacts code returned to the user in two ways:
Autocomplete: Any multi-line suggestion of 10 or more lines is verified against the public code corpus before being returned to the user. If there is a positive match against public code, that suggestion is not returned to the user.
Chat and commands: Any time Cody generates a snippet of 10 or more lines of code, it is verified by guardrails. The snippet is still returned to the user, but Cody also includes a note alongside the suggestion indicating whether the guardrails check was passed or failed.
You can see this latter functionality below:
## Conclusion
In this post, we covered how developers can use chat-oriented programming (CHOP) to understand, write, debug, and maintain their codebase. CHOP as a programming methodology is still in its infancy, but this layer of abstraction is proving to be valuable for developers wanting to leverage AI in their workflow.
If you want to experience CHOP for yourself, give Cody a try today, connect with us on Discord, or join the Sourcegraph community. Share how you're using CHOP and give us feedback on how we can improve the experience. | true | true | true | Learn how chat-oriented programming (CHOP) is helping developers understand, write, debug, and maintain code. | 2024-10-12 00:00:00 | 2024-07-27 00:00:00 | article | sourcegraph.com | Sourcegraph | null | null |
|
41,234,284 | https://www.pv-magazine.com/2024/08/12/worlds-highways-could-host-52-3-billion-solar-panels-say-researchers/ | World’s highways could host 52.3 billion solar panels, say researchers | Patrick Jowett | A research team has determined that covering the world's highways with solar roofs could generate 17,578 TWh per year, which is more than 60% of global electricity consumption in 2023.
Their study, titled “Roofing Highways With Solar Panels Substantially Reduces Carbon Emissions and Traffic Losses,” was recently published in the journal *Earth’s Future*. It explores the potential to install solar panels above highways and major roads.
With more than 3.2 million km of highways worldwide, the researchers calculated the costs and benefits of constructing a solar panel network using polycrystalline solar panels with a 250 W capacity. The analysis found that covering highways with solar panels could generate more than four times the annual energy output of the United States and offset 28.78% of current CO2 emissions, while also reducing global traffic deaths by 10.8%.
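A quick back-of-the-envelope check (my arithmetic, not the study's) shows how the headline figures fit together: 52.3 billion panels at 250 W each is roughly 13 TW of peak capacity, and generating 17,578 TWh per year from that capacity implies a capacity factor of around 15%, which is plausible for fixed panels.

```
// Rough sanity check of the headline numbers; inputs are from the article.
const panels = 52.3e9;
const wattsPerPanel = 250;
const peakTW = (panels * wattsPerPanel) / 1e12; // ≈ 13.1 TW of capacity
const annualTWh = 17578; // the study's generation figure
const capacityFactor = annualTWh / (peakTW * 8760); // ≈ 0.15
console.log(peakTW.toFixed(1), capacityFactor.toFixed(2));
```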
“This really surprised me,” said Ling Yao, a remote sensing scientist at the Chinese Academy of Sciences and the study’s lead author. “I didn't realize that highways alone could support the deployment of such large photovoltaic installations, generating more than half of the world's electricity demand, and greatly easing the pressure to reduce global carbon emissions.”
The researchers also identified regions such as eastern China, Western Europe, and the US East Coast as the most ideal for deployment, despite challenges related to setup and maintenance costs. Yao noted the importance of pilot programs to demonstrate the practicality of this concept.
The research team consisted of academics from the Chinese Academy of Sciences, Tsinghua University and Chinese Academy of Geosciences, all located in Beijing, as well as New York’s Columbia University.
Do they mean a roof like cover (excellent, keep rain off cars too) or the actual road surface?
The ‘article’ contains a beautifully illustrated schematic of roof passing over a road with accompanying inverters. What the article or its supporting pilot studies do not directly address is the engineering needed to provide the supports for this roof or indeed effectively consider it. Imho a broad brush estimate of 4 times the cost of the panel for the infrastructure is wholly insufficient, particularly since the authors’ ‘reasonable estimate’ seems to hinge solely on material cost (which historically is strongly dependent upon local geography and consequently cannot be considered constant across all geographies in all countries). To illustrate, straightforward considerations might include:
How high should the roof be? Will the underlying road service only cars or include commercial vehicles, in which case how wide should it be?
How strong should it be? How will it resist the elements (wind, rain, snow, particularly if they impact the roof at non normal incidence)? Wind speed increases with increasing height; moreover changing the inclined angle of the roof will incur greater susceptibility to wind damage from pressure differences and turbulence effects. Will rainwater runoff from the panels require additional infrastructure? Snow loading can be significant particularly in say North America.
Is it crash resistant? If not, how easy would it be to replace sections?
What impact would its construction have on transportation infrastructure?
How frequently will the solar panels need to be replaced?
Would power lines need to be rerouted to support the infrastructure?
Will excess energy produced be stored or simply wasted?
The statistical data utilised for modelling does not take into consideration variations in seasonal environmental conditions Perhaps contemplating the environmental cost of a pilot program to estimate the time needed to recoup the energy expended in providing the infrastructure, together with financial cost. (My own rough calculations give estimates in excess of the postulated 25 year lifespan. Further, the Uncertainties and Limitations within this article do little to remedy the situation.) Such a refined model could be enhanced to include any essential or planned maintenance.
It may have been better to compare the road deployment to alternate methodologies for the large-scale production of renewable energy whilst continuing to , such as (say) installing floating solar panels near to existing wind farms where at least the power transmission infrastructure is already present.
To quote William C. Taylor, “Just because you can doesn’t mean you should”.
Vehicles would need to turn on their headlights early, if sunlight gets blocked like that? So you would have the irony of burning deisel fuel to create electricity, to create light, that is only needed because of the green-electric panels above
A solution would be to hang electric lights, that are required for safety in tunnels, & hang them below the solar panels. A neat circular solution. | true | true | true | Researchers from the Chinese Academy of Sciences, Tsinghua University, Chinese Academy of Geosciences, and Columbia University have concluded that solar-covered highways could meet more than 60% of the world’s annual energy needs. | 2024-10-12 00:00:00 | 2024-08-12 00:00:00 | article | pv-magazine.com | Pv Magazine International | null | null |
|
25,971,492 | https://www.tomshardware.com/news/risc-v-open-source-gpu-nvidia-intel-amd-arm-imagination | Free Open Source GPU Under Development for RISC-V | Anton Shilov | # Free Open Source GPU Under Development for RISC-V
Another GPU developer emerges with a RISC-V-derived architecture.
The days of open-source GPUs may soon be upon us. The RISC-V architecture enables small companies to develop purpose-built processors and microcontrollers without paying a royalty. There are numerous free and commercial IP building blocks for RISC-V-based system-on-chips (SoCs), but the portfolio lacks a graphics option. This will change in a few years as a group of enthusiasts has started developing an open-source GPU based on the RISC-V architecture.
At this point, there are no plans to compete against AMD, Arm, Imagination, and Nvidia in the foreseeable future. Instead, the group plans to develop a scalable fused CPU-GPU ISA that could scale from simplistic microcontrollers all the way to advanced GPUs supporting ray tracing, machine learning, and computer vision applications with custom hardware extensions.
On a high level, RV64X-designed GPUs use a basic RV32I or RV64I core that supports new instructions built on the base vector instruction set. Initially, it will use an RV32I core, but eventually, an RV64I core will replace it as the goal is to create an area-efficient design with custom programmability and extensibility that could be used for CPUs, GPUs, and VPUs, writes Jon Peddie for EE Times.
To properly process graphics, the basic RISC-V core will support new graphics and machine learning specific — RV32X — data types, including scalars (8, 16, 24, and 32 bit fixed and floats), vectors (RV32-V), and matrices (2x2, 3x3, and 4x4); vector/math instructions; pixel/texture instructions; frame buffer instructions; a special register set (featuring configurable 136-bit vector registers); and some graphics-specific instructions. Initially, the graphics core will support the Vulkan API, but the group strives to make it DirectX (shader model 5) and OpenGL/ES-compliant.
The RV64X group says that its graphics processor will implement a standard graphics pipeline in microcode, but it will also be able to add custom rasterizers (splines, SubDiv surfaces, patches) and custom pipeline stages to support features not supported by commercially-available GPU designs.
The group proposes an RV32X reference implementation that features a hardware texture unit (i.e., the Larrabee lesson has been learned), a special function unit, a 32KB L1 cache, an 8K uCode SRAM cache, and four 32-bit DSPs/ALUs that can process FP32 and INT32 data, reports HardwareLuxx. The reference design will most likely be implemented using an FPGA.
The RV64X project is at its early stages of development and it will take at least a couple of years before the specification will be finalized and any hardware implementation emerges, believes Jon Peddie, the president of Jon Peddie Research. In fact, even the specification is subject to change based on stakeholder and community input, so it is way too early to discuss performance or any other matters.
The group, which calls itself RV64X because its plan is to develop a 64-bit universal ISA, is led by Atif Zafar from Pixilica, Grant Jennings from GOWIN Semiconductor, and Ted Marena from CHIPS Alliance and Western Digital.
Initially, an RV64X-designed graphics controller will be used for very simple microcontrollers that require extremely small units due to cost concerns. But as the design evolves, its descendants could address more demanding applications years and generations from the initial implementation.
Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers and from modern process technologies and latest fab tools to high-tech industry trends.
- hannibal: Interesting... how do They handle the complex gpu related patents? Do They do everything so differently that no patent royaltes Are needed. That Sounds almost impossible.
  If this happens, it could mean huge things to gpu market in the long run! Some company like Xiaomi that now makes low cost phones could produce really cost effective GPUs!
  So any extra info about the patent situation would be really usefull.
- oceanwaves: Excellent news! IMO big Telcos & Networks should take notice: Nokia, Ericsson, ATT, Verizon, DT, Telefonica, the savings can be huge in new intensive scalable services, the web is just starting to grow, as long as established Media does not break the evolution ahead.
- Strontium (quoting hannibal: "Interesting... how do They handle the complex gpu related patents?"): My guess is by the time this is viable at scale those patents will have expired. Patents do not last forever.
- hannibal: Maybe, but do They really wait tens of years to wait the patents to go old? Sounds dubious at best. The patents do get old, but it takes a long time!
  There is reason why there Are only two x86 cpu manufacturers... anyone else would get sued to the dead... patent last 20 years and after that those companies make new improvements that get patented, so those GPUs should be always 20 years old technology compared ro competition...
- castl3bravo: I'm assuming whatever Nvidia did in their GPU hardware isn't something the RISC-V folks need to worry about. This is hardware engineering where the only requirement is a modification to Vulkan, DirectX and OpenGL APIs that allows API to interact with their new GPU hardware. Why waste time trying to copy Nvidia's implementation? The patent trolls, I mean lawyers, would love it if your h/w designed copied parts of Nvidia's.
- 101Force (quoting hannibal: "Interesting... how do They handle the complex gpu related patents?"): The article mentions the Vulkan API, which is open-source, and presumably other open-source APIs such as OpenGL, OpenCL, OpenVG, and AMD's FidelityFX (which includes an open-source DLSS-like API) are being considered. DirectX isn't open-source, but I suspect Microsoft doesn't charge a royalty for hardware DirectX support since that encourages sales of the Windows operating system.
- 101Force (quoting hannibal: "Some company like Xiaomi that now makes low cost phones could produce really cost effective GPUs!"):
Xiaomi probably already uses OpenGL ES in their products. | true | true | true | Another GPU developer emerges with a RISC-V-derived architecture. | 2024-10-12 00:00:00 | 2021-01-30 00:00:00 | article | tomshardware.com | Tom's Hardware | null | null |
|
39,728,980 | https://mgwalker.github.io/blog/amtrak-api/ | Making a nice API of Amtrak's ugly API | null | # Making a nice API of Amtrak's ugly API
2 November 2023
I had an upsetting experience with Amtrak recently. I had booked passage on a train that went about halfway across the country. I was going to get on it near the end of its route and only take it a dozen or so stops. But there were delays and the train ended up being several hours behind. Amtrak rebooked me on to a bus, which... no thank you. There's a reason I picked the train.
Anyway, as I was watching my train's status for a full two days before my actual departure time, I caught myself thinking several times, "Gee, it'd be cool if I could get data about my specific train and build myself a little one-off widget to track just that." You know, since Amtrak's main site is pretty awful for that purpose.
Have you ever seen Amtrak's train map? It's pretty cool, right? You can see all the active trains, all the stops on their route, and arrival and departure times. Neat! And if you're a nerd like me, you immediately want to see if there's an API driving this so you can do... I don't know, anything you want to with it, I guess.
And it turns out, there's an API! There are a handful of endpoints, but a first inspection of the browser's network tool shows two particularly interesting ones:
First, a list of all the train stations:
And then a list of all the active trains:
You, like me, get excited and click those links to discover... a wall of encoded text. What the actual heckin' heck? Well it looks like base64 encoding, so let's try decoding it and seeing what happens.
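For concreteness, the attempt looks something like this in the browser console (the placeholder stands in for the trains endpoint URL from the network tab):

```
// Quick attempt at base64-decoding the response. The placeholder stands in
// for the getTrainsData endpoint URL seen in the network tab above.
const TRAINS_URL = "...";
const body = await fetch(TRAINS_URL).then((r) => r.text());
console.log(atob(body)); // atob() base64-decodes a string in the browser
```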
Oh, it's still gibberish. Well... time to search the internet, I guess.
I found a small number of older forum threads from someone who'd written a
wrapper around the Amtrak API and exposed their own. It's
amtraker-v3 by
piemadd. Digging around in there, I found the
main script was
calling into those same URLs above, and this code had a variable called
`publicKey`
. *Fascinating*. But where does it come from? Time to inspect some
code, I guess.
Opening up the Firefox dev tools, looking at the resource list, and filtering to just Javascript, I found this one file that looks promising:
I started scanning that file from the top to see if any names stood out. It wasn't long until I landed on this:
```
/*
__$$_jmd - public key
masterSegment - length of data to be extracted from the encrypted response - 55 is just a fake
//FAKE VARIABLES to throw off people hahahahaha
__$_s - salt value
__$_v - iv vale
*/
```
Ah-ha! A public key, salt, and IV. Is there some encryption going on here? That
looks likely. Surprising and bewildering, but likely. In any case, now I have
some new stuff to search for. I don't know what the public key is, but whoever
wrote this kindly let me know *this* isn't it, but I should be able to find it
somewhere else under this variable name.
Firefox's developer tools have a debugger that features a search across all
script sources on the page. I did a quick debugger search for `__$$_jmd`
and
found that it's being populated from an XHR:
```
$.getJSON(_$$_666.route_listview_url, function(data) {
/*MasterZoom is the sum of the zoom levels from the routes_list.json file. That is the index in the routesList.v.json -> arr array where we have the public key stored.
IF THE ROUTES_LIST CHANGES, REMEMBER TO CHANGE THE INDEX TO BE CORRECT */
__$$_jmd = (data.arr[masterZoom]);
```
More helpful comments! I don't know what this master zoom thing is yet, but let
me first see what this URL is. No need to go hunting for it, though. I just pop
in a breakpoint and reload the page. Then when the breakpoint stops the code,
I can simply inspect the value of `route_listview_url`
:
And just look at those three lists of blobs of text. The first ones look like
UUIDs and the others don't look like anything. Alright, I can't do anything with
this right now, but I know the public key is in the `arr`
list somewhere. Back
to this master zoom thing, I guess.
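For reference, the response is shaped roughly like this (the key names are the ones that show up in the code quoted later in this post; the values here are made-up stand-ins):

```
// Roughly the shape of the route_listview_url data (values faked here).
const routeListData = {
  arr: ["3f2a9c1e-...", "..."], // a long list of UUID-looking strings
  s: ["d41d8cd9", "..."], // shorter blobs, eight characters each
  v: ["1b2c3d4e5f60718293a4b5c6d7e8f901", "..."], // 32-character blobs
};
```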
The comments tell us the master zoom is the "sum of the zoom levels from the
routes_list.json file", and that's the index into the `arr`
array where the
public key is. Back to the network tab, and I find this URL:
And looky there, a list of things with a `ZoomLevel`
property. A quick bit of
console REPL'ing:
```
await fetch("https://maps.amtrak.com/rttl/js/RoutesList.json")
.then((r) => r.json())
.then((list) =>
list.reduce((sum, { ZoomLevel }) => sum + (ZoomLevel ?? 0), 0)
);
```
And I got a result: 194. Okay, so the master zoom is 194. Now I can go back a
few steps to where `__$$_jmd`
was being set. It's coming from a list called
`arr`
in the `route_listview_url`
. At index number 194, I find my public key,
which turns out (at the time of this writing) to be
`69af143c-e8cf-47f8-bf09-fc1f61e5cc33`
.
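Putting those pieces together, pulling the key out looks roughly like this. One caveat: the exact path of the second file is my inference from the `routesList.v.json` name in Amtrak's comment, so treat it as a guess.

```
// Compute masterZoom from the routes list, then pull the public key out of
// the arr list. The second URL is inferred from Amtrak's "routesList.v.json"
// comment and may not be the exact path.
const masterZoom = await fetch("https://maps.amtrak.com/rttl/js/RoutesList.json")
  .then((r) => r.json())
  .then((list) => list.reduce((sum, { ZoomLevel }) => sum + (ZoomLevel ?? 0), 0));

const routeData = await fetch("https://maps.amtrak.com/rttl/js/RoutesList.v.json")
  .then((r) => r.json());

console.log(routeData.arr[masterZoom]); // the public key
```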
Immediately after that code, there was code that sets the salt and
initialization vector. Those come from the same `route_listview_url`
data, but
different properties. And the indices are computed differently, so time to look
into that.
For the salt, the comments say this:
```
/*Salt Value - the element is at the 8th position. So we can essentially pick any number from 0-100 (length of the s array in the file), get the length of the element, and then go to that index
the following funky looking code will evaluate to 8. Salt has a length of 8
*/
__$_s1._$_s =
data.s[data.s[Math.floor(Math.random() * (data.s.length + 1))].length];
```
Which I found **hilarious**. Why would you obfuscate this code if you're
describing how you did it immediately above? Were these comments supposed to
have been stripped away? In any case, the initialization vector code looks more
or less the same:
```
/*Initialization Vector Value - the element is at the 32th position. So we can essentially pick any number from 0-100 (length of the IV array in the file), get the length of the element, and then go to that index
the following funky looking code will evaluate to 32 - IV has a length of 32
*/
__$_s1._$_i =
data.v[data.v[Math.floor(Math.random() * (data.v.length + 1))].length];
```
Going back to my source data file, I find my salt and IV: `9a3686ac`
and
`c6eb2f7f5c4740c1a2f708fefd947d39`
, respectively.
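Since every entry in `data.s` is (apparently) the same length, and likewise for `data.v`, the `Math.random()` business is just noise: any element's length gives you the same index. In other words, my simplified version of the lookup:

```
// Every element of data.s is 8 characters and every element of data.v is 32,
// so the "random" index trick always lands on the same spot.
const salt = data.s[data.s[0].length]; // index 8
const iv = data.v[data.v[0].length]; // index 32
```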
Alright, now what to do with these things? Let's see where else `__$$_jmd`
is
used in the code. I find it referenced in some pretty gnarly-looking code, but
looking a little above that for context, I find two interesting things:
- I'm in a callback for an XHR request. Adding a breakpoint reveals that this request is for getTrainsData. Eureka! (Maybe.)
- Look at these amazing comments!
`/*MasterSegment is the length of the string at the end of the encrypted data that contains the secret key To decrypt - we do the following 1. Take masterSegment (88) length - from the right of the data - this has the private key 2. Everything from 0 to the end - master segment is the raw data - that needs to be decrypted 3. Decrypt the 88 characters using the public key - that will give you a pipe separated string of the private key (random guid from MDS) and a time stamp (to scramble it) 4. Now use the private key and decrypt the data stored from step 2. 5. Parse the decrypted data - and rejoice 6. KSUE -means key issue 7. __$$_jmd - the public key that we obtain`
Okay, so... the REST data is encrypted, and here are the instructions for decrypting it. Let's look at the code that does the work:
```
var json = JSON.parse(
__$_s1._$_dcrt(
dd.substring(0, dd.length - masterSegment),
__$_s1._$_dcrt(dd.substr(dd.length - masterSegment), __$$_jmd).split("|")[
masterSegment - 88
]
)
);
```
That's kind of a mess. So first things first, let me try to clean it up a bit so I can make sense of what's happening:
```
const masterSegment = 88;
const publicKey = __$$_jmd;
const privateKeyCipher = dd.substr(dd.length - masterSegment);
const privateKey = __$_s1._$_dcrt(privateKeyCipher, publicKey).split("|")[
masterSegment - 88
];
const rawData = dd.substring(0, dd.length - masterSegment);
const data = __$_s1._$_dcrt(rawData, privateKey);
var json = JSON.parse(data);
```
There's only one thing here that's not just standard Javascript: `__$_s1._$_dcrt`. And it's easy enough to find that function by setting a
breakpoint and then stepping into the debugger (presented here formatted with
prettier):
```
/*CryptoJS-Security - the salt and IV values here are fake to throw someone off. All variable names are changed*/
var __$_s1 = {
_$_s: "amtrak",
_$_i: "map",
_$_dcrt: function (_, $) {
return _$_cjs._$_sea
._$_dcr(
_$_cjs.lib._$_cpar.create({ _$_ctxt: _$_cjs.enc.Base64.parse(_) }),
this._$_gk($),
{ iv: _$_cjs.enc.Hex.parse(this._$_i) }
)
.toString(_$_cjs.enc.Utf8);
},
_$_gk: function (_) {
return _$_cjs._$_pdf2(_, _$_cjs.enc.Hex.parse(this._$_s), {
keySize: 4,
iterations: 1e3,
});
},
};
```
Continuing down into these obfuscated variable names, I make a happy discovery.
This `_$_cjs._$_sea._$_dcr()` function is defined in `AS.js`, which begins with yet more super helpful comments:
`/*CJS-AES - origin cryptojs-aes file. Variables/methods all changed with random names*/`
A little more inspection and it turns out all of `_$_cjs`
is just
crypto-js with the symbol names changed to attempt
to obfuscate it. That helps tremendously. So a quick look through crypto-js
documentation, and I've got an assumption about what these functions are doing:
```
_$_dcrt: function (_, $) {
// Decrypt.
return _$_cjs._$_sea
._$_dcr(
// Create a cipher object from the value that results from base64
// decoding the raw data that was passed in. We don't know what
// algorithm or configuration we're using, though.
_$_cjs.lib._$_cpar.create({ _$_ctxt: _$_cjs.enc.Base64.parse(_) }),
// Derive a key from the passed-in private key
this._$_gk($),
// Provide an initialization vector in bytes by parsing it from hex.
{ iv: _$_cjs.enc.Hex.parse(this._$_i) }
)
.toString(_$_cjs.enc.Utf8);
},
_$_gk: function (_) {
// Derive a key from a string. This appears to be PBKDF2 key derivation,
// getting bytes by parsing the salt from hex, and iterating a thousand
// times and getting a key size of 4. 4 somethings.
return _$_cjs._$_pdf2(_, _$_cjs.enc.Hex.parse(this._$_s), {
keySize: 4,
iterations: 1e3,
});
},
```
In pseudocode, we have basically:
```
key = hash(privateKey, salt)
decrypt(
rawData,
key,
initializationVector
)
```
And we have a good idea of what the key derivation function is: PBKDF2. But what hashing function is it using? And what encryption algorithm? Well, since this is crypto-js, maybe it has defaults. So I went to GitHub and found that the defaults had pretty recently been changed. Before that, the default had long been SHA1 and AES-128-CBC, so... let's just assume those are right for now. Given the big encrypted blob response from the getTrainsData endpoint, I should be able to get the data basically like this:
```
PUBLIC_KEY = 69af143c-e8cf-47f8-bf09-fc1f61e5cc33
SALT = 9a3686ac
IV = c6eb2f7f5c4740c1a2f708fefd947d39
MASTER_SEGMENT = 88
keyBlock = rawData[rawData.length - MASTER_SEGMENT:end]
encryptedData = rawData[start:rawData.length - MASTER_SEGMENT]
privateKey = decrypt(AES-128-CBC, pbkdf2(PUBLIC_KEY, SALT, iterations:1000, size:4, SHA1), IV, keyBlock).split("|")[0]
dataKey = pbkdf2(privateKey, SALT, iterations:1000, size:4, SHA1)
data = decrypt(AES-128-CBC, dataKey, IV, encryptedData)
```
And it turns out that's basically correct, as implemented in my repo. One funky twist is that crypto-js's PBKDF2 key size is how many 4-byte units you want, so the actual key should be 16 bytes. Using Node.js's standard crypto library, you'll want the number of bytes, not the number of 4-byte units. (In crypto-js's code, it seems to refer to these as words, and they just happen to be 4 bytes long. But the word "word" isn't especially meaningful on its own, as it has variously meant 16, 32, and 64 bits at different times and in different architectures).
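For reference, here's roughly what that flow looks like with Node.js's built-in `crypto` module. This is an illustrative sketch pieced together from the steps above (the helper names are made up, and it's not the exact code from the repo):

```js
const crypto = require("crypto");

const PUBLIC_KEY = "69af143c-e8cf-47f8-bf09-fc1f61e5cc33";
const SALT = Buffer.from("9a3686ac", "hex");
const IV = Buffer.from("c6eb2f7f5c4740c1a2f708fefd947d39", "hex");
const MASTER_SEGMENT = 88;

// crypto-js's keySize of 4 means 4 words x 4 bytes = a 16-byte key.
function deriveKey(password) {
  return crypto.pbkdf2Sync(password, SALT, 1000, 16, "sha1");
}

// Base64 ciphertext in, UTF-8 plaintext out (AES-128-CBC, PKCS#7 padding).
function decrypt(base64Ciphertext, password) {
  const decipher = crypto.createDecipheriv("aes-128-cbc", deriveKey(password), IV);
  return Buffer.concat([
    decipher.update(Buffer.from(base64Ciphertext, "base64")),
    decipher.final(),
  ]).toString("utf8");
}

// rawData is the big base64 blob returned by the getTrainsData endpoint.
function decodeTrains(rawData) {
  const keyBlock = rawData.slice(rawData.length - MASTER_SEGMENT);
  const privateKey = decrypt(keyBlock, PUBLIC_KEY).split("|")[0];
  const encryptedData = rawData.slice(0, rawData.length - MASTER_SEGMENT);
  return JSON.parse(decrypt(encryptedData, privateKey));
}
```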
So there you go. Now you too can access Amtrak data the easy way. Kudos to whoever wrote the Amtrak map page because a) the weird obfuscations they made were pretty clever and b) it sure seems like they knew it wasn't going to be effective. I really appreciated their funny comments throughout!
And if you're curious what I'm doing with this data? I built a page for that! My very own Amtrak train tracker. | true | true | true | null | 2024-10-12 00:00:00 | 2023-11-02 00:00:00 | null | null | null | mgwalker.github.io | null | null |
2,747,954 | http://www.christianpost.com/news/court-rules-use-of-gps-to-track-cheating-spouse-not-privacy-invasion-52087/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
26,659,278 | https://disbug.io/blog/how-to-improve-the-efficiency-of-development-and-testing-team/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
1,197,157 | http://blogs.ft.com/undercover/2010/03/the-hardest-logic-puzzle-ever/ | Financial Times Home | Janan Ganesh | ## Podcasts
35,427,769 | https://www.carexpert.com.au/car-news/tesla-removes-parking-sensors-to-save-money-the-results-are-predictably-terrible | Tesla removes parking sensors, its cars start running into things | Dev Singh | First it was lumbar adjustment for the front passenger, then it was carpet in the front boot, shortly after the mobile charger disappeared – and now Tesla has removed front and rear parking sensors in the **Model 3 **and** Model Y** in a quest to save money and improve profit margins for shareholders.
Removing parking sensors sounds like a great idea if you have the technology to replace traditional parking sensors. But, Tesla currently doesn’t. And as you’d expect, the results are predictably terrible.
Tesla removed traditional ultrasonic parking sensors, which use ultrasonic waves to detect the distance of objects from the front or rear of a vehicle, and now instead relies on camera vision only to build an image of the vehicle’s surroundings.
That image is then meant to give the vehicle an accurate assessment of how close it is to objects.
An independent EV dealer and Tesla owner in the UK shot a video and demonstrated just how bad Tesla vision is when putting an updated Tesla Model Y through a number of basic parking situations.
In the video the car can be seen reversing into a pedestrian at low speed, while claiming it still had room to continue further back.
Other parts of the video show the car’s situational awareness map constantly jumping around. At one point it shows a large truck in the way of the car, even though none of the vehicles around the car ever moved during the demonstration.
There are also parts of the video where the parking tech stops working altogether for a brief period.
Tesla doesn’t offer reverse autonomous emergency braking (technology that stops the vehicle if it’s about to reverse into a person or vehicle behind it or approaching from the side), which is basic safety technology available on many new cars.
It’s pretty clear that the tech is woefully underdeveloped and needs a lot of work.
Tesla is constantly working on improving the technology and it’s understood a software update will fix this down the track – owners can also still use the reverse camera, which remains operational.
Until then, owners need to just rely on their own vision instead of trusting the numbers shown on the screen.
This tech fail mimics the regular phantom braking complaints from Tesla owners after the brand disabled radars and stopped shipping cars with radars built in, instead relying solely on camera vision.
|
21,967,977 | https://www.youtube.com/watch?v=r-TLSBdHe1A | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
273,860 | http://zenhabits.net/2008/08/7-powerful-ways-to-get-the-most-out-of-any-situation/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
12,737,849 | http://www.sfgate.com/local/science/article/NASA-Quantum-teleportation-achieved-in-9975310.php | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
5,330,020 | http://groups.gaglers.com/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
39,090,506 | https://usefathom.com/blog/reduce-aws-bill | Reducing our AWS bill by $100,000 - Fathom Analytics | Jack Ellis | # Reducing our AWS bill by $100,000
technical · Jack Ellis · Jan 22, 2024

By popular demand, I'm back with another technical blog post. Who am I kidding? I love writing these technical blog posts, and in this post, I will go through everything we did to cut $100,000/year off of our AWS bill.
## Setting the stage
One of our goals for 2024 was to optimize our infrastructure spending because it had been rising fast as our business grew. We're an independent company with zero outside funding, and we stay in business by spending responsibly. In addition, we have a handful of other areas that we want to invest in to deliver even more value to our customers, and we were overspending on AWS.
We decided to focus on only our ingest endpoint, which handles billions of requests a month because that's where all the money was being spent. We use Laravel Vapor, which deploys Laravel applications to AWS, and we were utilizing the following services:
- Application Load Balancer (ALB)
- Web Application Firewall (WAF)
- AWS Lambda
- Simple Queue Service (SQS)
- CloudWatch
- NAT Gateway
- Redis Enterprise Cloud
- Route53
We use other services to run our application, but these were the areas of attack for reducing costs on AWS.
The flow of the ingest endpoint was a CDN (not CloudFront) -> WAF -> ALB -> Lambda -> SQS -> Lambda -> [PHP script which utilizes Redis and SingleStore] -> Done -> Add to CloudWatch logs.
With that out of the way, let's get into the specifics of how we reduced cost.
## CloudWatch
Savings: $7,550 per year
We're starting here because this was the first area I addressed. I am moderately furious at this cost because it was pointless and purposeless spending.
For a long time, I was convinced that the cause of high CloudWatch costs was because Laravel Vapor injected lines like "Preparing to add secrets to runtime" as log items. But that was removed, yet the pointless CloudWatch costs were still occurring.
I looked deeper, and it turns out that we were spending $7,000/year for these pointless logs.
And, if you're thinking, "These aren't pointless; we actually use these for debugging," then great, this isn't a wasted expense for you. We don't use these; we use Sentry to profile performance and catch errors.
After seeing that, I went down the rabbit hole and found tutorials on what I could do. I cannot find the specific articles I used, but I'll write the steps you can take to stop Lambda from writing these pointless logs. I will tailor this to Laravel Vapor users, but it will work the same for you if you're using Lambda.
1. Go into **IAM**
2. Click on **Roles**
3. Search for **laravel-vapor-role** (This is the role we use for running Lambda functions. If you're not sure what role you use, open up your Lambda function, go to Configuration, then Permissions, and you'll see the execution role)
4. Click on the role
5. Click into the **laravel-vapor-role-policy** policy to edit it
6. Remove the **logs:PutLogEvents** line from the policy and save the changes
7. Go into **Lambda**
8. Click the name of your function (note: you'll need to complete this for every function)
9. Click the **Configuration** tab
10. Click the **Monitoring and operations tools** link on the left
11. Under the **Logging configuration** section, click the edit button
12. Expand the **Permissions** section
13. Untick **Add required permissions**
14. Save it
`Note: As of 20th January 2024, there is a bug in the Lambda UI which ticks the "Add required permissions" option despite you having it off. Until this is fixed, I advise that once you're on the Edit logging configuration page (after step 11), refresh your browser. That'll give you the accurate state.`
And that's all you need to do. But the best part is that you keep your Lambda monitoring. You know that **Monitor** tab within your function that has all those useful graphs? You still get that as part of AWS Lambda. Fantastic news.
In addition to the above, if you already have a ton of CloudWatch logs or plan to use them in some capacity, go into CloudWatch and set a retention policy on your log group. By default, there isn't one.
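If you'd rather set that retention programmatically than click through the console, here's a sketch with the AWS SDK for JavaScript v3 (the log group name and region are example values, and this isn't the exact approach we used):

```js
import {
  CloudWatchLogsClient,
  PutRetentionPolicyCommand,
} from "@aws-sdk/client-cloudwatch-logs";

const logs = new CloudWatchLogsClient({ region: "us-east-1" });

// Keep two weeks of logs instead of the default "keep forever".
await logs.send(
  new PutRetentionPolicyCommand({
    logGroupName: "/aws/lambda/my-ingest-function", // example log group
    retentionInDays: 14,
  })
);
```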
## NAT Gateway
Savings: $17,162 per year
Up next is this fun little toy. A toy that has stung plenty of people in the past. I didn't think much of it when I added a NAT gateway to our service. Sure, I knew there'd be extra costs as we moved towards going private, but I was comfortable with some of our workload going over that NAT gateway. After seeing vast amounts of data being moved across it, I admitted I was wrong.
We now use a NAT gateway to communicate with the internet from within a private subnet in AWS. The only things that go over this gateway are things like API calls to external services. All database interactions are now done via VPC Peering (Redis) or AWS PrivateLink (SingleStore).
We route through private networking to increase speed and improve security. It's an absolute bargain as far as I'm concerned.
## S3
Savings: $5,774 per year
This one was quick. We had versioning turned on in one of our buckets. And it was a small UI area which I had missed entirely. So, if your S3 cost is through the roof and it seems a bit unreasonable, check your buckets. If a “Show versions” radio input appears above the table, that's the cause. We deleted our historical data there and were laughing.
I hope this little S3 note helps somebody out there. But let's move on.
## Route53
Savings: $2,547 per year
Another quick and easy one. And this is still a WIP. No, we didn't secure colossal cost savings, but we still don't want to waste money.
Route53 bills you for queries. DNS resolvers will cache records based on the TTL value of the DNS entry. But if you set the TTL too low on your DNS record, it will mean that the cache will (well, SHOULD, on modern DNS resolvers) invalidate the value of the DNS record and request it from Route53 again. This costs you money each time.
The solution is to increase the TTL on records you know won't need changing for a while or in an emergency. AWS doesn't charge for alias mapping either.
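For illustration, here's what bumping a record's TTL looks like as a sketch with the AWS SDK for JavaScript v3 (the hosted zone ID, record name, and IP address are placeholders, not our real values):

```js
import {
  Route53Client,
  ChangeResourceRecordSetsCommand,
} from "@aws-sdk/client-route-53";

const route53 = new Route53Client({});

// Raise the TTL on a record that rarely changes, e.g. to one hour.
await route53.send(
  new ChangeResourceRecordSetsCommand({
    HostedZoneId: "Z0000000EXAMPLE", // placeholder zone ID
    ChangeBatch: {
      Changes: [
        {
          Action: "UPSERT",
          ResourceRecordSet: {
            Name: "ingest.example.com",
            Type: "A",
            TTL: 3600,
            ResourceRecords: [{ Value: "192.0.2.10" }],
          },
        },
      ],
    },
  })
);
```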
## Lambda & SQS
Lambda savings: $20,862 per year

SQS savings: $23,989 per year
Now, let's get into where we've really cut costs. Up until recently, we were doing Lambda -> SQS -> Lambda, and this felt pretty good. After all, we wanted resilience, and, when making this decision initially, our database was in a single AZ, so we had to use SQS in-between because it was a multi-AZ, infinitely scalable service.
But now we've built our infrastructure where we have our databases in multiple availability zones, so we just don't need SQS, and it's instantly dropped our Lambda cost.
This is happening because we've cut the Lambda requests in half and introduced the following changes:
- There is now only one Lambda request per pageview instead of two
- The average Lambda duration on the HTTP endpoint has decreased significantly since we're no longer putting a job into SQS, we're simply hitting Redis and running a database insert (each of these operations takes 1ms or less typically)
- We are still using SQS as a fallback (e.g. if our database is offline), but we're not using it for every request.
- We are no longer running additional requests to SQS for each pageview/event
This obviously wouldn't work for everyone since most people use their queue for heavy lifting, but we don't do heavy lifting in our ingest. In fact, our pageview/event processing is absolutely rapid, and our databases are built to handle inserts at scale.
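To make the new flow concrete, here's the shape of it in illustrative JavaScript (our actual ingest is PHP on Laravel Vapor, so treat this as a sketch; every name below is made up):

```js
// Handle a pageview in one Lambda invocation: hit Redis, insert into the
// database, and only fall back to SQS if the database write fails.
async function handlePageview(event, { redis, db, queue }) {
  try {
    await redis.incr(`pageviews:${event.siteId}`); // ~1ms counter bump
    await db.insert("pageviews", {
      site_id: event.siteId,
      path: event.path,
      created_at: new Date(),
    });
  } catch (err) {
    // Database offline or overloaded: queue the event so nothing is lost
    // and let a worker replay it once things recover.
    await queue.send({ type: "pageview", payload: event });
  }
}
```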
## WAF
Savings: $12,201
At the time of writing, this is a work in progress. We are planning on moving away from using AWS' WAF for security on our ingest endpoint and moving to our CDN provider's per-second rate limiting instead. The cost is included in what we pay, and we prefer a per-second rate limit to a five-minute one. We aren't using WAF for bot blocking (e.g. scrapers) and actually do that in Lambda (customers will find out why soon; watch this space).
CDN providers such as Bunny.net and cdn77.com offer very competitive pricing to CloudFront and offer solid reliability. Whilst I don't recommend using Bunny for custom domains at scale, they have been a reliable vendor for us and many people in my network.
For our use case and scale, CloudFront just isn't economical. The reason why we only had to raise our entry-level pricing by $1 was because we knew we could get our costs down and not have to charge our customers lots more. Comparatively, we've seen others in the analytics space hike their prices up tremendously. But I digress.
## CloudFront
Savings: $4,800 per year
For CloudFront, we had a good lesson on how the Popular Objects section works. You can go into CloudFront -> Popular objects, choose a distribution and establish where you may be bleeding.
We've been spending $4,800 per year on EU (Ireland), and I just ignored it. Perhaps we were somehow getting more EU traffic than ever before. But it just didn't make sense, considering the rest of the world was only costing $488 per year.
Well, long story short, we did some digging, and our EU isolation infrastructure currently sends our main servers over 100 million requests a month. We have been moving around 4 TB of data per month. This is to achieve the sync of blocked IPs that customers add to their sites, so it's somewhat expected, but we can easily remove this cost.
My advice is to really think about whether CloudFront is worth the cost if you're running it at scale. As I said above, there are other options available, and they are fantastic. But if you're using CloudFront and you want to optimize spend, use that Popular Objects section to see if there's any way you could offload certain assets to a cheaper CDN.
Note: I only included the cost of CloudFront for this excess cost that had slipped by us, not for serving our dashboard/API.
## Everything else
Our savings total up to $94,885. But now let's do some magic with that figure:
- $97,731.55 after we add 3% for Support (developer) package from AWS
- $109,459.34 after we add 12% for GST & PST (I appreciate sales tax is recoverable but wanted to include it for full context)
So, the work here has saved us $109,459 USD per year (minus returned sales tax). All of this was needed because we are going to be introducing new features that would drive these costs up.
As I said earlier, we're an independent company and every dollar matters. Our goal is not to IPO or become the next big unicorn; our goal is to provide the best possible alternative to Google Analytics for our customers and survive for the long term. And, in all honesty, we'd rather continue to invest in our analytics database software, which does a ton of heavy lifting, than pointlessly give the money to AWS for things we don't need.
I hope this article is helpful. Please let me know if this blog post helped you save money on your bill.
P.S. The discussion on Twitter/X is here
### You might also enjoy reading:
BIO
Jack Ellis, CTO + teacher
#### Recent blog posts
Tired of how time consuming and complex Google Analytics can be? Try Fathom Analytics: | true | true | true | We reduced our AWS bill so that we could invest in more important areas. | 2024-10-12 00:00:00 | 2024-01-22 00:00:00 | article | usefathom.com | Fathom Analytics | null | null |
|
4,052,754 | http://wayra.org/en/blog/first-winners-wayra-uk | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
21,021,459 | https://ewanvalentine.io/the-importance-in-picking-a-lane/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
25,292,932 | https://doodad.dev/pattern-generator | DOODAD.dev | null | null | true | true | false | Online tools for making the internet fast, accessible, expressive, and green. | 2024-10-12 00:00:00 | null | null | null | null | null | null | null |
8,696,249 | http://give.masteringmodernpayments.com/giveaways/books/?hn=1 | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
25,004,390 | https://fiftytwo.in/story/milk/ | Milk | Sukhada Tatke | But scientists like Narendra Nayak, then an assistant chemistry professor at the Kasturba Medical College in Mangaluru, continued conducting demonstrations on the day after. In the evening, his lab lost electricity supply. All of a sudden, some members of his audience pelted him with stones, giving him scalp wounds. He spent the night in hospital. Later, he said, he discovered that his attackers were attached to a right-wing organization.
“They had been part of the crowd and were asking all kinds of questions. I was engrossed in my demonstration while they had been preparing the attack,” said Nayak, who is now president of the Federation of Indian Rationalist Associations.
It was not the first attack on a rationalist, and would not be the last. Narendra Dabholkar was shot dead in August 2013. In subsequent years, other well-known critics of superstition such as Govind Pansare, MM Kalburgi and Gauri Lankesh were murdered.
This is not how it was supposed to be. Jawaharlal Nehru and BR Ambedkar both felt that political freedom needed the protections of scientific modernity. Despite their differences, they were united in their adherence to reason and critical thinking. “Stories of Gods are cooked to make you into fools,” Ambedkar said, “and you all are trapped in all these kinds of false stories.”
Nehru, entranced by India’s ancient civilisations, had written influential books about Indian history, but warned that glorifying the “golden past” was a “foolish and dangerous pastime.” His government’s Scientific Policy Resolution, finalised in 1958, underscored the importance of “encouraging individual initiative for the acquisition and demonstration of knowledge, and for the discovery of new knowledge.” A couple of decades later, a Scientific Temper Statement signed by scientists and intellectuals stressed, once again, the value of reason in a culture where faith seemed to dictate so much of social life. The signers hoped for this statement to bring about “a second Indian Renaissance.”
Some religious critics condemned idol worship itself as the carrier of superstition and Brahmanical hegemony. In 1953, the Tamil social activist and politician EV Ramaswamy, known as Periyar, led an agitation to break statues of Ganesh, or Pillayar, as he is often known in Tamil Nadu. “We have to eradicate the gods who are responsible for the institution which portray us as sudras, people of low birth, and some others as Brahmins of high birth,” he told his followers. “We have to break the idols of these gods. I start with Ganesa because it is he who is worshipped before undertaking any task.” When Ambedkar’s Dalit followers began to convert to Buddhism, he asked them to take 22 vows, one of which was: ‘I shall have no faith in Gauri, Ganapati and other gods and goddesses of Hindus nor shall I worship them.’
Yet republican ideals never really dented hyper-religiosity. By 1995, the convictions of two of India’s most influential anti-caste revolutionaries were largely forgotten. That September day, in homes and temples, pools of milk stagnated under idols; rivers flowed down the drains. Among believers, there was a febrile excitement. The miracle had evoked some sort of shared sensus divinitatis, one that people could see, and recreate for themselves, in real time. Regardless of rational explanations, how could eyes lie?
In the aftermath of the phenomenon, terms such as ‘mass hysteria’ and ‘religious frenzy’ cropped up frequently. Vasant Sathe, a former cabinet minister and an avowed rationalist,
said “In the age of computers, it is an insult to human intelligence to say that the gods are drinking milk.”
A scientists’ petition urged educated people to take on the responsibility to prevent a “form of primitive obscurantism… at the dawn of the twenty-first century.” But such appeals had equal and opposite reactions. The former electoral commissioner TN Seshan denounced them as “pseudo scientists.” Kamala Ganesh, then reader in cultural anthropology at Bombay University, said in an interview to *The Times of India*: “During every election, paeans are sung to the rationalism of the electorate, and yet the word ‘superstitious’ is now being used to characterise the believers. A godman performing tricks, I believe, is quite different from the present phenomenon, which has much more to do with belief than with magic tricks.” | true | true | true | For a few days in 1995, many Indians believed a religious idol had developed a lifelike ability to drink milk — a new story from India to the world, each week on Fifty Two. | 2024-10-12 00:00:00 | 2020-11-06 00:00:00 | article | null | FiftyTwo | null | null |
|
24,328,962 | https://henrikwarne.com/2020/08/30/deployed-to-production-is-not-enough/ | Deployed To Production Is Not Enough | null | You have developed a new feature. The code has been reviewed, and all the tests pass. You have just deployed this new feature to production. So on to the next task, right? Wrong. Most of the time, you should check that the feature behaves as expected in production. For a simple feature it may be enough to just try it out. But many features are not easily testable. They may be just one part of a complex flow of actions. Or they deal with external data fed into the system.
In such cases, checking if the feature is working means looking at the logs. Yet another reason for checking the logs is that the feature may be working fine most of the time, but given unanticipated data, it fails. Usually when I deploy something new to production, I follow up by looking at the logs. Often I find surprising behavior or unexpected data.
Many developers simply assume that the new feature they are deploying will work as expected. Ideally, the new feature has been tested in a production-like test environment. In my experience, it is not enough that all automatic tests pass. If the new feature has not been explored in a test system, there is a risk that it is not working properly. This is because the automatic tests focus on the code, but when exploring the feature in a test system, you consider the whole picture. It is the difference between checking and exploring.
But even when the feature has been tested properly before, there is a risk that it won’t work as intended in production. The main reason for this is that the environment in production is more complex than in the test system. There is usually **more traffic, more concurrency, and more diverse data**.
The key to finding out how the new feature behaves in the more complex production environment is logging. I have already written about what I think is needed for good logging. Of course, there needs to be logging in the first place. If you don’t log anything about how your feature is behaving, you are effectively blind. The only way to know if it is working is to test it, or to wait for trouble reports from customers. **If you do log how the feature behaves, you can be proactive**. After I have deployed a new feature, I usually look at the logs. Typically, most log entries are the expected cases. The interesting part is when you exclude all the expected cases, or search for error cases. That is usually when I find the corner cases that I had not anticipated.
I sometimes hear people saying that nothing should be logged when everything is working as expected. However, that stops you from finding all the cases where your code “*works*” but gives the wrong result. Often this is due to unanticipated data. For example, suppose an agreement should be deactivated when a user sets the exposure value to zero. But what if the exposure value is sometimes set to zero by a system user too? Is that correct? If you are logging when it happens, you will notice that it is sometimes set by the system user as well, which is perhaps not the intended behavior. Without logging, you would not be able to see this difference. You could say the requirements were incomplete, but to discover that, logging was needed.
Another reason for logging what is happening (even if it is not errors) is that it **helps trouble shooting**. The system may be behaving as expected, but people are misunderstanding what should happen (not uncommon for a complex system). Checking the logs to see what happened will demonstrate that the system did the correct thing, even though it was not what we expected it to be.
### Continuing Testing In Production
Often, checking how a new feature behaves in production is referred to as **“testing in production”**. I think this label is misleading. It makes it sound like there is no testing done before deploying to production. But of course the responsible thing is to test thoroughly before deploying. I think **continuing testing in production** better describes what it is. Checking the logs after deployment is one aspect of this. But there are other ways of making sure that deploying a new feature does not cause any problems. Here are some ways we are using at work:
**Gradual rollout.** When you introduce a new feature, it doesn’t have to be all or nothing. If you want to be cautious, only turn it on for a small subset of users at a time. For example, we have used the starting letter of the party group name to decide if the new feature should be used. First, only party group names starting with **A – D**, then **A – H** and so on.
**Feature flags**. Another way of doing a gradual roll out. Only users that have the flag enabled get the new feature. If there are many different feature flags at the same time, there can be problems that only show up for a given combination of flags that are on and off. Therefore, it is good to remove feature flags that are no longer needed.
**Test accounts**. Having users in the production system that are only used for testing is really good. Then you can test features in production without impacting real customers.
**Fall back to previous**. If a feature introduces a new way of doing something, it is good to keep the old way of doing it for a while. Then you can add code that will fall back to the old solution if anything goes wrong with the new solution. These fallbacks should be tracked (metrics or logging), so that you can decide when it is safe to remove the old solution.
**Compare results**. A variation of fallback to previous. If you introduce a new way of, for example, calculating a margin call amount, then it is good to do it both ways and compare the results. If the results differ, you need to investigate why.
**Easy enable/disable**. When you introduce a new feature, it is good if it can be easily disabled, for example with a flag. That way, if there is a problem with the feature, it can just be disabled. An alternative is to roll back the change (deploying the old code again), but sometimes that is more complicated, for example if the database schema was changed.
**Heartbeat jobs**. Performing some basic round-trip action every few minutes or so, and raising an alarm if it fails, is really helpful. There are many ways a system can fail (connectivity issues, overload, bugs in the code etc.), and all of them are noticed if a heartbeat job then fails.
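To make the "compare results" and "fall back to previous" ideas above more concrete, here is a small illustrative sketch in JavaScript (all names are invented, not taken from any real codebase):

```js
// Wrap an old and a new implementation of the same calculation: run both,
// log any mismatch, and keep returning the old result until the new one
// has proven itself in production.
function compareAndFallback(oldCalc, newCalc, logger = console) {
  return (input) => {
    const oldResult = oldCalc(input);
    try {
      const newResult = newCalc(input);
      if (newResult !== oldResult) {
        logger.warn("result mismatch", { input, oldResult, newResult });
      }
    } catch (err) {
      logger.error("new calculation failed, falling back to old one", err);
    }
    return oldResult; // switch to newResult once the logs stay quiet
  };
}

// Hypothetical usage:
// const marginCallAmount = compareAndFallback(oldMarginCall, newMarginCall);
```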
### Conclusion
Deploying to production is not the last step. You need to be proactive and make sure that what you have deployed works as expected, without any surprises. Also, there are many strategies to lower the risk when deploying new features to production.
Reddit discussion: https://www.reddit.com/r/programming/comments/ijrzha/deployed_to_production_is_not_enough/ | true | true | true | You have developed a new feature. The code has been reviewed, and all the tests pass. You have just deployed this new feature to production. So on to the next task, right? Wrong. Most of the time, … | 2024-10-12 00:00:00 | 2020-08-30 00:00:00 | article | henrikwarne.com | Henrik Warne's blog | null | null |
|
10,413,230 | http://www.theguardian.com/environment/2015/oct/19/oslo-moves-to-ban-cars-from-city-centre-within-four-years | Oslo moves to ban cars from city centre within four years | Agence France-Presse; Guardian staff reporter | Oslo’s new leftist city government said Monday it wants to ban private cars from the city centre by 2019 as part of a plan to slash greenhouse gas emissions.
The Labour Party and its allies the Socialist Left and the Green Party, winners of the 14 September municipal elections in the Norwegian capital, presented a platform focused on the environment and the fight against climate change.
The programme envisages a ban on private vehicles in the city centre which, according to the Verdens Gang newspaper, is home to only about 1,000 people but where some 90,000 work.
The new city government did not give details of how the plan would be implemented.
But the proposal has sparked concerns among local businessmen, who noted that 11 of the city’s 57 shopping centres are in the planned car-free zone.
The ban on automobiles is part of a plan to slash emissions of greenhouse gases by 50% by 2020 compared to 1990 levels.
The new city authorities also plan to divest fossil fuels from their pension funds, build more bicycle lanes, subsidise the purchase of electric bicycles and reduce automobile traffic over the city as a whole by 20% by 2019 and 30% by 2030.
“In 2030, there will still be people driving cars but they must be zero-emissions,” Lan Marie Nguyen Berg, a member of the Green Party, told a news conference.
Norwegian media said the largely ceremonial post of city mayor would go to Marianne Borgen of the Socialist Left and not Shoaib Sultan, the candidate of the Green Party, who thereby misses out on becoming one of the first Muslims to lead a major European city.
|
20,624,737 | https://blog.blazingdb.com/blazingsql-is-now-open-source-b859d342ec20?gi=d9d1313230b8 | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
8,423,367 | http://vimeo.com/102632687 | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
3,254,227 | https://generalassemb.ly/ | Tech Talent & Training Solutions | General Assembly | null | # LAND A NEW CAREER WITH NEXT-LEVEL TECH TRAINING
Yes. You can. Get on the path to a tech career in an in-demand field like software engineering, data, or user experience design. Our flexible course options make it happen. In just 12 weeks.
Tap into a network of 110k+ alumni from top tech companies around the world — plus our diverse network of employers and alumni that trust GA grads. As part of the Adecco Group, we have unmatched reach — for unmatched career opportunities.
Get the real, hands-on tech skills you need to start a new career or move into a new role — plus the soft skills you need to be job-ready before your first day even hits.
Learn from instructors with real-world experience at top tech companies. They’ve been there — and they know what top employers are looking for in tech talent.
Work with a team of career experts.
It’s your call. Whether you want to launch a new career with a full-time bootcamp or build new skills while working your current job, you’ve got options.
From installment plans to 0% interest loans, pay-after-you’re-hired options, tuition discounts, and more, we make financing easy and accessible. Wave goodbye to financial pressure and hello to the tech skills you need to reinvent yourself.
“GA has a great reputation among bootcamps, especially since they have a large employer network. I wouldn’t be where I am now — which is in a much happier place — without GA.”
Senior UX Designer, JP Morgan Chase & Co
Want more info from our Admissions team or interested in applying for a course? Let’s chat.
37,676,884 | https://badsoftwareadvice.substack.com/p/how-to-delay-automation-of-your-builds | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
27,166,503 | https://stackoverflow.com/questions/11227809/why-is-processing-a-sorted-array-faster-than-processing-an-unsorted-array | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
24,836,791 | https://perceptionbox.io/business/technology-transfer-for-startups-and-smbs/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
23,906,784 | https://github.com/dspinellis/unix-history-repo | GitHub - dspinellis/unix-history-repo: Continuous Unix commit history from 1970 until today | Dspinellis | The history and evolution of the Unix operating system is made available as a revision management repository, covering the period from its inception in 1970 as a 2.5 thousand line kernel and 26 commands, to 2018 as a widely-used 30 million line system. The 1.5GB repository contains about half a million commits and more than two thousand merges. The repository employs Git system for its storage and is hosted on GitHub. It has been created by synthesizing with custom software 24 snapshots of systems developed at Bell Labs, the University of California at Berkeley, and the 386BSD team, two legacy repositories, and the modern repository of the open source FreeBSD system. In total, about one thousand individual contributors are identified, the early ones through primary research. The data set can be used for empirical research in software engineering, information systems, and software archaeology.
You can read more details about the contents, creation, and uses of this repository through this link.
Two repositories are associated with the project:
- unix-history-repo is a repository representing a reconstructed version of the Unix history, based on the currently available data. This repository will be often automatically regenerated from scratch, so this is not a place to make contributions. To ensure replicability its users are encouraged to fork it or archive it.
- unix-history-make is a repository containing code and metadata used to build the above repository. Contributions to this repository are welcomed.
The project has achieved its major goal with the establishment of a continuous timeline from 1970 until today. The repository contains:
- snapshots of PDP-7, V1, V2, V3, V4, V5, V6, and V7 Research Edition,
- Unix/32V,
- all available BSD releases,
- the CSRG SCCS history,
- two releases of 386BSD,
- the 386BSD patchkit,
- the FreeBSD 1.0 to 1.1.5 CVS history,
- an import of the FreeBSD repository starting from its initial imports that led to FreeBSD 2.0, and
- the current FreeBSD repository.
The files appear to be added in the repository in chronological order according to their modification time, and large parts of the source code have been attributed to their actual authors. Commands like `git blame`
and `git log`
produce the expected results.
The repository contains a number of two-way merges.
- 3 BSD is merged from Unix/32V and Research Edition 6
- Various BSD releases are merged from the development branch and a time point of BSD-SCCS
- FreeBSD 1.0 is merged from Net/2 BSD and 386BSD-0.1-patchkit
- FreeBSD 2.0 is merged from BSD 4.4/Lite1 and FreeBSD 1.1.5
Blame is apportioned appropriately.
The following tags or branch names mark specific releases, listed in rough chronological order.
- Epoch
- Research-PDP7
- Research-V1–6
- BSD-1
- BSD-2
- Research-V7
- Bell-32V
- BSD-3, 4, 4_1_snap, 4_1c_2, 4_2, 4_3, 4_3_Reno, 4_3_Net_1, 4_3_Tahoe, 4_3_Net_2, 4_4, 4_4_Lite1, 4_4_Lite2 SCCS-END,
- 386BSD-0.0, 0.1, patchkit
- FreeBSD-release/1.0, 1.1, 1.1.5
- FreeBSD-release/2.0 2.0.5, 2.1.0, 2.1.5, 2.1.6, 2.1.6.1, 2.1.7, 2.2.0, 2.2.1, 2.2.2, 2.2.5, 2.2.6, 2.2.7, 2.2.8
- FreeBSD-release/3.0.0, 3.1.0, 3.2.0, 3.3.0, 3.4.0, 3.5.0
- FreeBSD-release/4.0.0 4.1.0, 4.1.1, 4.2.0, 4.3.0, 4.4.0, 4.5.0, 4.6.0, 4.6.1, 4.6.2, 4.7.0, 4.8.0, 4.9.0, 4.10.0, 4.11.0
- FreeBSD-release/5.0.0 5.1.0, 5.2.0, 5.2.1, 5.3.0, 5.4.0, 5.5.0
- FreeBSD-release/6.0.0, 6.1.0, 6.2.0, 6.3.0, 6.4.0
- FreeBSD-release/7.0.0, 7.1.0, 7.2.0, 7.3.0, 7.4.0
- FreeBSD-release/8.0.0, 8.1.0, 8.2.0, 8.3.0, 8.4.0
- FreeBSD-release/9.0.0, 9.1.0, 9.2.0, 9.3.0
- FreeBSD-release/10.0.0, 10.1.0, 10.2.0, 10.3.0, 10.4.0
- FreeBSD-release/11.0.0, 11.0.1, 11.1.0, 11.2.0, 11.3.0, 11.4.0
- FreeBSD-release/12.0.0, 12.1.0
A detailed description of the major tags is available in the file releases.md.
More tags and branches are available.
- The
`-Snapshot-Development`
branches denote commits that have been synthesized from a time-ordered sequence of a snapshot's files. - The
`-VCS-Development`
tags denote the point along an imported version control history branch where a particular release occurred.
The easiest thing you can do is to watch the repository's Gource Visualization.
If you have a broadband network connection and about 1.5GB of free disk space, you can download the repository and run Git commands that go back decades. Run
```
git clone https://github.com/dspinellis/unix-history-repo
git checkout BSD-Release
```
to get a local copy of the Unix history repository.
Running
`git log --reverse --date-order`
will give you commits such as the following
```
commit 64d7600ea5210a9125bd1a06e5d184ef7547d23d
Author: Ken Thompson <[email protected]>
Date: Tue Jun 20 05:00:00 1972 -0500
Research V1 development
Work on file u5.s
Co-Authored-By: Dennis Ritchie <[email protected]>
Synthesized-from: v1/sys
[...]
commit 4030f8318890a026e065bc8926cebefb71e9d353
Author: Ken Thompson <[email protected]>
Date: Thu Aug 30 19:30:25 1973 -0500
Research V3 development
Work on file sys/ken/slp.c
Synthesized-from: v3
[...]
commit c4999ec655319a01e84d9460d84df824006f9e2d
Author: Dennis Ritchie <[email protected]>
Date: Thu Aug 30 19:33:01 1973 -0500
Research V3 development
Work on file sys/dmr/kl.c
Synthesized-from: v3
[...]
commit 355c543c6840fa5f37d8daf2e2eaa735ea6daa4a
Author: Brian W. Kernighan <[email protected]>
Date: Tue May 13 19:43:47 1975 -0500
Research V6 development
Work on file usr/source/rat/r.g
Synthesized-from: v6
[...]
commit 0ce027f7fb2cf19b7e92d74d3f09eb02e8fea50e
Author: S. R. Bourne <[email protected]>
Date: Fri Jan 12 02:17:45 1979 -0500
Research V7 development
Work on file usr/src/cmd/sh/blok.c
Synthesized-from: v7
[...]
Author: Eric Schmidt <[email protected]>
Date: Sat Jan 5 22:49:18 1980 -0800
BSD 3 development
Work on file usr/src/cmd/net/sub.c
```
Run
```
git checkout Research-Release
git log --follow --simplify-merges usr/src/cmd/c/c00.c
```
to see dates on which the C compiler was modified.
Run
```
git blame -C -C usr/sys/sys/pipe.c
```
to see how the Unix pipe functionality evolved over the years.
```
3cc1108b usr/sys/ken/pipe.c (Ken Thompson 1974-11-26 18:13:21 -0500 53) rf->f_flag = FREAD|FPIPE;
3cc1108b usr/sys/ken/pipe.c (Ken Thompson 1974-11-26 18:13:21 -0500 54) rf->f_inode = ip;
3cc1108b usr/sys/ken/pipe.c (Ken Thompson 1974-11-26 18:13:21 -0500 55) ip->i_count = 2;
[...]
1f183be2 usr/sys/sys/pipe.c (Ken Thompson 1979-01-10 15:19:35 -0500 122) register struct inode *ip;
1f183be2 usr/sys/sys/pipe.c (Ken Thompson 1979-01-10 15:19:35 -0500 123)
1f183be2 usr/sys/sys/pipe.c (Ken Thompson 1979-01-10 15:19:35 -0500 124) ip = fp->f_inode;
1f183be2 usr/sys/sys/pipe.c (Ken Thompson 1979-01-10 15:19:35 -0500 125) c = u.u_count;
1f183be2 usr/sys/sys/pipe.c (Ken Thompson 1979-01-10 15:19:35 -0500 126)
1f183be2 usr/sys/sys/pipe.c (Ken Thompson 1979-01-10 15:19:35 -0500 127) loop:
1f183be2 usr/sys/sys/pipe.c (Ken Thompson 1979-01-10 15:19:35 -0500 128)
1f183be2 usr/sys/sys/pipe.c (Ken Thompson 1979-01-10 15:19:35 -0500 129) /*
9a9f6b22 usr/src/sys/sys/pipe.c (Bill Joy 1980-01-05 05:51:18 -0800 130) * If error or all done, return.
9a9f6b22 usr/src/sys/sys/pipe.c (Bill Joy 1980-01-05 05:51:18 -0800 131) */
9a9f6b22 usr/src/sys/sys/pipe.c (Bill Joy 1980-01-05 05:51:18 -0800 132)
9a9f6b22 usr/src/sys/sys/pipe.c (Bill Joy 1980-01-05 05:51:18 -0800 133) if (u.u_error)
9a9f6b22 usr/src/sys/sys/pipe.c (Bill Joy 1980-01-05 05:51:18 -0800 134) return;
6d632e85 usr/sys/ken/pipe.c (Ken Thompson 1975-07-17 10:33:37 -0500 135) plock(ip);
6d632e85 usr/sys/ken/pipe.c (Ken Thompson 1975-07-17 10:33:37 -0500 136) if(c == 0) {
6d632e85 usr/sys/ken/pipe.c (Ken Thompson 1975-07-17 10:33:37 -0500 137) prele(ip);
6d632e85 usr/sys/ken/pipe.c (Ken Thompson 1975-07-17 10:33:37 -0500 138) u.u_count = 0;
6d632e85 usr/sys/ken/pipe.c (Ken Thompson 1975-07-17 10:33:37 -0500 139) return;
6d632e85 usr/sys/ken/pipe.c (Ken Thompson 1975-07-17 10:33:37 -0500 140) }
```
You can help if you were there at the time, or if you can locate a source that contains information that is currently missing.
- If your current GitHub account is not linked to your past contributions, (you can search them through this page), associate your past email with your current account through your GitHub account settings. (Contact me for instructions on how to add email addresses to which you no longer have access.)
- Look for errors and omissions in the files that map file paths to authors.
- Look for parts of the system that have not yet been attributed in these files and propose suitable attributions. Keep in mind that attributions for parts that were developed in one place and modified elsewhere (e.g. developed at Bell Labs and modified at Berkeley) should be for the person who did the modification, not the original author.
- Look for authors whose identifier starts with
`x-`
in the author id to name map files for Bell Labs, and Berkeley, and provide or confirm their actual login identifier. (The one used is a guess.) - Contribute a path regular expression to contributor map file (see v7.map) for 4.2BSD, 4.3BSD, 4.3BSD-Reno, 4.3BSD-Tahoe, 4.3BSD-Alpha, and Net2.
- Import further branches, such as 2BSD, NetBSD, OpenBSD, and
*Plan 9 from Bell Labs*.
The -make repository is provided to share and document the creation process, rather than as a bullet-proof way to get consistent and repeatable results. For instance, importing the snapshots on a system that is case-insensitive (NTFS, HFS Plus with default settings) will result in a few files getting lost.
- Git
- Perl
- The Perl modules
`VCS::SCCS`
and`Git::FastExport`
(Install with`sudo cpanm VCS::SCCS Git::FastExport`
.) - If compiling patch under GNU/Linux and library
`libbsd`
(e.g. the`libbsd-dev`
package). - Sudo (and authorization to use it to mount ISO images)
The -repo repository can be created with the following commands.
```
make
./import.sh
```
If you want to add a new source without running the full import process, you can do the following.
- Prepare the source's maps and data
`cd`
to the repo directory- Checkout the repo at the point where the new source will branch out
- Run a Perl command such as the following.
```
perl ../import-dir.pl [-v] -m Research-V7 -c ../author-path/Bell-32V \
-n ../bell.au -r Research-V7 -i ../ignore/Bell-32V \
$ARCHIVE/32v Bell 32V -0500 | gfi
```
- Documented Unix facilities timeline
- edX open online course on Unix tools for data, software, and production engineering
- Scientific publications
- Diomidis Spinellis. A repository of Unix history and evolution.
*Empirical Software Engineering*, 2017. doi:10.1007/s10664-016-9445-5. HTML, PDF - Diomidis Spinellis. A repository with 44 years of Unix evolution. In
*MSR '15: Proceedings of the 12th Working Conference on Mining Software Repositories*, pages 13-16. IEEE, 2015. Best Data Showcase Award. PDF, HTML, poster. - Diomidis Spinellis and Paris Avgeriou. Evolution of the Unix system architecture: An exploratory case study.
*IEEE Transactions on Software Engineering*, 2020. http://dx.doi.org/10.1109/TSE.2019.2892149. - Warren Toomey, First Edition Unix: Its Creation and Restoration, in
*IEEE Annals of the History of Computing*, vol. 32, no. 3, pp. 74-82, July-Sept. 2010. doi:10.1109/MAHC.2009.55. PDF - Warren Toomey, The Restoration of Early UNIX Artifacts, in
*USENIX ATC '09: 2009 USENIX Annual Technical Conference*. 2009. PDF - Diomidis Spinellis, Panagiotis Louridas, and Maria Kechagia. An exploratory study on the evolution of C programming in the Unix operating system. In Qing Wang and Guenther Ruhe, editors,
*ESEM '15: 9th International Symposium on Empirical Software Engineering and Measurement*, pages 54–57. IEEE, October 2015. HTML, PDF - Diomidis Spinellis, Panos Louridas, and Maria Kechagia. The evolution of C programming practices: A study of the Unix operating system 1973–2015. In Willem Visser and Laurie Williams, editors,
*ICSE '16: Proceedings of the 38th International Conference on Software Engineering*, May 2016. Association for Computing Machinery. doi:10.1145/2884781.2884799. PDF, HTML - Diomidis Spinellis. Documented Unix facilities over 48 years. In
*MSR '18: Proceedings of the 15th Conference on Mining Software Repositories*. Association for Computing Machinery, May 2018. doi:10.1145/3196398.3196476 PDF, poster
- Diomidis Spinellis. A repository of Unix history and evolution.
- Research Edition Unix Manuals
- Wikipedia: The Free Encyclopedia
- TUHS: The Unix Heritage Society
- Historical documents and data
- Studies
- The following people helped with Bell Labs login identifiers.
- Brian W. Kernighan
- Doug McIlroy
- Arnold D. Robbins
- The following people helped with *BSD login identifiers.
- Clem Cole
- Era Eriksson
- Mary Ann Horton
- Warner Losh
- Kirk McKusick
- Jeremy C. Reed
- Ingo Schwarze
- Anatole Shaw
- The BSD SCCS import code is based on work by
- H. Merijn Brand
- Jonathan Gray
Data set versioned DOI: Software versioned DOI:
- Software URL: https://github.com/dspinellis/unix-history-make
- Software SHA: 86383f1340e3735552b58df0a42696dbcb9dac00
- Build timestamp: 2021-01-01 11:40:51 UTC | true | true | true | Continuous Unix commit history from 1970 until today - dspinellis/unix-history-repo | 2024-10-12 00:00:00 | 2014-07-18 00:00:00 | https://opengraph.githubassets.com/e2a7efcb83911356a20d62290990658e943e6dfcf86dd6a2fcb2ba106cc73261/dspinellis/unix-history-repo | object | github.com | GitHub | null | null |
5,404,751 | http://online.wsj.com/article_email/SB10001424127887324323904578370721114852766-lMyQjAxMTAzMDEwOTExNDkyWj.html | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
4,319,279 | http://www.economist.com/blogs/johnson/2012/07/language-and-computers | Why language isn't computer code | null | Culture | Language and computers
# Why language isn't computer code
## The differences between formal and natural languages are as big as the similarities
26,512,760 | https://github.com/skiffos/SkiffOS | GitHub - skiffos/SkiffOS: Any Linux distribution, anywhere. | Skiffos | SkiffOS is a config package system for the Buildroot OS cross-compiler.
- **Run any distribution anywhere**: decouples hardware support from user distro environments.
- **Reliable**: minimal read-only host system for unbreakable boot-ups and over-the-air updates.
- **Reproducible**: offline builds, pinned package versions, source-controlled custom configs.
Configuration packages are merged together to configure the system:
- `SKIFF_CONFIG=pi/4,core/debian` - run Debian desktop on a Raspberry Pi 4.
- `SKIFF_CONFIG=odroid/xu4,core/fedora` - run Fedora desktop on a Odroid XU4.
- `SKIFF_CONFIG=virt/qemu,custom/config` - run a custom config in a Qemu VM.
There is a project template you can use for version-controlled customizations.
Linux devices have varying requirements for kernel, firmware, and other hardware support packages. SkiffOS decouples this support from the containerized environments. The containers are portable across devices with the same CPU architecture, while ordinary OS images (Board Support Packages) are not.
Supports any Linux-compatible computer, ranging from RPi, Odroid, NVIDIA Jetson, to Desktop PCs, Laptops (i.e. Apple MacBook), Phones, Cloud VMs, and even Web Browsers.
The Buildroot OS cross-compiler can target any Linux-compatible device or virtual machine. These system configuration packages are available in the main SkiffOS repository:
System | Config Package | Bootloader | Kernel |
---|---|---|---|
VirtualBox | virt/virtualbox | N/A | ✔ 6.11.3 |
Docker Img | virt/docker | N/A | N/A |
Incus | virt/incus | N/A | ✔ 6.11.3 |
Qemu | virt/qemu | N/A | ✔ 6.11.3 |
UTM on MacOS | apple/arm + virt/qemu | N/A | ✔ 6.11.3 |
V86 on WebAssembly | browser/v86 | V86 | ✔ 6.11.3 |
WSL on Windows | virt/wsl | N/A | N/A |
----------------------- | --------------------------- | ------------------ | ----------------- |
Allwinner Nezha | allwinner/nezha | ✔ U-boot 2022.10 | ✔ sm-6.1-rc3 |
Apple Macbook Intel | apple/intel | ✔ rEFInd | ✔ 6.11.3 |
Apple Silicon | apple/arm | ✔ UTM (as VM) | ✔ 6.11.3 |
BananaPi M1+/Pro | bananapi/m1plus | ✔ U-Boot 2023.07 | ✔ 6.11.3 |
BananaPi M1 | bananapi/m1 | ✔ U-Boot 2023.07 | ✔ 6.11.3 |
BananaPi M2 | bananapi/m2 | ✔ U-Boot 2023.07 | ✔ 6.11.3 |
BananaPi M2+ | bananapi/m2plus | ✔ U-Boot 2023.07 | ✔ 6.11.3 |
BananaPi M2 Ultra | bananapi/m2ultra | ✔ U-Boot 2023.07 | ✔ 6.11.3 |
BananaPi M3 | bananapi/m3 | ✔ U-Boot 2023.07 | ✔ 6.11.3 |
BeagleBoard X15 | beaglebone/x15 | ✔ U-Boot 2022.04 | ✔ 5.10.168-ti |
BeagleBone AI | beaglebone/ai | ✔ U-Boot 2022.04 | ✔ 5.10.168-ti |
BeagleBone Black | beaglebone/black | ✔ U-Boot 2022.04 | ✔ 5.10.168-ti |
BeagleBoard BeagleV | starfive/visionfive | ✔ U-Boot 2021.04 | ✔ sv-5.19-rc3 |
Intel x86/64 | intel/desktop | ✔ rEFInd | ✔ 6.11.3 |
ModalAI Voxl2 | modalai/voxl2 | N/A | ✔ msm-4.19.125 |
NVIDIA Jetson AGX | jetson/agx | ✔ UEFI | ✔ nv-5.10.120 |
NVIDIA Jetson Nano | jetson/nano | ✔ U-Boot | ✔ nv-4.9.337 |
NVIDIA Jetson TX2 | jetson/tx2 | ✔ U-Boot | ✔ nv-4.9.337 |
Odroid C2 | odroid/c2 | ✔ U-Boot 2023.07 | ✔ tb-6.4.3 |
Odroid C4 | odroid/c4 | ✔ U-Boot 2023.07 | ✔ tb-6.4.3 |
Odroid H2 | odroid/h3 | ✔ rEFInd | ✔ 6.11.3 |
Odroid H2+ | odroid/h3 | ✔ rEFInd | ✔ 6.11.3 |
Odroid H3 | odroid/h3 | ✔ rEFInd | ✔ 6.11.3 |
Odroid H3+ | odroid/h3 | ✔ rEFInd | ✔ 6.11.3 |
Odroid HC1 | odroid/xu | ✔ U-Boot 2023.07 | ✔ tb-6.4.3 |
Odroid HC2 | odroid/xu | ✔ U-Boot 2023.07 | ✔ tb-6.4.3 |
Odroid HC4 | odroid/hc4 | ✔ U-Boot 2023.07 | ✔ tb-6.4.3 |
Odroid M1 | odroid/m1 | ✔ U-Boot 2017.09 | ✔ tb-6.4.3 |
Odroid N2+ | odroid/n2 | ✔ U-Boot 2023.07 | ✔ tb-6.4.3 |
Odroid N2L | odroid/n2l | ✔ U-Boot 2023.07 | ✔ tb-6.4.3 |
Odroid U | odroid/u | ✔ U-Boot 2023.07 | ✔ tb-6.4.3 |
Odroid XU3 | odroid/xu | ✔ U-Boot 2023.07 | ✔ tb-6.4.3 |
Odroid XU4 | odroid/xu | ✔ U-Boot 2023.07 | ✔ tb-6.4.3 |
OrangePi Lite | orangepi/lite | ✔ U-Boot 2018.05 | ✔ 6.11.3 |
OrangePi Zero | orangepi/zero | ✔ U-Boot 2018.07 | ✔ 6.11.3 |
PcDuino 3 | pcduino/3 | ✔ U-Boot 2019.07 | ✔ 6.11.3 |
PcEngines APU2 | pcengines/apu2 | ✔ CoreBoot | ✔ 6.11.3 |
Pi 0 | pi/0 | N/A | ✔ rpi-6.6.45 |
Pi 1 | pi/1 | N/A | ✔ rpi-6.6.45 |
Pi 3 + 1, 2 | pi/3 | N/A | ✔ rpi-6.6.45 |
Pi 4 | pi/4 | N/A | ✔ rpi-6.6.45 |
Pi 4 (32bit mode) | pi/4x32 | N/A | ✔ rpi-6.6.45 |
Pi 5 | pi/5 | N/A | ✔ rpi-6.6.45 |
Pine64 H64 | pine64/h64 | ✔ U-Boot 2022.04 | ✔ megi-6.6-pre |
PineBook A64 | pine64/book_a64 | ✔ U-Boot (bin) | ✔ megi-6.6-pre |
PineBook Pro | pine64/book | ✔ U-Boot (bin) | ✔ megi-6.6-pre |
PinePhone | pine64/phone | ✔ U-Boot (bin) | ✔ megi-6.6-pre |
PinePhone Pro | pine64/phone_pro | ✔ U-Boot (bin) | ✔ megi-6.6-pre |
Rock64 rk3328 | pine64/rock64 | ✔ U-Boot 2022.04 | ✔ megi-6.6-pre |
RockPro64 | pine64/rockpro64 | ✔ U-Boot (bin) | ✔ megi-6.6-pre |
Sipeed LicheeRV | allwinner/licheerv | ✔ U-Boot 2022.07 | ✔ sm-5.19-rc1 |
VisionFive | starfive/visionfive | ✔ U-Boot 2021.04 | ✔ sv-5.19-rc3 |
VisionFive2 v1.2 | starfive/visionfive2_12 | ✔ U-Boot 2024.07 | ✔ 6.11.3 |
VisionFive2 v1.3 | starfive/visionfive2 | ✔ U-Boot 2024.07 | ✔ 6.11.3 |
USBArmory Mk2 | usbarmory/mk2 | ✔ U-Boot 2020.10 | ✔ 6.11.3 |
Valve Steam Deck | valve/deck | N/A | ✔ valve-6.5.0 |
Wandboard | freescale/wandboard | ✔ U-Boot 2022.04 | ✔ 6.11.3 |
Adding support for a board involves creating a Skiff configuration package for
the board, as described below. If you have a device that is not yet supported by
SkiffOS, please **open an issue.**
The Buildroot build dependencies must be installed as a prerequisite; on apt-based systems:
```
sudo apt-get install -y \
bash \
bc \
binutils \
build-essential \
bzip2 \
cpio \
diffutils \
file \
findutils \
gzip \
libarchive-tools \
libncurses-dev \
make \
patch \
perl \
rsync \
sed \
tar \
unzip \
wget
```
This example uses `pi/4`
for the Raspberry Pi 4, see Supported Systems.
Create a SSH key on your development machine. Add the public key to your build
with `cp ~/.ssh/*.pub ./overrides/root_overlay/etc/skiff/authorized_keys`
. This
will be needed to enable SSH access.
```
$ git submodule update # make sure buildroot is up to date
$ make # lists all available options
$ export SKIFF_WORKSPACE=default # optional: supports multiple SKIFF_CONFIG at once
$ export SKIFF_CONFIG=pi/4,skiff/core
$ make configure # configure the system
$ make compile # build the system
```
After you run `make configure`
your `SKIFF_CONFIG`
selection will be saved. The
build can be interrupted and later resumed with `make compile`
.
`SKIFF_WORKSPACE`
defaults to `default`
and is used to compile multiple
`SKIFF_CONFIG`
simultaneously.
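For example, a sketch of two builds kept side by side (the workspace names `rpi` and `vm` are arbitrary):

```
$ SKIFF_WORKSPACE=rpi SKIFF_CONFIG=pi/4,skiff/core make configure compile
$ SKIFF_WORKSPACE=vm SKIFF_CONFIG=virt/qemu,core/alpine make configure compile
```

Each workspace keeps its own Buildroot build directory under `workspaces/`, so the two builds do not interfere.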
There are many other utility commands made available by Buildroot, which can be
listed using `make br/help`
, some examples:
```
$ make br/menuconfig # explore Buildroot config menu
$ make br/sdk # build relocatable SDK for target
$ make br/graph-size # graph the target packages sizes
```
There are other application packages available, e.g. `apps/podman` and `apps/crio`.
Once the build is complete, it's time to flash the system to an SD card. You will
need to switch to `sudo bash`
for this on most systems.
```
$ sudo bash # switch to root
$ blkid # look for your SD card's device file
$ export PI_SD=/dev/sdz # make sure this is right!
$ make cmd/pi/common/format # tell skiff to format the device
$ make cmd/pi/common/install # tell skiff to install the os
```
The device needs to be formatted only once; after that, the install command can be used to update the SkiffOS images without clearing the persistent data. The persist partition is not touched in this step, so anything you save there, including all Docker containers and system configuration, will not be modified.
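For example, after rebuilding with `make compile`, only the install step needs to be repeated (as root, with the same device exported):

```
$ export PI_SD=/dev/sdz # same device as before
$ make cmd/pi/common/install # persist partition is left untouched
```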
Connect using SSH to `root@my-ip-address`
to access the SkiffOS system, and
connect to `core@my-ip-address`
to access the "Core" system container. See the
section above about SSH public keys if you get a password prompt.
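For example (substitute the device's actual IP address or hostname):

```
$ ssh root@my-ip-address # SkiffOS host system
$ ssh core@my-ip-address # "core" system container
```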
The mapping between users and containers can be edited in the
`/mnt/persist/skiff/core/config.yaml`
file.
The system can then be upgraded over-the-air (OTA) using the rsync script:
`$ ./scripts/push_image.bash root@my-ip-address`
The SkiffOS upgrade (or downgrade) will take effect on next reboot.
Building directly on MacOS is not yet possible, particularly due to the case-insensitivity of the MacOS file system. You can use Lima to build the OS:
Install Lima, then:
```
limactl start --name=skiffos-build https://raw.githubusercontent.com/skiffos/SkiffOS/master/build/lima/lima.yaml
limactl shell skiffos-build
```
Then in the lima shell:
```
cd
git clone https://github.com/skiffos/skiffos
cd skiffos
```
Proceed with usual build sequence.
See the apple/arm docs for building a VM to run on MacOS.
Use the `apps/podman`
configuration package to enable Podman support.
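For example, one possible combination that adds Podman support to a Raspberry Pi 4 build (the exact package list is up to you):

```
$ SKIFF_CONFIG=pi/4,apps/podman,skiff/core make configure compile
```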
SkiffOS Core runs Linux distributions in privileged containers:
- YAML configuration format for mapping containers, images, and users.
- systemd and/or other init processes operate as PID 1 inside the container.
- images can be pulled or compiled from scratch on first boot.
Adding **skiff/core** to `SKIFF_CONFIG`
enables Debian Sid with an XFCE desktop.
Other distributions and images supported:
Distribution | Config Package | Notes |
---|---|---|
Alpine | core/alpine | OpenRC |
Arch Linux | core/arch | Minimal desktop |
Debian Sid | skiff/core | Default: XFCE desktop |
Fedora | core/fedora | Minimal desktop |
Gentoo | core/gentoo | Based on latest stage3 |
Ubuntu | core/ubuntu | Snaps & Ubuntu Desktop |
Other less frequently updated images:
Distribution | Config Package | Notes |
---|---|---|
DietPi | core/dietpi | DietPi applications tool |
NASA cFS Framework | core/nasa_cfs | Flight software framework |
NASA Fprime Framework | core/nasa_fprime | Flight software framework |
NixOS | core/nixos | |
NixOS with XFCE | core/nixos_xfce | |
There are also core images specific to pine64/phone and pine64/book and jetson/common.
The default configuration creates a user named "core" mapped into a container,
but this can be adjusted with the `skiff-core.yaml`
configuration file:
```
containers:
core:
image: skiffos/skiff-core-gentoo:latest
[...]
users:
core:
container: core
containerUser: core
[...]
```
The provided example configs for the above supported distros are a good starting point for further customization.
To customize a running system, edit `/mnt/persist/skiff/core/config.yaml`
and
run `systemctl restart skiff-core`
to apply. You may need to delete existing
containers and restart skiff-core to re-create them after changing their config.
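A minimal sequence, assuming the default container name `core`:

```
$ vi /mnt/persist/skiff/core/config.yaml
$ docker rm -f core # remove the old container so it is re-created from the new config
$ systemctl restart skiff-core
```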
The configuration format and skiff-core source are in the skiff-core repo.
SkiffOS supports modular configuration packages: kernel & buildroot configs, root filesystem overlays, patches, hooks, and other resources.
Layers are named as `namespace/name`
. For example, a Raspberry Pi 4
configuration would be `pi/4`
and Docker is `apps/docker`
.
```
├── cflags: compiler flags in files
├── buildroot: buildroot configuration fragments
├── buildroot_ext: buildroot extensions (extra packages)
├── buildroot_patches: extra Buildroot global patches
│ ├── <packagename>: patch files for Buildroot <packagename>
│ └── <packagename>/<version>: patches for package version
├── busybox: busybox configuration fragments
├── extensions: extra commands to add to the build system
│ └── Makefile
├── hooks: scripts hooking pre/post build steps
│ ├── post.sh
│ └── pre.sh
├── kernel: kernel configuration fragments
├── kernel_patches: kernel .patch files
├── root_overlay: root overlay files
├── metadata: metadata files
│ ├── commands
│ ├── dependencies
│ ├── description
│ └── unlisted
├── resources: files used by the configuration package
├── scripts: any scripts used by the extensions
├── uboot: u-boot configuration fragments
├── uboot_patches: u-boot .patch files
└── users: additional buildroot user config files
```
All files are optional.
To add custom users, add files in the "users" dir with the makeuser syntax.
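For illustration, a hypothetical sketch placed in the overrides layer (described below); the field order follows the Buildroot makeusers format (name, uid, group, gid, password, home, shell, groups, comment), with `-1` letting Buildroot pick the uid/gid. See the Buildroot manual for the password-field options:

```
mkdir -p ./overrides/users
cat > ./overrides/users/skiffdev <<'EOF'
skiffdev -1 skiffdev -1 - /home/skiffdev /bin/sh - SkiffOS example user
EOF
```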
You can set the following env variables to control this process:
- `SKIFF_CONFIG_PATH_ODROID_XU`: set the path for the ODROID_XU config package. You can set this to add new packages or override old ones.
- `SKIFF_EXTRA_CONFIGS_PATH`: colon (`:`) separated list of paths to look for config packages.
- `SKIFF_CONFIG`: name of the Skiff config to use, or a comma-separated list to overlay, with the later entries taking precedence.
These packages will be available in the SkiffOS system.
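For example, a sketch of pulling in an out-of-tree package (the path and the `custom/myconfig` name are hypothetical; the directory is assumed to follow the same `namespace/name` layout as the in-tree configs):

```
$ export SKIFF_EXTRA_CONFIGS_PATH=$HOME/my-skiff-configs
$ export SKIFF_CONFIG=pi/4,custom/myconfig
$ make configure compile
```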
It's often useful to be able to adjust the configs during development without actually creating a new configuration layer. This can be easily done with the overrides layer.
The overrides directory is treated as an additional configuration layer. The layout of the configuration layers is described above. Overrides is ignored by Git, and serves as a quick and easy way to modify the configuration.
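For example, a sketch that adds a kernel config fragment through the overrides layer (the fragment filename is arbitrary; the `kernel` directory holds kernel configuration fragments, as shown in the package layout above):

```
$ mkdir -p ./overrides/kernel
$ echo "CONFIG_USB_SERIAL=y" > ./overrides/kernel/usb-serial
```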
To apply the changes & re-pack the build, run "make configure compile" again.
Use Workspaces to compile multiple `SKIFF_CONFIG`
combinations simultaneously.
The `SKIFF_WORKSPACE`
environment variable controls which workspace is selected.
The directory at `workspaces/$SKIFF_WORKSPACE`
contains the Buildroot build directory.
Configuration files in `overrides/workspaces/$SKIFF_WORKSPACE/`
will override
settings for that workspace using the configuration package structure.
The virt/ packages are designed for running Skiff in various virtualized environments.
Here is a minimal working example of running Skiff in Qemu:
```
$ SKIFF_CONFIG=virt/qemu,util/rootlogin make configure compile
$ make cmd/virt/qemu/run
```
The `util/rootlogin`
package is used here to enable logging in as "root" on the
qemu debug console shown when running "cmd/virt/qemu/run".
Qemu can emulate other architectures, for example, riscv64:
```
export SKIFF_WORKSPACE=qemu
export SKIFF_CONFIG=virt/qemu,core/gentoo,util/rootlogin
mkdir -p ./overrides/workspaces/qemu/buildroot
echo "BR2_riscv=y" > ./overrides/workspaces/qemu/buildroot/arch
make compile
```
Most Buildroot-supported architectures can be selected & emulated.
The parameters for running the VM can also be adjusted:
```
export ROOTFS_MAX_SIZE=120G
export QEMU_MEMORY=8G
export QEMU_CPUS=8
make cmd/virt/qemu/run
```
Here is a minimal working example of running SkiffOS in Docker:
```
$ SKIFF_CONFIG=virt/docker,skiff/core make configure compile
$ make cmd/virt/docker/buildimage
$ make cmd/virt/docker/run
# inside container
$ su - core
```
The build command compiles the image, and run executes it.
You can execute a shell inside the container with:
```
$ make cmd/virt/docker/exec
# alternatively
$ docker exec -it skiff sh
```
Or run the latest demo release on Docker Hub:
```
docker run -t -d --name=skiff \
--privileged \
--cap-add=NET_ADMIN \
--security-opt seccomp=unconfined \
--stop-signal=SIGRTMIN+3 \
-v /sys/fs/cgroup:/sys/fs/cgroup:ro \
-v $(pwd)/skiff-persist:/mnt/persist \
skiffos/skiffos:latest
```
SkiffOS can also be configured with files in the "persist" partition.
Set the hostname by placing the desired hostname in the `skiff/hostname`
file on
the persist partition. You could also set this in one of your config packages by
writing the desired hostname to `/etc/hostname`
.
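For example, on a running device, where the persist partition is mounted at `/mnt/persist`:

```
$ echo "my-skiff-device" > /mnt/persist/skiff/hostname
```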
You can use `nmcli`
on the device to manage `NetworkManager`
, and any connection
definitions written by `nmcli device wifi connect`
or similar will automatically
be written to the persist partition and persisted to future boots.
To connect to WiFi: `nmcli device wifi connect myssid password mypassword`.
The configuration file format for these connections is documented here with examples.
Example for a WiFi network called `mywifi`
with password `mypassword`
:
```
[connection]
id=mywifi
uuid=12f6c21d-f077-4b95-a4cb-bf41555d87a5
type=wifi
[wifi]
mode=infrastructure
ssid=mywifi
[wifi-security]
key-mgmt=wpa-psk
psk=mypassword
[ipv4]
method=auto
[ipv6]
addr-gen-mode=stable-privacy
method=auto
```
Network configuration files are plaintext files located at either of:
- `/etc/NetworkManager/system-connections/` inside the build image
- `/mnt/persist/skiff/connections/` on the persist partition.
To add the above example to your build:
- `gedit ./overrides/root_overlay/etc/NetworkManager/system-connections/mywifi` - paste the above plaintext & save
- run "make compile" to update the image with the changes.
The system will generate the authorized_keys file for the users on startup.
It takes SSH public key files (`*.pub`
) from these locations:
- `/etc/skiff/authorized_keys` from inside the image
- `skiff/keys` from inside the persist partition
Your SSH public key will usually be located at `~/.ssh/id_ed25519.pub`
.
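For example, to authorize an additional key on a running device via the persist partition (the key filename is arbitrary); the key is picked up when the authorized_keys file is regenerated at startup:

```
$ mkdir -p /mnt/persist/skiff/keys
$ cp my_key.pub /mnt/persist/skiff/keys/
```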
To mount a Linux disk, for example an `ext4`
partition, to a path inside a
Docker container, you can use the Docker Volumes feature:
```
# create a volume for the storage drive
docker volume create --driver=local --opt device=/dev/disk/by-label/storage storage
# run a temporary container to view the contents
docker run --rm -it -v storage:/storage --workdir /storage alpine:edge sh
```
The volume can be mounted into a Skiff Core container by adding to the mounts
list in `/mnt/persist/skiff/core/config.yaml`
:
```
containers:
core:
image: skiffos/skiff-core-gentoo:latest
mounts:
- storage:/mnt/storage
```
After adding the mount, delete and re-create the container:
```
docker rm -f core
systemctl restart skiff-core
```
The SkiffOS Whitepaper gives an overview of the project's motivation and goals.
Community contributions are welcomed!
Please file a GitHub issue and/or Join Discord with any questions.
... or feel free to reach out on Matrix Chat!