id (int64, values 3 to 41.8M) | url (string, lengths 1 to 1.84k) | title (string, lengths 1 to 9.99k, ⌀) | author (string, lengths 1 to 10k, ⌀) | markdown (string, lengths 1 to 4.36M, ⌀) | downloaded (bool, 2 classes) | meta_extracted (bool, 2 classes) | parsed (bool, 2 classes) | description (string, lengths 1 to 10k, ⌀) | filedate (string, 2 classes) | date (string, lengths 9 to 19, ⌀) | image (string, lengths 1 to 10k, ⌀) | pagetype (string, 365 classes) | hostname (string, lengths 4 to 84, ⌀) | sitename (string, lengths 1 to 1.6k, ⌀) | tags (string, 0 classes) | categories (string, 0 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
345,800 | http://www.feedbeater.com | Feedbeater - A Mechanical Keyboards Guide Blog | Antonia Zivcic | MangaPlaza is a popular online platform for reading manga, offering an extensive library of titles across various genres. Whether you’re a fan of...
Feedbeater.com Welcomes You to the World of Mechanical Keyboards!
## How Do I Cancel My Call Truth Subscription?
If you’re looking to cancel your Call Truth subscription, you’re not alone. Whether you’ve found a better service, no longer need...
## Is Your Online Identity found on the Dark Web?
In a world where our digital footprints stretch far and wide, the question of online identity takes on a new urgency. With every click, like, and...
In the ever-evolving landscape of the internet, privacy and accessibility have become increasingly crucial concerns. With the growth of surveillance, geo...
Putlocker has long been a popular platform for streaming movies and TV shows. However, users occasionally encounter the frustrating error message: “Error...
Yes, you can definitely use Canva to make a poster! Canva is a popular online graphic design platform known for its user-friendly interface and wide range of...
GOKU.TO has become a go-to site for many anime enthusiasts who seek free, easy access to their favorite shows. Offering a vast library of anime content, the...
Securing your website with an SSL certificate is essential to protect user data and ensure trustworthiness. SSL certificates encrypt data transmitted between a... | true | true | true | Feedbeater – A Mechanical Keyboards Guide Blog was last modified: October 17th, 2020 by Atish Ranjan | 2024-10-12 00:00:00 | 2020-10-17 00:00:00 | null | website | feedbeater.com | FeedBeater | null | null |
35,349,615 | https://www.humanetech.com/podcast/the-ai-dilemma | The AI Dilemma | null | You may have heard about the arrival of GPT-4, OpenAI’s latest large language model (LLM) release. GPT-4 surpasses its predecessor in terms of reliability, creativity, and ability to process intricate instructions. It can handle more nuanced prompts compared to previous releases, and is multimodal, meaning it was trained on both images and text. We don’t yet understand its capabilities - yet it has already been deployed to the public.
At Center for Humane Technology, we want to close the gap between what the world hears publicly about AI from splashy CEO presentations and what the people who are closest to the risks and harms inside AI labs are telling us. We translated their concerns into a cohesive story and presented the resulting slides to heads of institutions and major media organizations in New York, Washington DC, and San Francisco. The talk you're about to hear is the culmination of that work, which is ongoing.
AI may help us achieve major advances like curing cancer or addressing climate change. But the point we're making is: if our dystopia is bad enough, it won't matter how good the utopia we want to create. We only get one shot, and we need to move at the speed of getting it right.
Tristan Harris and Aza Raskin sit with Lester Holt to discuss the dangers of developing AI without regulation
“Submarines” is a collaboration between musician Zia Cora (Alice Liu) and Aza Raskin. The music video was created by Aza in less than 48 hours using AI technology and published in early 2022
This made-for-television movie explored the effects of a devastating nuclear holocaust on small-town residents of Kansas
Moderated by journalist Ted Koppel, a panel of present and former US officials, scientists and writers discussed nuclear weapons policies live on television after the film aired | true | true | true | At Center for Humane Technology, we want to close the gap between what the world hears publicly about AI from splashy CEO presentations and what the people who are closest to the risks and harms inside AI labs are telling us. We translated their concerns into a cohesive story and presented the resulting slides to heads of institutions and major media organizations in New York, Washington DC, and San Francisco. The talk you're about to hear is the culmination of that work, which is ongoing. | 2024-10-12 00:00:00 | 2023-03-24 00:00:00 | null | null | HumaneTech_ | null | null |
|
5,658,936 | http://www.danielchatfield.com/articles/sms/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
8,375,640 | http://ethanglover.biz/blog/soylent-day1.php | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
330,154 | http://www.microsoft.com/web/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
29,608,952 | https://en.wikipedia.org/wiki/G%C3%A4vle_goat | Gävle goat - Wikipedia | null | # Gävle goat
The **Gävle Goat** (Swedish: *Gävlebocken*, Swedish pronunciation: [ˈjɛ̌ːvlɛbɔkːɛn]) is a traditional Christmas display erected annually at Slottstorget (Castle Square) in central Gävle, Sweden. The display is a giant version of a traditional Swedish Yule goat figure made of straw. It is erected each year by local community groups at the beginning of Advent over a period of two days.[1][2]
The Gävle Goat has been the subject of repeated arson attacks; despite security measures and a nearby fire station, the goat has been burned to the ground most years since its first appearance in 1966. As of December 2023, 42 out of 58 goats have been destroyed or damaged in some way. Burning or destroying the goat is illegal, and the Svea Court of Appeal has stated that the offence should normally carry a 3-month prison sentence.[3]
Since 1986, two separate Yule goats have been built in Gävle: the Gävle Goat by the Southern Merchants and the Yule Goat built by the Natural Science Club of the School of Vasa.
## History
The Gävle Goat is erected every year on the first day of Advent, which according to Western Christian tradition is in late November or early December, depending on the calendar year. In 1966, an advertising consultant, Stig Gavlén (1927–2018),[4] came up with the idea of making a giant version of the traditional Swedish Yule Goat and placing it in the square.[5] The design of the first goat was assigned to the then chief of the Gävle fire department, Gavlén's brother Jörgen Gavlén. The construction of the goat was carried out by the fire department, and they erected the goat each year from 1966 to 1970 and from 1986 to 2002. The first goat was financed by Harry Ström. On 1 December 1966, a 13-metre (43 ft) tall, 7-metre (23 ft) long, 3-tonne goat was erected in the square. On New Year's Eve, the goat was burnt down,[6][7] and the perpetrator was found and convicted of vandalism. The goat was insured, and Ström got all of his money back.[7][8][9]
A group of businessmen known as the Southern Merchants (*Söders Köpmän*) financed the building of the goat in subsequent years. In 1971, the Southern Merchants stopped building the goats. The Natural Science Club (*Naturvetenskapliga Föreningen*) of the School of Vasa (*Vasaskolan*) began building the structure. Their goat was around 2 metres (6.6 ft). Due to the positive reaction their Yule Goat received that year, they built another one the following year and from then on.[10] The Southern Merchants began building their own goats again in 1986.[7]
The cost for the 1966 goat was 10,000 Swedish kronor (SEK) (equivalent to SEK 84,355 in 2009).[11] The price tag for constructing the goat in 2005 was around SEK 100,000. The city pays one-third of the cost while the Southern Merchants pay the remaining sum.
## Repeated destruction
The display has become notable for being a recurring target for vandalism by arson, and has been destroyed many times since the first goat was erected in 1966.[12] Because the fire station is close to the location of the goat, most of the time the fire can be extinguished before the wooden skeleton is severely damaged. If the goat is burned down before 13 December, the feast day of Saint Lucia, the goat is rebuilt. The skeleton is then treated and repaired, and the goat reconstructed over it, using straw which the Goat Committee has pre-ordered.[11] As of 2005, four people had been caught or convicted for vandalizing the goat.[13] In 2001, the goat was burned down by a 51-year-old American visitor from Cleveland, Ohio, who spent 18 days in jail and was subsequently convicted and ordered to pay SEK 100,000 (US$11,655.01; equivalent to US$20,055 in 2023) in damages. The court confiscated his cigarette lighter with the argument that he clearly was not able to handle it. He stated in court that he was no "goat burner", and believed that he was taking part in a completely legal goat-burning tradition. After he was released from jail he returned to the US without paying his fine.[8][14][15][16][17]
In 1996, the Southern Merchants introduced camera surveillance to monitor the goat 24 hours a day. On 27 November 2004 the Gävle Goat's homepage was hacked, and one of the two official webcams changed.[18] One year, while security guards were posted around the goat in order to prevent further vandalism, the temperature dropped far below zero. As the guards ducked into a nearby restaurant to escape the cold, the vandals struck.[14]
During the weekend of 3–4 December 2005, a series of attacks on public Yule Goats across Sweden were carried out; the Gävle Goat was burnt on 3 December. The Visby goat on Gotland burned down, the Yule Goat in Söderköping, Östergötland was torched, and there was an attack on a goat located in Lycksele, Västerbotten.[19][20]
The Christmas season of 2006 marked the 40th anniversary of the Gävle Goat, and, on Sunday 3 December, the city held a large celebration in honor of the goat. The Goat Committee fireproofed the goat with "Fiber ProTector Fireproof", a fireproofing substance that is used in airplanes. In earlier years when the goat had been fireproofed, the dew had made the liquid drip off the goat. To prevent this from happening in 2006, "Fireproof ProTechtor Solvent Base" was applied to the goat.[8][21][22][23] Despite their efforts, the goat has been damaged or destroyed a total of 38 times. On 27 November 2016 an arsonist equipped with petrol burned it down just hours after its inauguration.[24][25] After a few flame-free years under 24-hour security, the goat was again burned on 17 December 2021.[26] In 2023, it was severely pecked at for grain by jackdaws, due to the straw used to construct the goat containing higher than usual amounts of seeds.
## Natural Science Club's Yule Goat
Since 1986 there have been two Yule Goats built in Gävle: the Gävle Goat by the Southern Merchants and the Yule Goat built by the Natural Science Club of the School of Vasa. Until 1985 the Southern Merchants held the world record for the largest Yule Goat, but over the years the Natural Science Club's goat increased in size, and in 1985 their Yule Goat made it into the *Guinness Book of Records* with an official height of 12.5 metres (41 ft). The creator of the original 1966 goat, Stig Gavlén, thought that the Natural Science Club's goat had unfairly won the title of the largest Yule Goat because the goat was not as attractive as the Southern Merchants' goat and the neck was excessively long. The next year there was a Goat war: the Southern Merchants understood the publicity value, and erected a huge goat, the Natural Science Club erected a smaller one in protest. The Southern Merchants had intended that their huge goat would reclaim the world record, but the measurement of the goat showed it fell short. Over the following seven years there were no further attempts on the world record, but there was some hostility between the Natural Science Club and the Southern Merchants, evidenced by the fact that the Natural Science Club put up a sign near their goat wishing a Merry Christmas to everyone, except the Southern Merchants.[10]
In 1993 the Southern Merchants again announced that they were going to attempt the world record. The goat stood 10.5 metres (34 ft) when completed. The Natural Science Club's Yule Goat that year measured 14.9 metres (49 ft), which earned them another place in the *Guinness Book of Records*.[10]
## Timeline
### 1966–1969
Year | Security additions | Date of destruction | Method of destruction | Notes |
---|---|---|---|---|
1966 | | 31 December[7] | Fire | |
1967 | | Survived[7] | | |
1968 | Fence added.[7] | Survived[7] | | |
1969 | Inside of goat protected by chicken-wire netting.[7] | 31 December[7] | Fire | |
### 1970–1979
Year | Security additions | Date of destruction | Method of destruction | Notes |
---|---|---|---|---|
1970 | | Six hours after construction[7] | Fire | The goat's destruction was blamed on two drunk teenagers. With help from several financial contributors, the goat was reassembled out of lake reed. |
1971 | | ? | Smashed to pieces[27] | The Southern Merchants became tired of their goats being burned down, and stopped constructing them. The Natural Science Club from the School of Vasa took over and built a miniature goat.[28] |
1972 | | ? | Collapsed[7] | |
1973 | | ? | Stolen[29][30] | The goat was stolen by a man, who then placed it in his backyard. He was later sentenced to two years in prison for aggravated theft.[30] |
1974 | | ? | Fire[7] | |
1975 | | ? | Collapsed[31] | The goat collapsed under its own weight.[31] |
1976 | | ? | Hit by a car[15][32] | A student rammed the hind legs of the goat with a Volvo Amazon, collapsing the structure.[7] |
1977 | | ? | Fire | |
1978 | | ? | Kicked to pieces[7] | |
1979 | After the first goat was burned, a second was fireproofed. | Prior to assembly[7] | Fire / Broken[7] | The goat was burned down before construction even finished.[7] A second goat was constructed and finished, but was later destroyed and broken into pieces.[28] |
### 1980–1989
Year | Security additions | Date of destruction | Method of destruction | Notes |
---|---|---|---|---|
1980 | | 24 December[7] | Fire[7] | |
1981 | | Survived[7] | | |
1982 | | 13 December[7] | Fire[7] | |
1983 | | ? | Legs destroyed[7] | |
1984 | | 12 December[7] | Fire[7] | |
1985 | Enclosed by a 2 metres (6.6 ft) high metal fence, guarded by Securitas and soldiers from the Gävle I 14 Infantry Regiment. | January[7] | Fire[7] | The 12.5-metre (41 ft) tall goat of the Natural Science Club was featured in the Guinness Book of Records for the first time.[7][8] |
1986 | | 23 December[7] | Fire[7] | The Southern Merchants built their first goat since 1971, and it was burned. From 1986 onwards two goats were built each year, one by the Southern Merchants' and one by the School of Vasa.[7] |
1987 | Heavily fireproofed.[27] | ? | Fire[27] | |
1988 | | Survived[7] | | Gamblers were for the first time able to gamble on the fate of the goat with English bookmakers.[28] |
1989 | | Prior to assembly / January[7] | Fire / Fire | Financial contributions from the public were raised to rebuild a goat, and the second goat was burnt down in January. In March 1990 another goat was built, this time for the shooting of a Swedish motion picture called Black Jack.[7] |
### 1990–1999
Year | Security additions | Date of destruction | Method of destruction | Notes |
---|---|---|---|---|
1990 | The goat was guarded by many volunteers.[7] | Survived[7] | | |
1991 | | 24 December[7] | Fire | The goat was joined by an advertising sled, that turned out to be illegally built. It was later rebuilt to be taken to Stockholm as a part of a protest campaign against the closing of the I 14 Infantry Regiment.[7][8] |
1992 | | After 8 days, and again on 20 December[7] | Fire / Fire | Both the Natural Science Club and Southern Merchants' goats burned down on the same night. The latter was rebuilt, and burned down on 20 December. The perpetrator of the three attacks was caught and sent to jail. The Goat Committee was founded in 1992.[7][8] |
1993 | Guarded by taxis[clarification needed] and the Swedish Home Guard | Survived | | Once more the goat was featured in the Guinness Book of Records, the School of Vasa's goat measured 14.9 metres (49 ft).[7][8] |
1994 | | Survived | | The goat followed the Swedish national hockey team to Italy for the World Championship in hockey.[7][8] |
1995 | | 25 December | Fire | A Norwegian was arrested for attempting to burn down the goat. It was rebuilt for the 550th anniversary of Gävleborg County.[7][8] |
1996 | Monitored by webcams.[7] | Survived | | |
1997 | | Survived with damage | | Damaged by fireworks. The Natural Science Club's goat was attacked too, but survived with minor damage.[15] |
1998 | | 11 December | Fire | Burned down during a major blizzard, and was rebuilt.[7] |
1999 | | Within hours | Fire | The Southern Merchants' goat was rebuilt again before Lucia. The Natural Science Club's goat was also burnt down.[7] |
### 2000–2009
Year | Security additions | Date of destruction | Method of destruction | Notes |
---|---|---|---|---|
2000 | | Late December | Fire | In addition to the Southern Merchants' goat being burned, the Natural Science Club's goat was thrown into the Gävle river.[7] |
2001 | | 23 December | Fire | A visitor from Cleveland, Ohio, in the United States, was arrested for burning the goat. The Natural Science Club's goat was also burnt down.[14][8][15][16][17] |
2002 | On Saint Lucy's Day, the goat was guarded by Swedish radio and TV personality Gert Fylking.[33] | Survived with damage | | A 22 year-old from Stockholm tried to set the Southern Merchants' goat on fire, but failed, the goat receiving only minor damage. |
2003 | | 11 December[7] | Fire | A second goat was put up a week after the first one burned down. The second goat survived without any known incidents.[7] |
2004 | | 21 December[7] | Fire | |
2005 | | 3 December[15] | Fire | Burnt by unknown vandals reportedly dressed as Santa and the gingerbread man, by shooting a flaming arrow at the goat.[15][34] Reconstructed on 5 December. The hunt for the arsonist responsible for the goat-burning in 2005 was featured on the weekly Swedish live broadcast TV3's "Most Wanted" ("Efterlyst") on 8 December. |
2006 | | Survived | | The Southern Merchants' goat survived New Year's Eve and was taken down on 2 January. It is now stored in a secret location.[35] Meanwhile, the Natural Science Club's goat was burned. |
2007 | | Survived | | The Natural Science Club's goat was toppled on 13 December and was burned on the night of 24 December.[36] The Southern Merchants' goat survived. |
2008 | | 27 December | Fire | 10,000 people turned out for the inauguration of one of the goats. No back-up goat was built to replace the main goat should the worst happen, nor was the goat treated with flame repellent (Anna Östman, spokesperson of the Goat Committee, said the repellent made it look ugly in the previous years, like a brown terrier).[37] On 16 December the Natural Science Club's Goat was vandalised and later removed. On 26 December there was an attempt to burn down the Southern Merchants' Goat but passers-by managed to extinguish the fire. The following day the goat finally succumbed to the flames ignited by an unknown assailant at 03:50 CET. |
2009 | | 23 December | Fire | A person attempted to set the Southern Merchants' goat on fire the night of 7 December.[38] An unsuccessful attempt was made to throw the Natural Science Club's goat into the river the weekend of 11 December. The culprit then tried, again without success, to set the goat on fire.[39] Someone stole the Natural Science Club's goat using a truck on the night of 14 December.[40] On the night of 23 December before 04:00 a.m. the Southern Merchants' goat was set on fire and was burned to the frame, even though it had a thick layer of snow on its back.[41] The goat had two online webcams which were put out of service by a DoS attack, instigated by computer hackers just before the burning.[42] |
### 2010–2019
Year | Security additions | Date of destruction | Method of destruction | Notes |
---|---|---|---|---|
2010 | | Survived | | On the night of 2 December, arsonists made an unsuccessful attempt to burn the Natural Science Club's goat.[43] On 17 December, a Swedish news site reported that one of the guards tasked with protecting the Southern Merchants' goat had been offered 50,000 SEK to leave his post so that the goat could be stolen via helicopter and transported to Stockholm.[44] Both goats survived and were dismantled and returned to storage in early January 2011.[45] |
2011 | The goat was sprayed with water to create a coating of ice.[46] | 2 December[47] | Fire | Mild weather resulted in the protective ice melting. The Natural Science Club Goat was also burned. |
2012 | | 12 December[48][49] | Fire | |
2013 | The goat was soaked in flame-retardant.[50] | 21 December[51] | Fire | |
2014 | | Survived | | At least three arson attempts were made.[52] The Natural Science Club goat was collapsed. |
2015 | | 27 December[53] | Fire | A 26 year-old man fleeing the scene with a singed face, smelling of gasoline, and holding a lighter in his hand was arrested. Under questioning, he admitted to committing the offence, adding that he was drunk at the time and that in retrospect, it was an "extremely bad idea".[54] He was sentenced in January 2018 to probation by an appellate court with a 6,000 SEK fine and 80,000 SEK in damages. The Natural Science Club goat was also burned. |
2016 | | 27 November[24] | Fire | The goat was destroyed by an arsonist equipped with petrol on its inauguration day,[25] just hours after its 50th "birthday party".[55] Organizers said they would not rebuild the goat in 2016.[24] The 21 year-old was sentenced to probation by Gävle Tingsrätt and was sentenced to a fine and to pay roughly 100,000 SEK in damages. The evidence mainly revolved around a hat that the perpetrator dropped during his escape. The police later DNA-matched it with the 21-year-old local. It was replaced by the smaller Natural Science Club goat[56] built by local high school students.[57] This goat was later hit by a car.[57] YouTuber Tom Scott covered the Gävle goat in a video released on November 28, 2016; principal filming of the goat itself took place mere hours before the goat was burned down.[58] |
2017 | Double fence, cameras, guards[59] | Survived | | The goat was inaugurated on 3 December. No reported attempts to burn the goat were made.[60] |
2018 | Fencing, cameras, guards, taxi rank to increase numbers of people nearby[61] | Survived with damage | | The goat was inaugurated on 2 December.[61] An attempted burning of the Natural Science Club's goat occurred on the night of 15 December, resulting in minor damage to its left front leg.[62] |
2019 | Double fence, 24-hour CCTV. Two guards patrolled around the goat frequently, 24 hours a day, along with a K9 unit. | Survived | | The goat was again inaugurated on 1 December.[63] On 13 December fire crews responded to a call that the 'little goat' was burning, only to discover it was in fact a miniature Yule goat somebody had brought and torched at the scene.[64] The Natural Science Club goat was burned but not destroyed in the early hours of 27 December.[65] A suspect was taken into custody.[66] This was the first time ever that the goat survived more than two years in a row.[67] |
### 2020–current
Year | Security additions | Date of destruction | Method of destruction | Notes |
---|---|---|---|---|
2020 | Guards,[68] double fence, 24-hour CCTV, public webcam feed.[69] | Survived | | The goat was inaugurated on 29 November.[67] Due to the COVID-19 pandemic, the inauguration was digital, and members of the public were advised not to gather around the goat. There was no traditional celebration.[68] The goat was not harmed during the 2020 holiday season, making 2020 the fourth consecutive year of the goat's survival.[69] |
2021 | | 17 December | Fire | The Natural Science Club goat was burned in the early hours of 12 December.[70] The larger goat burned in the early hours of 17 December. A 40 year-old man was arrested[71] and later sentenced to six months in prison and ordered to pay SEK 109,000 in damages to the municipality.[72] |
2022 | New 24-hour real-time public webcam feed, double fence, 24-hour guards | Survived | | The goat was inaugurated on 27 November.[73] Due to a new city centre being built at Slottstorget square, the goat's traditional location, the goat was moved a few blocks north to Rådhusesplanaden.[74] On January 1, the stream was discontinued, and the goat was taken down the day after.[75] |
2023 | 24-hour guards on site, double fence, 24-hour public webcam stream. | Little by little throughout December | Pecked to pieces by jackdaws (frame remained) | Due to the straw used to construct the goat containing higher than usual amounts of seed, while not destroyed, the goat was severely damaged by flocks of jackdaws foraging for food.[76] |
## See also
## References
[edit]**^**"Goat film". Mer Jul i Gävle. Archived from the original on 28 May 2013. Retrieved 1 December 2012.**^**"bocken".*Mer Jul i Gävle*. Archived from the original on 20 March 2012. Retrieved 1 December 2012.**^**Engelro, Erik (4 January 2018). "Bockbrännare får höjt straff".*SVT Nyheter*.**^**"Gävlebockens pappa är död". November 2018. Archived from the original on 1 November 2018. Retrieved 1 November 2018.**^**"The biggest Christmas Goat in the world". Gävle Tourist Office. Archived from the original on 25 September 2013. Retrieved 29 November 2012.**^**"Christmas 2012: The Swedish goat that takes Christmas by the horns".*The Daily Telegraph*. 28 November 2012. Archived from the original on 30 November 2012. Retrieved 30 November 2012.- ^
**a****b****c****d****e****f****g****h****i****j****k****l****m****n****o****p****q****r****s****t****u****v****w****x****y****z****aa****ab****ac****ad****ae****af****ag****ah****ai****aj****ak****al****am****an****ao****ap****aq****ar****as****at****au****av****aw**"Gävlebocken".**ax***Gävle City Guide*(in Swedish). CityGuide. 2003. Archived from the original on 13 June 2016. Retrieved 24 August 2006. - ^
**a****b****c****d****e****f****g****h****i**Forsberg, Rose-Marie. "The famous christmasgoat of Sweden". Archived from the original on 6 December 2006. Retrieved 6 December 2006.**j** **^**"The Gävle goat timeline" (in Swedish). Archived from the original on 12 June 2008.- ^
**a****b**"Julbocken" (in Swedish). Naturvetenskapliga Föreningen. Archived from the original on 17 February 2012. Retrieved 26 August 2006.**c** - ^
**a**"New goat is already on the way" (in Swedish). Arbetarbladet. 6 December 2005. Archived from the original on 27 September 2007.**b** **^**"Gävlebocken på plats – hur länge får den stå?".*Expressen*. Archived from the original on 5 January 2014. Retrieved 23 December 2013.**^**"TV 3's Most Wanted is now eager to solve the goat mystery" (in Swedish). Arbetarbladet. 7 December 2005. Archived from the original on 27 September 2007.- ^
**a****b**"The goat is burning!" (in Swedish). Dagens Nyheter. 12 December 2003. Archived from the original on 1 October 2007.**c** - ^
**a****b****c****d****e**"Vandals Burn Swedish Christmas Goat, Again".**f***The Washington Post*. Stockholm. Associated Press. 4 December 2005. Archived from the original on 15 October 2016. Retrieved 3 November 2017. - ^
**a**"Santa and gingerbread man get Gävle's goat".**b***The Local*. 4 December 2005. Archived from the original on 8 December 2008. - ^
**a**"That's why I burned the goat in Gävle" (in Swedish). Aftonbladet. 17 December 2003. Archived from the original on 28 January 2007.**b** **^**"Gävle Goat gets hacked" (in Swedish). Aftonbladet. 27 November 2004. Archived from the original on 25 January 2007.**^**"The night of the goat-burners" (in Swedish). Göteborgs-Posten. 3 December 2005. Archived from the original on 30 September 2007.**^**"Police receives tips about the goat-burnings" (in Swedish). Göteborgs-Posten. 5 December 2005. Archived from the original on 30 September 2007.**^**"Mer Jul i Gävle". Archived from the original on 15 August 2013. Retrieved 6 December 2006.**^**"Not even napalm can set fire to the goat now" (in Swedish). Aftonbladet. 1 December 2006. Archived from the original on 26 January 2007.**^**"Swedish city strives to safeguard Christmas straw goat from vandals".*Santa Fe New Mexican*. Associated Press. 4 December 2006.[*permanent dead link*]- ^
**a****b**"Sweden's Christmas goat burned down on opening day".**c***The Local*. 28 November 2016. Archived from the original on 28 November 2016. Retrieved 28 November 2016. - ^
**a**"Gävle Christmas goat burns down on opening day".**b***Dalaras tidningar*. 27 November 2016. Archived from the original on 28 November 2016. Retrieved 28 November 2016. **^**"Sweden's Gävle goat torched... again". 17 December 2021.- ^
**a****b**"Weird ritual of the burning goat". BBC News. 4 December 2005. Archived from the original on 30 June 2006. Retrieved 24 August 2006.**c** - ^
**a****b**"Santa torched the giant goat!".**c***Sploid*. 4 December 2005. Archived from the original on 17 November 2006. **^**"14 saker du inte visste om julbocken i Gävle".*Allt om Resor*(in Swedish). Retrieved 1 December 2021.- ^
**a**Gartéus, Madeleine (27 November 2016). "Gävlebockens liv och död".**b***gp.se*(in Swedish). Retrieved 1 December 2021. - ^
**a**"Gävlebocken på plats".**b***Dagens Nyheter*(in Swedish). 26 November 2015. ISSN 1101-2447. Retrieved 1 December 2021. **^**BBC News (27 December 2008). "Festive goat up in flames again". BBC News. Archived from the original on 17 December 2013. Retrieved 1 December 2009.**^**"The goat is burning-year after year" (in Swedish). Aftonbladet. 13 December 2003. Archived from the original on 17 May 2004.**^**"Christmas straw goat burnt in Sweden". NBC News. 27 December 2008. Retrieved 1 December 2009.**^**"The unburnable goat says thanks" (in Swedish). Arbetarbladet. 3 January 2007. Archived from the original on 27 September 2007.**^**"Gävle Goat Blog". Archived from the original on 16 April 2012. Retrieved 14 December 2007.**^**"Yule Goat of Gävle was inaugurated" (in Swedish). Svenska Dagbladet. 30 December 2008.**^**"Attacked last night!". Archived from the original on 20 March 2012. Retrieved 8 February 2011.**^**"Little brother not well". Archived from the original on 20 March 2012. Retrieved 8 February 2011.**^**Gävlebocken [@Gavlebocken] (15 December 2009). "Last sign of little brother: He was conveyed by a truck! Someone saw you!" (Tweet) – via Twitter.**^**"Swedish Christmas straw goat burnt".*BBC News*. 23 December 2009. Archived from the original on 24 December 2009. Retrieved 23 December 2009.**^**"Vandals torch Swedish Yuletide straw goat for 24th time".*USA Today*. 23 December 2009. Retrieved 3 May 2010.**^**"Little brother attacked". Archived from the original on 8 July 2011. Retrieved 8 December 2010.**^**"Helikopterkupp planerades mot Gävlebocken".*Arbetarbladet*. 17 December 2010. Archived from the original on 2 December 2023. Retrieved 18 December 2010.**^**"Gävlebockens blogg" [The blog of the Gavle Goat]. Archived from the original on 8 July 2011.**^**"Gävle Goat to be saved by ice".*Radio Sweden*. 22 November 2011. Archived from the original on 27 December 2013. Retrieved 27 November 2011.**^**Martin, Rebecca (2 December 2011). "Sweden's Christmas goat succumbs to flames".*The Local*. Archived from the original on 4 December 2011. Retrieved 2 December 2011.**^**"Mer Jul i Gävle – kl 16 vid Slottstorget". Merjuligavle.se. Retrieved 1 December 2012.[*permanent dead link*]**^**"Reklamvärde i attentat mot Gävlebock" (in Swedish). 13 December 2012.**^**"Gävle confident its 2013 Xmas goat won't burn".*thelocal.se*. 30 November 2013. Archived from the original on 3 December 2013. Retrieved 1 December 2013.**^**Staff (21 December 2013). "Vandals torch Sweden's giant Christmas goat for the 27th time".*Global News*. Archived from the original on 23 December 2013. Retrieved 22 December 2013.**^**von Kügelgen, Michaela; Nordenswan, Hanna (24 December 2014). "Gävlebocken står fortfarande".*YLE Nyheter*(in Swedish). Retrieved 3 May 2017.**^**"Gävlebocken överlevde inte nyåret".*Dagens Nyheter*. 27 December 2015. Archived from the original on 27 December 2015. Retrieved 27 December 2015.**^**"Trial of drunken Christmas goat burner begins in Sweden".*The Local*. 11 November 2016. Archived from the original on 7 January 2017. Retrieved 6 January 2017.**^**"Gävlebockens historia".*visitGävle*(in Swedish). Gävle Turistcenter. Archived from the original on 19 January 2016. Retrieved 5 February 2017.**^**"A tribute to Sweden's gigantic Christmas goat, killed by fire".*Public Radio International*. Archived from the original on 2 December 2016. Retrieved 3 December 2016.- ^
**a**"No fire, but: Gävle's baby yule goat run over by car".**b***The Local SE*. 5 December 2016. Archived from the original on 9 December 2016. Retrieved 9 December 2016. **^**"Arson as a Christmas Tradition: The Gävle Goat".*YouTube*. 28 November 2016. Retrieved 3 February 2022.**^**"'Secret' plan to protect Gävle Christmas goat from arsonists". 4 December 2017. Archived from the original on 7 December 2017. Retrieved 6 December 2017.**^**"Arson-prone straw goat stands unscathed". news.com.au. 25 December 2017. Archived from the original on 24 December 2017. Retrieved 25 December 2017.- ^
**a**"Sweden's Gävle Christmas goat ready to return for festive season". The Local Sweden. 1 December 2018. Archived from the original on 3 December 2018. Retrieved 3 December 2018.**b** **^**"Smaller Gävle goat set on fire but big sibling unscathed".*The Local Sweden*. 16 December 2018. Retrieved 16 December 2018.**^**"IN PICTURES: Sweden's infamous Christmas goat returns for the festive season". The Local Sweden. 2 December 2019. Retrieved 3 December 2019.**^**Roos, Jimmy; Gävleborg, P4 (13 December 2019). "Eldade upp mini-bock på Slottstorget i Gävle – P4 Gävleborg".*Sveriges Radio*(in Swedish). Retrieved 13 December 2019.`{{cite news}}`
: CS1 maint: numeric names: authors list (link)**^**"JUST NU: Lillbocken i Gävle i lågor: "Vittnen pekade ut personen som misstänks för att tänt eld på"" (in Swedish). Gefle Dagblad. 27 December 2019. Retrieved 27 December 2019.**^**"Lilla Gävlebocken sattes i brand" (in Swedish). Sydsvenskan. 27 December 2019. Retrieved 27 December 2019.- ^
**a**"Gävlebocken invigs – så skyddas den".**b***Aftonbladet*(in Swedish). 29 November 2020. Retrieved 6 December 2020. - ^
**a**"Gävlebocken: Julbocken i Gävle på plats – invigdes digitalt".**b***www.expressen.se*(in Swedish). 29 November 2020. Retrieved 2 December 2020. - ^
**a**"Grattis Gävlebocken – överlevde för fjärde året i rad".**b***Aftonbladet*(in Swedish). 4 January 2021. Retrieved 4 January 2021. **^**"Lilla Gävlebocken brann i natt: "Inga vittnen"".*www.expressen.se*(in Swedish). 12 December 2021. Retrieved 12 December 2021.**^**"Gävlebocken i lågor – man gripen".*Expressen*(in Swedish). 17 December 2021. Retrieved 17 December 2021.**^**Nyheter, S. V. T.; Norton, Nadya (20 January 2022). "Bockbrännaren döms till fängelse".*SVT Nyheter*(in Swedish). Retrieved 25 November 2022.**^**"Gävle: Sara Mc Manus inviger Gävlebocken" (in Swedish). 7 November 2022. Retrieved 1 December 2022.**^**"Sweden's arson-afflicted Christmas goat is moving after 56 years".*The Local Sweden*. 3 November 2022. Retrieved 2 January 2023.**^**"Gävle Goat". 27 November 2022. Retrieved 2 January 2023.**^**"I år är det inte elden som tar Gävlebocken – utan fåglarna" (in Swedish). 13 December 2023. Retrieved 14 December 2023.
## External links
- Gävle goat webcam
- Gävle goat blog
- Gävle goat history Archived 19 January 2016 at the Wayback Machine | true | true | true | null | 2024-10-12 00:00:00 | 2006-08-24 00:00:00 | website | wikipedia.org | Wikimedia Foundation, Inc. | null | null |
|
27,602,474 | https://www.youtube.com/watch?v=AFVDZeg4RVY | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
32,002,843 | https://www.guessthatproduct.com/ | Guess That Product - Millions | null | Music: ON
Sound: ON | true | true | true | What's the product behind the blurred image? Take a guess and win! | 2024-10-12 00:00:00 | null | website | guessthatproduct.com | Guess That Product - Millions | null | null |
|
41,763,275 | https://www.bbc.com/future/article/20241002-how-greenpeaces-mindbomb-photos-stopped-the-commercial-whaling-industry | The 'mind bomb' photos that led to a global whaling ban | Lucy Sherriff | # The 'mind bomb' photos that led to a global whaling ban
**In the 1970s, a small group of Greenpeace activists had a unique idea for how they could put an end to commercial whaling.**
A large Soviet vessel, harpoon gun poised to fire, looms over a whale immediately below the bow, the animal's large gaping wound oozing blood into the cold Pacific Ocean.
It's an image that changed the world and marked the beginning of Greenpeace's "mind bomb" campaign, says Rex Weyler, the photographer behind the 1975 photo.
Weyler was among the early members of Greenpeace who met in Canada during the 1970s. Many of them, including Weyler, had left the US to escape being drafted into the Vietnam War. "It's so hard to imagine today but at that time there was no ecology movement," he remembers. "There was the peace movement, the women's movement, the civil rights movement, and we felt there needed to be a conservation movement on the same scale as those."
Weyler found out about the plight of the whales through a Canadian author doing a book tour at the time.
"At the time people thought whaling was this Moby Dick image. Little men in these little boats facing these giant whales. The whales were Goliath," he says. "We wanted to flip that Goliath image around because by the 1970s, whalers had giant boats with fast diesel motors and exploding 250lb (113kg) harpoons. And we wanted to capture that image."
Whaling peaked in the 1960s, with approximately 80,000 whales being hunted every year. Hunting technology had advanced and whalers had harpoons and ships that could outrun whales.
Despite various whaling protection measures being introduced, including the International Agreement of the Regulation of Whaling in 1937, some countries, such as Russia and Japan, ignored them and caught whales illegally. Between 1900 and 1999, an estimated 2.9 million large whales were caught and killed by the industrial whaling operations, although the actual numbers are believed to be much higher.
"Already in the 1960s, the rapid decline of whale stocks worldwide was clear," says Árni Finnsson, chair of the Iceland Nature Conservation Association and a lifelong ocean activistwho worked for Greenpeace Sweden in the early 1990s. "Although the hunt for blue whales in the North Atlantic was banned by the International Whaling Commission (IWC) in 1954, the management structure was weak and the excessive whaling continued."
In 1972, Finnsson explains, the UN Conference on the Human Environment called for a 10-year moratorium on whaling. "The IWC disregarded this call by the international community and Greenpeace took action, at sea, in order to protest this ruthless exploitation, notably against Soviet whaling vessels in the Pacific," he says.
The newly formed Greenpeace had spent two or three years preparing the boat and a crew. "In 1975 we set out in little Zodiac inflatable boats to find a whaling fleet," Weyler says. "The whole idea of our campaigns at that time was media. So we decided we were going to blockade the whaling boats, and get between the whales and the harpoon. So off we go, looking for the whalers, and we found the Russian whaling fleet off the coast of California."
Many of the group had some experience with the media and they knew they needed striking photos for this confrontation to become a global news story. They soon termed these kinds of campaigns "mind bombs" – an idea that would become the cornerstone of Greenpeace campaigning, Weyler explains. The mind bomb, something dreamt up by Bob Hunter, another early member of Greenpeace, revolved around the media being a global delivery system for ideas. Revolution wasn't an armed struggle, according to Hunter, it was a communications struggle, where mind bombs were the weapons – rather than actual bombs.
"We knew we had to take photos, we knew it had to be dramatic. That was the whole purpose of making this dramatic gesture – to record it so this would become a news story and we could talk about the plight of the world's whales," says Weyler.
But it wasn't easy. "We all had to learn how to take photographs and shoot films from a small moving boat on a choppy ocean," he says. "We stood at the bow of the Zodiac with a line around our waist to secure us. We could lean back against that line and kind of become our own tripod and absorb the movement of the boat so we could hold the camera steady."
Catching the whales on camera was the biggest challenge. "They were being chased by harpoon boats so they would rise to the surface, breathe out, breathe in, and dive again." There wasn't much to photograph, Weyler adds, until the whales had been hit by a harpoon.
"My awareness was mostly focused on exposure, shutter speed, those things," he says. It was only after he'd put down his camera that what he had witnessed hit him.
"It was devastating. We had never seen anything like it. We saw [the whaling boats] harpoon whales, there was massive amounts of blood in the water, and whales flapping and splashing as they struggled and then died.
"We were ripped up inside. I recall being on the deck of the main Greenpeace boat and looking out across the water and feeling gut-wrenched. Devastated at the reality of what we had just witnessed."
When the crew landed at a port in San Francisco, the dock was swamped with media, Weyler recalls. "We weren't prepared for that kind of response. Everybody wanted the images."
Weyler remembers waking up early the next morning and going to a newsstand to see if his image had made the papers. "I was so excited because I was wondering if we'd made it. And I walked the streets until I found a newsstand, and the photograph was on the front page of virtually every newspaper. It was a stunning moment. It had been our dream."
For Weyler, the point of the photographs was to show protestors "not just standing up for human rights or peace" but for other species. "Change comes when the population insists on it," he says. "And we were attempting to inspire the population to rise up and to say to the government, 'We have to preserve these species.' I feel that in many ways this particular image really helped create the global climate movement we have today."
After Greenpeace's whaling campaign, a moratorium on commercial whaling was finally implemented by the International Whaling Commission in 1982, followed by a ban in 1985. As a result, some of the bigger whale species, such as humpbacks, have made a remarkable recovery – to 93% of their pre-industrial-hunting populations.
Commercial whaling does still happen, though – namely by Iceland, Norway and Japan.
"Greenpeace's actions inspired the public," Finnsson says, "and in 1982 Greenpeace and other non-profits were instrumental in achieving a moratorium on commercial whaling, which still stands."
| true | true | true | In the 1970s, a small group of Greenpeace activists had a unique idea for how they could put an end to commercial whaling. | 2024-10-12 00:00:00 | 2024-10-03 00:00:00 | newsarticle | bbc.com | BBC | null | null |
|
9,999,569 | https://www.eff.org/sls/about | About the SLS Team | null | The Electronic Frontier Foundation's Street-Level Surveillance project is a collaboration between staff attorneys, technologists, and activists to shine light on privacy-invasive police technology, limit how the technologies are used, and hold agencies accountable for their abuse.
If you represent a local organization interested in grassroots advocacy on surveillance issues, please consider joining the Electronic Frontier Alliance. For legal matters, please see our Criminal Defense Resources page or email EFF's intake coordinator at [email protected]. For press inquiries, please email [email protected].
### Credits
SLS was written in collaboration with investigative reporter Yael Grauer.
The Street-Level Surveillance guide to police technology is licensed under Creative Commons (CC BY), which means you can and should share and republish this resource. Please attribute to the Electronic Frontier Foundation. | true | true | true | The Electronic Frontier Foundation's Street-Level Surveillance project is a collaboration between staff attorneys, technologists, and activists to shine light on privacy-invasive police technology, limit how the technologies are used, and hold agencies accountable for their abuse.If you represent a... | 2024-10-12 00:00:00 | 2017-10-23 00:00:00 | article | eff.org | Electronic Frontier Foundation | null | null |
|
10,412,959 | http://www.uraimo.com/2015/10/08/Swift2-map-flatmap-demystified/ | Swift 3: Map and FlatMap Demystified | null | # Swift 3: Map and FlatMap Demystified
Posted on October 8, 2015 (中文版: a Chinese translation is available)
**Update 12/16:** *This post has been verified with Swift 3; minimal changes were required.*
*Get this and other playgrounds from GitHub or zipped.*
Swift is a language still slightly in flux, with new functionalities and alterations of behavior being introduced in every release. Much has already been written about the functional aspects of Swift and how to approach problems following a more “pure” functional approach.
Considering that the language is still in its infancy, when trying to understand a specific topic you'll often end up reading a lot of articles referring to old releases of the language or, worse, descriptions that mix up different releases. Sometimes, searching for articles on `flatMap`
, you could even fortuitously find more than one really good article explaining Monads in the context of Swift.
Add to the lack of comprehensive and recent material the fact that many of these concepts, even with examples or daring metaphors, are not obvious, especially for someone used to the imperative way of thinking.
With this short article (part of a series on Swift and the functional approach) I'll try to give a clear and thorough explanation of how `map`
and especially `flatMap`
work for different types, with references to the current library headers.
### Contents
## Map
Map has the more obvious behavior of the two *map functions: it simply applies a closure to the input and, like `flatMap`, it can be applied to Optionals and Sequences (i.e. arrays, dictionaries, etc.).
### Map on Optionals
For Optionals, the map function has the following prototype:
```
public enum Optional<Wrapped> : ... {
...
/*
- Parameter transform: A closure that takes the unwrapped value
of the instance.
- Returns: The result of the given closure. If this instance is `nil`,
returns `nil`.
*/
public func map<U>(_ transform: (Wrapped) throws -> U) rethrows -> U?
...
}
```
The map function expects a closure with signature `(Wrapped) -> U`. If the optional has a value, map applies the closure to the unwrapped value and then wraps the result in an optional before returning it (an additional declaration is present for implicitly unwrapped optionals, but this does not introduce any difference in behavior; just be aware of it when map doesn't actually return an optional).
Note that the output type can be different from the type of the input, which is likely the most useful feature.
Straightforward, this does not need additional explanations, let’s see some real code from the playground for this post:
```
var o1:Int? = nil
var o1m = o1.map({$0 * 2})
o1m /* Int? with content nil */
o1 = 1
o1m = o1.map({$0 * 2})
o1m /* Int? with content 2 */
var os1m = o1.map({ (value) -> String in
String(value * 2)
})
os1m /* String? with content 2 */
os1m = o1.map({ (value) -> String in
String(value * 2)
}).map({"number "+$0})
os1m /* String? with content "number 2" */
```
Using map on optionals could save us an if each time we need to modify the original optional (map applies the closure to the content of the optional only if the optional has a value, otherwise it just returns nil), but the most interesting feature we get for free is the ability to concatenate multiple map operations that will be executed sequentially, thanks to the fact that a call to `map`
always returns an optional. This is interesting, but quite similar to, and more verbose than, what we could get with optional chaining.
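As a quick sketch of that comparison (mine, not from the original post), both forms below produce a `String?` in Swift 3:

```
let word: String? = "swift"

// Chaining map calls to transform the wrapped value step by step:
let viaMap = word.map { $0.uppercased() }.map { "Hello " + $0 }
viaMap /* String? with content "Hello SWIFT" */

// Optional chaining is terser when all we need is to call members of the wrapped value:
let viaChaining = word?.uppercased()
viaChaining /* String? with content "SWIFT" */
```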
### Map on Sequences
But it’s with `Sequences`
like arrays and dictionaries that the convenience of using map-like functions is hard to miss:
```
var a1 = [1,2,3,4,5,6]
var a1m = a1.map({$0 * 2})
a1m /* [Int] with content [2, 4, 6, 8, 10, 12] */
let ao1:[Int?] = [1,2,3,4,5,6]
var ao1m = ao1.map({$0! * 2})
ao1m /* [Int] with content [2, 4, 6, 8, 10, 12] */
var a1ms = a1.map({ (value) -> String in
String(value * 2)
}).map { (stringValue) -> Int? in
Int(stringValue)
}
a1ms /* [Int?] with content [.Some(2),.Some(4),.Some(6),.Some(8),.Some(10),.Some(12)] */
```
This time we are calling the .map function defined on `Sequence`
as follow:
```
/*
- Parameter transform: A mapping closure. `transform` accepts an
element of this sequence as its parameter and returns a transformed
value of the same or of a different type.
- Returns: An array containing the transformed elements of this
sequence.
*/
func map<T>(_ transform: (Element) throws -> T) rethrows -> [T]
```
The transform closure of type `(Element) -> T`
is applied to every member of the collection, all the results are packed into an array whose element type matches the closure's output type, and that array is returned. As we did in the optionals example, sequential operations can be pipelined by invoking `map`
on the result of a previous `map`
operation.
This basically sums up what you can do with `map`
, but before moving to `flatMap`
, let’s see three additional examples:
```
var s1:String? = "1"
var i1 = s1.map {
Int($0)
}
i1 /* Int?? with content 1 */
var ar1 = ["1","2","3","a"]
var ar1m = ar1.map {
Int($0)
}
ar1m /* [Int?] with content [.Some(1),.Some(2),.Some(3),nil] */
ar1m = ar1.map {
Int($0)
}
.filter({$0 != nil})
.map {$0! * 2}
ar1m /* [Int?] with content [.Some(2),.Some(4),.Some(6)] */
```
Not every String can be converted to an Int, so our integer conversion closure will always return an Int?.
What happens in the first example with that Int?? is that we end up with an optional of an optional, because of the additional wrapping performed by map. To actually get the contained value we will need to unwrap the optional two times; not a big problem, but this starts to get a little inconvenient if we need to chain an additional operation to that map. As we'll see, `flatMap` will help with this.
In the example with the array, if a String cannot be converted, as happens for the 4th element of `ar1`, then that element in the resulting array will be nil. But again, what if we want to concatenate an additional map operation after this first map and apply the transformation just to the valid (not nil) elements of our array, to obtain a shorter array with only numbers?
Well, we'd just need intermediate filtering to sort out the valid elements and prepare the stream of data for the successive map operations. Wouldn't it be more convenient if this behavior was embedded in `map`?
We’ll see that this another use case for `flatMap`
.
## FlatMap
The differences between `map`
and `flatMap`
could appear to be minor but they are definitely not.
While `flatMap`
is still a map-like operation, it applies an additional step called `flatten`
right after the mapping phase.
Let’s analyze `flatMap`
’s behavior with some code like we did in the previous section.
### FlatMap on Optionals
The definition of the function is a bit different, but the functionality is similar, as the reworded comment implies:
```
public enum Optional<Wrapped> : ... {
...
/*
- Parameter transform: A closure that takes the unwrapped value
of the instance.
- Returns: The result of the given closure. If this instance is `nil`,
returns `nil`.
*/
public func flatMap<U>(_ transform: (Wrapped) throws -> U?) rethrows -> U?
...
}
```
There is a substantial difference regarding the closure: this time `flatMap` expects a `(Wrapped) -> U?`.
With optionals, flatMap applies the closure returning an optional to the content of the input optional and after the result has been “flattened” it’s wrapped in another optional.
Essentially, compared to what `map`
did, `flatMap`
also unwraps one layer of optionals.
```
var fo1:Int? = nil
var fo1m = fo1.flatMap({$0 * 2})
fo1m /* Int? with content nil */
fo1 = 1
fo1m = fo1.flatMap({$0 * 2})
fo1m /* Int? with content 2 */
var fos1m = fo1.flatMap({ (value) -> String? in
String(value * 2)
})
fos1m /* String? with content "2" */
var fs1:String? = "1"
var fi1 = fs1.flatMap {
Int($0)
}
fi1 /* Int? with content "1" */
var fi2 = fs1.flatMap {
Int($0)
}.map {$0*2}
fi2 /* Int? with content "2" */
```
The last snippet contains an example of chaining: no additional unwrapping is needed when using `flatMap`.
As we’ll see again when we describe the behavior with Sequences, this is the result of applying the flattening step.
The `flatten` operation has the sole function of "unboxing" nested containers. A container can be an array, an optional, or any other type capable of containing a value that itself has a container type. Think of an optional containing another optional, as we've just seen, or an array containing other arrays, as we'll see in the next section.
This behavior adheres to what happens with the `bind`
operation on Monads, to learn more about them, read here and here.
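As a small additional illustration (mine, not from the original post) of what flattening means for nested optionals:

```
let doubleWrapped: Int?? = Optional(Optional(3))

// map keeps the value double-wrapped: the closure's optional result is wrapped again.
let stillWrapped: Int?? = doubleWrapped.map { $0 }

// flatMap does not re-wrap the closure's optional result, so one level of nesting is removed.
let flattened: Int? = doubleWrapped.flatMap { $0 }
flattened /* Int? with content 3 */
```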
### FlatMap on Sequences
Sequence provides the following implementations of `flatMap`
:
```
/// - Parameter transform: A closure that accepts an element of this
/// sequence as its argument and returns a sequence or collection.
/// - Returns: The resulting flattened array.
///
public func flatMap<SegmentOfResult : Sequence>(_ transform: (Element) throws -> SegmentOfResult) rethrows -> [SegmentOfResult.Iterator.Element]
/// - Parameter transform: A closure that accepts an element of this
/// sequence as its argument and returns an optional value.
/// - Returns: An array of the non-`nil` results of calling `transform`
/// with each element of the sequence.
///
public func flatMap<ElementOfResult>(_ transform: (Element) throws -> ElementOfResult?) rethrows -> [ElementOfResult]
```
`flatMap`
applies those transform closures to each element of the sequence and then packs the results into a single flattened array.
These two comment blocks describe two functionalities of `flatMap`
: sequence flattening and nil optionals filtering.
Let’s see what this means:
```
var fa1 = [1,2,3,4,5,6]
var fa1m = fa1.flatMap({$0 * 2})
fa1m /*[Int] with content [2, 4, 6, 8, 10, 12] */
var fao1:[Int?] = [1,2,3,4,nil,6]
var fao1m = fao1.flatMap({$0})
fao1m /*[Int] with content [1, 2, 3, 4, 6] */
var fa2 = [[1,2],[3],[4,5,6]]
var fa2m = fa2.flatMap({$0})
fa2m /*[Int] with content [1, 2, 3, 4, 5, 6] */
```
While the result of the first example doesn’t differ from what we obtained using `map`
, it’s clear that the next two snippets show something that could have useful practical uses, saving us the need for convoluted manual flattening or filtering.
In the real world, there will be many instances where using `flatMap`
will make your code way more readable and less error-prone.
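For comparison, here is a sketch (mine, assuming Swift 3's `joined()` on nested sequences) of what the manual flattening would look like without `flatMap`:

```
let nestedArrays = [[1, 2], [3], [4, 5, 6]]

// Without flatMap: map each inner array, then flatten the result by one level.
let doubledManually = Array(nestedArrays.map { inner in inner.map { $0 * 2 } }.joined())
doubledManually /* [Int] with content [2, 4, 6, 8, 10, 12] */

// With flatMap, the mapping and the flattening happen in a single step.
let doubled = nestedArrays.flatMap { inner in inner.map { $0 * 2 } }
doubled /* [Int] with content [2, 4, 6, 8, 10, 12] */
```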
And an example of all this is the last snippet from the previous section, that we can now improve with the use of `flatMap`
:
```
var far1 = ["1","2","3","a"]
var far1m = far1.flatMap {
Int($0)
}
far1m /* [Int] with content [1, 2, 3] */
far1m = far1.flatMap {
Int($0)
}
.map {$0 * 2}
far1m /* [Int] with content [2, 4, 6] */
```
It may look like just a minimal improvement in this context, but with longer chains it becomes something that greatly improves readability.
And let me reiterate this once more: in this context too, the behavior of Swift’s flatMap is aligned with the `bind` operation on Monads (and “flatMap” is usually used as a synonym of “bind”); you can learn more about this by reading here and here.
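As an extra illustration (again, not part of the original snippets), chaining several optional-returning operations with `flatMap` behaves exactly like repeated `bind`: the computation short-circuits to nil as soon as one step fails, and no nested optionals pile up along the way:
```
func half(_ n: Int) -> Int? {
    return n % 2 == 0 ? n / 2 : nil   // fails on odd numbers
}

let start: Int? = 8
start.flatMap(half) /* Int? with content 4 */
start.flatMap(half).flatMap(half) /* Int? with content 2 */
start.flatMap(half).flatMap(half).flatMap(half).flatMap(half) /* nil, the fourth step receives 1 and fails */
```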
Learn more about Sequence and IteratorProtocol protocols in the next article in the series.
###### Drawing inspired by emacs-utils documentation.
**Did you like this article? Let me know on Twitter!** | true | true | true | Swift is a language still slightly in flux, with new functionalities and alterations of behavior being introduced in every release. Much has already been written about the functional aspects of Swift and how to approach problems following a more functional approach.<br/>This short article will try to give a clear and complete explanation of how <i>map</i> and especially <i>flatMap</i> work for different types in Swift 2.0 and 3.0, with references to the current library headers. | 2024-10-12 00:00:00 | 2015-10-08 00:00:00 | article | uraimo.com | uraimo.com | null | null |
7,219,618 | http://blockade.readthedocs.org/en/latest/ | blockade¶ | null | # blockade¶
Blockade is a utility for testing network failures and partitions in distributed applications. Blockade uses Docker containers to run application processes and manages the network from the host system to create various failure scenarios.
A common use is to run a distributed application such as a database or cluster and create network partitions, then observe the behavior of the nodes. For example in a leader election system, you could partition the leader away from the other nodes and ensure that the leader steps down and that another node emerges as leader.
Blockade features:
- A flexible YAML format to describe the containers in your application
- Support for dependencies between containers, using named links
- A CLI tool for managing and querying the status of your blockade
- When run as a daemon, a simple REST API can be used to configure your blockade
- Creation of arbitrary partitions between containers
- Giving a container a flaky network connection to others (drop packets)
- Giving a container a slow network connection to others (latency)
- While under partition or network failure control, containers can freely communicate with the host system – so you can still grab logs and monitor the application.
Blockade was originally developed by the Dell Cloud Manager (formerly Enstratius) team. Blockade is inspired by the excellent Jepsen article series.
Get started with the Blockade Guide!
## Reference Documentation¶
## Development and Support¶
Blockade is available on github. Bug reports should be reported as issues there. | true | true | true | null | 2024-10-12 00:00:00 | null | null | null | readthedocs.io | blockade 0.4.0 documentation | null | null |
20,754,113 | https://arxiv.org/abs/1908.06121 | CFO: A Framework for Building Production NLP Systems | Chakravarti; Rishav; Pendus; Cezar; Sakrajda; Andrzej; Ferritto; Anthony; Pan; Lin; Glass; Michael; Castelli; Vittorio; Murdock; J William; Florian; Radu; Roukos; Salim; Sil; Avirup | # Computer Science > Computation and Language
[Submitted on 16 Aug 2019 (v1), last revised 19 Jun 2020 (this version, v3)]
# Title:CFO: A Framework for Building Production NLP Systems
View PDFAbstract:This paper introduces a novel orchestration framework, called CFO (COMPUTATION FLOW ORCHESTRATOR), for building, experimenting with, and deploying interactive NLP (Natural Language Processing) and IR (Information Retrieval) systems to production environments. We then demonstrate a question answering system built using this framework which incorporates state-of-the-art BERT based MRC (Machine Reading Comprehension) with IR components to enable end-to-end answer retrieval. Results from the demo system are shown to be high quality in both academic and industry domain specific settings. Finally, we discuss best practices when (pre-)training BERT based MRC models for production systems.
## Submission history
From: Rishav Chakravarti [view email]**[v1]**Fri, 16 Aug 2019 18:19:59 UTC (218 KB)
**[v2]**Fri, 30 Aug 2019 15:01:20 UTC (220 KB)
**[v3]**Fri, 19 Jun 2020 20:24:05 UTC (217 KB)
# Bibliographic and Citation Tools
820,070 | http://www.nytimes.com/2009/09/14/business/energy-environment/14borlaug.html?_r=2&hp | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
33,315,614 | https://rixx.de/blog/on-running-a-mastodon-instance/ | On Running a Mastodon Instance | Rixx | # On Running a Mastodon Instance
Together with a friend, I’m running the Mastodon/Fediverse instance chaos.social. We’ve been at it for nearly five years now, and I feel like it’s time to reflect on the good, the bad, the ugly. Please note that you'll be reading an edited version of this post with most of the cursing removed, so please be understanding of the bits that I kept – they are load-bearing.
#### Le me
Some context, for those who don't know me: I didn't start running chaos.social unprepared. I run both other web applications, and more importantly, I moderate other community spaces, including several conferences, the chat of a large Twitch account, some Discord servers and a mailing list. I also used to be involved in running a hackerspace. I’ll inevitably compare my experience as a Fediverse admin and moderator to these, so I might as well be upfront.
### The Fact Sheet
Leah and I started chaos.social when Mastodon started to take off, in April 2017, as
an instance for a community intentionally left vague. The tagline (“… **because anarchy is more fun with friends**”)
points in the right direction: “we” are roughly the German Chaos community and people who are or want to be associated
with it. It’s a left-ish, queer (and not) group of German (and not) nerds (and not-nerds). Sorry to be so helpful.
We started with open signups, and eventually closed down when the instance grew too fast. (That’s much easier said than done – see more below.) New users join through existing users, who can extend invites. We also occasionally give out open invites during CCC events. This policy has been working out well: chaos.social is active without being unmanageable. We’re at about 7k users at the moment, 2k of which have been active in the last month, with a total of just under 2.5 million posts.
### The Work
In pure hours spent on it, the instance has not been terribly demanding after things settled down from the initial
growth. I’d guess that I put **two hours per week** on average into the project. During normal weeks, I'll get away with
just thirty minutes, mostly moderation stuff (see below) and responding to emails (people wanting an invite and needing
to reset 2FA). But then, of course, there are stupid hell weeks (see allll the way below) that eat hours like Slack eats
memory.
Leah, who is the professional sysadmin of the two of us, handles most of the technical side, including monitoring and maintenance work, and also owns the servers and infrastructure – I occasionally nudge the server, I add and perform maintenance jobs, and I was involved in the setup and some database hacks, but it’s a negligible part of my workload.
That said, once you get Mastodon up and running, the technical side isn’t really the one you have to worry about. I run several other web applications other than chaos.social, and Mastodon is pretty middle-of-the-road in terms of effort required, with the added bonus that the #mastoadmin community is happy to help out and share when anybody runs into problems (and is often much more helpful than the official issue tracker).
Now that you know what I generally do, let me share which parts are good, which are bad, and which turn ugly. I'm not sure if each of these categories tend towards extremes, or if I just remember the outliers best, but it seems to me that the highs have been very high, and have been matched by appropriately low lows.
### The Good
Running a community space is always rewarding to some degree, and online spaces are no exception. I don’t want to be trite and say “the people are what makes this worth it”, so let me be more precise and explain some of the good things that I have encountered in the past years running chaos.social.
#### Feels
First off, it’s fulfilling (yeah, stupid word, but it *fits*) to see people flourish in a space that you have made. They
are enjoying good days, finding help on bad days, share their life and their opinions, they have fun and get angry, play
games, create art for us, and sometimes even
improve life for others – all through something Leah and I are responsible for. That’s just a really nice feeling, and I
like feeling it.
#### Community
Then there’s the explicit appreciation: Occasionally, people remember the work we’re doing, and will thank us without special occasion. They ask if the instance is healthy, if we need help, and generally behave with concern and decency. When we post about our finances, they chip in, and we haven’t had to worry about carrying the financial burden for the instance since that post. Feeling like you're a part of a responsible, mature community is a pretty basic need (I think, with the power of the armchair psychologist), and while online spaces can't replace real-life communities, it still feels reassuring to see people step up like this. (Thanks, folks <3)
#### Rules
It’s also great to have power in a setting like this. Yes, I know what that sounds like. Let me explain! It feels good to enforce rules that make people safe, especially when most people respond well to moderation – we have banned only a tiny number of people over the years. Moderation typically involves talking to people about the things we find disagreeable, and in most cases they can either see things from our perspective, or decide that they would like to move to an instance with rules better suited to them, with no hard feelings. Both these outcomes feel good, especially knowing that things could get Bad or Ugly.
The good parts of moderation also include the community being receptive to rule updates – we have worked pretty hard to come up with a good combination of rules and guidelines, and any feedback has been constructive and respectful. Furthermore, the nature of moderation means that most of our work is invisible, since we talk directly to people breaking our rules, without making details public. Nevertheless, people have time and again chosen to explicitly trust our judgement, without needing to see proof (that we are usually not at liberty to share). I can’t stress enough how reassuring this is.
#### Mwahaha
Power also has more positive, creative uses: Being able to add cool emotes that fit our interests and styles (pride flags and bread emotes, it’s up to you to guess who added which), or changing the instance look for pride month, or just playing with a lot of tiny modifications like that.
### The Bad
Some parts of running a community space are always less good, ranging from “meh” into all out “bad bad BAD” territory. I’m going to skip the annoying stuff (support emails for lost logins and the like), as those don’t really matter even in the short run. The rest I’ve listed here in increasing level of badness:
#### Spam
The outermost ring of badness are spammers and the like. They’re easily reported and blocked, but it’s constant work that needs to be done. Bonus points for spammy instances, as they’re even easier to block – though we document every blocked instance, so that’s additional work (plus we have to make sure that the instance is actually spammy as a whole, not just a handful of bot users they are racing to ban).
#### Assholes
The next level are terrible people being terrible. You can’t avoid this stuff when you moderate a community, so you better be prepared for it. It's obviously bad when people are outright homophobic, or Nazis, or spread conspiracy theories, and all of this will happen, a lot, in worse ways than you expect. But there are also worse ways (for moderators, that is) to be an asshole . For one, I don’t know all the idiocy that circulates out there, so occasionally I have to read up on niche idiocy to judge if somebody needs to get blocked, or what they are even talking about. Add to that that not everybody uses the report feature that allows you to include one or multiple posts in the report, and this can involve some very icky scrolling. Plus, your mood kinda just takes a dive when you read worst-of-Internet content like personalised calls for queer people to kill themselves, extreme fatshaming, sudden gore pictures under unrelated hashtags, holocaust photos and so on.
#### asshole instances
Again, bonus points for entire instances of this, because we have to look at at least one other account (of an admin or moderator) to see if we want to defederate. Nothing I’ve seen in the course of this has topped what I’ve seen in other parts of the Internet, but it still leaves a bad taste in your mouth, and it can happen on any given day.
This comes with some semi-mandatory constant work keeping up with new known bad instances: People let each other know when a new Nazi/crypto-scam/loli/… instance spawns, and it’s somewhat expected that instance administrators block these instances proactively. We’re doing so-so on this one, as it’s a LOT of work – we usually wait for the first bad interaction to be reported, to keep the workload manageable, at the expense of at least one user’s experience.
#### at home
The last level of badness is when people do any of the above on our instance. Having to interact with Internet-level badness is never fun, but it’s much much worse when you feel responsible for it. This has been exceedingly rare, because having an invite-only instance has kept a general sense of community and behaviour norms alive, thankfully. We've had some close calls, but at least since closing registrations, I don’t think that we’ve seen somebody being openly malicious.
#### tired
There’s one last aspect to the bad stuff, more of an addendum than an extra layer: We’re never entirely off-duty, and it
can be draining. Most people understand and use the report button, but people are going to be people, and so there’s
always somebody who @s Leah or me instead, which makes it hard to carve out time that is really just for enjoyment. I
know it doesn’t sound all that bad, at least comparatively, but this combined with the resulting sense of constant
responsibility was what led me to use Twitter again, just to have a place where I can **not** feel responsible for
people being ~~idiots~~ misguided.
### The Ugly
Now, if you wonder what could be worse than the spoilered stuff above, let me reassure you: the ugly parts are not
*more* bad, they are *different* bad. In particular, the ugly parts are the ones with complexity and ambiguity (screw
ambiguity in particular). I’m again going from less to more ugly.
#### defence
The first ugly part is the stress. I’ve alluded to it above, talking about being always on duty, but this is more
specific: In a given situation, when Leah and I have to make a decision, we’re under a lot of pressure to make the (or
a) right decision, to prove we deserve the trust people generously placed in us. More than that: We have to make a
decision that we’re still willing to live with then it’s thrown back at us by a malicious troll twisting the facts,
while we can’t respond properly without disclosing messages we’re committed to protect. You start second-guessing every
word and try to see what everything would look like when it’s taken out of context. It’s a defensive and indefensible
position. This *sucks*, and it never stops sucking.
#### grrrr
When you put people under pressure for a long time, they stop being happy and cuddly, and as running chaos.social is only something we do in our spare time, it’s inevitable that there are times when neither Leah nor I are in the mood to deal with whatever the world has thrown at us. But we’ve still got to. Short tempers and tense situations can go wrong in the blink of an eye, and while 99% of the time one of us is happy to take over moderation while the other deals with life stuff, there are situations that are just miserable for everybody. (We have a lot of practice at this point, and we’re still very good friends, but that doesn’t mean that it has been easy getting here).
#### headaches
The next ugly part are the instance policy decisions. There are many good ways to run an instance (and many more bad ways), and it's hard to figure out what's good, what's necessary, and what's fun. We took some tries to figure out our policy on things like crossposting and bots (result: not on the local timeline), the exact line between hard rules and softer guidelines, how we handle reports, if registrations should be open or closed, and if closed, how to deal with invites (result: everybody can invite friends, but we prevent unlimited invites), and so on. As mentioned above, the people using our instance have been patient and understanding throughout rule changes (in part because we always explain our reasoning, and keep the changes to a minimum), but it's still complex and unpleasant and not very nice.
#### software
Then there’s the software ~~bullshit~~ discussions. People have strong opinions on software (nothing new here), and at
times, Pleroma vs Mastodon sounded a lot like vim vs emacs, only with a fair share of “all of them are Nazis” thrown in.
Trying to figure out why a feature is not working on your server could easily lead to having to figure out who all those
people arguing in not-quite-connected GitHub issues were (random internet people plus maintainers plus frequent
contributors plus PR authors who never got their stuff merged and who were bitter plus central community members whose
opinions were everywhere and whose allegiances kept shifting – a delightful mix). I *think* this kind of drama has died
down a little, but maybe I’m just better at blocking it out. Regardless, having to stay somewhat up to date on this was
draining and idiotic and I regret every bit of time I’ve spent reading essays about the society-wide implications of
quote-retweets.
#### instances
The next level of ugly shit in the Fediverse are instance ~~wars~~ disputes. Instance blocks are in themselves a
complicated topic, with opinions ranging from “bad, destroying the Fediverse” to “necessary, I don’t need to read your
Nazi shit”. Sometimes we see other instances blocking each other and calling for mass instance blocks. This leaves us to
dig through long discussion chains, and to figure out if, for example, one instance really is transphobic, or if they
have a transphobic ex-moderator, or if somebody just had a stupid opinion, or if none of that happened and they were
just misrepresented by concern trolls.
#### blocks
We do employ instance blocks ourselves, and they are usually not controversial. In the past years, we have only been involved in this kind of drama once, and it was an exhausting, stupid process that took weeks to untangle, often involving long conversations with people who sounded reasonable on the surface, and yet felt deeply off. Doing this work under public scrutiny was only possible because our last line of defence consists of the end of our rules: “We are maintaining this instance on our spare time, hardware and nerves. Don't push either of those.”, so we reserve the right to use the ban hammer Just Because. But we don’t want to (again, we want people to trust us for a reason), and everything about all of this is stressful and annoying and makes for a couple of really bad days or weeks. And since none of these cases are easy to decide (otherwise they’d be Bad, not Ugly), we spend half the time second-guessing and double-checking our decisions. Rigor may be good for the soul, but nobody said anything about fun. (‘cause it’s not, rigor can walk into the sea together with ambiguity.)
#### bans
The worst of the ugly cases are capable individual trolls. If we make a wrong call about an instance, that *sucks*, but
an instance has its own community, and it can go through other (annoying, painful, but available) channels to talk to
us. All this is harder for an individual, so it’s much more important that we make the right call. At the same time,
individuals are much more likely to be bad actors (because having an instance is work, and being a troll is free).
Again, with people on different instances, this is ultimately not terrible to get wrong – at worst, they can’t pick
fights with people on our instance anymore. But with people on our own instance? There are days when I’d rather get
punched in the face than figure out if one of our users just needs a stern talking-to, or if we should have banned them
a year ago, or if the report was taking things out of context.
#### questions
A lot of it is figuring out nuance, and nuance is annoying and exhausting: Has this user been a problem before? How much? When? Have we warned them? (Early Mastodon moderation tools sucked, so this is extra hard to keep track of.) Do they act in bad faith? Are they oblivious? Does it matter? Do we want them to go? Do we want to close their account? With or without a grace period to migrate their data? Do we want to delete their post, or them to delete their post? Does anybody deserve an apology? How do we want to respond if they don’t take our message well – do we negotiate because our interpretation of their behaviour is not set in stone? Do we ask them to leave the instance? Do we have to make a public statement to make it clear that we, as moderators, are taking action, or would that put an unnecessary spotlight on a bad situation? (99% of the time we say nothing, of course, but we also need to make sure people don’t start to see our instance as a place filled with assholes just because they can’t see the moderation actions behind the scenes.)
#### relentless
It doesn’t stop. It never stops. It’s in the nature of people to never stop and to have nuance and motivations and a low amount of malice and an often terrifying amount of clumsiness. We work with all this in the face of people poised to assume the worst (of us, of our users, of other people). We found our way of navigating this ambiguity by trying very hard to do the right thing, while also falling back on the rule of thumb of “This is our living room, please behave like it.”, and it took a lot of time and effort to arrive at this point. I’m glad we did.
But man, I’d have **also** been glad to arrive here with less bullshit along the way.
### The End
On balance, running a Fediverse instance is worth it, I think, or rather: I’m glad I started doing it five years ago. Said balance is complex and, on bad days, doesn’t look like it’s all that good. Thankfully, the bad days are a tiny (if vocal) minority.
If you're thinking about running an online space and are feeling intimidated: good, you're taking this seriously. Now go back up and re-read the Good part. Then go back down and re-read the Ugly part. Good luck deciding.
**Thank you** everybody who was part of this journey that was equal parts stupid, hilarious and educational. I could
have done it without you, but I would have turned into a bitter, jaded asshole along the way, and this blog post would
have been very different. And thank you, Leah, for being a bit less stupid and a bit more hilarious than me.
*If you want to comment, feel free to @ me on Fedi or
Twitter.* | true | true | true | I've been running chaos.social for nearly 5 years. A reflection. | 2024-10-12 00:00:00 | 2021-12-21 00:00:00 | null | article | rixx.de | rixx.de | null | null |
10,331,272 | https://twitter.com/jack/status/651003190628872192 | x.com | null | null | true | true | false | null | 2024-10-12 00:00:00 | null | null | null | null | X (formerly Twitter) | null | null |
17,382,980 | https://medium.com/@karankurani/whales-cars-law-technology-and-you-ac28d9dbed3c | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
1,953,615 | http://www.scribd.com/doc/3932344/40-Sleep-Hacks-The-Geeks-Guide-to-Optimizing-Sleep | 40 Sleep Hacks: The Geek's Guide to Optimizing Sleep | null | 40 Sleep Hacks
The geek’s guide to optimizing sleep
This eBook is
FREE!
There’s only one price: If you like it, you have to pass it on
. Post it on your website or e-mail it to your friends. However, this e-book must
not
Table of contents
Section I: Hacking Sleep Schedules
1.
Wake up at the same time every morning 2.
Try free-running sleep 3.
The 28-hour day 4.
Polyphasic sleep 5.
Keep a sleep log 6.
Give your schedules 10 days to ‘click’ 7.
Reduce your sleep need
Section II: Diet
8.
Eat whole foods, unprocessed foods, and raw foods 9.
Eat light in the evening 10.
Eat a small pre-bedtime snack 11.
Drink caffeine in the morning, not at night 12.
Eat breakfast 13.
Control your cortisol 14.
Avoid foods you may be sensitive to
Section III: Napping
15.
Master the art of napping 16.
Caffeine nap 17.
Pzizz your way to sleep 18.
Create your own nap mp3
Section IV: Dreaming and Creativity
19.
Learn to lucid dream 20.
Use lucid dreaming to cultivate peak performance, solve problems, and overcome fears. 21.
Explore hypnagogia 22.
Keep a dream journal
Section V: Sleep Environment
23.
Sleep in complete darkness 24.
Sleep in the cold
Section VI: Sleep Gadgets
25.
Use noise cancelling headphones 26.
Use a bright light alarm 27.
Use a sun box 28.
The SleepTracker watch 29.
Use a sleep mask 30.
Use an mp3 alarm clock
Section VII: Psychology
31.
Use your brain’s internal alarm clock 32.
Set up morning rewards 33.
Write down tomorrow’s to-do list 34.
Change your attitude toward sleep 35.
Train your brain to wake up to alarms 36.
Set two alarms 37.
Maintain a positive attitude toward life 38.
Wake up to euphoric music
Section VIII: Lifestyle
39.
Meditate 40.
Fall in love
About this ebook
40 Sleep Hacks: The Geek’s Guide to Optimizing Sleep
You may distribute this eBook freely and/or bundle it as a free bonus with other products, as long as it is left completely intact, unaltered and delivered via the PDF file. You may republish excerpts from this eBook as long as they are accompanied by an attribution link back to
July 2008
for newer editions. Health/legal disclaimer: The information presented in this book is taken from sources believed to be accurate. All of these sleep hacks aim to improve sleep quality and thus health and quality of life. However, you must use them at your own discretion. In the end there’s probably no better sleep advice than this: listen to your body; 9 times out of 10 it knows what’s right. Enough of that… let’s get started. Enjoy!
31,342,266 | https://www.reqview.com/blog/requirements-traceability-analysis-neo4j/ | How to Analyze Requirements Traceability in Neo4j Graph Database | ReqView Blog | Eccam s r o; Libor Buš | # How to Analyze Requirements Traceability in Neo4j Graph Database
Learn how to query, visualize and validate requirements traceability for ReqView projects using free tools based on the Neo4j graph database.
Detailed and consistent requirements traceability *increases product quality* because it enables requirements verification and validation. For safety-critical products, requirements traceability is the key to compliance with functional safety standards.
When we invest quite a lot of effort in linking requirements across the development cycle and ensuring traceability consistency why not query and analyze traceability effectively?
In ReqView, you can manage and review end-to-end traceability easily. You can customize traceability views for impact and coverage analysis, browse the traceability graph, filter requirements with missing links, export traceability reports to Word, Excel, and other formats. For more information see How to Use Requirements Traceability Matrix (RTM).
Let’s see how you can further extend ReqView by a powerful Neo4j graph database to query and visualize traceability graphs.
## Requirements Traceability Graphs
Requirements traceability can be intuitively modeled as a directed graph, in which:
*Nodes*represent records storing project artifacts — e.g., requirements, tests, risks, design elements.*Edges*represent relationships between project artifacts — e.g., parent/child links, traceability links, origin/copy links.
**Example**: The epic "Define Requirements" contains the user story "Requirement Description", which is further decomposed to several functional requirements as depicted by the following traceability view.
The corresponding traceability graph has 7 nodes for the epic *NEEDS-8*, user story *NEEDS-68*, and functional requirements *SRS-x*. The directed edge from the user story to the epic represents the parent/child relationship, and the directed edges from the functional requirements to the user story represent *satisfaction* links.
## Neo4j Graph Database
Neo4j is a popular graph database that can store traceability graphs including requirement attributes and types of traceability links. Neo4j can be an effective solution for requirements traceability analysis in large projects.
In Neo4j, you can:
**Query traceability**— for instance, find all user stories impacted by a change in a selected SW module.**Visualize traceability graphs**— for instance, create a figure for your Powerpoint presentation displaying a coverage tree view for a selected user story.**Validate traceability consistency**— for instance, detect all satisfaction links with the wrong direction or leading between wrong documents automatically.
You can install and use Neo4j Desktop on your PC for free. It includes a self-managed Neo4j Graph Database with the Neo4j Enterprise Developer license and a few other useful tools explained later.
## Cypher Graph Query Language
Neo4j offers the Cypher graph query language, which lets you query and explore complex graphs using an intuitive syntax:
**Nodes**:
`(n)`
node referred by variable`n`
used further in the query`(:Label)`
node with a given`Label`
corresponding to ReqView document ID, e.g.`(:NEEDS)`
`(n:Label {property: value})`
node`n`
with a given`Label`
having`property`
set to`value`
, e.g.`(n:NEEDS {type: "STORY"})`
**Edges**:
`[e]`
edge referred by variable`e`
used further in the query`[:Type]`
edges with a given`Type`
corresponding to ReqView traceability link type, e.g.`[:satisfaction]`
`(:Label1)-[:Type]->(Label2)`
edges of type`Type`
from nodes having`Label1`
to nodes with`Label2`
, e.g.`(SRS)-[:satisfaction]->(NEEDS)`
The following examples demonstrate a few Cypher queries for the requirements traceability graph of the ReqView Demo project.
**Example**: For the user story "Requirement Description" (*NEEDS-68*) shown in the previous example, query the related parent epic and derived software requirements.
`MATCH (epic:NEEDS)<-[:parent]-(story {id:"NEEDS-68"})<-[:satisfaction]-(req:SRS) RETURN epic,story,req`
**Example**: Query IDs of all user stories not satisfied by any software requirement.
`MATCH (story:NEEDS {type:"STORY"}) WHERE NOT (story)<-[:satisfaction]-(:SRS) RETURN story.id`
**Example**: Count all software requirements not verified by any test case.
`MATCH (req:SRS {type:"FR"}) WHERE NOT (req)<-[:verification]-(:TESTS {type:"CASE"}) RETURN COUNT(req)`
**Example**: Query all satisfaction links with the wrong source or target document, i.e., the link source is not from the *SRS* document or it target is not from the *NEEDS* document.
`MATCH (start)-[:satisfaction]->(target) WHERE NOT start:SRS OR NOT target:NEEDS RETURN start.id, target.id`
For more information see Neo4j Cypher Manual.
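**Example**: As an additional sketch (using the same labels and link types as above, with a hypothetical test case ID *TESTS-42*), a single query with a variable-length path can walk end to end from a test case to the software requirements it verifies, the user stories they satisfy, and every ancestor reachable via *parent* links:
```
// TESTS-42 is a hypothetical ID, adapt it to a real test case in your project
MATCH (test:TESTS {id:"TESTS-42"})-[:verification]->(req:SRS)-[:satisfaction]->(story:NEEDS)
OPTIONAL MATCH (story)-[:parent*1..]->(ancestor:NEEDS)
RETURN req.id, story.id, collect(ancestor.id) AS ancestors
```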
## Import Requirements Traceability
You can import requirements traceability from ReqView into a new Neo4j graph database in two steps. First, export selected documents from your ReqView project into a Neo4j Cypher file. Then, run the Cypher file in Neo4j Desktop.
### Export Cypher File From ReqView
- Download custom export template file ReqView-Neo4jTemplate.cypher.
- Start ReqView with your requirements project.
- Open ReqView documents to be exported.
- Export requirements traceability for the open documents into a Cypher file — click
**File**, mouseover**Export**, select**Custom Export**. Then choose the downloaded template file, check*Export multiple documents in the current project*, and*Export into a single file*. Finally, press**OK**and select the location of the exported Cypher file.
### Run Cypher File in Neo4j Desktop
-
Start Neo4j Desktop.
-
Create a new project and database storing requirements traceability graphs from ReqView:
- Open
*Projects*pane, click**New**and select**Create project**to create a new project. Then set the project name. - Click
**Add**, select**Local DBMS**, enter a database name and a password, and click**Create**to create a new database.
- Open
-
Start the database — click
**Start**in the project view. -
Add the Cypher file to the project — click
**Add**, select**File**, and choose the*.cypher*file exported from ReqView. -
Open the Neo4j Browser — click
**Open**next to the new database name.**Note:**Do not use**Open**button next to newly added*.cypher*file, because it will open the Neo4j Browser without**Project Files**icon in menu. -
Create a separate database storing a new version of your requirements traceability graph — run
`create or replace database v01`
command in the Neo4j Browser. -
Switch to the new database — run
`:use v01`
command in the Neo4j Browser. -
Run the Cypher file to fill the new database — click the
**Project Files**icon on the left of Neo4j Browser and click**Run**next to the*.cypher*file name.
For more information on how to use Neo4j Browser to query the traceability graph see the next section.
## Visualize Requirements Traceability Graphs
### Neo4j Browser
Neo4j Browser is a basic tool for querying and simple visualization of graphs bundled with Neo4j Desktop. To start the tool, open **Graph Apps** tab in Neo4j Desktop and click **Neo4j Browser**.
You can run Cypher queries to display traceability graphs. You can re-arrange graphs interactively by dragging nodes and zooming. You can customize graph appearance by choosing the color, size, or caption of nodes. When you select a node you can see its properties in the right pane. For more information see Neo4j Browser User Interface Guide.
**Example**: Visualize the traceability graph for the user story "Requirements Description" (*NEEDS-68*) including its parent epic and linked software requirements. See properties of the user story in the right pane.
### Neo4j Bloom
Neo4j Bloom is a more advanced tool for interactive exploration of graphs, which is available with Neo4j Desktop installation. To start the tool, open **Graph Apps** tab and click **Neo4j Bloom**.
Neo4j Bloom offers several useful features for exploring traceability graphs, which are not available in Neo4j Browser:
**Multiple Perspectives**— Define one or more perspectives to explore the same traceability graph using different views. For instance, create one perspective for exploring the coverage of top-level user stories by software requirements and design elements and another perspective for exploring coverage of software by verification tests.**Search Bar**— Add nodes into the scene interactively without running Cypher queries.**Expand Nodes**— Select one or more nodes and use the context menu to expand nodes linked by a given relation type.**Map**— Understand which part of the whole scene is visible when exploring large traceability graphs.**Hierarchical Layout**— Click the**Layout**button above the map to switch between a default force-directed layout and a hierarchical layout, which is more suitable for presenting traceability graphs. Click the**Rotate Layout**button to rotate the scene by 90 degrees.
When you start Neo4j Bloom the first time it creates an empty perspective for the default database. Create a new perspective and explore a traceability graph stored in your custom Neo4j database.
- Click
**Perspective**button in the top left corner and then click the name of the current perspective in the header of the pane. - In the
*Perspective Gallery*choose your Neo4j database, click**Create Perspective**and then click**Generate Perspective**. - Click
**Use perspective**to explore the database using the new perspective. - Click
**Search graph**and enter a node label, relation type or even a complex pattern of nodes and their relations. For instance, enter "NEEDS NEEDS" to see the tree structure of the*NEEDS*document given by*parent*links between sections, epics, and stories. - Select one or more nodes, open the context menu by right-mouse click, choose
**Expand**and a relation type to expand the displayed traceability graph. - Click the
**Layout**button above the map to switch between a default force-directed layout and a hierarchical layout, which is more suitable for presenting traceability graphs. Click the**Rotate layout**button to rotate the scene by 90 degrees. - Click
**Presentation mode**button to hide the search bar, left and right panes. - Click
**Export visualization**button in to top right corner and select**Export screenshot**to share the traceability graph visualization.
For more information see Neo4j Bloom User Interface Guide.
**Example**: Visualize the traceability graph for user story "Requirements Description" (*NEEDS-68*) including its parent epic and linked software requirements using a hierarchical layout.
### yWorks Data Explorer
yWorks Explorer is a free tool for exploring Neo4j graph databases offered by yWorks. To install the tool into Neo4j Desktop, open **Graph Apps** tab, enter https://www.yworks.com/neo4j-explorer/ into the *File or URL* text field, and click **Install**.
Neo4j Explorer offers the most control on graph styling. You can customize node style including its shape(s), colors, and displayed properties. For more information see yFiles Neo4j Explorer: Advanced Node Styling.
Open Data Explorer from Neo4j Desktop
- Click the "Open" combo box next to the active database
- Connect the database
**Example**: Visualize the traceability graph for user story "Requirements Description" (*NEEDS-68*) including its parent epic and linked software requirements. See properties of the user story in the right pane.
## Validate Consistency of Requirements Traceability
We have demonstrated in the previous sections how to query and explore requirements traceability graphs manually. You can build an automated script, which connects to a Neo4j graph database, runs custom Cypher queries checking the consistency of requirements traceability, and outputs a list of found inconsistencies. There are SDKs for most of the popular languages today (including Javascipt, Python, and Java), see Neo4j Documentation - Drivers and APIs.
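For instance, a minimal Node.js sketch using the official `neo4j-driver` package might look like this (the connection URL, credentials, and database name are placeholders to adapt to your setup; the query is the unsatisfied-stories check from the Cypher examples above):
```
const neo4j = require("neo4j-driver");

// Placeholder connection settings; point these at your own Neo4j instance.
const driver = neo4j.driver("bolt://localhost:7687", neo4j.auth.basic("neo4j", "password"));

async function findUnsatisfiedStories() {
  const session = driver.session({ database: "v01" });
  try {
    const result = await session.run(
      `MATCH (story:NEEDS {type:"STORY"})
       WHERE NOT (story)<-[:satisfaction]-(:SRS)
       RETURN story.id AS id`
    );
    return result.records.map((record) => record.get("id"));
  } finally {
    await session.close();
  }
}

findUnsatisfiedStories()
  .then((ids) => console.log("User stories without satisfaction links:", ids))
  .finally(() => driver.close());
```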
**Example**: Run an HTML report validating requirements traceability for the ReqView Demo project using a Neo4j database.
-
Import the requirements traceability graph for the ReqView Demo project as described in Import Requirements Traceability above.
- Download example report
-
Unzip the downloaded package to a local folder.
-
Open file
*Reqview-Neo4jTraceabilityValidation.js*in a text editor and set`dbUrl`
,`dbName`
,`dbUser`
and`dbPassword`
constants declared at the beginning of the file to match your local Neo4j database storing the requirements traceability graph. -
Open file
*Reqview-Neo4jTraceabilityValidation.html*in your browser and see verification results with detailed information about identified traceability inconsistencies. Click a ReqView URL link to open the corresponding requirement in ReqView. -
Open file
*Reqview-Neo4jTraceabilityValidation-Queries.js*in a text editor and customize Cypher queries used by the report.
## Summary
We presented how you can enhance ReqView powerful traceability features by exporting requirements traceability into a Neo4j graph database.
We demonstrated several free tools based on the Neo4j database to visualize requirements traceability graphs. Each of the presented tools has its advantages and disadvantages:
*Neo4j Browser*is great for exploring results for custom Cypher queries.*Neo4j Bloom*helps with visualization of traceability graphs interactively without running Cypher queries.*yWorks Data Explorer*is the best for visualization of traceability graphs because it provides the most control over graph styling.
Finally, we provided an example HTML report querying a Neo4j graph database to validate consistency of requirements traceability links automatically.
## Acknowledgment
We want to thank our customer Günther Weidenholzer from Oberaigner Powertrain, for sharing his ideas and use cases for extending ReqView by Neo4j. Oberaigner develops safety-critical products for automotive OEMs. They need to comply with ISO 26262 functional safety standards and manage traceability between safety goals, stakeholder needs, requirements, verification & validation, and safety risks in ReqView.
Günther found a unique solution to fulfill strict requirements for validating traceability consistency given by the standard. He implemented a Python script exporting requirements traceability from ReqView to Neo4j, querying traceability graphs, and automatically generating Excel sheets with validation reports. Our approach slightly differs, but Günthers' ideas were a great inspiration for this blog post.
## Try ReqView
**Do you need to analyze requirements traceability using the Neo4j graph database?**
Find out how ReqView can simplify your daily requirements traceability tasks.
## Give us Feedback
Would you like to explore requirements traceability graphs visually directly in ReqView? Please upvote Graphical Traceability View feature request, or contact us and describe your use case.
|
26,400,601 | https://dropbox.tech/infrastructure/atlas--our-journey-from-a-python-monolith-to-a-managed-platform | Atlas: Our journey from a Python monolith to a managed platform | Naphat Sanguansin | Dropbox, to our customers, needs to be a reliable and responsive service. As a company, we’ve had to scale constantly since our start, today serving more than 700M registered users in every time zone on the planet who generate at least 300,000 requests per second. Systems that worked great for a startup hadn’t scaled well, so we needed to devise a new model for our internal systems, and a way to get there without disrupting the use of our product.
In this post, we’ll explain why and how we developed and deployed Atlas, a platform which provides the majority of benefits of a Service Oriented Architecture, while minimizing the operational cost that typically comes with owning a service.
### Monolith should be by choice
The majority of software developers at Dropbox contribute to server-side backend code, and all server side development takes place in our server monorepo. We mostly use Python for our server-side product development, with more than 3 million lines of code belonging to our monolithic Python server.
It works, but we realized the monolith was also holding us back as we grew. Developers wrangled daily with unintended consequences of the monolith. Every line of code they wrote was, whether they wanted or not, shared code—they didn’t get to choose what was smart to share, and what was best to keep isolated to a single endpoint. Likewise, in production, the fate of their endpoints was tied to every other endpoint, regardless of the stability, criticality, or level of ownership of these endpoints.
In 2020, we ran a project to break apart the monolith and evolve it into a serverless managed platform, which would reduce code tangles and liberate services and their underlying engineering teams from being entwined with one another. To do so, we had to innovate both the architecture (e.g. standardizing on gRPC and using Envoy’s gRPC-HTTP transcoding) and the operations (e.g. introducing autoscaling and canary analysis). This blog post captures key ideas and learnings from our journey.
## Metaserver: The Dropbox monolith
Dropbox’s internal service topology as of today can be thought of as a “solar system” model, in which a lot of product functionality is served by the monolith, but platform-level components like authentication, metadata storage, filesystem, and sync have been separated into different services.
About half of all commits to our server repository modify our large monolithic Python web application, Metaserver.
Metaserver is one of our oldest services, created in 2007 by one of our co-founders. It has served Dropbox well, but as our engineering team marched to deliver new features over the years, the organic growth of the codebase led to serious challenges.
### Tangled codebase
Metaserver’s code was originally organized in a simple pattern one might expect to see in a small open source project—library, model, controllers—with no centralized curation or guardrails to ensure the sustainability of the codebase. Over the years, the Metaserver codebase grew to become one of the most disorganized and tangled codebases in the company.
```
//metaserver/controllers/ …
//metaserver/model/ …
//metaserver/lib/ …
```
Metaserver Code Structure
Because the codebase had multiple teams working on it, no single team felt strong ownership over codebase quality. For example, to unblock a product feature, a team would introduce import cycles into the codebase rather than refactor code. Even though this let us ship code faster in the short term, it left the codebase much less maintainable, and problems compounded.
### Inconsistent push cadence
We push Metaserver to production for all our users daily. Unfortunately, with hundreds of developers effectively contributing to the same codebase, the likelihood of at least one critical bug being added every day had become fairly high. This would necessitate rollbacks and cherry picks of the entire monolith, and caused an inconsistent and unreliable push cadence for developers. Common best practices (for example, from Accelerate) point to fast, consistent deploys as the key to developer productivity. We were nowhere close to ideal on this dimension.
Inconsistent push cadence leads to unnecessary uncertainty in the development experience. For example, if a developer is working towards a product launch on day X, they aren’t sure whether their code should be submitted to our repository by day X-1, X-2 or even earlier, as another developer’s code might cause a critical bug in an unrelated component on day X and necessitate a rollback of the entire cluster completely unrelated to their own code.
### Infrastructure debt
With a monolith of millions of lines of code, infrastructure improvements take much longer or never happen. For example, it had become impossible to stage a rollout of a new version of an HTTP framework or Python on only non-critical routes.
Additionally, Metaserver uses a legacy Python framework unused in most other Dropbox services or anywhere else externally. While our internal infrastructure stack evolved to use industry standard open source systems like gRPC, Metaserver was stuck on a deprecated legacy framework that unsurprisingly had poor performance and caused maintenance headaches due to esoteric bugs. For example, the legacy framework only supports HTTP/1.0 while modern libraries have moved to HTTP/1.1 as the minimum version.
Moreover, all the benefits we developed or integrated in our internal infrastructure, like integrated metrics and tracing, had to be hackily redone for Metaserver which was built atop different internal frameworks.
Over the past few years, we had spun up several workstreams to combat the issues we faced. Not all of them were all successful, but even those we gave up on paved the way to our current solution.
## SOA: the cost of operating independent services
We tried to break up Metaserver as part of a larger push around a Service Oriented Architecture (SOA) initiative. The goal of SOA was to establish better abstractions and separation of concerns for functionalities at Dropbox—all problems that we wanted to solve in Metaserver.
The execution plan was simple: make it easy for teams to operate independent services in production, then carve out pieces of Metaserver into independent services.
Our SOA effort had two major milestones:
- Make it possible and easy to build services outside of Metaserver
- Extract core functionalities like identity management from the monolith and expose them via RPC, to allow new functionalities to be built outside of Metaserver
- Establish best practices and a production readiness process for smoothly and scalably onboarding new multiple services that serve customer-facing traffic, i.e. our live site services
- Break up Metaserver into smaller services owned and operated by various teams
The SOA effort proved to be long and arduous. After over a year and a half, we were well into the first milestone. However, the experience from executing that first milestone exposed the flaws of the second milestone. As more teams and services were introduced into the critical path for customer traffic, we found it increasingly difficult to maintain a high reliability standard. This problem would only compound as we moved up the stack away from core functionalities and asked product teams to run services.
### No one solution for everything
With this insight, we reassessed the problem. We found that product functionality at Dropbox could be divided into two broad categories:
- large, complex systems like all the logic around sharing a file
- small, self-contained functionality, like the homepage
For example, the “Sharing” service involves stateful logic around access control, rate limits, and quotas. On the other hand, the homepage is a fairly simple wrapper around our metadata store/filesystem service. It doesn’t change too often and it has very limited day to day operational burden and failure modes. In fact, operational issues for most routes served by Dropbox had common themes, like unexpected spikes of external traffic, or outages in underlying services.
This led us to an important conclusion:
**Small, self contained functionality doesn’t need independently operated services.**This is why we built Atlas.- It’s unnecessary overhead for a product team to plan capacity, set up good alerts and multihoming (automatically running in multiple data centers) for small, simple functionality. Teams mostly want a place where they can write some logic, have it automatically run when a user hits a certain route, and get some automatic basic alerts if there are too many errors in their route. The code they submit to the repository should be deployed consistently, quickly and continuously.
- Most of our product functionality falls into this category. Therefore, Atlas should optimize for this category.
- Large components should continue being their own services, with which Atlas happily coexists.
- Large systems can be operated by larger teams that sustainably manage the health of their systems. Teams should manage their own push schedules and set up dedicated alerts and verifiers.
## Atlas: a hybrid approach
With the fundamental sustainability problems we had with Metaserver, and the learning that migrating Metaserver into many smaller services was not the right solution for everything, we came up with Atlas, a managed platform for the self-contained functionality use case.
Atlas is a hybrid approach. It provides the user interface and experience of a “serverless” system like AWS Fargate to Dropbox product developers, while being backed by automatically provisioned services behind the scenes.
As we said, the goal of Atlas is to provide the majority of benefits of SOA, while minimizing the operational costs associated with running a service.
Atlas is “managed,” which means that developers writing code in Atlas only need to write the interface and implementation of their endpoints. Atlas then takes care of creating a production cluster to serve these endpoints. The Atlas team owns pushing to and monitoring these clusters.
This is the experience developers might expect when contributing to a monolith versus Atlas:
## Goals
We designed Atlas with five ideal outcomes in mind:
1. **Code structure improvements**: Metaserver had no real abstractions on code sharing, which led to coupled code. Highly coupled code can be the hardest to understand and refactor, and the most likely to sprout bugs when modified. We wanted to introduce a structure and reduce coupling so that new code would be easier to read and modify.
2. **Independent, consistent pushes**: The Metaserver push experience is great when it works. Product developers only have to worry about checking in code which will automatically get pushed to production. However, the aforementioned lack of push isolation led to an inconsistent experience. We wanted to create a platform where teams were not blocked on push due to a bug in unrelated code, and create the foundation for teams to push their own code in the future.
3. **Minimized operational busywork**: We aimed to keep the operational benefits of Metaserver while providing some of the flexibility of a service. We set up automatic capacity management, automatic alerts, automatic canary analysis, and an automatic push process so that the migration from a monolith to a managed platform was smooth for product developers.
4. **Infrastructure unification**: We wanted to unify all serving to standard open source components like gRPC. We don’t need to reinvent the wheel.
5. **Isolation**: Some features like the homepage are more important than others. We wanted to serve these independently, so that an overload or bug in one feature could not spill over to the rest of Metaserver.
We evaluated using off-the-shelf solutions to run the platform. But in order to de-risk our migration and ensure low engineering costs, it made sense for us to continue hosting services on the same deployment orchestration platform used by the rest of Dropbox.
However, we decided to remove custom components, such as our custom request proxy Bandaid, and replace them with open source systems like Envoy that met our needs.
## Technical design
The project involved a few key efforts:
**Componentization**
- De-tangle the codebase by feature into components, to prevent future tangles
- Enforce a single owner per component, so new functionality cannot be tacked onto a component by a non-owner
- Incentivize fewer shared libraries and more code sharing via RPC
**Orchestration**
- Automatically configure each component into a service in our deployment orchestration platform with <50 lines of boilerplate code
- Configure a proxy (Envoy) to send a request for a particular route to the right service, instead of simply sending each request to a Metaserver node
- Configure services to speak to one another in gRPC instead of HTTP
**Operationalization**
- Automatically configure a deployment pipeline that runs daily and pushes to production for each component
- Set up automatic alerts and automatic analysis for regressions to each push pipeline to automatically pause and rollback in case of any problems
- Automatically allocate additional hosts to scale up capacity via an autoscaler for each component based on traffic
Let’s look at each of these in detail.
### Componentization
**Logical grouping of routes via servlets**
Atlas introduces Atlasservlets (pronounced “atlas servlets”) as a logical, atomic grouping of routes. For example, the home Atlasservlet contains all routes used to construct the homepage. The nav Atlasservlet contains all the routes used in the navigation bar on the Dropbox website.
In preparation for Atlas, we worked with product teams to assign Atlasservlets to every route in Metaserver, resulting in more than 200 Atlasservlets across more than 5000 routes. Atlasservlets are an essential tool for breaking up Metaserver.
```
//atlas/home/ …
//atlas/nav/ …
//atlas/<some other atlasservlet>/ …
```
Atlas code structure, organized by servlets
Each Atlasservlet is given a private directory in the codebase. The owner of the Atlasservlet has full ownership of this directory; they may organize it however they wish, and no one else can import from it. The Atlasservlet code structure inherently breaks up the Metaserver code monolith, requiring every endpoint to be in a private directory and making code sharing an explicit choice rather than an unexpected outcome of contributing to the monolith.
Having the Atlasservlet codified into our directory path also allows us to automatically generate production configs that would normally accompany a production service. Dropbox uses the Bazel build system for server side code, and we enforced prevention of imports through a Bazel feature called visibility rules, which allows library owners to control which code can use their libraries.
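To make the mechanism concrete, here is a hedged sketch of what such a rule can look like in a Bazel `BUILD` file. The target names and paths are assumptions for illustration, not Dropbox’s actual build configuration; only the `visibility` attribute itself is the real Bazel feature being described.

```python
# Hypothetical BUILD file inside //atlas/home (the "home" Atlasservlet directory).
py_library(
    name = "home",
    srcs = glob(["*.py"]),
    # Only targets under //atlas/home may depend on (and therefore import) this
    # library. Any other Atlasservlet that wants this functionality has to call
    # it over RPC instead of importing the code directly.
    visibility = ["//atlas/home:__subpackages__"],
)
```

A build that tries to import this library from, say, `//atlas/nav` fails at `bazel build` time, which is what keeps code sharing an explicit choice.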
**Breakup of import cycles**
In order to break up our codebase, we had to break most of our Python import cycles. This took several years to achieve with a bunch of scripts and a lot of grunt work and refactoring. We prevented regressions and new import cycles through the same mechanism of Bazel visibility rules.
## Orchestration
In Atlas, every Atlasservlet is its own cluster. This gives us three important benefits:
1. **Isolation by default**: A misbehaving route will only impact other routes in the same Atlasservlet, which is owned by the same team anyway.
2. **Independent pushes**: Each Atlasservlet can be pushed separately, putting product developers in control of their own destiny with respect to the consistency of their pushes.
3. **Consistency**: Each Atlasservlet looks and behaves like any other internal service at Dropbox. So any tools provided by our infrastructure teams—e.g. periodic performance profiling—will work for all other teams’ Atlasservlets.
**gRPC Serving Stack**
One of our goals with Atlas was to unify our serving infrastructure. We chose to standardize on gRPC, a widely adopted tool at Dropbox. In order to continue to serve HTTP traffic, we used the gRPC-HTTP transcoding feature provided out of the box in Envoy, our proxy and load balancer. You can read more about Dropbox’s adoption of gRPC and Envoy in their respective blog posts.
In order to facilitate our migration to gRPC, we wrote an adapter which takes an existing endpoint and converts it into the interface that gRPC expects, setting up any legacy in-memory state the endpoint expects. This allowed us to automate most of the migration code change. It also had the benefit of keeping the endpoint compatible with both Metaserver and Atlas during mid-migration, so we could safely move traffic between implementations.
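The post doesn’t show the adapter itself, but conceptually it can be as thin as the following sketch. Everything here—the function name, the request fields, and the tuple returned by the legacy handler—is an assumption made for illustration, not Dropbox’s actual interface.

```python
from types import SimpleNamespace

def adapt_legacy_endpoint(legacy_handler):
    """Wrap a legacy Metaserver-style endpoint so it matches a gRPC method signature."""
    def grpc_method(request, context):
        # Rebuild the request-scoped, in-memory state the legacy endpoint expects.
        legacy_request = SimpleNamespace(
            user_id=request.user_id,
            params=dict(request.params),
        )
        body, status = legacy_handler(legacy_request)
        # Hand the result back in the shape of the generated gRPC response message.
        return SimpleNamespace(body=body, status=status)
    return grpc_method
```

Because the original handler is untouched, the same endpoint stays callable from both the legacy stack and the new gRPC stack while traffic is being moved.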
### Operationalization
Atlas’s secret sauce is the managed experience. Developers can focus on writing features without worrying about many operational aspects of running the service in production, while still retaining the majority of benefits that come with standalone services, like isolation.
The obvious drawback is that one team now bears the operational load of all 200+ clusters. Therefore, as part of the Atlas project we built several tools to help us effectively manage these clusters.
**Automated Canary Analysis**
Metaserver (and Atlas by extension) is stateless. As a result, one of the most common ways a failure gets introduced into the system is through code changes. If we can ensure that our push guardrails are as airtight as possible, this eliminates the majority of failure scenarios.

We automate our failure checking through a simple canary analysis service very similar to Netflix’s Kayenta. Each Atlas service consists of three deployments: canary, control, and production, with canary and control receiving only a small random percentage of traffic. During the push, canary is restarted with the newest version of the code. Control is restarted with the old version of the code, but at the same time as canary, to ensure they operate from the same starting point.
We automatically compare metrics like CPU utilization and route availability from the canary and control deployments, looking for metrics where canary may have regressed relative to control. In a good push, canary will perform either equal to or better than control, and the push will be allowed to proceed. A bad push will be stopped automatically and the owners notified.
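A minimal sketch of that comparison might look like the following. The metric names and the 5% tolerance are illustrative assumptions, not the thresholds Dropbox actually uses.

```python
WATCHED_METRICS = ("cpu_utilization", "error_rate", "p95_latency_ms")

def canary_regressed(canary_metrics, control_metrics, tolerance=0.05):
    """Return True if canary looks worse than control on any watched metric."""
    for name in WATCHED_METRICS:
        # Lower is better for all of these metrics; in a good push, canary
        # performs equal to or better than control.
        if canary_metrics[name] > control_metrics[name] * (1 + tolerance):
            return True
    return False
```

If the check trips, the push pipeline is paused and the owners are notified, as described above.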
In addition to canary analysis, we also have alerts set up which are checked throughout the process, including in between the canary, control, and production pushes of a single cluster. This lets us automatically pause and rollback the push pipeline if something goes wrong.
Mistakes still happen. Bad changes may slip through. This is where Atlas’s default isolation comes in handy. Broken code will only impact its one cluster and can be rolled back individually, without blocking code pushes for the rest of the organization.
**Autoscaling and capacity planning**
Atlas's clustering strategy results in a large number of small clusters. While this is great for isolation, it significantly reduces the headroom each cluster has to handle increases in traffic. Monoliths are large shared clusters, so a small RPS increase on a route is easily absorbed by the shared cluster. But when each Atlasservlet is its own service, a 10x increase in route traffic is harder to handle.
Capacity planning for 200+ clusters would cripple our team. Instead, we built an autoscaling system. The autoscaler monitors the utilization of each cluster in real time and automatically allocates machines to ensure that we stay above 40% free capacity headroom per cluster. This allows us to handle traffic increases as well as remove the need to do capacity planning.
The autoscaling system reads metrics from Envoy’s Load Reporting Service and uses request queue length to decide cluster size, and probably deserves its own blog post.
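As a rough illustration of the headroom rule, the sizing math can be as simple as the sketch below. The 40% figure comes from the text above; the formula and the utilization signal are simplified assumptions (the real system, as noted, works off request queue lengths reported by Envoy).

```python
import math

def target_cluster_size(current_hosts, utilization, target_headroom=0.40):
    """Grow the cluster until at least 40% of its capacity is free."""
    used_capacity = current_hosts * utilization  # capacity consumed, in host-equivalents
    # Solve used_capacity / new_hosts <= (1 - target_headroom) for new_hosts.
    required_hosts = math.ceil(used_capacity / (1 - target_headroom))
    # This sketch only scales up; a real autoscaler would also shed idle hosts.
    return max(current_hosts, required_hosts)
```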
## Execution
### Stepping stones, not milestones
Many previous efforts to improve Metaserver had not succeeded due to the size and complexity of the codebase. This time around, we wanted to deliver value to product developers even if we didn’t succeed in fully replacing Metaserver with Atlas.
The execution plan for Atlas was designed with stepping stones, not milestones (as elegantly described by former Dropbox engineer James Cowling), so that each incremental step would provide sufficient value in case the next part of the project failed for any reason.
A few examples:
- We started off by speeding up testing frameworks in Metaserver, because we knew that an Atlas serving stack in tests might cause a regression in test times.
- We had a constraint to significantly improve memory efficiency and reduce OOM kills when we migrated from Metaserver to Atlas, since we would be able to pack more processes per host and consume less capacity during the migration. We focused on delivering memory efficiency purely to Metaserver instead of tying the improvements to the Atlas rollout.
- We designed a load test to prove that an Atlas MVP would be able to handle Metaserver traffic. We reused the load test to validate Metaserver’s performance on new hardware as part of a different project.
- We backported workflow simplifications as much as feasible to Metaserver. For example, we backported some of the workflow improvements in Atlas to our web workflows in Metaserver.
- Metaserver development workflows are divided into three categories based on the protocol: web, API, and internal gRPC. We focused Atlas on internal gRPC first to de-risk the new serving stack without needing the more risky parts like gRPC-HTTP transcoding. This in turn gave us an opportunity to improve workflows for internal gRPC independent of the remaining risky parts of Atlas.
### Hurdles
With a large migration like this, it’s no surprise that we ran into a lot of challenges. The issues faced could be their own blog post. We’ll summarize a few of the most interesting ones:
- The legacy HTTP serving stack contained quirky, surprising, and hard-to-replicate behavior that had to be ported over to prevent regressions. We powered through with a combination of reading the original source code, reusing legacy library functions where required, relying on various existing integration tests, and designing a key set of tests that compare byte-by-byte outputs of the legacy and new systems to safely migrate (a minimal sketch of this parity-check idea appears after this list).
- While splitting up Metaserver had wins in production, it was infeasible to spin up 200+ Python processes in our integration testing framework. We decided to merge the processes back into a monolith for local development and testing purposes. We also built heavy integration with our Bazel rules, so that the merging happens behind the scenes and developers can reference Atlasservlets as regular services.
- Splitting up Metaserver in production broke many non-obvious assumptions that could not be caught easily in tests. For example, some infrastructure services had hardcoded the identity of Metaserver for access control. To minimize failures, we designed a meticulous and incremental migration plan with a clear understanding of the risks involved at each stage, and monitored metrics closely as we slowly rolled out the new system.
- Engineering workflows in Metaserver had grown organically with the monolith, arriving at a state where engineers had to page in an enormous amount of context to get the simplest work done. In order to ensure that Atlas prioritizes and solves major engineering pain points, we brought on key product developers as partners in the design, then went through several rounds of iteration to set up a roadmap that would definitively solve both product and infrastructural needs.
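Here is the kind of parity check referred to in the first bullet above—a deliberately simplified sketch with made-up function names, not the actual test harness:

```python
def assert_byte_identical(request, legacy_handler, atlas_handler):
    """Replay one request through both stacks and fail on any byte-level difference."""
    legacy_response = legacy_handler(request)
    atlas_response = atlas_handler(request)
    assert legacy_response == atlas_response, (
        f"legacy and Atlas outputs diverge for request {request!r}"
    )
```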
## Status
Atlas is currently serving more than 25% of the previous Metaserver traffic. We have validated the remaining migration in tests. We’re on a clear path to deprecate Metaserver in the near future.
## Conclusion
The single most important takeaway from this multi-year effort is that well-thought-out code composition, early in a project’s lifetime, is essential. Otherwise, technical debt and code complexity compounds very quickly. The dismantling of import cycles and decomposition of Metaserver into feature based directories was probably the most strategically effective part of the project, because it prevented new code from contributing to the problem and also made our code simpler to understand.
By shipping a managed platform, we took a thoughtful approach on how to break up our Metaserver monolith. We learned that monoliths have many benefits (as discussed by Shopify) and blindly splitting up our monolith into services would have increased operational load to our engineering organization.
In our view, developers don’t care about the distinction between monoliths and services, and simply want the lowest-overhead way to deliver end value to customers. So we have very little doubt that a managed platform which removes operational busywork like capacity planning, while providing maximum flexibility like fast releases, is the way forward. We’re excited to see the industry move toward such platforms.
### We’re hiring!
If you’re interested in solving large problems with innovative, unique solutions—at a company where your push schedule is more predictable : ) —please check out our open positions.
### Acknowledgements
Atlas was a result of the work of a large number of Dropboxers and Dropbox alumni, including but certainly not limited to: Agata Cieplik, Aleksey Kurkin, Andrew Deck, Andrew Lawson, David Zbarsky, Dmitry Kopytkov, Jared Hance, Jeremy Johnson, Jialin Xu, Jukka Lehtosalo, Karandeep Johar, Konstantin Belyalov, Ivan Levkivskyi, Lennart Jansson, Phillip Huang, Pranay Sowdaboina, Pranesh Pandurangan, Ruslan Nigmatullin, Taylor McIntyre, and Yi-Shu Tai. | true | true | true | null | 2024-10-12 00:00:00 | 2021-03-04 00:00:00 | article | dropbox.tech | dropbox.tech | null | null |
|
16,890,074 | http://theaijournal.com/2018/04/20/pytorch-for-computer-vision-1-introduction-to-computer-vision-and-deep-learning/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
1,030,598 | http://www.askthevc.com/blog/archives/2009/12/is-an-inside-ro.php?utm_source=feedburner&utm_medium=twitter&utm_campaign=Feed%3A+askthevc+%28Ask+the+VC%29 | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
7,840,643 | http://www.pipredictor.com/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
20,340,825 | https://cryptovest.com/news/zclassic-zcl-gets-delisted-from-bittrex-news-crashes-price-to-record-lows/ | null | null | null | true | true | false | null | 2024-10-12 00:00:00 | null | null | null | null | null | null | null |
40,939,964 | https://teiolass.gitlab.io/posts/zig_stack/ | Alessio Marchetti | null | This blogpost is the result of an hour of funny mess-around that took place a few days ago just before starting the work day. I am no way an expert in this field, this is just couple of things that I found working when thinking about the concept of “why can’t I change the location of the stack in a program?”. I hope this post can encourage some people to experiment with low level code, not for any performance or intricate reasons, but because messing with “how does this work” is fun!
Every program compiled with a reasonable compiler puts some of its data in the so-called stack. It’s fairly easy: when you call a function, the function puts its data on top of the stack; when the function returns, the data is removed so that the stack now exposes the data of the caller function. In the x86 architecture, stack entries (the so-called stack frames) are managed with the `rbp` and `rsp` registers. A more detailed introduction can be found here.
**Note**: all the code in this blogpost is written in Zig. However the same
things can be replicated in every language that allows to write inline assembly in it. The C
language is a very good example of that.
Okay, now let’s get our hands dirty. And GodBolt is a very nice place to cover your hands in the most muddy dirt.
We start with a simple function; it should be the default function when you open the website:
```
export fn square(num: i32) i32 {
return num * num;
}
```
Let’s have a look at the generated assembly now:
```
square:
push rbp
mov rbp, rsp
sub rsp, 16
mov dword ptr [rbp - 4], edi
imul edi, edi
mov dword ptr [rbp - 8], edi
seto al
jo .LBB0_1
jmp .LBB0_2
.LBB0_1:
movabs rdi, offset __anon_1406
mov esi, 16
xor eax, eax
mov edx, eax
movabs rcx, offset .L__unnamed_1
call example.panic
.LBB0_2:
mov eax, dword ptr [rbp - 8]
add rsp, 16
pop rbp
ret
```
I removed some parts because they are not really relevant to what we are going to do.
If it’s your first time reading some assembly code, here are a few ideas on how it works:

- Words followed by a colon, like “`square:`”, are labels. They do nothing, but are useful to say things like “go to label `square`”, or “go to label `LBB0_1`”.
- Some instructions don’t have operands (like `ret`), some do (like `imul`).
- `sub rsp, 16` means “Take the value that is in the `rsp` register, subtract `16`, and then put the result in the `rsp` register again”.

Now let’s dissect the assembly code, starting from the first instructions **(1)**:
```
push rbp
mov rbp, rsp
sub rsp, 16
```
These lines manipulate the stack pointers. The `rbp` register holds a pointer to the start of the current frame, while the `rsp` register holds a pointer to its end. Since the stack grows from higher memory locations to lower memory locations, the value of `rbp` is greater than or equal to that of `rsp`.
The `push`
instruction “decrements the stack pointer and then stores the source operand on the top
of the stack” (see this). So the first line saves `rbp`
on
the top of the stack.
The `mov`
instruction overwrites the value of its first operand with the value of the second
operand. Thus `rbp`
now points to the end of the previous stack frame.
Finally, the third line subtracts 16 from `rsp`, meaning that the new stack frame is going to have a size of 16 bytes.
Let’s continue with a second batch of lines **(2)**:
```
mov dword ptr [rbp - 4], edi
imul edi, edi
mov dword ptr [rbp - 8], edi
```
Here the first operand of `mov` is not a register but a memory location. The operand `dword ptr [rbp - 4]` means “whatever the memory location `(rbp - 4)` contains, of the size of a dword (which is 4 bytes)”. In other words, we are moving the content of the `edi` register into the first 4 bytes of the current stack frame. Usually the `edi` register contains the first argument of the function, which in this case is the integer value `num` in the original Zig code. Let’s notice that `num` is of type `i32`, which has size exactly 4 bytes, a dword!
The `imul` instruction works similarly to the `sub` instruction, performing a signed integer multiplication between the two operands and storing the result in the first operand.
The third instruction saves the result of the multiplication in a second slot on the stack frame.
We do not really care for the next set of three lines **(3)**:
```
seto al
jo .LBB0_1
jmp .LBB0_2
```
It is used to handle the overflow in the multiplication. If the overflow happens, the program will
continue at the label `LBB0_1`
, otherwise the normal flow will proceed at the `LBB0_2`
label. More details on the overflow path are left for the reader to figure out 😃.
Let’s have a look at the non-overflow path then **(4)**:
```
mov eax, dword ptr [rbp - 8]
add rsp, 16
pop rbp
ret
```
The first line moves the result of the multiplication into the `eax`
register, which is commonly
used for the return values. The next two lines revert the changes to the stack pointer and base
stack pointer. The instruction `add rsp, 16`
closes the parenthesis opened at `sub rsp, 16`
and `pop rbp`
does the same thing with `push rbp`
. The stack is thus in the same condition as we started,
masking all the computations that happened since the beginning of the `square`
function.
A last instruction `ret`
gives back control to the caller function (see this).
What’s the point of all that stack juggling in such a simple function? Well… mostly none. In fact
let’s run an optimizing compiler by passing the `-O ReleaseFast`
argument to the Zig compiler.
The generated assembly will be much simpler:
```
square:
mov eax, edi
imul eax, edi
ret
```
With this option we can see we also disabled the overflow check.
Having understood the basic mechanisms of the stack pointers we are almost ready to present our final piece of code.
We start with a dummy function; it’s not really important what it does, I just want it to fill a few stack frames.
```
fn fibonacci(n: u32) u32 {
if (n <= 1) return n;
const a = fibonacci(n - 1);
const b = fibonacci(n - 2);
return a + b;
}
```
Then I want a wrapper function to call `fibonacci`
from another stack. Here it is:
```
fn new_stack(n: u32) u32 {
const stack = std.heap.page_allocator.alloc(u8, 1024) catch unreachable;
const stack_ptr = stack.ptr;
const old_rbp = asm (
"": [ret] "={rbp}" (-> [*]u8),
::
);
const old_rsp = asm (
"": [ret] "={rsp}" (-> [*]u8),
::
);
const frame_data = old_rsp[0..@intFromPtr(old_rbp ) - @intFromPtr(old_rsp)];
@memcpy(stack_ptr[0..frame_data.len], frame_data);
const new_rsp = &stack_ptr[frame_data.len];
asm volatile (
\\ mov %%rbp, %%rax
\\ mov %%rsp, %%rdx
::
[stack_ptr] "{rax}" (stack_ptr),
[new_rsp] "{rdx}" (new_rsp),
: "rax", "rdx"
);
const r = fibonacci(n);
asm volatile (
\\ mov %%rbp, %%rax
\\ mov %%rsp, %%rdx
::
[old_rbp] "{rax}" (old_rbp),
[old_rsp] "{rdx}" (old_rsp),
: "rax", "rdx"
);
return r;
}
```
In the first two lines we create a new stack of size 1024 bytes. For people unfamiliar with Zig, `std.heap.page_allocator.alloc` works more or less like a C `malloc`, but as just a minimal wrapper around the `mmap` syscall. An excellent post on allocators and why you would benefit from something different from `malloc` can be found here.
The next step is to retrieve the two stack pointers. I do it by injecting some assembly in the code.
**Note**: At the time of writing this post (2024-07-11) the asm structure is signaled as highly unstable; take the exact syntax with even more grains of salt than the rest of the article.
An assembly block in Zig (similarly to C) is made of four parts, separated by a colon: the assembly code itself, the output operands, the input operands, and the clobbered registers.

Thus the code
```
const old_rbp = asm (
"": [ret] "={rbp}" (-> [*]u8),
::
);
```
contains only the output part, which moves the `rbp` register into the result of the asm block (thus the `old_rbp` variable), giving it the type of an array of bytes.
After the first two asm blocks, we have saved the values of the two stack pointers in the `new_stack` function’s stack frame.
```
const frame_data = old_rsp[0..@intFromPtr(old_rbp ) - @intFromPtr(old_rsp)];
```
With this line we are declaring an array of bytes in the locations corresponding to the stack frame
of the `new_stack`
function: it’s an array starting from `rsp`
and going up to `rbp`
.
Next we execute a `memcpy` to copy the current stack frame into the freshly allocated memory. It’s useful because we will want to have a working environment when we return from the `fibonacci` function.
We also compute the end of the copied stack frame, and save it in the `new_rsp`
variable.
Then we let magic happen:
```
asm volatile (
\\ mov %%rbp, %%rax
\\ mov %%rsp, %%rdx
::
[stack_ptr] "{rax}" (stack_ptr),
[new_rsp] "{rdx}" (new_rsp),
: "rax", "rdx"
);
```
Lines starting with `\\` are the cumbersome way to write multi-line strings in Zig. In this block we have two input variables: `stack_ptr` is moved into register `rax`, `new_rsp` is moved into register `rdx`. We then move these values into `rbp` and `rsp`.
**Note**: This code can be written much more concisely by putting the variable values directly into their final registers. However, it’s fun to explore the whole correct syntax; it’s a first time for me with Zig.
We can finally call the child function, `fibonacci`, and restore the stack to its original value, as if nothing had happened. This last bit of code should be familiar by now.
Let’s try it!
```
pub fn main() !void {
std.debug.print("fibonacci(12): {}\n", .{new_stack(12)});
}
```
By compiling this code and running it we get the correct result:
```
fibonacci(12): 144
```
Sounds like we got it!
Comments can be submitted at HackerNews: HERE.
As said in the beginning, this post is an encouragement for people to get on a keyboard and try to test funny things, in a light-hearted quest to understand the world where we live. I don’t expect the knowledge I gained in this little experiment to be useful in a near future. What’s important is the friends we made along the way.
*PS. Do not try to compile the final code with ReleaseFast, and if you do, do not try to run the
executable.*
With love,
– Alessio | true | true | true | null | 2024-10-12 00:00:00 | 2023-07-11 00:00:00 | null | null | null | null | null | null |
1,279,817 | http://www.kellycreativetech.com/Blog/entry/Curious_Apple_UX_Choices_on_the_iPad/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
890,517 | http://oscon.blip.tv/file/324976 | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
26,411,431 | https://9to5google.com/2021/03/09/shadow-game-streaming-bankruptcy/ | Shadow PC game streaming service under new ownership after declaring bankruptcy [Updated] | Ben Schoon | Cloud services have expanded pretty massively over the past few years, and game streaming has seen that boom over the past year too with products such as Google Stadia and Amazon Luna drumming up interest in the space. Shadow by Blade takes a different but useful approach to game streaming, but a boom in popularity of the service may be the company’s downfall as it’s forced Blade to file for bankruptcy due to debt.
**Update 4/30:** Blade has been acquired.
In an announcement today, Blade announced that plans for Shadow are back on. The company is under new ownership by Jezby Ventures which will “provide Shadow with a capacity for growth that is both sustainable and profitable.” Financial terms of the deal have not been confirmed.
Our original coverage of Blade declaring bankruptcy from early March follows:
*GamesIndustry.biz* first picked up on the news that Blade had filed for bankruptcy with the company looking for investment from another company to stay afloat.
Shadow was founded in 2015 and launched to the public in 2019, just ahead of the COVID-19 pandemic. The service was originally positioned as a game streaming service, but in a call with the company just a few weeks ago, we were told that the pandemic saw the service used by many for general work. Offloading heavier workloads to Shadow’s powerful hardware and speedy internet connections could genuinely save time without adding much cost. Shadow plans start at $11.99.
Unfortunately, that massive increase in users has resulted in this news. In a statement posted to its blog late last week, Shadow says it has become a “victim of its success” with debt piling up to allow the service to work for the thousands of current users as well as the thousands of users waiting to get in on the platform. Now, Shadow is working on getting new investment dollars to give the service a “fresh start.”
It’s extremely important to note, though, that Shadow is *not* closing its doors. In a FAQ, Shadow explained that current subscribers can still access the service as normal, and those in line for a subscription will still get their turn as well. The only major change as a result of this news is that Shadow Ultra and Shadow Infinite will be held in limbo in Europe and “on hold” in US markets as well.
On the other hand, there’s grim news, too. One potential source of funding, Octave Klaba, the founder of OVHcloud, says he is interested in buying Shadow by Blade, but with the intention to build a competitor to Google Workspace and Office 365 rather than to provide a game-streaming platform.
2CRSi, the server provider that makes Shadow’s streaming tech possible, said in a statement that they have the right to take over the €30.2 million worth of hardware that Shadow uses over the current €3.7 million debt they’re owed. Meanwhile, the tight supply of computer components during the pandemic has other customers interested in the hardware 2CRSi is currently using for Shadow.
Things aren’t looking particularly bright for the service, which is a shame, as the tech behind it works shockingly well for an independent company.
## More on Game streaming:
- Microsoft is reportedly upgrading xCloud streaming w/ Xbox Series X backend, 1080p
- What should Google Stadia have done differently in its first year? [Poll]
- Amazon Luna game streaming expands to Fire TV owners w/o invite as controller price hits $70
|
4,663,058 | http://www.forbes.com/sites/tomiogeron/2012/10/16/uber-closes-yellow-taxi-cab-service-in-new-york-city/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
7,079,907 | http://www.salon.com/2014/01/17/robots_are_stealing_your_job_how_technology_threatens_to_wipe_out_the_middle_class/ | Robots are stealing your job: How technology threatens to wipe out the middle class | Andrew Leonard | A few hours before I interviewed Erik Brynjolfsson and Andrew McAfee, the co-authors of "The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies," the U.S. Department of Labor released a disappointing jobs report. The U.S. economy had only created 70,000 new jobs, the lowest monthly total since 2011. More alarming, the labor-force participation ratio (the share of Americans working or looking for work) had fallen to 62.8 percent -- the lowest mark since 1977.
The news was a depressing, but timely, reminder of why "The Second Machine Age" is an important book. Brynjolfsson is the director of the MIT Center for Digital Business. McAfee is a principal research scientist at the same institution. Their first co-authored book, "Race Against the Machine," made a compelling case that recent advances in technology are placing workers under unprecedented pressure. Automation is destroying jobs, but in contrast to past history, new jobs are not being created in adequate compensation for what's lost, (a point all too well underlined by the latest jobs report). "The Second Machine Age" reexamines this relentless march of the robots, but in the context of a technological landscape in which change is accelerating significantly faster than what could even have been imagined just a few years ago.
The emergence of Big Data, the exponential growth unleashed by decades of Moore's Law (more and more computing power for less and less cost), and the logic of what the authors call "recombinant innovation" -- the mixing and matching of our powerful new tools into a bewildering array of even newer, even more powerful tools -- have replaced hype with a bewildering new reality. We're headed somewhere new at high speed, and with no apparent ability to put on the brakes.
"The Second Machine Age" is fascinating because it is simultaneously hopeful and wary about how technological change is remaking our lives. The authors make a compelling case that the second machine age will deliver vast "bounty" to humankind. The overall size of the economic pie is sure to grow. As consumers, the options available to us will beggar description. But McAfee and Brynjolfsson are also quite clear-eyed about the alarming reality of how that pie is being sliced up and distributed. The numbers can't be ignored: The bounty is growing, but so is what the authors call "the spread" -- growing income inequality, greater concentration of wealth in fewer hands, unprecedented pressure on labor markets.
How did this happen, and what should we do about it? Brynjolfsson and McAfee spoke with Salon by phone to discuss the challenges, and opportunities, of the second machine age.
**You write: "The Second Machine Age will make mockery of all that comes before." What does that mean? What is the second machine age?**
Andy: The first machine age is the Industrial Revolution. The first machine age was essentially about amplifying our muscle power. What Erik and I are saying in "The Second Machine Age" is that we are now in the early stages of a parallel acceleration and amplification of our cognitive abilities. The line you quoted is an allusion to something Ian Morris said in his fantastic book, "Why the West Rules for Now." As Morris was graphing human history, his lines went from basically horizontal to vertical at the outset of the Industrial Revolution. "It made mockery out of everything that had come before": wars and empires and everything else.
Erik: These titanic changes have had a huge effect on human living standards, and in many ways we can learn from how things changed with the first Industrial Revolution and the first machine age. But there are also some key differences. As we augmented and automated muscle power and our ability to use that power to manipulate the world -- not just with the steam engine but with subsequent general-purpose technologies like the internal combustion engine and electricity -- it acted largely as a complement to human decision making. The power wasn't very valuable unless you had someone controlling it and deciding what to do with it.
With the second machine age we are augmenting and automating a lot of cognitive tasks and it is not as clear that humans will be a complement to those kinds of machines. In some cases the machines will be substituting for humans. And that has a different kind of effect on the labor markets as well as on economic output.
**So what, exactly, happened? For decades people have been making extravagant claims for the transformations that were sure to come from networks and digital computing. But for most of that period hype outpaced reality. Now, in the last four or five years, thing really seem to be speeding up. What changed?**
Erik: We need to keep some historical perspective here. The early revolutions took a century or more to play out as each new general purpose technology kicked in. So we still have a lot ahead of us. I think we still have a century or more of complementary innovations to play out. But specifically to your point of what has happened recently -- Andy and I have been astonished at how quickly things have just started happening in the past five or 10 years.
We were teaching a course where we talked about what machines could do well and what humans could do well, and we gave fine motor control and pattern recognition, and specifically driving a car, as examples of things that humans were particularly adept at and we didn't think that machines would do any time soon.
Six years later we were riding down Route 101 in one of Google's driverless cars! So we were wrong in how long that would take.
A big part of that has been the big data revolution, the ability to get around or sidestep some thorny problems that researchers have been working on for decades in language recognition and the perceivable world, not by using the traditional methods, but really just by throwing huge amounts of data at the problem. That's one of the three big forces changing the world right now, along with exponential growth and what we call combinatorial growth.
Andy: There's a great quote from Hemingway about how a man goes broke. "It's gradually ... and then suddenly." And that really characterizes how digital technical progress has unfolded. Like you say, people have been making these big claims about what computers and robots were going to be doing for about half a century. And now we're kind of there. That's what motivated us to write the book. We were like, wait a minute: We didn't think our cars were going to drive ourselves or our phones would talk to us without a person at the other end of the line. The exponential part of the story has to do with the fact that after Moore's Law has been going on long enough, a difference in degree does become a difference in kind. And finally there is this idea of innovation as a combinatorial process. What people are doing is taking all these previous building blocks and adding digital blocks to them, and that's led to this growth spurt that we've been seeing.
The other thing that we've been stressing is that both of us believe we ain't seen nothing yet. What people are going to continue to do with these tools is just going to blow us away.
**But is that something to be excited about or terrified of? Isn't it a staple of science fiction that the kind of multi-recombinant exponential growth that you describe generally leads to Skynet or the Matrix?**
Erik: Well it depends on which kind of science fiction you watch. There are at least two big genres out there. There's "Star Trek" as well. But you're right. We see two camps. There are the dystopians who see really bad outcomes. A lot of economists point to the relative stagnation of median income as a warning sign. But there is also some really good news. Innovation creates wealth. We are at record levels of wealth in this country, we are up to $77 trillion in household wealth. That's a new all-time high. That was also part of what motivated Andy and me to work on this; this confusion we had about how you could have so many good things happening and so many bad things happening simultaneously. After digging into the data and talking to people we concluded that both the good and bad events had a common cause -- this technological digitization of the economy.
The key economic fact is that technology can make the pie bigger, it can make the economy richer, but it doesn't necessarily help everybody, in fact some people, even the majority of people, can be made worse off, even if the pie grows bigger.
That's not what happened for the past 200 years, with the first machine age, but it does seem to be what's happening now. In the second machine age we have a bigger pie but also more concentration of wealth. The median income is now lower than it was in the 1990s.
Andy: There's no law that says technology makes everybody better off. Some people appear to be worse off in their role as people who want to offer their labor to an employer or to the economy. But the point about the pie growing is still a really important point, because in our roles as citizens or consumers, technological progress is fantastic news. Even as the music industry has shrunk we're all listening to more music. Warren Buffett can't buy more Wikipedia than anyone else can buy. Our health outcomes are improving a great deal.
You brought up Skynet. As Erik and I looked around we found ourselves becoming less worried about machines becoming self-aware and rising up against us, but we found ourselves more concerned about how, in our roles as workers, things are really getting challenging for a lot of people.
**That does seem to be the defining question of the moment.**
Erik: But none of these futures that we describe as possible are inevitable. You can have some really bad outcomes, you can have some really good outcomes -- it is all going to depend on how we respond. The technology is only going to accelerate. If we respond correctly, then this is going to be good, good news. But it's not going to happen automatically.
**Whenever we talk about technology and jobs, some economists, and certainly lots of voices on the more conservative side of the political spectrum, will argue that any jobs that get lost in one sector of the economy will be more than compensated for by new job creation in other sectors. But that assumption seems to be breaking down.**
Erik and Andy: Yup!
Andy: There seem to be two things going on. Number one is that those commentators that you mention are looking back at the historical pattern and getting a great deal of confidence from it -- as they should. The historical pattern is one of a succession of pretty smart people saying large-scale technological unemployment is right around the corner, and basically being wrong. The question for us, and Erik and I spend a lot of time on this: Is this time different?
What those commentators get wrong is the fact that while technological progress does grow the pie, there is honestly no economic law that says that growth is going to float all boats the same way equally.
Erik: This is the big question: whether or not the jobs will be there going forward. In the past they always have been, but Andy and I don’t think that’s automatic. Technology has always been destroying jobs, and it’s always been creating jobs, and it’s been roughly a wash for the last 200 years. But starting in the 1990s the employment to population ratio really started plummeting and it’s now fallen off a cliff and not getting back up. We think that it should be the focus of policymakers right now to figure out how to address that, because it is likely that technology is going to have an even bigger impact going forward. So we can’t just ignore it.
**So what can we do?**
Erik: There are three high-level categories: education, entrepreneurship and tax policy. Each of those things needs to be reinvented and rethought. We need to fundamentally reinvent education, just like media and publishing and retailing and manufacturing and finance and just about every other industry has been reinvented. Our industry has been a real laggard. We need to rethink the philosophy of having people sit in rows and learn how to follow instructions; we should be fostering creativity, because the rote kind of skills are exactly what's being automated. On tax policy, instead of punishing people hiring workers we should be rewarding them. Right now we tax labor, and one of the basic laws of economics is that if you tax things you get less of them. That wasn't much of a problem for much of the previous history, but now it is having some really negative effects.
Andy: And the only thing I want to add on to that is that in the short term the robots and the androids and the AIs are not about to take all of our jobs in the next month or the next year. The progress, while astonishing, is just not that fast. So the right thing to do today is what Erik and I call the Econ 101 playbook for stimulating economic growth: education, entrepreneurship, infrastructure, immigration, basic research -- you are not going to get disagreement among well-trained economists on these kinds of things.
**OK -- but the Econ 101 playbook is the standard prescription for how to spur growth regardless of the external circumstances. Good times or bad, it's what most economists would advocate. But you make a very strong case in your book that current "advances in technology are driving an unprecedented reallocation of wealth and income." If we're going through an unprecedented transformation, is the standard playbook going to be enough to cope with it, or are we going to have to explore more drastic alternatives? **
Erik: Economic laws haven't changed. We are in a different part of the technological landscape than we have been in the past, but basic principles like, if you tax something you get less of it, and if you don't, you get more of it, are still true. We don't have to throw out the concept of economics to understand how to respond to it the current situation.
Andy: For example, if we were to upgrade this country's infrastructure so that the civil engineering society would give us a decent grade, instead of a D+, that would set the table for a much better business climate in the United States. Robots are still very bad at repairing bridges, so spending on infrastructure would result in jobs. If we could fix our immigration policy and let in these incredibly skilled people, talented people that want desperately to come to our country, if we could do that, we know they would create jobs. Immigration is a huge vehicle for entrepreneurship in this country. And if we make the climate for starting a new business more favorable we know the job growth would come. Over the next decent chunk of time we're still confident that the Econ 101 playbook is the right answer even as the technology continues to race ahead. If that's anywhere near correct, that gives us time to think about what to do, if we really are heading into this sci-fi future down the road, and we both think that we are.
**But how do you deal with the political consequences of growing inequality -- the broadening "spread." You point out that the people who are benefiting the most from the current technologically driven productivity growth are the owners of capital. You can make a good case that as a result of this vast increase in wealth, the owners of capital have more political power now than they've enjoyed in at least a century. I would argue that they are using that power to stymie the kind of appropriate policy responses that would reduce growing spread and reverse spiraling income inequality. Technological change isn't just screwing with the job market, it's concentrating wealth in the hands of people who are actively resisting any efforts to ameliorate the problem. So the spread has serious political consequences. That seems like a really hard problem to crack.**
Erik: It is. And we are very concerned about it. Our colleague Daron Acemoglu co-authored a book called "Why Nations Fail," where he goes into great depth about how economic concentration can lead to political concentration exactly along the lines that you are describing and how that can lead to a vicious, self-defeating cycle that brings down the whole economy. We certainly don't want to get into that kind of cycle, and that's one of the reason we want to change the conversation to focus more on understanding what seems to be driving these changes in the economy. If we get the diagnosis right, I think we would be in a better position to get the right prescriptions.
Andy: What we don't think is the right prescription is to say let's make sure nobody else ever gets rich off new technology. That's really not the way we want to go ahead.
Erik: But it has to be balanced with an equality of opportunity.
Andy: And the big worry, the reason why we quoted Daron in "Why Nations Fail," is that he makes a very persuasive case that the rising inequality that you describe will eventually lead to a serious decrease in the quality of opportunity. And if that's under threat we need to be concerned about that.
**But how do you break the cycle when the logic of technological change is contributing to that cycle?**
Erik: Well, you can't take the attitude that there is nothing we can do. We do believe that technology is a driver of change, but it doesn't follow that there is nothing we can do to shape the future. We think that education, entrepreneurship and tax policy can all help in increasing both the bounty and decreasing the spread. That's our grand challenge.
|
15,881,970 | https://cacm.acm.org/blogs/blog-cacm/223208-the-real-costs-of-a-computer-science-teacher-are-opportunity-costs-and-those-are-enormous/fulltext | The Real Costs of a Computer Science Teacher Are Opportunity Costs, and Those Are Enormous | Mark Guzdial | Imagine that you’re an undergraduate who excels at science and mathematics. You could go to medical school and become a doctor. Or you could become a teacher. Which would you choose?
If you’re in the United States, most students wouldn’t see these as comparable choices. The average salary for a general practitioner doctor in 2010 was $161,000, and the average salary for a teacher was $45,226. Why would you choose to make a third as much in salary? Even if you care deeply about education and contributing to society, the *opportunity cost* for yourself and your family is enormous. Meanwhile in Finland, the general practitioner makes $68,000 and the teacher makes $37,455. Teachers in Finland are not paid as much as doctors, but Finnish teachers make more than half of what doctors do. In Finland, the opportunity cost of becoming a teacher is not as great as in the U.S.
**The real problem of getting enough computer science teachers is the opportunity cost**. We are struggling with this cost at both the K12 (primary and secondary school) and in higher-education.
I have been exchanging email recently with Michael Marder of UTeach at University of Texas at Austin. UTeach is an innovative and successful program that helps STEM undergraduates become teachers. They don’t get a lot of CS students who want to become CS teachers — CS is among the majors that provide the smallest number of future teachers. A 2011 report in the UK found that CS graduates are less likely to become teachers than other STEM graduates.
CS majors may be just as *interested* in becoming teachers. Why don’t they? My guess is the perceived opportunity cost. That may just be perception—the average starting salary for a certified teacher in Georgia is $38,925, and the average starting salary for a new software developer in the U.S. (not comparing to exorbitant *possible* starting salaries) is $55,000. That’s a big difference, but it’s not the 3x differences of teachers vs doctors.
We have a similar problem at the higher education level. That National Academies just released a new report *Assessing and Responding to the Growth of Computer Science Undergraduate Enrollments* (you can read it for free here, or buy a copy). The report describes the rapidly rising enrollments in CS (also described in the CRA *Generation CS* report) and the efforts to manage them. The problem is basically too many students for too few teachers, and one reason for too few teachers is that computing PhD’s are going into industry instead of academia. Quoting from the report (page 47):
CS faculty hiring has become a significant challenge nationwide. The number of new CIS Ph.D.s has increased by 21 percent from 2009 (1567 Ph.D.s) to 2015 (1903 Ph.D.s), as illustrated in Figure 3.15, while CIS bachelor’s degree production has increased by 74 percent. During that time, the percentage of new Ph.D.s accepting jobs in industry has increased somewhat, from 45 to 57 percent according to the Taulbee survey. Today, academia does not necessarily look attractive to new Ph.D.s: the funding situation is tight and uncertain; the funding expectation of a department may be perceived as unreasonably high; the class sizes are large and not every new hire is prepared to teach large classes and manage TAs effectively; and the balance between building a research program and meeting teaching obligations becomes more challenging. For the majority of new CS Ph.D.s the research environment in industry is currently more attractive.
The opportunity cost here influences the individual graduate’s choice. The report describes new CS Ph.D. graduates looking at industry vs. academia, seeing the challenges of academia, and opting for industry. This has been described as the "eating the seed corn" problem. (Eric Roberts has an origin story for the phrase at his website on the capacity crisis.)
That’s a huge problem, but a similar and less well-documented problem is existing CS faculty taking leaves to go to industry. I don’t know of any measures of this, but it certainly happens a lot—existing CS faculty getting scooped up into industry. Perhaps the best known example was when Uber "gutted" CMU’s robotics lab (see the description here). It happens far more often at the individual level. I know several robotics, AI, machine learning, and HCI researchers who have been hired away on extended leaves into industry. Those are CS faculty who are not on hand to help carry the teaching load for "Generation CS."
Faculty don’t have to leave campus to work with industry. Argo AI, for example, makes a point of funding university-based research, of keeping faculty on campus teaching the growing load of CS majors. Keeping the research on-campus also helps to fund graduate students (who may be future CS Ph.D.s). There’s likely an opportunity cost for Argo AI. By bringing the faculty off campus to Argo full-time, they would likely get more research output. There’s an associated opportunity cost for the faculty. Going on leave and into industry would likely lead to greater pay.
On the other hand, industry that instead hires away the existing faculty pays a different opportunity cost. When the faculty go on leave, universities have fewer faculty to prepare the next generation of software engineers. The biggest cost is on the non-CS major. Here at Georgia Tech and elsewhere, it’s the non-CS majors who are losing the most access to CS classes because of too few teachers. We try hard to make sure that the CS majors get access to classes, but when the classes fill, it’s the non-CS majors who lose out.
That’s a real cost to industry. A recent report from Burning Glass documents the large number of jobs that require CS skills, but not a CS major. When we have too few CS teachers, those non-CS majors suffer the most.
In the long run, which is more productive: having CS faculty working full-time in industry today, or having a steady stream of well-prepared computer science graduates and non-CS majors with computer science skills for the future?
| true | true | true | null | 2024-10-12 00:00:00 | 2017-12-01 00:00:00 | null | null | cacm.acm.org | cacm.acm.org | null | null |
2,578,565 | http://blog.mathgladiator.com/2011/05/engineer-and-artist.html | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
21,986,727 | https://chelseatroy.com/2019/12/30/posd-2-what-causes-insidious-bugs/ | PoSD 2: What causes insidious bugs? | Chelsea | I recently read John Ousterhout‘s book, *Philosophy of Software Design* (PoSD). This blog post includes my commentary on some parts that stuck with me.
In the last post we talked about eschewing complexity, handling special cases, and example choices in software literature.
In this post, we do a little more detective work. The book never directly discusses debugging. I see this pattern in most fundamental software development literature, and I think I understand why. We programmers think of bugs as exceptional, unique detours from some normal workflow in which we’re shipping functional code.
### And in so doing, we fundamentally misunderstand the bug.
Debugging is not the exception to my normal day—nor, I suspect, yours. Rather, I spend *most* of my time reading error messages or combing over code to determine why something isn’t working. Last week, I spent three hours with three other programmers doing *only* this.
We take this practice that occupies the majority of our work time, and we make it harder by treating it as an exception. We don’t have a unified praxis or pedagogy for debugging, and we don’t think we need one.
Instead, we each build our own anecdotal libraries out of the bugs we have individually seen. And I think that’s such a waste.
So I’m going to start writing more about debugging. I’d like to pay homage to the late Grace Hopper for coining the term ‘bug’, so I’ll make all the posts on this topic available under the category ‘debugging‘ and also under the tag ‘entomology,’ which is how biologists refer to the study of bugs.1
I want to explore what *Philosophy of Software Design* can teach us about building a framework for understanding, preventing, and solving bugs.
### Let’s start with Chapter 14 of PoSD: Choosing names.
Programmers *tend* to agree that how we name things is important, and also difficult. But why? *Why* are good names both so critical and so damn hard?
Check out this excerpt from the book. Ousterhout writes:
The most challenging bug I (John Ousterhout) ever fixed came about because of a poor name choice. The file system code used the variable name `block` for two different purposes. In some situations, `block` referred to a physical block number on disk; in other situations, `block` referred to a logical block number within a file. Unfortunately, at one point in the code there was a `block` variable containing a logical block number, but it was accidentally used in a context where a physical block number was needed; as a result, an unrelated block on disk got overwritten with zeroes.

While tracking down the bug, several people, including myself, read over the faulty code, but we never noticed the problem. When we saw the variable `block` used as a physical block number, we **reflexively assumed** (emphasis by me, Chelsea Troy) that it really held a physical block number.
### On the Origin of Bugs
I’ve tracked down a bug or two. I haven’t recorded them all rigorously enough to make a scientific case, but I have noticed something: the longer I spend chasing it, the more likely it becomes that the fix is a one-liner.
By the time I’ve sunk about four hours, it’s almost *guaranteed* to be a one-liner.
I used to think this was some kind of psychological bias, the way it only ever seems to rain when you didn’t pack an umbrella. But now I see why this happens.
**The reason: insidious bugs come from inaccurate assumptions.** This is why I bolded the text “reflexively assumed” in the example above.
But it’s not *just* that insidious bugs come from inaccurate assumptions. It’s deeper than that: **insidiousness as a characteristic of bugs comes from inaccurate assumptions.** We’re looking in the code when the problem is rooted in our understanding. It takes an awfully long time to find something when we’re looking in the wrong place.
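To make that concrete, here is a small sketch of the lesson from the `block` story. This is my own hypothetical Python, not Ousterhout’s actual file system code: the point is that giving the two meanings two names (and two types) drags the hidden assumption out where a reader, or a type checker like mypy, can notice when it is violated.

```python
from typing import NewType

# Two distinct names for the two meanings that were both called `block`.
PhysicalBlock = NewType("PhysicalBlock", int)
LogicalBlock = NewType("LogicalBlock", int)

def zero_block_on_disk(disk: dict, block: PhysicalBlock) -> None:
    """Overwrite a physical disk block with zeroes."""
    disk[block] = bytes(512)

def free_file_block(disk: dict, file_map: dict, block: LogicalBlock) -> None:
    # The lookup below is exactly the step the buggy code silently skipped.
    # With one shared name, passing `block` straight through looks fine;
    # with two names, the missing translation is conspicuous, and mypy
    # will reject a LogicalBlock where a PhysicalBlock is expected.
    physical = PhysicalBlock(file_map[block])
    zero_block_on_disk(disk, physical)
```

The code itself is unremarkable; what matters is that the names now carry the assumption instead of hiding it.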
When we name things, we’re not just encoding our understanding; we’re *creating* our understanding. This is true to a freaky degree. I talked about that more in this other post. How we name things shapes how we understand them. And *also*, how we name things is *influenced* by how we understand them.
It’s hard for us to detect when our assumptions about a system are wrong because it’s hard for us to detect when we’re making assumptions at all. Assumptions, by definition, describe things we’re taking for granted. They include all the details into which we are *not putting thought*. I talked more about assumption detection in this piece on refactoring. I believe that improving our ability to detect and question our assumptions plays a critical role in solving existing insidious bugs and preventing future ones.
### Encoding assumptions doesn’t just happen in code, either.
Chapters 12 and 13 of PoSD discuss code comments. Programmers sometimes advise to *not* write comments in favor of focusing on the legibility of the code itself. In my experience, this works right up until we relinquish full control over the APIs we use in our project—which happens, by the way, the moment we include even a single library dependency. *Why?* Because another set of programmers, with a different perspective from ours, frequently makes different assumptions than we do.
If I could go back in time and inject the lesson of this paragraph into every code base I’ve ever seen, I’d have 30% fewer gray hairs:
Precision is most useful when commenting variable declarations such as class instance variables, method arguments, and return values. The name and type in a variable declaration are typically not very precise. Comments can fill in missing details such as:
- What are the units for this variable?
- Are the boundary conditions inclusive or exclusive?
- If a null value is permitted, what does it imply?
- If a variable refers to a resource that must eventually be freed or closed, who is responsible for freeing it or closing it?
- Are there certain properties that are always true for the variable (invariants), such as “this list always contains at least one entry”?
These information points are all examples of **high risk assumptions**: things we know to be one way, that some other programmer is going to *think* they know to be some other way, or isn’t going to think about at all. These high risk assumptions make great hiding places for bugs.
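Here is a small, invented sketch of what those clarifying comments can look like at the point of declaration. The class and fields are hypothetical (they are not from the book); each comment answers one of the questions above before another programmer gets the chance to assume the wrong answer.

```python
class TransferJob:
    def __init__(self, chunks: list, timeout_seconds: float, dest_path=None):
        # chunks: always contains at least one entry (invariant); an empty
        # transfer is represented by a single empty bytes object, never [].
        self.chunks = chunks

        # timeout_seconds: wall-clock seconds, not milliseconds; the boundary
        # is inclusive, so a transfer finishing at exactly this value succeeds.
        self.timeout_seconds = timeout_seconds

        # dest_path: None is permitted and means "keep the result in memory",
        # not "skip the transfer".
        self.dest_path = dest_path

        # connection: opened lazily by start(); the caller is responsible
        # for closing it when the job is done.
        self.connection = None
```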
**So how, exactly, do we find and prevent these insidious little bugs?** *Philosophy of Software Design* doesn’t directly say. However, it provides a robust framework for solving a *different* kind of problem that we can repurpose for finding insidious bugs. We’ll talk about that in the next post (I don’t mean to be a tease, but I try to keep these posts under 1200 words to respect your time and energy, and we’re at 1080 right now :).
1. Anecdotally, my mother was an entomologist (the normal biology kind) and my father an engineer, so this series seems like one I’m fated by heritage to write.
### If you liked this piece, you might also like:
The Leveling Up Series (a perpetual favorite for gearheads like yourself)
The Books Category, to which I’m slowly adding all the blog posts where I reflected on a book
The History of API Design Series (prepare to have your sacred cows threatened)
I’m fated by heritage to write.
made me laugh
Hi Chelsea! I noticed that the link in “In the last post we …” returns a 404. I think the date in the URL is incorrect, and can be updated to https://chelseatroy.com/2019/12/17/philosophy-of-software-design-part-1/
Updated! Thank you!!
Hi! Just noticed the link is still broken. It’s pointing to https://chelseatroy.com/2019/12/14/philosophy-of-software-design-part-1 instead of https://chelseatroy.com/2019/12/17/philosophy-of-software-design-part-1/
Ope! Fixed now 🙂 Thank you! | true | true | true | I recently read John Ousterhout’s book, Philosophy of Software Design (PoSD). This blog post includes my commentary on some parts that stuck with me. In the last post we talked about eschewin… | 2024-10-12 00:00:00 | 2019-12-30 00:00:00 | article | chelseatroy.com | Chelsea Troy | null | null |
|
9,225,655 | http://www.sciencedaily.com/releases/2015/03/150317195937.htm | Longer duration of breastfeeding linked with higher adult IQ and earning ability | null | # Longer duration of breastfeeding linked with higher adult IQ and earning ability
- Date: March 17, 2015
- Source: The Lancet
- Summary: Longer duration of breastfeeding is linked with increased intelligence in adulthood, longer schooling, and higher adult earnings, a study following a group of almost 3,500 newborns for 30 years.
Longer duration of breastfeeding is linked with increased intelligence in adulthood, longer schooling, and higher adult earnings, a study following a group of almost 3500 newborns for 30 years published in *The Lancet Global Health* journal has found.
"The effect of breastfeeding on brain development and child intelligence is well established, but whether these effects persist into adulthood is less clear," explains lead author Dr Bernardo Lessa Horta from the Federal University of Pelotas in Brazil.
"Our study provides the first evidence that prolonged breastfeeding not only increases intelligence until at least the age of 30 years but also has an impact both at an individual and societal level by improving educational attainment and earning ability. What is unique about this study is the fact that, in the population we studied, breastfeeding was not more common among highly educated, high-income women, but was evenly distributed by social class. Previous studies from developed countries have been criticized for failing to disentangle the effect of breastfeeding from that of socioeconomic advantage, but our work addresses this issue for the first time."
Horta and colleagues analysed data from a prospective study of nearly 6000 infants born in Pelotas, Brazil in 1982. Information on breastfeeding was collected in early childhood. Participants were given an IQ test (Wechsler Adult Intelligence Scale, 3rd version) at the average age of 30 years old and information on educational achievement and income was also collected.
Information on IQ and breastfeeding was available for just over half (3493) participants. The researchers divided these subjects into five groups based on the length of time they were breastfed as infants, controlling for 10 social and biological variables that might contribute to the IQ increase including family income at birth, parental schooling, genomic ancestry, maternal smoking during pregnancy, maternal age, birthweight, and delivery type.
While the study showed increased adult intelligence, longer schooling, and higher adult earnings at all duration levels of breastfeeding, the longer a child was breastfed for (up to 12 months), the greater the magnitude of the benefits. For example, an infant who had been breastfed for at least a year gained a full four IQ points (about a third of a standard deviation above the average), had 0.9 years more schooling (about a quarter of a standard deviation above the average), and a higher income of 341 reais per month (equivalent to about one third of the average income level) at the age of 30 years, compared to those breastfed for less than one month.
According to Dr Horta, "The likely mechanism underlying the beneficial effects of breast milk on intelligence is the presence of long-chain saturated fatty acids (DHAs) found in breast milk, which are essential for brain development. Our finding that predominant breastfeeding is positively related to IQ in adulthood also suggests that the amount of milk consumed plays a role."
Writing in a linked Comment, Dr Erik Mortensen from the University of Copenhagen in Denmark says, "With age, the effects of early developmental factors might either be diluted, because of the effects of later environmental factors, or be enhanced, because cognitive ability affects educational attainment and occupational achievements...By contrast, Victora and colleagues' study suggests that the effects of breastfeeding on cognitive development persist into adulthood, and this has important public health implications...However, these findings need to be corroborated by future studies designed to focus on long-term effects and important life outcomes associated with breastfeeding."
**Story Source:**
Materials provided by **The Lancet**. *Note: Content may be edited for style and length.*
**Journal Reference**:
- Cesar G Victora, Bernardo Lessa Horta, Christian Loret de Mola, Luciana Quevedo, Ricardo Tavares Pinheiro, Denise P Gigante, Helen Gonçalves, Fernando C Barros.
**Association between breastfeeding and intelligence, educational attainment, and income at 30 years of age: a prospective birth cohort study from Brazil**.*The Lancet Global Health*, 2015; 3 (4): e199 DOI: 10.1016/S2214-109X(15)70002-1
*ScienceDaily*. Retrieved October 12, 2024 from www.sciencedaily.com | true | true | true | Longer duration of breastfeeding is linked with increased intelligence in adulthood, longer schooling, and higher adult earnings, a study following a group of almost 3,500 newborns for 30 years. | 2024-10-12 00:00:00 | 2024-10-12 00:00:00 | article | sciencedaily.com | ScienceDaily | null | null |
|
29,840,592 | https://www.indiehackers.com/post/i-started-a-community-where-you-get-kicked-for-inactivity-8c076721d0 | I started a community where you get kicked for inactivity | Rosie Sherry | As a community builder finding people who will really participate in a community is hard.
I'm a member of many free and paid communities. I don't participate in most of them, even if I have good intentions to do so.
Building a community in the pandemic is hard. People are distracted. And have huge choice. People love the idea of participating but rarely commit.
Community builders feel forced to keep trying to pull people in. Find new interest. Yet it is almost like a vicious downwards spiral — the more inactive members you bring in ends up making it a worse community experience for those who truly want to get value out of it.
The more members there are, the harder it is to actually build relationships. Whilst we love content and benefit from sharing ideas, deep down many of us want to connect a deeper level.
We've never had access to so many communities, we are lonelier than ever and the value of most communities are just not there anymore.
The community has purpose for me, I want to connect on a deeper level with indie founders. Once an indie hacker always an indie hacker!
Here is how it happened.
Within this post I decided to launch a pre-order for the community. I emailed it out to my newsletter and I tweeted about it too.
Within about a week I got my first 20 pre-order subscribers.
🤣 Launch price was $9 — lifetime membership (as long as you participate)
⬆️ I raised the price to $19 after the first 20 orders
⬆️ After setting up the Discord Server and inviting people in I increased the price to $49
🤔 I think I will probably continue nudging the price up as more people join
💰 I realised it was easy to set up affiliates with Gumroad, so paid members can make money from it too. There's no pressure, but as indie hackers, this is fun! 🥳 We had our first affiliate sale yesterday.
❌ I'm not interested in monthly memberships, lifetime makes it so much less stressful for me to manage and people churn so much more with monthly/yearly subscriptions. My goal is to keep people staying.
🤑 I've made $325.50 so far from 27 sales
Bloody hell.
6 days after launch - 27 people have signed up, 23 have so far participated.
Engagement can often be a vanity metric, but in situations like this, it's such a nice experience (for me and the members) to have people showing up. Wanting to participate. Connecting. Getting to know each other. And putting in requests on things we can do.
I am showing up to the community and landing into active conversations. The pressure of instigating conversations is not there.
And I'm loving it. 🥳
Of course, it's early days. BUT people are down for the experiment. Everyone who has signed up understands the deal.
We are there together. And it just feels so good.
But it's also part of the fun, it's almost gamifying it.
The fact that I don't want people to leave will force me to reach out, connect, support and pull them in.
Discord has a feature to cull people who don't participate in a 30 day period, but I'll actually be using Savannah to track activity, it has much more data and context of a member's profile.
No email list 🥳
Discord
Gumroad
Orbit
My personal Twitter.
🙏🏽 Thanks for reading, I hope it's helpful and happy to answer any questions.
As member of this one, I'm enjoying the experiment so far! Feels great to have an incentive to show up. Not in a "I will be punished if I don't" way thought. More like, "People actually want me to show up," way. Helps me build community muscle 💪
Yes, and I really want you to show up. ❤️
Happily!
Hey Rosie. I like this idea. When you put in thousands of hours of time to create something you love for your community, you want people to care as well. I'm the same as you, great intentions when I join networks, but I rarely myself engage with them.... I'm interested in 'Orbit', I haven't used it but just checked it out. Interesting. Mark
Yes, and when there is so much choice, it is hard to commit.
Having other members who don't participate can actually be a negative experience.
I'm also glad to be part of this "experiment". As I said to Rosie yesterday, it's a win-win-win deal.
The community is bound to remain at a reasonable size and alive for quite a while, which is a win for community members. The entry price is a win for the community host and doubles as a useful noise filter for the community. Loss aversion also plays a useful role.
And since we can even become affiliates and help keep the community alive and well, it's one more win.
I like it a lot ❤️
Happy to be a participant along with the fine people in the comments!
❤️
🙏🏽
Brilliant Idea! Thanks so much for sharing!
Haha I saw you mention this over on Twitter as well, Rosie! Great write-up here.
Was the idea behind launching with paid to up the stakes for people who did sign up? It's easy to join a community and peter out/lurk if you can just get in...but if you've paid for it, then it's a community you've invested in financially. And then why not make a similar time and engagement investment as well, right
Yes, I would say so. Paid on it's own is not enough. I don't think 30 days engagement rule is enough on it's own either.
Combining the two, in theory, will get people who truly want to be there and who want to connect on a deeper level.
In theory!
This is awesome. I was thinking about having an activity policy where in order to stay a member, people had to participate on some level. I was just thinking about it, however, you actually did! Awesome, & Inspiring.
This sounds great as a gamification mechanism.
I'm wondering about the purpose of the community though. I'd join it for accountability.
An incentive for myself would be other people building things who can give me feedback on what I'm building. Building in public comes with a great disadvantage for most of us - in many cases you have no audience to share to. Plus, with everybody doing it, so even as an indiehacker you can barely keep up with every product. I'd pay for a dedicated group where I can share updates and get feedback.
The purpose is develop intentional and deeper relationships with each other, with a focus on 'indie founders', 'indie creators', 'indie hackers', whatever name you want to call it.
We've started accountability threads, we're giving feedback, we're operating with increased transparency, we're sharing things that many of us don't feel comfortable sharing in a truly open environment.
I want it to be a place where people get heard and truly make supportive friends.
Interesting to read about, but I don't get it myself ;). The last thing I want to worry about is that I haven't show up in some online corner of the internet.
Then it's not for you (right now), and that's ok.
I often think of how Masterminds don't work if people don't show up and particpate. This is the same kind of idea — people who sign up agree to participate intentionally. It makes it a safer and deeper community.
Not everyone wants that, and I get that. Communities shouldn't necessarily be designed for everyone.
Sure. How is it safer, though?
Anything 'behind a wall' and with a person's contact if is naturally going to be a safer environment than one that is open to the public.
Very interesting... In fact, I was thinking about applying that to my Fit Telegram group (recently created) where engagement is very important.
It makes no sense increasing the number of people there constantly if they don't participate actively.
My question is:
If you get X $ lifetime, how do you pay for the tools when the community grows? You know, the price of most of the tools (email marketing or hosting, for example) depends on the number of users you have in, the number of emails you send, the number of guests or the storage space.
I have my doubts about lifetime communities in that sense...
In this instance, the costs won't go up. Discord is free, I have no plans for an email list, or expanding things.
This could change.
I think this can be addressed by increasing the price and also people naturally churning. Not everyone will stay around forever and that's ok.
Also, an understimated aspect of community building is that new opportunities always arise from within the community. I bet I'll find more ways to create revenue in time.
I do this with Rosieland. I have a lifetime membership, but I also sell individual products too, these come as a result of serving the community.
Yeah, I agree with that.
Thanks, Rosie and Happy New Year.
I really like your mentality towards the lifetime membership position. It can get really tiring keeping with with all the subscriptions of todays day and age. The nice simple "pay once and participate" sounds so nice!
If people do not participate and get booted, I'm guessing they have to pay the full one time fee to get back in? If so, then I think it is an extra little motivation :)
Honestly, when it comes to community and all the various places we need up connecting, subscriptions are a pain.
I’m opting for one off fees for a whole bunch of reasons these days.
And yes, once booted they can come back, but needing to repay the fee.
I totally agree! And I love the idea!
Cool project! Is there going to be something like a vacation mode if somebody has to sign off for an extended period of time? :)
Dunno! Is one month not plenty of time for a vacation? 🤣
I’m definitely thinking about a farewell channel, where members can send their thanks to other members who choose to leave.
Excited to see how this community works out over time. Has faint echos of App.net's attempt to build a paid Twitter alternative, where I almost think people were more likely to contribute thanks to ... sunk cost fallacy, but in a good way?
At any rate, this felt like exactly the community I needed right now, so excited to be an early member!
Woah, this is a neat share Rosie.
I'm surprised that it's working, but I hope it works out for you. There is a forum that I want to participate in, but they have a rule like that. I don't have enough time to post there every month, so my account was deleted many years ago. I never bothered to sign up again because I won't be able to post every 30 days and the hassle of getting my account deleted is too much of a waste of time for me, even though it's a free site. So instead of getting huge bursts of activity from me when I have free time, they get zero posts.
I run forums, and it's very common for people to join and participate for a while, disappear for years, and then come back as life circumstances change. But maybe in your case, people will pay again when they return.
It's definitely not for every community, I think like people, communities can be equally diverse and exist in many wonderful ways.
Yeah, definitely. Unusual ideas can also be good for marketing, and if curiosity leads to an early burst of activity, it can be beneficial even if the policy eventually changes later in order to stop losing contact with new friends who temporarily get busy with other things in life.
I agree. I have just opened a digital marketing agency with a friend (along with thousands of others during this time, but hey you gotta go with your skill set!!) but its early days and I am not yet ready to make any large contributions.
Later I will contribute when I have been around enough to "write my story" and have earned my $$'s/stripes but will I then have been kicked off?
So yes kicking people off is a good thing as it makes an active community with those that are left, but then no because you will lose people who are early on their journey, and might otherwise be more active later on.
Also, is this community not for people exactly like me, ie have just started something and need to be involved in a community of people who have been there/done that?
I wonder the higher the price, the more kickback you'll get from the ones that were kicked out. Kinda like "dog ate my homework" or "I was so carried away with work." folks saying this is unfair. I know companies do this (not many) and I send nothing but kudos to them for doing so. I think this is a great approach for a community. Can't wait to see the growth.
I think it will be ok as the rules are clear up front. Everyone who has joined so far really seems to get it.
Saying that, I rarely so no when people ask for a discount. 😅
What do you think should be the member limit for a community like this?
Great question.
I don't know.
It will be interesting to see what it looks like at 150 members, the Dunbar number.
I think I'm totally ok stopping sign ups at times, to accommodate people joining and finding their place.
This comment was deleted 3 years ago. | true | true | true | As a community builder finding people who will really participate in a community is hard. I'm a member of many free and paid communities. I don't partic... | 2024-10-12 00:00:00 | 2022-01-05 00:00:00 | https://storage.googleapis.com/indie-hackers.appspot.com/shareable-images/posts/8c076721d0 | article | indiehackers.com | Indie Hackers | null | null |
7,421,297 | http://www.warp.ly/blog/mobile-tuesdays-lizzy-klein-talks-mobile-strategy-real-clv-and-engagement | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
861,980 | http://openbeta.extendedbeta.com/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
38,094,225 | https://nextword.substack.com/p/the-rise-of-ai-compliance-officer | The Rise of AI Compliance Officer | John Hwang | On October 30th, 2023, White House announced an executive order on Trustworthy AI which has interesting implications for AI adoption for companies.
I also predict this will officially birth a new job function at large enterprises - AI Compliance Officer - and formalize the creation of private-sector compliance frameworks to govern AI usage.
In this pos… | true | true | true | Regulation and oversight is coming, are companies ready? | 2024-10-12 00:00:00 | 2023-10-31 00:00:00 | article | substack.com | Enterprise AI Trends | null | null |
|
33,422,876 | https://www.theverge.com/2022/11/1/23434305/adobe-pantone-subscription-announcement-photoshop-illustrator | You now have to pay to use Pantone colors in Adobe products | Jess Weatherbed | Last week, Adobe removed support for free Pantone colors across its Photoshop, InDesign, and Illustrator Creative Cloud applications. PSD files that contained Pantone spot colors now display unwanted black in their place, forcing creatives who need access to the industry-standard color books to pay for a plugin subscription (via *Kotaku*).
“As we had shared in June, Pantone decided to change its business model. Some of the Pantone Color Books that are pre-loaded in Adobe Photoshop, Illustrator, and InDesign were phased-out from future software updates in August 2022,” said Ashley Still, senior vice president of digital media marketing, strategy, and global partnerships at Adobe in a statement to *The Verge*. “To access the complete set of Pantone Color Books, Pantone now requires customers to purchase a premium license through Pantone Connect and install a plug-in using Adobe Exchange.”
Pantone has claimed that its color libraries inside of Adobe have not been properly maintained for several years, leading to the Pantone colors being inaccurate, with hundreds missing from Adobe applications altogether. A dedicated (and seemingly outdated) Pantone FAQ says, “Pantone and Adobe have together decided to remove the outdated libraries and jointly focus on an improved in-app experience that better serves our users.”
Creatives who understandably want to continue using the industry-standard color system are expected to pay a $15 monthly / $90 annual subscription for a Pantone license via the Adobe Pantone Connect plugin.
A Pantone license for Adobe will cost creatives $15 a month
Prior to the introduction of the Pantone Color Matching System, companies used individual color guides which gave inconsistent results as each ink company could interpret “red” as slightly differing shades. Even CMYK, another industry standard color matching system used in at-home printers, is seen as inferior as the required combination of cyan, magenta, yellow, and black can lead to slight variations. Pantone doesn’t require a combination of colors, making it more reliable for designers working on large projects.
While the Pantone FAQ states that “existing Creative Cloud files and documents containing Pantone Color references will keep those color identities and information,” Photoshop users are nevertheless reporting that their old PSD files utilizing Pantone colors now show those colors as black. “This file has Pantone colors that have been removed and replaced with black due to changes in Pantone’s licensing with Adobe,” reads a message on affected projects. Other Photoshop users have reported that downloading the Pantone Connect extension isn’t guaranteed to fix the issue. We’ve reached out to Pantone to clarify its position and will provide an update should we hear back.
There are several workarounds available to try and restore the lost Pantone color swatches. These include disabling Adobe application updates if you still have access to Pantone color books, or simply copying the metadata values for your required Pantone range. | true | true | true | Pretty colors come with a premium price. | 2024-10-12 00:00:00 | 2022-11-01 00:00:00 | article | theverge.com | The Verge | null | null |
|
16,963,055 | https://www.engadget.com/2018/04/29/huawei-backup-os-for-smartphones/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
2,096,475 | http://googleblog.blogspot.com/2011/01/world-ipv6-day-firing-up-engines-on-new.html | World IPv6 Day: firing up the engines on the new Internet protocol | Google |
# World IPv6 Day: firing up the engines on the new Internet protocol
January 12, 2011
Today, Google and major websites are joining the Internet Society to announce World IPv6 Day, a 24-hour test flight of the next generation Internet protocol on June 8, 2011.

The story begins in 1977, when Vint Cerf, the program manager for the ARPA Internet research project (and now one of the driving forces behind Google’s IPv6 efforts), chose a 32-bit address format for an experiment in packet network interconnection. Who would have thought that the experiment would evolve into today’s Internet: a global network connecting billions of people, some using handheld devices faster than the mainframes of the 1970s?

For more than 30 years, 32-bit addresses have served us well, but now the Internet is running out of space. IPv6 is the only long-term solution, but as the chart below shows, it has not yet been widely deployed. With IPv4 addresses expected to run out in 2011, only 0.2% of Internet users have native IPv6 connectivity:

*Chart: IPv6 connectivity among Google users since September 2008*
Google has been supporting IPv6 since early 2008, when we first began offering search over IPv6. Since then we’ve brought IPv6 support to YouTube and have been helping ISPs enable Google over IPv6 by default for their users.
On World IPv6 Day, we’ll be taking the next big step. Together with major web companies such as Facebook and Yahoo!, we will enable IPv6 on our main websites for 24 hours. This is a crucial phase in the transition, because while IPv6 is widely deployed in many networks, it’s never been used at such a large scale before. We hope that by working together with a common focus, we can help the industry prepare for the new protocol, find and resolve any unexpected issues, and pave the way for global deployment.
The good news is that Internet users don’t need to do anything special to prepare for World IPv6 Day. Our current measurements suggest that the vast majority (99.95%) of users will be unaffected. However, in rare cases, users may experience connectivity problems, often due to misconfigured or misbehaving home network devices. Over the coming months we will be working with application developers, operating system vendors and network device manufacturers to further minimize the impact and provide testing tools and advice for users.
We hope that many other websites will join us in participating in World IPv6 Day. Changing the language spoken by every device on the Internet is a large task, but it’s essential to ensure the future of an open and robust Internet for decades to come.
Posted by Lorenzo Colitti, Network Engineer
. | true | true | true | Insights from Googlers into our products, technology, and the Google culture | 2024-10-12 00:00:00 | 2011-01-12 00:00:00 | https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh1EtzLszyjNRCni_SlQyJeyVotRzviyD3XkxlsEEEtouIhigkny6EAJyXM2ratPnK-7HYPcN0fH0Ihv8LJ4eihSd7XwLze40QomT_A64cr8Wqrw193H-UjuYai7pY-Xv5FnZiv/w1200-h630-p-k-no-nu/ | article | blogspot.com | Official Google Blog | null | null |
24,798,694 | https://twitter.com/telegram_es/status/1316610054624342017 | x.com | null | null | true | true | false | null | 2024-10-12 00:00:00 | null | null | null | null | X (formerly Twitter) | null | null |
18,672,970 | https://wccftech.com/intel-unveils-foveros-a-brand-new-way-to-3d-stack-chips-with-an-active-interposer/ | Intel Unveils ‘Foveros’, A Brand New Way To 3D Stack Chips With An Active Interposer | Usman Pirzada | Intel unveiled a lot of details at the Architecture Day held yesterday and one of these juicy details include Foveros – Intel’s new approach to heterogeneous system integration. It’s the spiritual successor to Intel’s EMIB and features an active interposer to “mix and match” pretty much any IP together. The key differentiator here is the use of an active interposer as opposed to a passive interposer.
## Intel unveils Foveros 3D die stacking technology
Foveros paves the way for devices and systems combining high-performance, high-density and low-power silicon process technologies. Foveros is expected to extend die stacking beyond traditional passive interposers and stacked memory to high-performance logic, such as CPU, graphics and AI processors for the first time.
The technology provides tremendous flexibility as designers seek to “mix and match” technology IP blocks with various memory and I/O elements in new device form factors. It will allow products to be broken up into smaller “chiplets,” where I/O, SRAM and power delivery circuits can be fabricated in a base die and high-performance logic chiplets are stacked on top.
Intel expects to launch a range of products using Foveros beginning in the second half of 2019. The first Foveros product will combine a high-performance 10nm compute-stacked chiplet with a low-power 22FFL base die. It will enable the combination of world-class performance and power efficiency in a small form factor.
Foveros is the next leap forward following Intel’s breakthrough Embedded Multi-die Interconnect Bridge (EMIB) 2D packaging technology, introduced in 2018.
So why the need for 3D stacking? Well, as Intel demonstrated in their presentation, no single transistor node works across all types of applications. For an iGPU you need minimal leakage with low power and low cost; with a dGPU you need a mix of performance, power, and cost; while for desktop CPUs you need high performance (at a high cost) and higher power consumption is tolerated. The only way to get an optimal design architecture that caters to all these facets is to connect everything together on an interposer.

This is where Foveros comes in: it is a very high density interconnect that enables the company to realize their vision of connecting chiplets in a package with the seamlessness of a monolithic die.

The layout of the Foveros design is as follows: the compute chip and other IP blocks are placed using FTF micro-bumps on the active interposer, through which TSVs (through-silicon vias) are drilled to connect with solder bumps and eventually the final package. It looks pretty similar in design to the heterogeneous design featured by AMD, and comparisons are going to be completely inevitable.

Unlike AMD designs, however, the interposer in question here is actually a base compute die and will not be passive. This will allow unparalleled control over leakage and performance. Intel is also touting the world’s first “hybrid x86” architecture through the use of Foveros in its 2019 FPGA product.
Intel also mentioned how the future would require a mix of scalar, vector, matric and spatial architectures deployed in CPU, GPU, accelerator and GPGA sockets and Foveros is one of the first steps to take towards realizing that future. The company has also reiterated the lego-like, mix and match philosophy to building computer chips, which is something it has initially shied away from doing (sticking primarily to Monolithic dies) so it is incredibly exciting to see where this will lead us. | true | true | true | Intel unveiled a lot of details at the Architecture Day held yesterday and one of these juicy details include Foveros – Intel’s new approach to heterogeneous system integration. It’s the spiritual successor to Intel’s EMIB and features an active interposer to “mix and match” pretty much any IP together. The key differentiator here is the use of an active interposer as opposed to a passive interposer. Intel unveils Foveros 3D die stacking technology Foveros paves the way for devices and systems combining high-performance, high-density and low-power silicon process technologies. Foveros is expected to extend die stacking beyond traditional passive interposers […] | 2024-10-12 00:00:00 | 2018-12-12 00:00:00 | article | wccftech.com | Wccftech | null | null |
|
5,734,683 | http://stedolan.github.io/jq/ | Redirecting to jqlang.github.io | null | null | true | true | false | null | 2024-10-12 00:00:00 | null | null | null | github.io | jqlang.github.io | null | null |
12,186,178 | http://www.nytimes.com/2016/07/30/upshot/how-scalpers-make-their-millions-with-hamilton.html?ref=technology&_r=0 | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
5,614,179 | http://krebsonsecurity.com/2013/04/dutchman-arrested-in-spamhaus-ddos/ | Dutchman Arrested in Spamhaus DDoS | null | A 35-year-old Dutchman thought to be responsible for launching what’s been called “the largest publicly announced online attack in the history of the Internet” was arrested in Barcelona on Thursday by Spanish authorities. The man, identified by Dutch prosecutors only as “SK,” was being held after a European warrant was issued for his arrest in connection with a series of massive online attacks last month against **Spamhaus**, an anti-spam organization.
According to a press release issued by the **Public Prosecutor Service** in The Netherlands, the **National Prosecutor in Barcelona** ordered SK’s arrest and the seizure of computers and mobile phones from the accused’s residence there. The arrest is being billed as a collaboration of a unit called Eurojust, the European Union’s Judicial Cooperation Unit.
The dispute began late last year, when Spamhaus added to its blacklist several Internet address ranges in the Netherlands. Those addresses belong to a Dutch company called “Cyberbunker,” so named because the organization is housed in a five-story NATO bunker, and has advertised its services as a bulletproof hosting provider.
“A year ago, we started seeing pharma and botnet controllers at Cyberbunker’s address ranges, so we started to list them,” said a Spamhaus member who asked to remain anonymous. “We got a rude reply back, and he made claims about being his own independent country in the Republic of Cyberbunker, and said he was not bound by any laws and whatnot. He also would sign his emails ‘Prince of Cyberbunker Republic.’ On Facebook, he even claimed that he had diplomatic immunity.”
Spamhaus took its complaint to the upstream Internet providers that connected Cyberbunker to the larger Internet. According to Spamhaus, those providers one by one severed their connections with Cyberbunker’s Internet addresses. Just hours after the last ISP dropped Cyberbunker, Spamhaus found itself the target of an enormous amount of attack traffic designed to knock its operations offline.
It is not clear who SK is, but according to multiple sources, the man identified as SK is likely one **Sven Olaf Kamphuis**. The attack on Spamhaus was the subject of a New York Times article on Mar. 26, 2013, which quoted Mr. Kamphuis, as a representative of Cyberbunker, saying, “We are aware that this is one of the largest DDoS attacks the world had publicly seen.” Kamphuis also reportedly told The Times that Cyberbunker was retaliating against Spamhaus for “abusing their influence.”
Also, a Facebook profile by that same name identifies its account holder as living in Barcelona and a native of Amsterdam, as well as affiliated with “Republic Cyberbunker.”
Mr. Kamphuis could not be immediately reached for comment.
There will be many countries interested in the way this potential threat DOS’ed the Spamhaus organization. I have followed this event pretty closely.
The Spamhaus organization tries to remain unbiased about its findings and overall, they do a great job. They DO allow some time for cleanup of an intrusion or network issue before they add an IP, person or otherwise on the list.
From what I read, the feedback from Spamhaus appeared as if they were affected by the DoS attack, but not totally knocked down or TKO’ed.
The countries will look over his equipment in earnest, since he was able to use DoS techniques that exceeded the stereotypical limits seen during other DoS attacks in the past.
There are many countries that consider an attack via the internet that travels through their country illegal, and to even think he can create his own country and then simply leave his so-called regime and go shopping or travel on the roads of another country utilizing their license plates, driver’s credentials and other luxury items, without the thought of creating these services on his own.
One step short of Looney Tunes, thinking that since he buys a building and can call it whatever it is, is fine, but once those actions turn into vile, unethical, unlawful and dangerous acts, one can only assume this would be the tip of the iceberg.
I am glad they caught him in this semi-volatile state. If they had waited much longer, it could have been a lot worse. I am reluctant to say this, but this could have been a BIG player in the upcoming hacker event known as OP USA.
Good job on catching this wanna be tyrant.
“think he can create his own country and then simply leaving his so called regime and going shopping or traveling on the roads of another country utilizing their license plates, drivers credentials and other luxuary items without the thought of creating these services on his own.”
The USA has this personality type in the Tea Party. These people claim that government is useless and demand that all taxes and regulations be abolished, and then, just like the above quoted text describes, they drive somewhere on roads that taxes paid for, drink water through pipes that taxes paid for, visit a library that taxes paid for, breathe air which is no longer full of toxic chemicals as it was in Donora in 1948 but only because of the Clean Air Act, etc. These people never put their money where their mouth is and move to Somalia, where there are no taxes or government regulations; of course, there is a good chance of being held for ransom or killed in the street, but one must follow one’s ideals, right?
Your comment strays far from the topic. Not surprising, since your comment shows that you live far from reality. In your defense, you could, of course, plead ignorance… the evidence certainly supports that plea.
Name one thing I got wrong, vacuous teabagger. Or are facts too much for you?
Besides reality and the topic at hand?
Off-topic, yes, that is somewhat true, though I did pick-up on something the previous poster said.
Any imbecile can claim that someone’s words are nonsense.
There’s one right there: I have no afiliation with any Tea Party organization. But that’s enough; I don’t feed trolls.
How convenient for you!
P.S. A troll is someone who posts a controversial opinion and then never responds, hoping to create a flame-war. I post on many subjects and defend my opinions. I have a blog with pages and pages of well-researched commentary. Your definition of the word is not mainstream.
I got your back Saucy, the guy – and anyone who believes this guy is NOT wacko belongs in the same cell with him. I think it would take a mere hour for them to see the light.
C’mon, you want to start your own country? Declare self-immunity? Ignorance is bliss, but at whose cost? You honestly think the country is willing to give up their soil to some wacko with issues?
Plain and to the point, he definitely belongs somewhere other than spewing forth vile material. It would have been just a matter of time before the really really BAD stuff gained a foothold on his servers.
It’s ok to defend someone when it doesn’t immediately affect you. But by God, stand by for the other dark side of the defenders if they were infected by this…. heap of stinking mess.
Your hateful tone aside, no, the tea party does not advocate getting rid of all tax or all regulation. They simply fight to return the power for taxation and regulation back to the states rather than in the hands of the corrupt federal government.
And give it to more corrupt states or corporations.
As far as this topic is from the article, it is actually fairly related. Technology has replaced a lot of jobs and viable careers, and many people who would have been middle class now find themselves trapped outside the economy, outside of society.
Chasing down and doing something about issues like spam, crime rings, swatting, etc requires a lot of cooperation across state and international borders, something the tea party crowd rejects and most of them fail to even start to comprehend. If the democrat’s message is one of unity, theirs is one of “me first”.
meh: “Technology has replaced a lot of jobs and viable careers, and many people who would have been middle class now find themselves trapped outside the economy, outside of society.”
While it’s true that technology has replaced a lot of jobs and formerly viable careers, that isn’t the reason why many people who would have been middle class now find themselves trapped outside the economy.
Technology has actually created more jobs and viable careers, different ones, but more. And let me clarify that many, if not most of them, pay better than the ones they replaced.
There are only two things that have caused so many people who would have been middle class to be trapped outside the economy. The first and most obvious is the global economy.
If you take two beakers of water, or two separate groups of wage earners: in one you put a low amount of water or low wages. In the other you put a high amount of water or high wages. Independently, they will function at the same level forever, but once you connect them with a pipe or undersea cable, they will seek equilibrium. That is, in part, what’s happening to American wages.
The second part of what has happened is, given the opportunity to do so, many people from the high wage economy would use their resources to take advantage of the greater buying power being achieved elsewhere, thereby maintaining their own wage levels.
However, as a result of Ronald Reagan restructuring the banking and financial industry in America, opportunity in America has all but dried up. The very idea of a family business is now something people think of as being from a simpler time in American history. There are no longer any family pharmacies, hardware stores, or dress shops. Now, there are only VC and market funded national chains offering paltry wages.
Don’t blame technology for what’s happened to the middle class, blame Congress. It’s their fault, both Republican and Democrat are equally at fault. The Republicans want all the money going to their corporate cronies who line their pockets and the Democrats are quite happy to go along so they can make more people dependent on their handouts (resulting in more votes for them).
To keep themselves in power, the politicians have carved up America into separate groups based on race, gender, belief, and wealth because they know, united we stand, divided we fall.
Kids today are taught how to be a drone of some slave master they call a boss. They are taught how to be a good employee and that is about it. Is it any wonder our kids can’t do anything but live paycheck to paycheck and are constantly worried about losing their jobs?
I reject the “Republicans are greedy SOB’s” mantra the media keeps portraying. The conservative classes want nothing more than for people to become strong and self reliant again. Self reliance however is not taught anymore. Collectivism and giving your life to the communist hive is all that is taught anymore. The economy will get worse and the rich/poor divide will become greater because our education system is almost as broken as it can be. More and more people will unfortunately turn to cyber crime, not only to get rich but to avoid the rat race. Nothing the liberal education camps are doing is preparing our kids for self reliance, freedom or faith. They teach kids to be parasites and it’s unfortunately what they become.
Good points Richard,
Mike, not so much –
Your views are both off a bit because you fail to see the economy as a system; it is not a singular occurrence but a dynamically changing entity that has been and can be shaped by taxes, national or state laws, and society.
I disagree with your statement about ‘many more jobs’ being created… In fact many LESS jobs have been created, and the vast majority of what is created these days are low paying, no benefit type jobs. (I could link but its easily found).
Infinite growth, which our current system is based on, is doomed to fail – with 7 billion people and 5 billion of them living in some form of poverty so 1,300 billionaires can gorge themselves, this is not a viable model, and as a result many millions around the world are getting fed up. What is interesting is how this affects global technology crimes and protests… Everything from spam to swatting to ATM skimming to identity fraud comes about because of one basic problem – the banks aren’t on the hook if your money goes missing. They frankly don’t care. They’re too busy out there bundling worthless batches of loans together and making billions in phantom profits to care about a few thousand of your dollars going missing.
It is a crock that ‘conservative’ goals are to bring about self reliance and growth – mathematically that simply cannot happen when we have less income mobility than most banana republics and virtually every law or goal they push for has one obvious result – further consolidation of wealth and power for those few at the top. Statistically someone born today has less chance of real success than in dozens of other time periods or current countries, and ignoring that bald fact won’t make someone working 3 jobs for minimum wage magically cast off all the systemically created problems that will keep him needing 3 jobs to get by for life.
“The Spamhaus orgainzation tries to remain unbiased about its findings and overall, they do a great job.”
Baloney.
There is never any real evidence of spam. Spamhaus places IP addresses on their list based on nothing short of mob rule. Their so-called evidence of spam would never hold up in any court.
The Spamhaus list is full of outdated IP addresses because Spamhaus has no standard system for removal of IP addresses.
Nearly every IP address I’ve been assigned by hosting companies over the past several years has needed to be removed from the Spamhaus list.
At the same time, IP addresses from large corporations are never added to the Spamhaus list. Currently, the Amazon AWS system is THE major source of spam and hacking. Yet no amount of complaints or evidence is enough to cause Spamhaus to take action.
Spamhaus expands its customer base of Internet Service Providers through intimidation by threatening anyone who refuses to buy their list with being added to it.
That’s the real reason they went after Cyberbunker, which won a lawsuit against Spamhaus for forcing Cyberbunker’s backbone suppliers to withhold service without valid evidence of spam.
“Nearly every IP address I’ve been assigned by hosting companies over the past several years has needed to be removed from the Spamhaus list. ”
Just because you follow the footsteps of the illicit when it comes to using an IP, means there has to be a change of thought. If one keeps falling into the same hole, and blaming others, who’s fault is that?
And just think….hummmmm if you would have used Spamhaus as a research tool BEFORE you bought into a bogus IP, MAYBE it could have prevented the pitfalls.
Hopefully, the light will come on one day.
“If one keeps falling into the same hole, and blaming others, who’s fault is that? hummmmm if you would have used Spamhaus as a research tool BEFORE you bought into a bogus IP,”
Why are you commenting? Either you don’t know anything about the Internet, hosting, and IP addresses or you’re a shill for Spamhaus. So, to clarify the facts for readers…
Hosting subscribers don’t choose IP addresses. You’re automatically assigned an address via a web based interface (typically WHM) when setting up an account. Since you neither know what the IP address will be ahead of time nor have any choice in what it will be, there is nothing to research.
Can you whine to your hosting provider to have an address changed if it turns out to be on the Spamhaus list? Sure, but the next one assigned will probably be on the list too.
Almost all open IPv4 addresses were used by spammers at some point in time because, once an IP address is blocked, spammers just change addresses and move on, leaving a trail of blocked addresses behind them.
Hosting providers have what they have. IPv4 addresses are in short supply and rotated to new customers when they’re abandoned by prior customers. Since Spamhaus doesn’t monitor blocked addresses for change of ownership and has no system to determine whether an address is no longer being used for spam, hosting customers are left to repeatedly live through the nightmare of getting newly assigned addresses unblocked. It’s a process that, in the case of the ATT network, typically takes up to 3 months during which you can’t email any of their subscribers.
The fact that nearly all open IPv4 addresses are already on the Spamhaus list proves that the very basis of the Spamhaus system is flawed. It’s little more than a scam which the large corporations support because Spamhaus never blocks their IP addresses, so it helps them limit competition from small business owners.
The entire RBL concept is flawed and should be replaced.
I have a real solution to you, and it doesn’t even involve replacing the functional RBL system… get a hosting provider that doesn’t accept spammers.
Seriously, these guys jump from ISP to ISP, getting banned as they go, but they’re always the same group of ISPs. But stay out of that group and you’ll have no problem.
Unless, of course, you’re running the sort of business that’s getting legitimately blacklisted for your actions. In which case too bad.
“Your complaining about the IPs being listed, and why do you THINK (at least I hope you do) they got put on that list in the first place? ”
Let’s be clear, they got put on the list because some jerk decided to report the business using the IP address. That happens to be one of the key issues with the Spamhaus list. It’s simply not valid. Anyone can say anything they want to about a business; that doesn’t make it true. Spamhaus fails to properly investigate allegations and doesn’t notify those accused that they have been targeted.
Once added, those IP addresses never get removed from the list. As a result, the Spamhaus database is totally corrupt, full of IP addresses that at one time may or may not have been used for spam, but now sit idle waiting to taint some unsuspecting small business owner.
In my case, I spent 2 years building a business to where I needed to upgrade my server. That resulted in the assignment of a new IP address that tainted my business as a spammer.
Some small business owners get assigned such an IP the first time they sign up for hosting and don’t have the background or knowledge to know what to do.
The bulk of IP addresses on the Spamhaus list were used by small business owners who send a few dozen emails weekly to likely prospects. There is a huge difference between a local retail business sending a few dozen emails to community residents versus a spammer who sends millions of emails per day. However, Spamhaus fails to differentiate between them, doesn’t notify the business that their IP address is in jeopardy, and has no transparent system for removal. The system is corrupt and broken.
The suggestion that one should choose a hosting provider who doesn't accept spammers is typical of the bogus excuse-making Spamhaus uses to cover its extensive corruption. The typical hosting provider is a reseller who uses an automated system based on WHM. He doesn't accept spammers. What he does accept are small business owners who at some point decide to send a few dozen “specials of the week” emails to likely prospects, resulting in his IP address being added to the Spamhaus list.
In my case, my hosting provider was LiquidWeb, one of the largest and most respected hosting providers in the nation. They have thousands of customers. The idea that they can monitor and identify some small business owner sending a couple dozen “specials of the week” emails that result in his IP being added to the Spamhaus list is absurd.
You can make all the excuses you want. The basic concept of the RBL isn’t valid, but Spamhaus makes it worse by being corrupt, readily adding IP addresses from small hosting providers while ignoring those from large companies such as Amazon AWS.
Additionally, the problem of Amazon AWS isn’t some “misadventure” on my part. The problem is so well known that their entire IP address block is published on the web (just search Google) so they can be blocked at the htaccess level since Spamhaus is too corrupt to include them in their RBL.
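For anyone wondering what "blocked at the htaccess level" looks like in practice, here is a rough sketch in Apache 2.4 syntax — the CIDR ranges below are documentation placeholders, not the actual published AWS ranges:

```apache
# .htaccess sketch: refuse requests from specific network ranges.
# Replace the placeholder ranges with the ones you actually want to block.
<RequireAll>
    Require all granted
    Require not ip 203.0.113.0/24
    Require not ip 198.51.100.0/24
</RequireAll>
```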
It’s simple, Spamhaus is run by egotistic degenerates who have a failed business model they perpetuate by attacking anyone who exposes their incredibly corrupt practices, just as you have attempted to attack me.
Corrupt people like you always believe everyone else is corrupt. I understand, it helps you avoid feeling bad about yourselves.
So Richard, are you a spammer? Why do some people feel that they are entitled to email any random John Doe on the Internet to sell them whatever? It's lazy people who don't have real business leads.
Our company is b2b, not retail, but we never read spam, it gets filtered to junk if we haven’t pre-approved the sender.
We have our Phone and Fax #s on our website. People who can't be bothered to call or fax us are not considered potential customers or suppliers.
Why would we correspond with some company via email, if they can’t be troubled to even call us? Random companies spamming on the Internet are not considered potential business partners at all.
If someone has taken the time to learn about our business, and they feel we could mutually benefit each other, they call us. They don’t send canned spam email. That’s for lazy companies.
Johnny, no I’m not a spammer, and I don’t support the use of high volume bulk email for marketing.
What I object to is Spamhaus, their founder, their employees, their methods, and their intent. They are nothing more than an anti-competition tool of the big corporations that support them.
I also think that people who object to being told about a product or service by a local business are being stupid and anti-American. You only take phone solicitations? Do you have any idea how many people object to phone calls?
The big corporations love to hear anti-direct mail and anti-spam rhetoric. That’s why they support Spamhaus, which only goes after legitimate small private business owners.
The big corporations know they don't need to worry about competing against a small local hardware store if the only way that hardware store can reach people is through a multi-million dollar national ad campaign.
Want to know where all the local hardware stores, family owned pharmacies, and neighborhood businesses have gone? You destroyed them. The only family owned businesses left in America are corporate franchises.
Go ahead, support Spamhaus and their corporate sponsors. Pretty soon you’ll be working for minimum wage or living off the government dole like so many others who were once in the middle class.
I’m the only person I know who didn’t support attacking Iraq. It would have been nice if others had wised up before the government spent a trillion dollars helping Iran get a new BFF. Similarly, it would be very nice if all of you would wise up about Spamhaus and other RBL operators before the middle class ceases to exist.
Reading beyond the press releases often takes real work. But try it sometime, you might be surprised to learn the real story.
Either you are “one of them” or you have a delusional, disjointed, fanatical way of thinking…. Nope, sorry! Can't be. Highly doubtful. I cannot be wrong. Nope. Don't care what others have to say. Flame them and cause hate and discontent. Yep.
Chris is right on, you will always work for someone else. You are probably fed up that others can make a difference. Carrying around a facial expression like you ate a pound of sour grapes after hitting your foot with a 10 pound mallet.
I will do as many in the past do, simply ignore your comments and inputs, because at best, all I hear is an inner whining.
Now here’s something interesting…
Reviewing the comments, two things become apparent. Those defending Spamhaus never use their own names. Additionally, they don’t actually defend Spamhaus, instead depending on personal attacks and a variety of invalid suggestions for how to avoid being caught up by it.
That suggests that Spamhaus is too corrupt to defend directly or have your own name associated with.
Is someone becoming enlightened on Brian's blog? You are onto something… now dig deeper. http://www.stophaus.com/forum.php
Where are your references?
You are an obvious Spamhaus troll. First, no one has “caught” anyone. Sven was on every media channel in the world and was not hiding. Secondly, spamhaus sure as hell didn’t catch anyone and no one has been charged with anything. At this point, it very well could end with Sven being released very quietly and Spamhaus execs being arrested and convicted of federal terrorism crimes. You are the worst disinformationalist I have ever seen.
A HUGE thanks to Xeroflux for providing such a detailed piece on how to handle groups like NANAE, Spamhaus, COINTEL, and other Disinformationalist organizations representing big corporations that are destroying the planet.
[Link withheld for privacy]
Twenty-Five Rules of Disinformation
Note: The first rule and last five (or six, depending on situation) rules are generally not directly within the ability of the traditional disinfo artist to apply. These rules are generally used more directly by those at the leadership, key players, or planning level of the criminal conspiracy or conspiracy to cover up.
1. Hear no evil, see no evil, speak no evil. Regardless of what you know, don’t discuss it — especially if you are a public figure, news anchor, etc. If it’s not reported, it didn’t happen, and you never have to deal with the issues.
2. Become incredulous and indignant. Avoid discussing key issues and instead focus on side issues which can be used to show the topic as being critical of some otherwise sacrosanct group or theme. This is also known as the ‘How dare you!’ gambit.
3. Create rumor mongers. Avoid discussing issues by describing all charges, regardless of venue or evidence, as mere rumors and wild accusations. Other derogatory terms mutually exclusive of truth may work as well. This method works especially well with a silent press, because the only way the public can learn of the facts is through such ‘arguable rumors’. If you can associate the material with the Internet, use this fact to certify it a ‘wild rumor’ from a ‘bunch of kids on the Internet’ which can have no basis in fact.
4. Use a straw man. Find or create a seeming element of your opponent’s argument which you can easily knock down to make yourself look good and the opponent to look bad. Either make up an issue you may safely imply exists based on your interpretation of the opponent/opponent arguments/situation, or select the weakest aspect of the weakest charges. Amplify their significance and destroy them in a way which appears to debunk all the charges, real and fabricated alike, while actually avoiding discussion of the real issues.
5. Sidetrack opponents with name calling and ridicule. This is also known as the primary ‘attack the messenger’ ploy, though other methods qualify as variants of that approach. Associate opponents with unpopular titles such as ‘kooks’, ‘right-wing’, ‘liberal’, ‘left-wing’, ‘terrorists’, ‘conspiracy buffs’, ‘radicals’, ‘militia’, ‘racists’, ‘religious fanatics’, ‘sexual deviates’, and so forth. This makes others shrink from support out of fear of gaining the same label, and you avoid dealing with issues.
6. Hit and Run. In any public forum, make a brief attack of your opponent or the opponent position and then scamper off before an answer can be fielded, or simply ignore any answer. This works extremely well in Internet and letters-to-the-editor environments where a steady stream of new identities can be called upon without having to explain criticism, reasoning — simply make an accusation or other attack, never discussing issues, and never answering any subsequent response, for that would dignify the opponent’s viewpoint.
7. Question motives. Twist or amplify any fact which could be taken to imply that the opponent operates out of a hidden personal agenda or other bias. This avoids discussing issues and forces the accuser on the defensive.
8. Invoke authority. Claim for yourself or associate yourself with authority and present your argument with enough ‘jargon’ and ‘minutia’ to illustrate you are ‘one who knows’, and simply say it isn’t so without discussing issues or demonstrating concretely why or citing sources.
9. Play Dumb. No matter what evidence or logical argument is offered, avoid discussing issues except with denials they have any credibility, make any sense, provide any proof, contain or make a point, have logic, or support a conclusion. Mix well for maximum effect.
10. Associate opponent charges with old news. A derivative of the straw man — usually, in any large-scale matter of high visibility, someone will make charges early on which can be or were already easily dealt with – a kind of investment for the future should the matter not be so easily contained.) Where it can be foreseen, have your own side raise a straw man issue and have it dealt with early on as part of the initial contingency plans. Subsequent charges, regardless of validity or new ground uncovered, can usually then be associated with the original charge and dismissed as simply being a rehash without need to address current issues — so much the better where the opponent is or was involved with the original source.
11. Establish and rely upon fall-back positions. Using a minor matter or element of the facts, take the ‘high road’ and ‘confess’ with candor that some innocent mistake, in hindsight, was made — but that opponents have seized on the opportunity to blow it all out of proportion and imply greater criminalities which, ‘just isn’t so.’ Others can reinforce this on your behalf, later, and even publicly ‘call for an end to the nonsense’ because you have already ‘done the right thing.’ Done properly, this can garner sympathy and respect for ‘coming clean’ and ‘owning up’ to your mistakes without addressing more serious issues.
12. Enigmas have no solution. Drawing upon the overall umbrella of events surrounding the crime and the multitude of players and events, paint the entire affair as too complex to solve. This causes those otherwise following the matter to begin to lose interest more quickly without having to address the actual issues.
13. Alice in Wonderland Logic. Avoid discussion of the issues by reasoning backwards or with an apparent deductive logic which forbears any actual material fact.
14. Demand complete solutions. Avoid the issues by requiring opponents to solve the crime at hand completely, a ploy which works best with issues qualifying for rule 10.
15. Fit the facts to alternate conclusions. This requires creative thinking unless the crime was planned with contingency conclusions in place.
16. Vanish evidence and witnesses. If it does not exist, it is not fact, and you won’t have to address the issue.
17. Change the subject. Usually in connection with one of the other ploys listed here, find a way to side-track the discussion with abrasive or controversial comments in hopes of turning attention to a new, more manageable topic. This works especially well with companions who can ‘argue’ with you over the new topic and polarize the discussion arena in order to avoid discussing more key issues.
18. Emotionalize, Antagonize, and Goad Opponents. If you can’t do anything else, chide and taunt your opponents and draw them into emotional responses which will tend to make them look foolish and overly motivated, and generally render their material somewhat less coherent. Not only will you avoid discussing the issues in the first instance, but even if their emotional response addresses the issue, you can further avoid the issues by then focusing on how ‘sensitive they are to criticism.’
19. Ignore proof presented, demand impossible proofs. This is perhaps a variant of the ‘play dumb’ rule. Regardless of what material may be presented by an opponent in public forums, claim the material irrelevant and demand proof that is impossible for the opponent to come by (it may exist, but not be at his disposal, or it may be something which is known to be safely destroyed or withheld, such as a murder weapon.) In order to completely avoid discussing issues, it may be required that you categorically deny and be critical of media or books as valid sources, deny that witnesses are acceptable, or even deny that statements made by government or other authorities have any meaning or relevance.
20. False evidence. Whenever possible, introduce new facts or clues designed and manufactured to conflict with opponent presentations — as useful tools to neutralize sensitive issues or impede resolution. This works best when the crime was designed with contingencies for the purpose, and the facts cannot be easily separated from the fabrications.
21. Call a Grand Jury, Special Prosecutor, or other empowered investigative body. Subvert the (process) to your benefit and effectively neutralize all sensitive issues without open discussion. Once convened, the evidence and testimony are required to be secret when properly handled. For instance, if you own the prosecuting attorney, it can insure a Grand Jury hears no useful evidence and that the evidence is sealed and unavailable to subsequent investigators. Once a favorable verdict is achieved, the matter can be considered officially closed. Usually, this technique is applied to find the guilty innocent, but it can also be used to obtain charges when seeking to frame a victim.
22. Manufacture a new truth. Create your own expert(s), group(s), author(s), leader(s) or influence existing ones willing to forge new ground via scientific, investigative, or social research or testimony which concludes favorably. In this way, if you must actually address issues, you can do so authoritatively.
23. Create bigger distractions. If the above does not seem to be working to distract from sensitive issues, or to prevent unwanted media coverage of unstoppable events such as trials, create bigger news stories (or treat them as such) to distract the multitudes.
24. Silence critics. If the above methods do not prevail, consider removing opponents from circulation by some definitive solution so that the need to address issues is removed entirely. This can be by their death, arrest and detention, blackmail or destruction of their character by release of blackmail information, or merely by destroying them financially, emotionally, or severely damaging their health.
25. Vanish. If you are a key holder of secrets or otherwise overly illuminated and you think the heat is getting too hot, to avoid the issues, vacate the kitchen.
______________________________________________________________________________________
Eight Traits of the Disinformationalist
1) Avoidance. They never actually discuss issues head-on or provide constructive input, generally avoiding citation of references or credentials. Rather, they merely imply this, that, and the other. Virtually everything about their presentation implies their authority and expert knowledge in the matter without any further justification for credibility.
2) Selectivity. They tend to pick and choose opponents carefully, either applying the hit-and-run approach against mere commentators supportive of opponents, or focusing heavier attacks on key opponents who are known to directly address issues. Should a commentator become argumentative with any success, the focus will shift to include the commentator as well.
3) Coincidental. They tend to surface suddenly and somewhat coincidentally with a new controversial topic with no clear prior record of participation in general discussions in the particular public arena involved. They likewise tend to vanish once the topic is no longer of general concern. They were likely directed or elected to be there for a reason, and vanish with the reason.
4) Teamwork. They tend to operate in self-congratulatory and complementary packs or teams. Of course, this can happen naturally in any public forum, but there will likely be an ongoing pattern of frequent exchanges of this sort where professionals are involved. Sometimes one of the players will infiltrate the opponent camp to become a source for straw man or other tactics designed to dilute opponent presentation strength.
5) Anti-conspiratorial. They almost always have disdain for ‘conspiracy theorists’ and, usually, for those who in any way believe JFK was not killed by LHO. Ask yourself why, if they hold such disdain for conspiracy theorists, do they focus on defending a single topic discussed in a NG focusing on conspiracies? One might think they would either be trying to make fools of everyone on every topic, or simply ignore the group they hold in such disdain. Or, one might more rightly conclude they have an ulterior motive for their actions in going out of their way to focus as they do.
6) Artificial Emotions. An odd kind of ‘artificial’ emotionalism and an unusually thick skin — an ability to persevere and persist even in the face of overwhelming criticism and unacceptance. This likely stems from intelligence community training that, no matter how condemning the evidence, deny everything, and never become emotionally involved or reactive. The net result for a disinfo artist is that emotions can seem artificial.
Most people, if responding in anger, for instance, will express their animosity throughout their rebuttal. But disinfo types usually have trouble maintaining the ‘image’ and are hot and cold with respect to pretended emotions and their usually more calm or unemotional communications style. It’s just a job, and they often seem unable to ‘act their role in character’ as well in a communications medium as they might be able in a real face-to-face conversation/confrontation. You might have outright rage and indignation one moment, ho-hum the next, and more anger later — an emotional yo-yo.
With respect to being thick-skinned, no amount of criticism will deter them from doing their job, and they will generally continue their old disinfo patterns without any adjustments to criticisms of how obvious it is that they play that game — where a more rational individual who truly cares what others think might seek to improve their communications style, substance, and so forth, or simply give up.
7) Inconsistent. There is also a tendency to make mistakes which betray their true self/motives. This may stem from not really knowing their topic, or it may be somewhat ‘freudian’, so to speak, in that perhaps they really root for the side of truth deep within.
I have noted that often, they will simply cite contradictory information which neutralizes itself and the author. For instance, one such player claimed to be a Navy pilot, but blamed his poor communicating skills (spelling, grammar, incoherent style) on having only a grade-school education. I’m not aware of too many Navy pilots who don’t have a college degree. Another claimed no knowledge of a particular topic/situation but later claimed first-hand knowledge of it.
8) Time Constant. Recently discovered, with respect to News Groups, is the response time factor. There are three ways this can be seen to work, especially when the government or other empowered player is involved in a cover up operation:
a) ANY NG posting by a targeted proponent for truth can result in an IMMEDIATE response. The government and other empowered players can afford to pay people to sit there and watch for an opportunity to do some damage. SINCE DISINFO IN A NG ONLY WORKS IF THE READER SEES IT – FAST RESPONSE IS CALLED FOR, or the visitor may be swayed towards truth.
b) When dealing in more direct ways with a disinformationalist, such as email, DELAY IS CALLED FOR – there will usually be a minimum of a 48-72 hour delay. This allows a sit-down team discussion on response strategy for best effect, and even enough time to ‘get permission’ or instruction from a formal chain of command.
c) In the NG example 1) above, it will often ALSO be seen that bigger guns are drawn and fired after the same 48-72 hours delay – the team approach in play. This is especially true when the targeted truth seeker or their comments are considered more important with respect to potential to reveal truth. Thus, a serious truth sayer will be attacked twice for the same sin.
Book him, Danno! (And then throw the entire book at him for a really long-time lockup somewhere unpleasant….)
Correct me if I’m not reading this right, but apparently this guy ended up exposing himself and whatever botnet he managed to control for no more reason than to throw a tantrum?
In a way, this guy self-selected out of the criminal pool and right into jail. He could’ve used that level of control for illicit profit, but instead used it to lash out at someone for no gain, and a helluva lot of loss.
Don’t get me wrong; I’m not mourning his arrest. On the contrary, I’m celebrating it. It’s a good thing one of these guys went down. I’m just amazed that he wasn’t disciplined enough to figure out he could make money or accomplish other things from where he was at. Then again, he sounded like he was a bit off mentally, so maybe it shouldn’t be a surprise.
After Spamhaus caused de-peering, his income source ran dry, and I guess the botnet used to attack Spamhaus wasn't his own…
He obviously had computers to confiscate, so he obviously had resources to launch an attack.
Being off mentally is no longer an excuse, since it does not preclude launching a destructive attack.
“he made claims about being his own independent country in the Republic of Cyberbunker, and said he was not bound by any laws and whatnot.”
Sounds exactly like the Muslim train bomber in Toronto who said that “all of those conclusions was taken out based on Criminal Code and all of us we know that this Criminal Code is not holy book, it’s just written by set of creations (i.e. non-Muslims).”
http://news.nationalpost.com/2013/04/24/terror-charges-dont-matter-not-based-on-holy-book-suspect-in-alleged-via-plot/
Nuts will be nuts.
Like I said before on here, it goes no deeper than “this person will of course justify what they do because they did it and they liked doing it.” That kind of thinking can come about from an average person going “Well I'll eat the last food item in the fridge so it isn't wasted”, which is harmless justification. If you start becoming desensitized, however, or have other things going on inside your head, then it turns into “people die every day, pressure cooker bombs are nothing to feel anything over”, which is the mindset of Dzhokhar Tsarnaev. And then of course they'll also latch onto any other idea that supports their behavior to further justify it.
http://en.wikipedia.org/wiki/Self-justification
Basically, people like to justify things they do and not view themselves as faulty. Big shocker, huh?
This Sven Olaf Kamphuis (SK) lowlife is highly delusional in his thinking. It's kind of that grandiose “I'll show you that I'm better” mentality that hard-core hackers are known for. So anyway, this shows yet again that hackers and cyber-criminals do eventually get caught and they are “not above the law”.
This cowardly prince has now been dethroned at the bunker
Suspect in massive Spamhaus DDoS attack arrested in Spain
http://nakedsecurity.sophos.com/2013/04/26/suspect-in-massive-spamhaus-ddos-attack-arrested-in-spain/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+nakedsecurity+%28Naked+Security+-+Sophos%29
http://nakedsecurity.sophos.com/2013/04/25/redkit-exploit-brian-krebs/
Oh wow, I just noticed this. Ahaha “Crebs, its your fault”
Just change the site name to “Crabs n Sekurtee”
“Crebs, its your fault”
The spelling mistake is strange given that the crackers are probably Russian and the Cyrillic alphabet contains a ‘K’ pronounced as it is in English and a ‘C’ pronounced like an English ‘S’.
Maybe they’re not Russians. Maybe they’re Kambodians, Kook Islanders, Kroatians, Kubans, or even Kanadians redirecting to Kolombia, or just maybe they’re just Kooks!
YFTR:
“On 13 May 2010, the Hamburg District Court ordered an injunction against CB3Rob Ltd & Co KG (Cyberbunker) and its operator, Mr. Sven Olaf Kamphuis, restraining them from connecting The Pirate Bay site to the Internet. The injunction application was brought by the Motion Picture Association’s member companies.” (http://en.wikipedia.org/wiki/The_Pirate_Bay)
The flag he's showing belongs to the Pirate Party movement, whose German and Dutch parties he is/was a member of… (http://en.wikipedia.org/wiki/Pirate_Parties_International)
IMHO this DDOS was an act of cyberterrorism and he should be imprisoned with his cybercriminal friends in his depeered bunker for the rest of his life breeding mushrooms. 😉
Sven is crazier than a snake’s armpit. Good riddance, too. He contributes nothing to the world.
“Mr. Kamphuis could not be immediately reached for comment.” because he is detained! lol Krebs you’re awesome 🙂
Your article is loaded and we all know it. Spamhaus is not an RBL. They have an escalation process that they are not hiding and it clearly shows extortion procedures.
Sven did not carry out any attacks and the mere fact that “the attacks started hours after the last ISP dropped Cyberbunker” should give you half the clues on that from the start.
STOPhaus is nothing more than a group of people Spamhaus did wrong and/or are tired of the censorship practices and their apparent immunity to scrutiny through diverse shell company structuring.
No one attacked Cloudflare. When an attack is targeted to a specified DNS record, it is the action of the person mitigating the attack that decides where the payload is distributed and how. Cloudflare made their own decisions and must accept the good consequences with the bad. They can not make poor admin decisions and then place blame on an attack that had nothing to do with them until they got in the middle of it.
The attacks were carried out by attackers in countries where their actions were perfectly legal. Regardless of what your country’s laws are or even those of my country…in their country they are innocent by a lack of regulation against their actions.
The attacks began on March 15th, but Spamhaus doesn’t want to be honest about being down for a week before going to Cloudflare and Cloudflare isn’t going to admit it took them 3 days of poor decisions before finally putting the DNS behind a reverse proxy.
No one seems to want to talk about the fact that Cloudflare has over 50 listings on Spamhaus databases until they took a bullet for Spamhaus and in return, Spamhaus delisted all their IPs from their databases, regardless of the fact that Cloudflare is a well-known “spam support service” and claims to be a bullet-proof host themselves, which is exactly what Sven’s Cyberbunker was listed for to begin with.
The whole corporate structure of Spamhaus stinks to high orbit and every Google search into their public records turns up more and more dirt. The only mystery in our eyes is why the media continues to act as if these facts are not true or not worthy of publication, but more so, why any media outlet would condone censorship in any way.
We want to ask all media outlets…what if it were your media being censored?
Off-sho.re, is that you?
Lots of claims, no citations. Thanks but no thanks.
“STOPhaus is nothing more than a group of people Spamhaus did wrong and/or are tired of the censorship practices and their apparent immunity to scrutiny through diverse shell company structuring.”
Here, let me fix that for you:
“STOPhaus is nothing more than a group of bulletproof hosting providers and carders Spamhaus did wrong and/or are tired of the censorship practices and their apparent immunity to scrutiny through diverse shell company structuring.”
Gee, that’s kind of odd. You seem to know all about it, John! I wonder why that is? Inside info, I suppose??
Bad Boy, Bad Boy! Whatchu gonna do? Whatchu gonna do, when they come for you???
In case you may find it interesting, this is the justification for Operation StopHaus.
http://pastebin.com/UAgfwiyC
Some will call it self-justification, others will say they were right to do so. I think that, like in life, there are greyish tones everywhere. I don't support Operation StopHaus, but at the same time I do believe Spamhaus strategies can have extortion traits if they end up focusing on someone innocent (yes, no one is always right, it can happen).
“…I think that, like in life, there are greyish tones everywhere….”
Unfortunately, John, because of the threat of the ever increasing online criminal activity and terrorism, life is increasingly becoming a black and white issue.
Consider that in the US, torture, imprisonment w/o due process, and assassination of both foreign and US citizens have been approved of since the GW Bush administration, and continue to be so.
As this escalates, it will be very important to make it very clear that we are on the white side, and not even sympathetic to the black.
Otherwise, we will inevitably be putting ourselves, and possibly our families, at risk. Is it worth it to make a point? Not for me.
It’s been tried in N. Korea, and other such black & white regimes, but not too successfully. ;^)
Wooooooo. Man, this guy kept some strange company. The first URL is about the STOPhaus movement, in Spamhaus.org's opinion.
http://www.spamhaus.org/rokso/evidence/ROK9383/andrew-stephens-mail-mascot/spammers-propaganda-site-stophaus
The SECOND URL is about his partner and his twisted ways of doing business. Let's say 10% of what they say is true. Would you invite these guys over for tea?
TRUCE ! Drop your guard for a second and enjoy some twisted entertainment on this second URL….hehehehehe. = )
http://www.spamhaus.org/rokso/evidence/ROK9823/andrew-stephens-mail-mascot/reverend-pastor-peanut-butter-andrew-stephens-starts-his-own-scam-church
Posting press releases from Spamhaus? Seems to me that makes you a spammer.
Click on his mug shot…wow… How is this guy not in white looney bin clothing?
If you dig you will see many arrests, some dismissed, but at best, this is simply not “normal”.
http://www.courtclerk.org/
search court records for Stephens, Andrew
he is Andrew J Stephens
Yeah, yeah, yeah…. we get it. You're an employee of Scamhaus and you need, desperately, to have people believe your side of the story. But since you're not really a legitimate organization, your only hope is to tar this fellow beyond belief.
Seems you’ve lost touch with reality…
| The Spamhaus Project Organization:
| 18 Avenue Louis Casai, CH-1209, Geneva, Switzerland
| The Spamhaus Project Ltd. Registered Office:
| 26 York Street, London W1U 6PZ, United Kingdom
| A nonprofit company limited by guarantee.
| Registered in London, England. Company No. 05303831.
| Spamhaus and the Spamhaus Logo are Registered
| Trademarks of The Spamhaus Project Ltd.
(Source: http://www.spamhaus.org/organization/)
Legally organized and operating in a legitimate fashion are not the same thing. For example, having employees post spam comments and links to company press releases using untraceable screen names may be legal but it is hardly legitimate.
Similarly, tarring small business owners who possess a single IP address auto-assigned by their hosting company while allowing Amazon AWS to operate the world's largest spam and hacking operation without putting a single one of their IP addresses in your system is quite legal, but it's simply not legitimate.
Spamhaus is not a legitimate operation and every honest small business owner on the Spamhaus RBL who must deal with their email going to spam bins or not being delivered knows it.
Clean up your database and introduce a transparent system for adding and removing addresses, and then you can claim you’re a legitimate operation. Until then, you’re no more legitimate than the spammers who operate from legally organized business entities.
If you're this Richard Draucker (http://www.pkbusinessmarketing.com/lookupbook/) you've got some pretty dodgy business ethics yourself. And no, I've got nothing to do with Spamhaus, just a regular Krebs reader who's fed up with the broken nature of email, plus all the other dodgy SEO, "fake directory" and other shysters out there.
So, you're not affiliated with Spamhaus, yet you post a link to discredit someone who exposes their corruption while ignoring this link…
http://pkbusinessmarketing.awardspace.us/
And then you reference the business mentioned as a “fake directory”. What exactly is a fake directory? Is that a directory of businesses that don’t exist?
Let’s be clear, if you aren’t a Spamhaus employee or contractor, you’re at least a Spamhaus supporter who knows their practices aren’t legitimate and, lacking any valid means of defending them, you pull a classic “kill the courier” move.
That’s pretty typical of Spamhaus shills.
Judging by what I see in the media and on this and other comment threads, Spamhaus is spending big bucks to tar the guy who was arrested. Yet, so far, I haven't seen anything posted that resembles evidence of his having done anything other than refuse to be bullied by Spamhaus and those of you on the Spamhaus payroll.
Even if you believe everything you're saying, do you expect us to believe that all of these small businesses are legit and not any spammers?
I really, really dislike the behavior of all spammers, hijackers, online criminals of all kinds, terrorists, and generally all the unethical opportunists that would do harm to others.
I still respect them as people with spiritual value, but much can be learned in prison (I worked at one, and I certainly wouldn’t want to be put in one, but hard time offers its own growth opportunities ;^)
Look at Kevin Mitnick. His attitude seems to have improved quite a bit. I didn't like the fact that he was denied due process.
But if you align yourself with the black side to make a point about injustice, good luck on that one these days.
Anyone can suffer from being in the wrong place at the wrong time.
I don’t like injustice, but I’ve been infected by foulware and lost a lot of data, time, energy and probably years of life expectancy, so it’s time something is done.
And when that time comes, someone innocent always suffers, usually because of their own reactionary emotions and lack of precautions.
Good luck.
Joao: “Even if you believe everything you're saying, do you expect us to believe that all of these small businesses are legit and not any spammers?”
No, of course not, but you really need to define the term spammer, and look at what percentage of spam comes from what sources.
If a new business owner, a florist, whose past experience with the Internet is limited to looking up recipes and posting pictures to Facebook, sends two dozen emails to prospective customers, is she a spammer? Well, yes, but she isn’t the problem.
The problem is the people behind the ever rotating IPs at places like Amazon AWS who send millions of emails daily for a variety of nefarious purposes. Those are the real spammers and they are responsible for more than 80% of all spam. Are they in the Spamhaus database? No.
The Spamhaus database is dominated by those neighborhood florists while totally free of IP addresses for large corporations like Amazon AWS and AT&T, no matter how much spam comes from their networks.
Spamhaus could easily clean up their database, they choose not to. They could easily send warning notices to small business owners prior to banning their IP, they choose not to. They could easily create an open and transparent system, they choose not to.
They choose to ignore the sources of 80% of all spam while focusing on trivial small businesses that often don’t even think of what they are doing as spam.
More importantly, rather than focusing on improving their system, whenever someone points out their corruption and lack of legitimacy, Spamhaus goes after the speaker. That’s why you see so many comments here attacking me, rather than defending the Spamhaus system.
My neighborhood florists would send a mailer to my house. They wouldn't know to reach me via my personal email address — how would they know it?
If the florists are serious about wanting my business, they would research the demographic of my physical neighborhood. It costs them money to send out mailers, so they do proper research, unlike spammers.
You’re just being silly now.
I had never even heard of Spamhaus till this incident, but now I'm glad they exist. Why should I waste time every day dealing with spam from you and other lazy people like you?
How is Spamhaus corrupt and lacking legitimacy? You haven't made any relevant points, since you seem to be a spammer.
I’m all for Freedom of Speech and against censorship, as long as it happens on your website and not my Inbox. Of course if your Freedom of Speech is used for commerce activities, it can be regulated and many laws apply.
Johnny – They wouldn’t know I live near them via my personal email address? How would they know?
They would know because you have done business with some other company that sold your information to a list broker who sold your information to the florist.
The average cost of B2C customer acquisition in America is in the hundreds of dollars, for B2B it can actually run into the thousands. Companies offset that cost by selling their list of customers.
It's pretty clear that few of the people defending Spamhaus have ever even heard of them. You just assume they must be good because, well, they're against spam, and you're against spam, so they must be good like you. And they do have such very nice press releases with lots of backing from the multinational corporations that are destroying the middle class.
It's the same logic path Bush used to get approval for the invasion of Iraq and, gosh, that sure turned out well. The end result will be pretty much the same… less competition, fewer jobs, lower wages, less opportunity, and higher taxes. But at least you won't have to deal with spam or junk mail from local small business people, just spam from the fake pharmaceutical companies in India and Russia that Spamhaus doesn't do anything about because they're using the Amazon AWS system.
Whining about your misadventures with Amazon is not going to cause Spamhaus.org to go away. It strengthens most of the comments you're making. You're complaining about the IPs being listed, and why do you THINK (at least I hope you do) they got put on that list in the first place?
You have any clue about network security? How quickly a scam can be brought online and listed on a major site like Amazon, eBay or other B2B sites? Hours. It takes less than 72 hours to knock them offline and ban that IP. It takes this thing called self-motivation to ask the right questions the right way to the right people to get the answers.
I am not going to educate you on the ways of network security, FQDN, hosting, whois, Spamhaus and other agencies. It's truly not worth my time and effort. YOUR effort in the way of looking at items is the end result of what people think of you, and at this moment, there isn't much of a fan base in your corner.
I'll just put you on ignore. Chris, I think you hit the Bozo right on the head. It seems the topic at hand, and potentially the person you are talking about, have ethics issues! Oh my… and at best… the ISPs don't want this… service, so they are forced to go where others of their kind have traveled.
AWESOME customer service there RD !!!
I am going to go to spamhaus.org right now and see if this person is on the list as well ! hehehehehehe.
When you have the spam pimps so upset, as exhibited in some of the comments above, you know the pain of being shut down is felt in their wallet. Much of Brian’s work exposes these bottom feeders, criminals and looney criminals, who are exploiting the technology and the law for big money. You can expect the criminals to cry foul, or “censorship” or a variety of other crap when the rest of society refuses to pay for their daily harassments.
Bottom line: spammers lost another battle. Expect more losses. You spam pimps cry me a river. No one is listening.
To all those who are posting here that are drawn to the black side:
As I said, I've worked in a "correctional facility" and I can tell you that all the cons inside are suffering (even the ones who are predatory. Confinement, old age, sickness and death are all they have to look forward to.)
One of the most common sayings from the cons was “if you can’t do the time, don’t do the crime!”
If you're young and "pretty", aah, man, you don't even want to know what those big, burly cons can do to you! The only way to prevent it is to kill someone, and that means you'll be in there the rest of your life, if someone doesn't kill you first.
If you don’t think you’ll eventually regret getting sent up, you are really living in dreamland, bro!
It’s easy to avoid this kind of suffering, you know. It’s called “right employment”, plus “right attitude”. Get a new job, and get a new attitude.
It takes more guts to change an attitude than to keep it. It’s not easy to change, even when you make your mind up. I know. But it gets easier if you stick with it. It takes staying power.
You think you’re tough? Try spending the next 50 years working on a better attitude! ;^)
Whether you Love him or whether you hate him, Brian sets a pretty good example by trying to do something positive with his life by helping others, and I, and a lot of others, appreciate it.
So, what it boils down to is making a bad choice and convincing yourself that you're too smart to get caught.
Well, I feel for you, I really do. But I assure you that you will eventually get caught (either in this life, or the next. I know, you probably don’t believe in an afterlife. One thing’s for sure, you’ll have to find out, won’t you? Good luck when you have to look under your justifications for hurting others, and your buddies aren’t there to cheer you on & prop you up! Ouch!)
In respect to the IP address above in Brian's post, I found the AS numbers that relate to the range. They are:
AS51787 ( CB3ROB)
AS34109 ( CB3ROB)
According to Robtex, AS34109 is loaded with filthy websites.
I will post the link to the Robtex website rather than SEO promote the filth:
http://as.robtex.com/as34109.html#sites
This is just 100 Randomly selected sites which are hosted at this IP/AS. If you hover over the links within Robtex, you usually are safe if the links stay within the Robtex domain. It can offer a wealth of information for those who care to dig into such things. Proceed with caution and at your own risk.
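For anyone who wants to reproduce the IP-to-ASN step themselves, one option is Team Cymru's public origin.asn.cymru.com DNS zone. A rough sketch in Python (it shells out to dig, so dig has to be installed; the address below is just a documentation example):

```python
# Map an IPv4 address to its originating ASN via Team Cymru's DNS service.
# The TXT answer looks roughly like: "ASN | prefix | country | registry | date".
import subprocess

def origin_asn(ip):
    reversed_ip = ".".join(reversed(ip.split(".")))   # 192.0.2.1 -> 1.2.0.192
    result = subprocess.run(
        ["dig", "+short", f"{reversed_ip}.origin.asn.cymru.com", "TXT"],
        capture_output=True, text=True, check=True)
    return result.stdout.strip()

print(origin_asn("192.0.2.1"))
```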
Exactly. And on the same note, have a look at this:
http://sitevet.com/db/asn/AS34109
According to sitevet, cyberbunker is ranked #43 for badness out of approximately 43,000 networks, making it dirtier than 99.9% of networks on the Internet…
But let me guess, the fact that Spamhaus, SiteVet, and a bunch of other organizations all think cyberbunker is a cesspool is not reason to McColo AS34109… instead it's evidence of an Internet-wide "Jewish conspiracy" against Sven…. right? Because that makes more sense, right?
Potato: “According to sitevet, cyberbunker is ranked #43 for badness”
First, for clarity, this response isn't to suggest that Cyberbunker wasn't a bad place. It's ONLY to point out some irregularities with your source of that claim.
Sitevet is new and clearly labeled as beta. Its whois record shows a fake address at MyUS.com in Florida. My guess is the phone number is also fake, probably a Florida Skype number. Tellingly, Sunbiz.org (the Florida Secretary of State website) lists no business by the name of Sitevet registered in the State of Florida.
Note that, Spamhaus is well known for creating such fake entities which they then cite in their reports to support their claims against hosts and sites. I’m not claiming that’s the case here, I’m only making it clear that it could be. Who is SiteVet, really?
That said, Sitevet seems to depend heavily on HE data. I’m not familiar with that, but looking at the site generating that data, I see they rank China as less of a risk than America, but list Russia as being the worst place in the world. My experience isn’t consistent with their data.
Nowhere in any of the data do I see what my own server logs tell me… The worst host in the world is Amazon AWS, which isn't in any RBL, yet generates more bad traffic than all other sources combined.
I’m obviously not alone in that. There are so many others with the same experience that the Amazon AWS IPs are published online so they can be blocked.
FYI – My logs show Microsoft IPs #2 for bad traffic.
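If anyone wants to run the same tally on their own logs, here's a minimal sketch — the file name and the assumption that the client IP is the first field of each line are mine, so adjust for your log format:

```python
# Count requests per client IP in a web server access log and print the top 10.
from collections import Counter

counts = Counter()
with open("access.log") as log:            # placeholder path
    for line in log:
        ip = line.split(" ", 1)[0]         # combined log format: IP is field 1
        counts[ip] += 1

for ip, n in counts.most_common(10):
    print(f"{n:8d}  {ip}")
```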
Frankly, I don’t have much confidence in the Internet security community. There’s a suspicious lack of transparency and I see a lot of effort going into shutting down places like Cyberbunker, which my logs have no hits from whatsoever, while totally ignoring spam and malware coming from American corporate networks, which just happens to be the industry’s main source of income. | true | true | true | A 35-year-old Dutchman thought to be responsible for launching what's been called "the largest publicly announced online attack in the history of the Internet" was arrested in Barcelona on Thursday by Spanish authorities. The man, identified by Dutch prosecutors only… | 2024-10-12 00:00:00 | 2013-04-26 00:00:00 | null | null | krebsonsecurity.com | Briankrebs | null | null |
35,114,899 | https://www.scattered-thoughts.net/writing/things-unlearned/ | Things unlearned | null | *This post is part of a series, starting at Reflections on a decade of coding.*
One of my favorite questions to ask people is: what are some things that you used to strongly believe but have now changed your mind about?
Here are some of mine.
## Everyone is doing it wrong
Here are some quotes that I would have agreed with 10 years ago:
Computing spread out much, much faster than educating unsophisticated people can happen. In the last 25 years or so, we actually got something like a pop culture, similar to what happened when television came on the scene and some of its inventors thought it would be a way of getting Shakespeare to the masses. But they forgot that you have to be more sophisticated and have more perspective to understand Shakespeare. What television was able to do was to capture people as they were. So I think the lack of a real computer science today, and the lack of real software engineering today, is partly due to this pop culture.
[...] the only programmers in a position to see all the differences in power between the various languages are those who understand the most powerful one. You can't trust the opinions of the others, because of the Blub paradox: they're satisfied with whatever language they happen to use, because it dictates the way they think about programs.
[...] clearly revolutionizes software as most know it. It could lead to efficient, reliable applications. But that won't happen. A mainstay of our economy is the employment of programmers. A winnowing by factor 100 is in no one's interest. Not the programmers, the companies, the government. To keep those programmers busy requires clumsy languages and bugs to chase.
[...] the reason we are facing bugs that kill people and lose fortunes, the reason that we are facing a software apocalypse, is that too many programmers think that schedule pressure makes it OK to do a half-assed job.
It's easy to find examples of this idea - that everyone is doing computers completely wrong and that there exist simple solutions (and often that everyone else is just too lazy/stupid/greedy/immoral to adopt them).
X is amazing, so why isn't everyone using it? They must be too lazy to learn new things. Y is such a mess, why didn't they just build something simple and elegant instead. They must just be corporate jobsworths who don't care about quality. Why are all those researchers excited about Z? They just don't understand what the real world is like from up in their ivory tower.
It's not limited to programming, of course:
Instead of losing faith in the power of government to work miracles, people believed that government could and should be working miracles, but that the specific people in power at the time were too corrupt and stupid to press the "CAUSE MIRACLE" button which they definitely had and which definitely would have worked. And so the outrage, the protests - kick these losers out of power, and replace them with anybody who had the common decency to press the miracle button!
It's so easy to think that simple solutions exist. But if you look at the history of ideas that actually worked, they tend to only be simple from a distance. The closer you get, the more you notice that the working idea is surrounding by a huge number of almost identical ideas that don't work.
Take bicycles, for example. They seem simple and obvious, but it took two centuries to figure out all the details and most people today can't actually locate the working idea amongst its neighbours.
Even when old niche ideas make a comeback (eg neural networks) it's not because they were right all along but because someone recognized the limitations and found a new variation on the idea that overcame them (eg deep learning).
I imagine some fans of the penny farthing groused about how everyone else was just too lazy or cowardly to ride them. But widespread adoption of bicycles didn't come from some general upswelling of moral fortitude. It came from someone figuring out a design that was less prone to firing the rider headfirst into the ground whenever they hit a bump.
Finding the idea that actually works amidst the sea of very similar ideas that don't work requires staying curious long enough to encounter the fine-grained detail of reality and humble enough to recognize and learn from each failure.
It's ok to think that things have flaws or could be improved. But it's a trap to believe that it's ever the case that a simple solution exists and everyone else is just too enfeebled of character to push the miracle button. All the miracle buttons that we know about have already been pressed.
I learned this the hard way at Eve. Starting from my very earliest writing about it there was a pervading idea that we were going to revolutionize everything all at once. It took me two years to gradually realize that we were just hopping from one superficial idea to another without making any progress on the fundamental problems.
I remember at one early point estimating that it would take me two weeks to put together a reasonable query planner and runtime. The first time I even came close to success on that front was 3 years later. Similarly for incremental maintenance, which I'm still figuring out 7 years later.
It's not that our ideas were bad. It's just that we assumed so strongly that the problems must be simple that we kept looking for simple solutions instead of making use of the tools that were available, and we kept biting off more than we could chew because we didn't believe that any of the problems would take us long to solve.
Contemporaries like airtable instead started by solving an appropriately-sized subset of the problem and putting in the years of work to progressively fill in all the tiny details that make their solution actually useful. Now they're in a solid position to keep chipping away at the rest of the problem.
## Programming should be easy
A similar trap hit often got me on a smaller scale. Whenever I ran up against something that was ugly or difficult, I would start looking for a simpler solution.
For example when I tried to make a note-taking app for tablets many years ago I had to make the gui, but gui tools are always kind of gross so I kept switching to new languages and libraries to try to get away from it. In each successive version I made less and less progress towards actually building the thing and had to cover more and more unknown ground (eg qtjava relies on using reflection to discover slots and at the time was difficult to implement the correct types from clojure). I wasted many hours and never got to take notes on my tablet.
If you have a mountain of shit to move, how much time should you spend looking for a bigger shovel? There's no obviously correct answer - it must depend on the size of the mountain, the availability of large shovels, how quickly you have to move it etc. But the answer absolutely cannot be 100% of your time. At some point you have to shovel some shit.
I definitely feel I've gotten better at this. When I wanted to write a text editor last year I spent a few days learning the absolute basics of graphics programming and text rendering, used mostly mainstream tools like sdl and freetype, and then just sat down and shoveled through a long todo list. In the end it only took 100 hours or so, much less time than I spent thrashing on that note-taking app a decade ago. And now I get to use my text editor all the time.
Sometimes the mountain isn't actually as big as it looks. And the nice thing about shoveling shit is that you get a lot faster with practice.
## The new thing is better
As a corollary to searching for the easy way, I've always been prone to spending far too much time on new or niche ideas. It's usually programming languages that get me, but I see other people do the same with frameworks, methodologies or architectures too. If you're really attracted to novelty you can spend all your time jumping between new things and never actually engage with the mainstream.
Mainstream ideas are mainstream for a reason. They are, almost by definition, the set of ideas which are well understood and well tested. We know where their strengths are and we've worked out how to ameliorate their weaknesses. The mainstream is the place where we've already figured out all the annoying details that are required to actually get stuff done. It's a pretty good place to hang out.
Of course there is value in exploring new ideas, but to be able to sift through the bad ideas and nurture the good ones you have to already thoroughly understand the existing solutions.
For example, at Eve I didn't read any of the vast literature on standard approaches to SQL query planning. I only looked at niche ideas that promised to be simpler or better, despite being completely untested (eg tetris-join). But even after implementing some hot new idea I couldn't tell if it was good or bad because I had no baseline to compare it to. Whereas a group that deeply understands the existing tools can take a new idea like triejoin and compare it to the state of the art, understand its strengths and weaknesses and use it appropriately.
I also remember long ago dismissing people who complained that some hot new niche language was missing a debugger. At the time I did that because I didn't see the need for a debugger when you could just reason about code algebraically. But in hindsight, it was also because I had never used a debugger in anger, had never watched anyone using a debugger skillfully, and had never worked on a project whose runtime behavior was complicated enough that a debugger would be a significant aid. And all of that was because I'd spent all my time in niche languages instead of becoming fluent in some ecosystem with mature tooling like java or c#.
The frontier is the place to go mining for new ideas, but it's 1% gold and 99% mud. If you live your whole life there you'll never know what indoor plumbing is like and you'll find yourself saying things like "real programmers don't need toilet paper".
## Learning X will make you a better programmer
For the most popular values of X, I haven't found this to be true.
I think these claims are a lot like how people used to say that learning latin makes you smarter. Sure, learning things is fun. And various bits of knowledge are often useful within their own domain. But overwhelmingly, the thing that made me better at programming was doing lots of programming, and especially working on problems that pushed the limits of my abilities.
### Languages
The first language I learned was haskell and for several years I was devoted to proclaiming its innate superiority. Later on I wrote real production code in ocaml, erlang, clojure, julia and rust. I don't believe any of this improved my programming ability.
Despite spending many years writing haskell, when I write code today I don't use the ideas that are idiomatic in haskell. I write very imperative code, I use lots of mutable state, I avoid advanced type system features. These days I even try to avoid callbacks and recursion where possible (the latter after a nasty crash at materialize). If there was an alternate universe where I had only ever learned c and javascript and had never heard of any more exotic languages, I probably still would have converged to the same style.
That's not to say that languages don't matter. Languages are tools and tools can be better or worse, and there has certainly been substantial progress in language design over the history of computing. But I didn't find that any of the languages I learned had a special juice that rubbed off on my brain and made me smarter.
If anything, my progress was often hampered by the lack of libraries, unreliable tools and not spending enough time in any one ecosystem to develop real fluency. These got in the way of working on hard problems, and working on hard problems was the main thing that actually led to improvement.
By way of counter-example, check out this ICFP contest retrospective. Nikita is using clojure, a pretty niche language, but has built up incredible fluency with both the language and the ecosystem so that he can quickly throw out web scrapers and gui editors. Whereas I wouldn't be able to quickly solve those problems in **any** language after flitting around from ecosystem to ecosystem for 12 years.
(See also A defense of boring languages, Your language sucks, it doesn't matter)
### Functional programming
(Specifically as it appears in haskell, clojure, elm etc.)
I **do** find it useful to try to organize code so that most functions only look at their explicit inputs, and where reasonable don't mutate those inputs. But I tend to do that with arrays and hashtables, rather than the pointer-heavy immutable structures typically found in functional languages. The latter imposes a low performance ceiling that makes many of the problems I work on much harder to solve.
The main advantage I see in functional programming is that it encourages tree-shaped data, one-way dataflow and focusing on values rather than pointer identity. As opposed to the graph-of-pointers and spaghetti-flow common in OOP languages. But you can just learn to write in that style from well-designed imperative code (eg like this or this). And I find it most useful at a very coarse scale. Within the scope of a single component/subsystem, mutation is typically pretty easy to keep under control and often very useful.
(Eg here the top-level `desugar`
function is more or less functional. Its internals rely heavily on mutation, but they don't mutate anything outside the `Desugarer`
struct.).
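As a rough illustration of that style (a made-up sketch, nothing to do with the projects above): the caller sees a pure value-in/value-out function, while the inside mutates a local hashtable freely.

```javascript
// Made-up example: pure from the outside, happily mutable on the inside.
function wordCounts(text) {
  const counts = {}; // local, mutable scratch state
  for (const word of text.toLowerCase().split(/\s+/)) {
    if (word === "") continue;
    counts[word] = (counts[word] || 0) + 1; // mutate freely in here
  }
  return counts; // nothing outside this function was touched
}

// wordCounts("the cat sat on the mat")
// => { the: 2, cat: 1, sat: 1, on: 1, mat: 1 }
```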
### Lambda calculus / category theory / automata / ...
Certain areas of maths and computer science attract a completely inappropriate degree of mystique. But, like languages, bodies of theory are tools that have a specific use.
- Lambda calculus is useful mainly as a simple standard language for explaining new PL ideas. You need to be familiar with it only if you want to read or write PL papers.
- Automata theory and language classes are only really useful if you're trying to expand the state of the art (eg inventing treesitter). Even though I write parsers all the time, in practice what I need to remember is a) write recursive descent parsers (like most major language implementations) b) google "pratt parsing" when dealing with operator precedence. (There's a tiny sketch of that idea just after this list.)
- Category theory is the only undergrad class I regret, a hundred hours of my life that has yet to help me solve a single problem or grant any fresh insight.
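To give a sense of how little machinery the parsing advice above actually needs, here's a tiny made-up sketch of precedence climbing (one flavour of the Pratt idea), evaluating arithmetic directly instead of building a tree:

```javascript
// Precedence climbing over an already-tokenized expression
// like ["1", "+", "2", "*", "3"]. Invented for illustration only.
const PRECEDENCE = { "+": 1, "-": 1, "*": 2, "/": 2 };

function parseExpr(tokens, pos = 0, minPrec = 1) {
  let lhs = Number(tokens[pos]); // primary expression: just numbers here
  pos += 1;
  // fold in any operator that binds at least as tightly as minPrec
  while (pos < tokens.length && PRECEDENCE[tokens[pos]] >= minPrec) {
    const op = tokens[pos];
    // the right-hand side only grabs operators that bind tighter than op
    const rhs = parseExpr(tokens, pos + 1, PRECEDENCE[op] + 1);
    if (op === "+") lhs += rhs.value;
    else if (op === "-") lhs -= rhs.value;
    else if (op === "*") lhs *= rhs.value;
    else lhs /= rhs.value;
    pos = rhs.next;
  }
  return { value: lhs, next: pos };
}

// parseExpr(["1", "+", "2", "*", "3"]).value === 7
```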
On the other hand, there are much less sexy areas that have been consistently useful throughout my entire career:
- Very basic probability and statistics are crucial for doing performance estimates, analyzing system behavior, designing experiments, making decisions in life in general.
- Having even the most basic Fisher-Price model of how hardware works makes it much easier to write fast software. (A back-of-envelope sketch follows this list.)
- Being fluent in the core language of mathematics (basic logic, sets, functions, proof techniques) makes it easy to pick up domain-specific tools when I need them eg statistical relational learning when working at relational.ai, bidirectional type inference for imp.
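As an example of the kind of Fisher-Price estimate mentioned above (all numbers invented, ballpark only):

```javascript
// Made-up back-of-envelope numbers: is a full table scan fast enough,
// or do we need to reach for an index?
const rows = 100_000_000;   // 1e8 rows
const bytesPerRow = 16;     // two 8-byte columns
const bandwidth = 10e9;     // ~10 GB/s streaming read on one core
const seconds = (rows * bytesPerRow) / bandwidth;
// ~0.16s: fine for a batch job, probably too slow inside a web request
```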
And of course my day-to-day work relies heavily on being able to construct proofs, analyze algorithms (with heavy caveats about using realistic cost models and not erasing constant factors), and being fluent in the various standard algorithmic techniques (hashing, sorting, recursion, amortization, memoization etc).
(See How to solve it for proof heuristics, How to prove it for core math literacy, Statistical rethinking for modelling probabilistic problems.)
I've nothing against theory as a tool. If you do data science, learn statistics. If you do computer graphics, learn linear algebra. Etc.
And if you're interested in eg the theory of computation for its own sake, that's great. It's a fascinating subject. It just isn't an effective way to get better at programming, despite people regularly proclaiming otherwise.
For all of the above, the real kicker is the opportunity cost. The years that I spent messing around with haskell were not nearly as valuable to me as the week I spent learning to use rr. Seeking out jobs where I could write erlang meant not seeking out jobs where I could learn how cpus work or how to manage a long-lived database. I don't write erlang any more, but I still use cpus sometimes.
Life is short and you don't get to learn more than a tiny fraction of the knowledge and skills available, so if you want to make really cool stuff then you need to spend most of your time on the highest-leverage options and spend only a little time on the lottery tickets.
I expect people to object that you never know what will turn out to be useful. But you can make smart bets.
If I could go back and do it again, I would spend the majority of my time trying to solve hard/interesting problems, using whatever were the mainstream languages and tools in that domain, and picking up any domain-specific knowledge that actually came up in the course of solving a problem. Focus on developing fluency and deep expertise in some area, rather than chasing the flavor of the day.
## Intelligence trumps expertise
People don't really say this explicitly, but it's conveyed by all the folk tales of the young college dropout prodigies revolutionizing everything they touch. They have some magic juice that makes them good at everything.
If I think that's how the world works, then it's easy to completely fail to learn. Whatever the mainstream is doing is ancient history, whatever they're working on I could do it in a weekend, and there's no point listening to anyone with more than 3 years experience because they're out of touch and lost in the past.
Similarly for programmers who go into other fields expecting to revolutionize everything with the application of software, without needing to spend any time learning about the actual problem or listening to the needs of the people who have been pushing the boulder up the hill for the last half century.
This error dovetails neatly with many of the previous errors above eg no point learning how existing query planners work if I'm smart enough to arrive at a better answer from a standing start, no point learning to use a debugger if I'm smart enough to find the bug in my head.
But a decade of mistakes later I find that I arrived at more or less the point that I could have started at if I had been willing to believe that the accumulated wisdom of tens of thousands of programmers over half a century was worth paying attention to.
And the older I get, the more I notice that the people who actually make progress are the ones who are keenly aware of the bounds of their own knowledge, are intensely curious about the gaps and are willing to learn from others and from the past. One exemplar of this is Julia Evans, whose blog archives are a clear demonstration of how curiosity and lack of ego is a fast path to expertise.
## Explore vs exploit
This is the core tradeoff embodied by many of the mistakes above. When faced with an array of choices, do you keep choosing the option that has a known payoff (exploit), or do you take a chance on something new and maybe discover a bigger payoff (explore)?
I've consistently leaned way too hard towards explore, leaving me with a series of low payoff lottery tickets and a much less solid base to execute from.
If I had instead made a conscious decision to spend, say, 2/3rds of my time becoming truly expert in some core set of safe choices and only 1/3rd exploring new things, I believe I would have come out a much more capable programmer and be able to solve more interesting problems. Because I've watched some of my peers do exactly that.
25,071,768 | https://njal.la/blog/we-dont-have-enemies/ | We don't have enemies | null | # We don't have enemies
We don't have enemies. Enemies have us though.
It turns out that the RIA (not the artist thank god) and MPA, Motion Picture Association, are annoyed with the fact that Njalla is not shutting off domains on their illegal requests. We think it's important that courts do their work in order to have some legal grounds for blocking something as important as infrastructure.
We have always stood up for freedom of speech and privacy, and worked against censorship. As a group we work tirelessly on promoting and extending these very basic human rights to everyone on the globe. We're not afraid of a little good PR from people that see us as enemies. In fact, when these groups see Njalla as something bad, it's because they see basic human rights as something negative. The fact that they're also upset with entire countries like Iceland makes us think someone had a bad day when Trump was not re-elected.
These lobbyist groups have now listed Njalla on their "notorious markets" list. We're proud to be on the list, since it means we have made an impact. Other people/groups/movements that look at us as notorious include the anti-abortion groups, pro-animal-cruelty groups, the anti-democratic movements, the alt-right and, not to forget, Nazis. They're upset with us not shutting off activists' domains in these fields as well. History will tell if MPA/nazis/anti-abortion groups, or the rest of us, are on the right side.
It's also come to our attention that MPA doesn't like Cats. Hitler didn't have cats, he had dogs.
15,624,421 | https://azure.microsoft.com/en-us/free/ | Create Your Azure Free Account Or Pay As You Go | Microsoft Azure | null | # Build in the cloud with an Azure account
Get started creating, deploying, and managing applications—across multiple cloud, on-premises, and at the edge—with scalable and cost-efficient Azure services.
## Choose the Azure account that’s right for you
Pay as you go or try Azure free for up to 30 days. There’s no upfront commitment—cancel anytime.
### Azure free account
Best for proof of concept and exploring capabilities
###### $200 credit to use on Azure services within 30 days
### Pay as you go
Best for customers ready to start building workloads.
###### Pay only for what you use beyond free monthly amounts
[*] During the signup verification process, there may be a temporary $1 authorization placed on your card, which will be reversed upon verification.
FREE SERVICES
## Take advantage of free products
These products are free up to the specified monthly amounts. Some are always free to all Azure customers, and some are free for 12 months to new customers only.
12 months
#### Azure Virtual Machines—Windows
750 hours each of B1s, B2pts v2 (Arm-based), and B2ats v2 (AMD-based) burstable VMs
12 months
#### Azure Virtual Machines—Linux
750 hours each of B1s, B2pts v2 (Arm-based), and B2ats v2 (AMD-based) burstable VMs
Always
#### Azure SQL Database
100,000 vCore seconds of SQL database serverless usage per month with 32 GB of storage
12 months
#### Azure Blob Storage
5 GB locally redundant storage (LRS) hot block with 20,000 read and 10,000 write operations
Always
#### AI Speech
0.5 million neural characters per month
Always
#### AI Search
50 MB storage for 10,000 hosted documents and 3 indexes per service
12 Months
#### AI Document Intelligence
500 pages S0 tier
12 Months
#### AI Vision
5,000 transactions for each S1, S2, and S3 tier
RESOURCES
## Learn more about Azure
### Take the next step
### Start building on Azure free
Get free services and a $200 credit to explore Azure for up to 30 days.
### Get started with pay as you go
Pay only for what you use beyond free amounts of services.
## Frequently asked questions
- Read the full offer terms to find information on service-level agreements, the cancellation policy, and other important account details.
- It costs nothing to start with Azure. If you sign up for an Azure free account, you won’t be charged anything unless you decide to move to pay-as-you-go pricing. With pay-as-you-go pricing, you’ll be charged only for what you use beyond your monthly free amounts of services. During the signup verification process, a one-dollar (or equivalent) temporary authorization charge may be placed on your card and then removed upon verification.
- When you create your Azure account, you start getting monthly free amounts of certain types of services. Some are always free to all Azure customers, and some are free to new customers until 12 months after you created your account. If you sign up for an Azure free account, you’ll need to move to pay-as-you-go pricing within 30 days or after you’ve used your credit (whichever happens first) to continue to receive free services.
- Each month you receive specified free amounts of certain types of services. With pay-as-you-go pricing, if you exceed your monthly free amounts for any services, you’ll be billed for them at pay-as-you-go rates. For the services that are free for 12 months, any services you’re using after 12 months has expired will continue to run and you’ll be billed for them at pay-as-you-go rates.
- The Azure free account provides access to all Azure services and does not block customers from building their ideas into production. The Azure free account includes certain types of specific services—and certain amounts of those services—for free. To enable your production scenarios, you may need to use resources beyond the free amounts. If you choose to move to pay as you go, you’ll be billed for those additional resources at pay-as-you-go rates.
- Only specific amounts of certain services are free. If you sign up for an Azure free account, you’ll get a $200 credit that you can apply to try services that aren't in the free list, or to use more than your free amounts of any services. You have up to 30 days to use your credit.
- If you sign up for an Azure free account, you can see your remaining credit under Microsoft Cost Management in the Azure portal.
- We’ll never charge you unless you decide to move to pay-as-you-go pricing. If you move to pay as you go, you’ll only be charged for services that you use above the free monthly amounts. You can check your usage of free services in the Azure portal.
- If you sign up for an Azure free account, we’ll notify you that it’s time to decide if you want to move to pay-as-you-go pricing. If you do, you’ll continue to receive free services and be able to purchase services beyond free amounts. If you don’t, your account and services will be disabled. To resume usage, you'll need to move to pay as you go.
- If you sign up for an Azure free account and move to pay-as-you-go pricing before the end of 30 days, you won’t lose your credit. When you move to pay as you go, any credit you have remaining will still be available for the full 30 days from when you created your free account.
- All you need is a phone number, a credit card or a debit card (non-prepaid), and a Microsoft account or a GitHub account. Only credit cards are accepted in Hong Kong and Brazil.
- We use the phone number and credit card or debit card for identity verification to validate that account holders are real people and not bots. We don't charge your credit card or debit card anything when you sign up for Azure, but you may see a one-dollar (or equivalent) verification hold on your credit card or debit card account. The hold is temporary and will be removed. Azure free account customers will not incur any charges. For pay-as-you-go customers, the credit or debit card you provide will be used as your default payment method when you authorize payment.
- You can sign up with either a Microsoft account or a GitHub account.
- Use the "Sign in with GitHub" option on the Azure sign-in page. When you first sign into a Microsoft product with your credentials, GitHub will ask for your permission to consent. GitHub will share with Microsoft the name and public and private email addresses on your GitHub account to check if you already have a Microsoft account. If it looks like you already have an account, you’ll have the option to use that account and add your GitHub account as a login method. Otherwise, a new account will be created and linked to the GitHub account.
- Microsoft is committed to user privacy. The request for profile information is used to check for the presence of an existing Microsoft account and to create an account if needed. Once your information enters the Microsoft ecosystem, it’s protected by the Microsoft terms of service and isn’t shared without your permission. Connecting a GitHub identity to a Microsoft account will not give Microsoft any code access.
- No, your Azure free account credit can’t be applied to Azure Marketplace offers and you can only purchase products on Azure Marketplace once you’ve moved to pay-as-you-go pricing. However, many Azure Marketplace partners offer free trials and/or free tier plans for their solutions.
- There is a limit of one account with free services and $200 credit per new customer.
- The Azure Hybrid Benefit cannot be combined with the Azure free account credit. You’ll need to move to pay-as-you-go pricing to be able to use your Azure Hybrid Benefit.
- No, but if you move to pay as you go after you use your credit, you can purchase Spot Virtual Machines at deep discounts compared to pay-as-you-go pricing for virtual machines.
- Contact Sales by chat, phone, or email and we'll be happy to answer any questions you have.
## Is Azure available in my location?
Azure is commercially available in 140 countries and regions, including those listed below. Azure free account, free services, and $200 credit don't include sovereign cloud products in US Government, Azure China, and Azure Germany regions.
5,326,958 | http://www.scq.ubc.ca/a-dialogue-with-sarah-aged-3-in-which-it-is-shown-that-if-your-dad-is-a-chemistry-professor-asking-%E2%80%9Cwhy%E2%80%9D-can-be-dangerous/ | A DIALOGUE WITH SARAH, AGED 3: IN WHICH IT IS SHOWN THAT IF YOUR DAD IS A CHEMISTRY PROFESSOR, ASKING “WHY” CAN BE DANGEROUS | SCQ | Stephen McNeil | # A DIALOGUE WITH SARAH, AGED 3: IN WHICH IT IS SHOWN THAT IF YOUR DAD IS A CHEMISTRY PROFESSOR, ASKING “WHY” CAN BE DANGEROUS
SARAH: Daddy, were you in the shower?
DAD: Yes, I was in the shower.
SARAH: Why?
DAD: I was dirty. The shower gets me clean.
SARAH: Why?
DAD: Why does the shower get me clean?
SARAH: Yes.
DAD: Because the water washes the dirt away when I use soap.
SARAH: Why?
DAD: Why do I use soap?
SARAH: Yes.
DAD: Because the soap grabs the dirt and lets the water wash it off.
SARAH: Why?
DAD: Why does the soap grab the dirt?
SARAH: Yes.
DAD: Because soap is a surfactant.
SARAH: Why?
DAD: Why is soap a surfactant?
SARAH: Yes.
DAD: That is an EXCELLENT question. Soap is a surfactant because it forms water-soluble micelles that trap the otherwise insoluble dirt and oil particles.
SARAH: Why?
DAD: Why does soap form micelles?
SARAH: Yes.
DAD: Soap molecules are long chains with a polar, hydrophilic head and a non-polar, hydrophobic tail. Can you say ‘hydrophilic’?
SARAH: Aidrofawwic
DAD: And can you say ‘hydrophobic’?
SARAH: Aidrofawwic
DAD: Excellent! The word ‘hydrophobic’ means that it avoids water.
SARAH: Why?
DAD: Why does it mean that?
SARAH: Yes.
DAD: It’s Greek! ‘Hydro’ means water and ‘phobic’ means ‘fear of’. ‘Phobos’ is fear. So ‘hydrophobic’ means ‘afraid of water’.
SARAH: Like a monster?
DAD: You mean, like being afraid of a monster?
SARAH: Yes.
DAD: A scary monster, sure. If you were afraid of a monster, a Greek person would say you were gorgophobic.
(pause)
SARAH: (rolls her eyes) I thought we were talking about soap.
DAD: We are talking about soap.
(longish pause)
SARAH: Why?
DAD: Why do the molecules have a hydrophilic head and a hydrophobic tail?
SARAH: Yes.
DAD: Because the C-O bonds in the head are highly polar, and the C-H bonds in the tail are effectively non-polar.
SARAH: Why?
DAD: Because while carbon and hydrogen have almost the same electronegativity, oxygen is far more electronegative, thereby polarizing the C-O bonds.
SARAH: Why?
DAD: Why is oxygen more electronegative than carbon and hydrogen?
SARAH: Yes.
DAD: That’s complicated. There are different answers to that question, depending on whether you’re talking about the Pauling or Mulliken electronegativity scales. The Pauling scale is based on homo- versus heteronuclear bond strength differences, while the Mulliken scale is based on the atomic properties of electron affinity and ionization energy. But it really all comes down to effective nuclear charge. The valence electrons in an oxygen atom have a lower energy than those of a carbon atom, and electrons shared between them are held more tightly to the oxygen, because electrons in an oxygen atom experience a greater nuclear charge and therefore a stronger attraction to the atomic nucleus! Cool, huh?
(pause)
SARAH: I don’t get it.
DAD: That’s OK. Neither do most of my students.
40,851,662 | https://en.wikipedia.org/wiki/Cosmic_ray_visual_phenomena | Cosmic ray visual phenomena - Wikipedia | null | # Cosmic ray visual phenomena
**Cosmic ray visual phenomena**, or **light flashes** (**LF**), also known as **Astronaut's Eye**, are spontaneous flashes of light visually perceived by some astronauts outside the magnetosphere of the Earth, such as during the Apollo program. While LF may be the result of actual photons of visible light being sensed by the retina,[1] the LF discussed here could also pertain to phosphenes, which are sensations of light produced by the activation of neurons along the visual pathway.[2]
## Possible causes
Researchers believe that the LF perceived specifically by astronauts in space are due to cosmic rays (high-energy charged particles from beyond the Earth's atmosphere[3]), though the exact mechanism is unknown. Hypotheses include Cherenkov radiation created as the cosmic ray particles pass through the vitreous humour of the astronauts' eyes,[4][5] direct interaction with the optic nerve,[4] direct interaction with visual centres in the brain,[6] retinal receptor stimulation,[7] and a more general interaction of the retina with radiation.[8]
## Conditions under which the light flashes were reported
Astronauts who had recently returned from space missions to the Hubble Space Telescope, the International Space Station and Mir Space Station reported seeing the LF under different conditions. In order of decreasing frequency of reporting in a survey, they saw the LF in the dark, in dim light, in bright light and one reported that he saw them regardless of light level and light adaptation.[9] They were seen mainly before sleeping.
## Types
Some LF were reported to be clearly visible, while others were not. They manifested in different colors and shapes. How often each type was seen varied across astronauts' experiences, as evident in a survey of 59 astronauts.[9]
### Colors
On Lunar missions, astronauts almost always reported that the flashes were white, with one exception where the astronaut observed "blue with a white cast, like a blue diamond." On other space missions, astronauts reported seeing other colors such as yellow and pale green, though rarely.[10] Others instead reported that the flashes were predominantly yellow, while others reported colors such as orange and red, in addition to the most common colors of white and blue.[9]
### Shapes
The main shapes seen are "spots" (or "dots"), "stars" (or "supernovas"), "streaks" (or "stripes"), "blobs" (or "clouds") and "comets". These shapes were seen at varying frequencies across astronauts. On the Moon flights, astronauts reported seeing the "spots" and "stars" 66% of the time, "streaks" 25% of the time, and "clouds" 8% of the time.[10] Astronauts who went on other missions reported mainly "elongated shapes".[9] About 40% of those surveyed reported a "stripe" or "stripes" and about 20% reported a "comet" or "comets". 17% of the reports mentioned a "single dot" and only a handful mentioned "several dots", "blobs" and a "supernova".
## Motion
A reporting of motion of the LF was common among astronauts who experienced the flashes.[9] For example, Jerry Linenger reported that during a solar storm, they were directional and that they interfered with sleep since closing his eyes would not help. Linenger tried shielding himself behind the station's lead-filled batteries, but this was only partly effective.[11]
The different types of directions that the LF have been reported to move in vary across reports. Some reported that the LF travel across the visual field, moving from the periphery of the visual field to where the person is fixating, while a couple of others reported motion in the opposite direction. Terms that have been used to describe the directions are "sideways", "diagonal", "in-out" and "random".[9][10] In Fuglesang *et al.* (2006), it was pointed out that there were no reports of vertical motion.[9]
## Occurrences and frequencies
There appear to be individual differences across astronauts in terms of whether they reported seeing the LF or not. While these LF were reported by many astronauts, not all astronauts have experienced them on their space missions, even if they have gone on multiple missions.[9] For those who did report seeing these LF, how often they saw them varied across reports.[9] On the Apollo 15 mission all three astronauts recorded the same LF, which James Irwin described as "a brilliant streak across the retina".[12]
### Frequency during missions
On Lunar missions, once their eyes became adapted to the dark, Apollo astronauts reported seeing this phenomenon once every 2.9 minutes on average.
On other space missions, astronauts reported perceiving the LF once every 6.8 minutes on average.[9] The LF were reported to be seen primarily before the astronauts slept and in some cases disrupted sleep, as in the case of Linenger. Some astronauts pointed out that the LF were seemingly perceived more frequently as long as they were perceived at least once before and attention was directed to the perception of them. One astronaut,[13] on his first flight, only took note of the LF after being told to look out for them. These reports are not surprising considering that the LF may not stand out clearly from the background.
### Fluctuations during and across missions
Apollo astronauts reported that they observed the phenomenon more frequently during the transit to the Moon than during the return transit to Earth. Avdeev *et al.* (2002) suggested that this might be due to a decrease in sensitivity to the LF over time while in space.[13] Astronauts on other missions reported a change in the rate of occurrence and intensity of the LF during the course of a mission.[9] While some noted that the rate and intensity increased, others noted a decrease. These changes were said to take place during the first days of a mission. Other astronauts have reported changes in the rate of occurrence of the LF across missions, instead of during a mission. For example, Avdeev himself was on Mir for six months during one mission, six months during the second mission a few years later and twelve months during a third mission a couple of years after. He reported that the LF were seen less frequently with each subsequent flight.[13]
Orbital altitude and inclination have also correlated positively with rate of occurrence of the LF. Fuglesang *et al.* (2006) have suggested that this trend could be due to the increasing particles fluxes at increasing altitudes and inclinations.[9]
## Experiments
### ALFMED experiment
During the Apollo 16 and Apollo 17 transits, astronauts conducted the Apollo Light Flash Moving Emulsion Detector (ALFMED) experiment where an astronaut wore a helmet designed to capture the tracks of cosmic ray particles to determine if they coincided with the visual observation. Examination of the results showed that two of fifteen tracks coincided with observation of the flashes. These results in combination with considerations for geometry and Monte Carlo estimations led researchers to conclude that the visual phenomena were indeed caused by cosmic rays.[14][15]
### SilEye-Alteino and ALTEA projects
The SilEye-Alteino and Anomalous Long Term Effects in Astronauts' Central Nervous System (ALTEA) projects have investigated the phenomenon aboard the International Space Station, using helmets similar in nature to those in the ALFMED experiment. The SilEye project has also examined the phenomenon on Mir.[13] The purpose of this study was to examine the particle tracks entering the eyes of the astronauts when the astronaut said they observed a LF. In examining the particles, the researchers hoped to gain a deeper understanding of what particles might be causing the LF. Astronauts wore the SilEye detector over numerous sessions while on Mir. During those sessions, when they detected a LF, they pressed a button on a joystick. After each session, they recorded their comments about the experience. Particle tracks that hit the eye during the time when the astronauts indicated that they detected a LF would have had to pass through silicon layers, which were built to detect protons and nuclei and distinguish between them.
The findings show that "a continuous line" and "a line with gaps" were seen a majority of the time. With less frequency, a "shapeless spot", a "spot with a bright nucleus" and "concentric circles" were also reported.[13]: 518 The data collected also suggested to the researchers that one's sensitivity to the LF tends to decrease during the first couple of weeks of a mission. With regards to the probable cause of the LF, the researchers concluded that nuclei are likely to be the main cause. They based this conclusion on the finding that, compared with an "All time" period, the "In LF time window" period saw the nucleus rate increase by a factor of about six to seven, while the proton rate only increased by a factor of about two. Hence, the researchers ruled out the Cherenkov effect as a probable cause of the LF observed in space, at least in this case.
### Ground experiments in the 1970s
Experiments conducted in the 1970s also studied the phenomenon. These experiments revealed that although several explanations for why the LF were observed by astronauts have been proposed, there may be other causes as well. Charman *et al.* (1971) asked whether the LF were the result of single cosmic-ray nuclei entering the eye and directly exciting the eyes of the astronauts, as opposed to the result of Cherenkov radiation within the retina. The researchers had observers view a neutron beam, composed of either 3 or 14 MeV monoenergetic neutrons, in several orientations, relative to their heads. The composition of these beams ensured that particles generated in the eye were below 500 MeV, which was considered the Cherenkov threshold, thereby allowing the researchers to separate one cause of the LF from the other. Observers viewed the neutron beam after being completely dark-adapted.[7]
The 3 MeV neutron beam produced no reporting of LF whether it was exposed to the observers through the front exposure of one eye or through the back of the head. With the 14 MeV neutron beam, however, LF were reported. Lasting for short periods of time, "streaks" were reported when the beam entered one eye from the front. The "streaks" seen had varying lengths (a maximum of 2 degrees of visual angle), and were seen to either have a blueish-white color or be colorless. All but one observer reported seeing fainter but a higher number of "points" or short lines in the center of visual field. When the beam entered both eyes in a lateral orientation, the number of streaks reported increased. The orientation of the streaks corresponded to the orientation of the beam entering the eye. Unlike in the previous case, the streaks seen were more abundant in the periphery than the center of visual field. Lastly, when the beam entered the back of the head, only one person reported seeing the LF. From these results, the researchers concluded that at least for the LF seen in this case, the flashes could not be due to Cherenkov radiation effects in the eye itself (although they did not rule out the possibility that the Cherenkov radiation explanation was applicable to the case of the astronauts). They also suggested that because the number of LF observed decreased significantly when the beam entered the back of the head, the LF were likely not caused by the visual cortex being directly stimulated as this decrease suggested that the beam was weakened as it passed through the skull and brain before reaching the retina. The most probable explanation proposed was that the LF were a result of the receptors on the retina being directly stimulated and "turned on" by a particle in the beam.
In another experiment, Tobias *et al.* (1971) exposed two people to a beam composed of neutrons ranging from 20 to 640 MeV after they were fully dark-adapted. One observer, who was given four exposures ranging in duration from one to 3.5 seconds, observed "pinpoint" flashes. The observer described them as being similar to "luminous balls seen in fireworks, with initial tails fuzzy and heads like tiny stars". The other observer who was given one exposure lasting three seconds long, reported seeing 25 to 50 "bright discrete light, he described as stars, blue-white in color, coming towards him".[8]: 596
Based on these results, the researchers, like in Charman *et al.* (1971), concluded that while the Cherenkov effect may be the plausible explanation for the LF experienced by astronauts, in this case, that effect cannot explain the LF seen by the observers. It is possible that the LF observed were the result of interaction of the retina with radiation. They also suggested that the tracks seen may point to tracks that are within the retina itself, with the earlier portions of the streak or track fading as it moves.
Considering the experiments conducted, at least in some cases the LF observed appear to be caused by activation of neurons along the visual pathway, resulting in phosphenes. However, because the researchers cannot definitively rule out the Cherenkov radiation effects as a probable cause of the LF experienced by astronauts, it seems likely that some LF may be the result of Cherenkov radiation effects in the eye itself, instead. The Cherenkov effect can cause Cherenkov light to be emitted in the vitreous body of the eye and thus allow the person to perceive the LF.[9] Hence, it appears that the LF perceived by astronauts in space have different causes. Some may be the result of actual light stimulating the retina, while others may be the result of activity that occurs in neurons along the visual pathway, producing phosphenes.
## See also
## References
1. Hecht, Selig; Shlaer, Simon; Pirenne, Maurice Henri (July 1942). "Energy, Quanta, and Vision". *Journal of General Physiology*. **25** (6): 819–840. doi:10.1085/jgp.25.6.819. PMC 2142545. PMID 19873316.
2. Dobelle, W. H.; Mladejovsky, M. G. (December 1974). "Phosphenes produced by electrical stimulation of human occipital cortex, and their application to the development of a prosthesis for the blind". *The Journal of Physiology*. **243** (2): 553–576. doi:10.1113/jphysiol.1974.sp010766. PMC 1330721. PMID 4449074.
3. Mewaldt, R. A. (1996). "Cosmic Rays". In Rigden, John S. (ed.). *MacMillan Encyclopedia of Physics*. Vol. 1. Simon & Schuster MacMillan. ISBN 978-0-02-897359-3. Archived from the original on 30 August 2009. Retrieved 27 August 2016.
4. Narici, L.; Belli, F.; Bidoli, V.; Casolino, M.; De Pascale, M. P.; et al. (January 2004). "The ALTEA/ALTEINO projects: studying functional effects of microgravity and cosmic radiation" (PDF). *Advances in Space Research*. **33** (8): 1352–1357. Bibcode:2004AdSpR..33.1352N. doi:10.1016/j.asr.2003.09.052. PMID 15803627.
5. Tendler, Irwin I.; Hartford, Alan; Jermyn, Michael; LaRochelle, Ethan; Cao, Xu; Borza, Victor; Alexander, Daniel; Bruza, Petr; Hoopes, Jack; Moodie, Karen; Marr, Brian P.; Williams, Benjamin B.; Pogue, Brian W.; Gladstone, David J.; Jarvis, Lesley A. (2020). "Experimentally Observed Cherenkov Light Generation in the Eye During Radiation Therapy". *International Journal of Radiation Oncology, Biology, Physics*. **106** (2). Elsevier BV: 422–429. doi:10.1016/j.ijrobp.2019.10.031. ISSN 0360-3016. PMC 7161418. PMID 31669563.
6. Narici, L.; Bidoli, V.; Casolino, M.; De Pascale, M. P.; Furano, G.; et al. (2003). "ALTEA: Anomalous long term effects in astronauts. A probe on the influence of cosmic radiation and microgravity on the central nervous system during long flights". *Advances in Space Research*. **31** (1): 141–146. Bibcode:2003AdSpR..31..141N. doi:10.1016/S0273-1177(02)00881-5. PMID 12577991.
7. Charman, W. N.; Dennis, J. A.; Fazio, G. G.; Jelley, J. V. (April 1971). "Visual Sensations produced by Single Fast Particles". *Nature*. **230** (5295): 522–524. Bibcode:1971Natur.230..522C. doi:10.1038/230522a0. PMID 4927751. S2CID 4214913.
8. Tobias, C. A.; Budinger, T. F.; Lyman, J. T. (April 1971). "Radiation-induced Light Flashes observed by Human Subjects in Fast Neutron, X-ray and Positive Pion Beams". *Nature*. **230** (5296): 596–598. Bibcode:1971Natur.230..596T. doi:10.1038/230596a0. PMID 4928670. S2CID 4260225.
9. Fuglesang, Christer; Narici, Livio; Picozza, Piergiorgio; Sannita, Walter G. (April 2006). "Phosphenes in Low Earth Orbit: Survey Responses from 59 Astronauts". *Aviation, Space, and Environmental Medicine*. **77** (4): 449–452. PMID 16676658.
10. Sannita, Walter G.; Narici, Livio; Picozza, Piergiorgio (July 2006). "Positive visual phenomena in space: A scientific case and a safety issue in space travel". *Vision Research*. **46** (14): 2159–2165. doi:10.1016/j.visres.2005.12.002. PMID 16510166. S2CID 18240658.
11. Linenger, Jerry M. (13 January 2000). *Off The Planet: Surviving Five Perilous Months Aboard The Space Station MIR*. McGraw-Hill. ISBN 978-0-07-136112-5.
12. Irwin, James B. (1983). *More Than Earthlings*. Pickering & Inglis. p. 63. ISBN 978-0-7208-0565-9.
13. Avdeev, S.; Bidoli, V.; Casolino, M.; De Grandis, E.; Furano, G.; et al. (April 2002). "Eye light flashes on the Mir space station". *Acta Astronautica*. **50** (8): 511–525. Bibcode:2002AcAau..50..511A. doi:10.1016/S0094-5765(01)00190-4. PMID 11962526.
14. "Experiment: Light Flashes Experiment Package (Apollo light flash moving emulsion detector)". Experiment Operation During Apollo IVA at 0-g. NASA. 2003. Archived from the original on 11 May 2014.
15. Osborne, W. Zachary; Pinsky, Lawrence S.; Bailey, J. Vernon (1975). "Apollo Light Flash Investigations". In Johnston, Richard S.; Dietlein, Lawrence F.; Berry, Charles A. (eds.). *Biomedical Results of Apollo*. Vol. NASA-SP-368. NASA. NASA SP-368.
38,545,627 | https://robconery.com/software-design/does-functional-programming-still-matter/ | 🤖 Does Functional Programming Matter To You? | null | # 🤖 Does Functional Programming Matter To You?
*December 05, 2023* | Software-design
*Learning Elixir changed me as a programmer, and learning functional concepts changed the way I think about writing software. How about you? Is functional programming a useful thing to learn?*
It seemed like functional programming got a boost back in the mid to late 2010s when Elixir started gaining in popularity. I, for one, had my entire professional outlook turned inside out by getting to know this language and the underlying BEAM runtime and OTP framework.
I couldn't understand why we hadn't always worked this way. I didn't understand why OTP and frameworks like it weren't the norm! I began to understand, however, why functional programming people tend to be ... passionate functional programming people.
Now you might be wondering if the title is clickbait and I don't think it is because I am genuinely curious about your answer. If you're receiving this via email, I would love a reply! I found functional concepts to be life-changing, literally, changing the way I think about code, tests, and putting applications together.
What do I mean? Here are a few things...
## Purity
You might know this already but "pure code" is completely self-contained and doesn't rely on anything outside of its scope. A simple example would be a math function (using JavaScript here):
```
const squareIt = function(num){
return num * num;
}
```
I know there are more elegant ways to do this and guards to put on here but you get the idea: this is a *pure* function.
Let's change the above function to be *impure*:
```
const RobsConstant = 2.58473;
const squareItRobsWay = function(num){
return num * num * RobsConstant;
}
```
My function will now *behave differently* if the value of `RobsConstant`
changes, which it shouldn't because it's a constant and all, but it's possible that I could redefine this value and pull it from a database, who knows! My function sure doesn't, and it's possible that we could introduce an error at some point (turning `RobsConstant`
into a string, for instance) which is really, really annoying.
If we were being good functional people, we would use two functions and shove them together:
```
const RobsConstant = function(){
return 2.58473;
};
const squareIt = function(num){
return num * num;
}
const squareItRobsWay = RobsConstant() * squareIt(4);
```
This seemingly small change is profound! We can test both functions to make sure they do what they're supposed to, which means we can have full confidence that our `squareItRobsWay`
value should *always* return what we expect (again: assuming we have tests in place).
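Just to make that concrete, here's roughly what those tests might look like — a quick sketch using Node's built-in `assert` module (any test framework would do the same job):

```javascript
const assert = require("assert");

// both pieces are pure, so the tests are just values in, values out
assert.strictEqual(squareIt(4), 16);
assert.strictEqual(squareIt(-3), 9);
assert.strictEqual(RobsConstant(), 2.58473);

// and the combination is nothing more than arithmetic over known-good parts
assert.strictEqual(RobsConstant() * squareIt(4), 2.58473 * 16);
```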
## Currying. Crazy Talk.
You may have heard this term when talking to a functional person and thought it sounded a bit *mathy*. I know I did. Currying is splitting a function with multiple arguments into a chain of smaller functions with only a single argument.
Dig this:
```
const buildSelect = function(table, criteria, order,limit){
let whereClause="", params=[];
if(criteria){
whereClause = "where 1=$1" //pseudo code, obvs
params=[1] //placeholder for this example
}
const orderClause = order ? `order by ${order}` : ""
const limitClause = limit ? `limit ${limit}` : ""
const sql = `select * from ${table} ${whereClause} ${orderClause} ${limitClause}`;
return {sql: sql, params: params};
}
```
I'm punting on writing out the `where`
stuff because it's not important. What *is* important is the idea that we have code here that we can use elsewhere. If we put our functional hats on, focus on *purity*, we can actually *curry* this into a set of smaller functions that only do one thing:
```
const where = function(item){
//build a where clause by inspecting the item
return item ? `where 1=$1` : "";
}
const params = function(item){
//create parameters from the criteria item
return item ? [1] : "";
}
const orderBy = function(clause){
return clause? `order by ${clause}` : ""
}
const limitTo = function(clause){
return clause ? `limit ${clause}` : "";
}
const selectQuery = table => criteria => order => limit => {
//create a where statement if we have criteria
const sql = `select * from ${table} ${where(criteria)} ${orderBy(order)} ${limitTo(limit)}`;
return {sql: sql, params: params(criteria)};
};
```
Believe it or not, this works! We can invoke it like this:
```
const sql = selectQuery("products")({sku: "one"})("cost desc")(100);
console.log(sql);
```
```
❯ node query.js
{
sql: 'select * from products where 1=$1 order by cost desc limit 100',
params: [ 1 ]
}
```
In functional languages you typically chain methods together, passing the result of one function right into another. In Elixir, we could build this exact function set and start with the table name, passing along what we need until we have the select statement we want:
```
"products"
|> where(%{sku: "one"})
|> orderBy("cost desc")
|> limitTo(100)
|> select
```
This, right here, is a *functional transformation.* You have a bunch of small functions that you pass a bit of data through, transforming it as you need.
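JavaScript doesn't have Elixir's `|>` operator, but you can fake the same left-to-right flow with a tiny helper. This is just a sketch (the helper and names are invented, not part of the example above):

```javascript
// thread a value through a list of single-argument functions, left to right
const pipe = (...fns) => (input) => fns.reduce((acc, fn) => fn(acc), input);

const shout = (s) => s.toUpperCase();
const exclaim = (s) => `${s}!`;

const announce = pipe(shout, exclaim);
announce("functional transformation"); // "FUNCTIONAL TRANSFORMATION!"
```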
## Partial Application
The first draft of this post went out as an email so if you're here from that email you didn't see this section! Sorry - it happens.
You might be looking at that invocation wanting to barf, but that's not how you would use this code. Typically, you would build up a *partial* use of the functions, like this:
```
//assuming there's a sales_count in there
const topProducts = selectQuery("products")()("sales_count desc");
```
We're *partially applying* our functions to create a new one that shows us the top *n* products, which we can specify in this way:
```
const top10Products = topProducts(10);
console.log(top10Products);
```
This is where things get really, really interesting. Functional composition is at the heart of functional programming, and damned fun to use, too! Here's what our code will generate for us:
```
{
sql: 'select * from products order by sales_count desc limit 10',
params: ''
}
```
Small, simple functions, easily testable, composable, and the clarity is wonderful.
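If writing every function as a chain of arrows feels heavy, you can also curry an ordinary multi-argument function mechanically. Here's a rough sketch of such a helper (invented for illustration; note it leans on the function's declared arity):

```javascript
// turn f(a, b, c) into something you can call as f(a)(b)(c) or f(a, b)(c)
const curry = (fn) => {
  const collect = (...args) =>
    args.length >= fn.length
      ? fn(...args)
      : (...more) => collect(...args, ...more);
  return collect;
};

const add3 = curry((a, b, c) => a + b + c);
add3(1)(2)(3); // 6
add3(1, 2)(3); // 6
```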
So: what do you think? Is this style of programming interesting, simpler, elegant or horrid? Or is everything just React these days :D.
There's more, of course, and I made a fun video many years ago for *The Imposter's Handbook*, which you can watch right here, if you like!
4,143,457 | http://semiaccurate.com/2012/06/21/consent-decree-over-microsoft-goes-back-to-monopoly-forced-bundling/ | Consent decree over, Microsoft goes back to monopoly forced bundling | Charlie Demerjian | Remember that pesky monopoly verdict against Microsoft that ended in a toothless consent decree? Would it shock you to hear that the second the DOJ’s eyes are off, Microsoft jumped right back to forced bundling to stifle competition?
No, we are not talking about the locked bootloader that Microsoft is mandating its partners use on any WART (Windows ARM RT) device, this one is more devious. Remember how Microsoft bundled Internet Explorer with Windows 98 by force, making it non-removable and mandatory, in order to crush the life out of Netscape? They are doing it again with Office, but this time it isn’t ‘free’ you have to pay full price for it, and the target is iOS and Android. Luckily, the last vestiges of that pesky consent decree expired a few years ago, and at least in the US, government oversight is openly purchasable.
What am I talking about? When SemiAccurate was at Computex a few weeks ago, we confirmed the rumors that Microsoft is asking $80-90 for WART licenses from OEMs. That is a lot, infinitely more than what Apple and Android licensees pay. How can they justify this when a full blown copy of Windows 7 costs large OEMs only $35 or so? Two to three times that is a lot for a cut down version, especially in a market that is as price constrained as tablets. That kind of BoM premium makes a massive difference to end user pricing, rising yet farther when you have to pack more hardware in to support Microsoft code bloat.
For an end user, the MSRP of Windows 7 Home Premium OEM is $99 at many online stores. That means that if you are a Dell/HP/Lenovo class buyer, as any decent WART tablet maker would be, you pay about 1/3 of the retail price for the same sticker and OS license. Fair enough, that is about par for the course in OEM software volume sales. This means OEMs selling the x86 version of Surface will pay about 40% of the Microsoft tax that their ARM based brethren do for WART. Bear in mind that WART is a subset of Windows 8, it has only a fraction of the capabilities that the full Windows does, yet costs more than twice as much. That seems strange, very strange, but luckily, there is a good explanation for the difference.
That difference is Office, WART has it bundled for ‘free’, Windows 8 does not. It may seem like an odd choice, but Microsoft is indeed bundling Office Home & Student RT 2013 with every WART license. There is no other option, take both or nothing, even if you are a Tier 1 OEM. While Microsoft has not commented on the matter, anyone want to place bets on whether it is uninstallable or “integral to the functionality of WART”? Me neither, but even if it is, Microsoft still gets paid for a full Office license.
Currently, Office Home & Student 2010, the latest released version, costs $119 from multiple online vendors. This is for a one license downloadable version, the copy with CD and three licenses costs $129. If you take the same 1/3rd multiplier that retail buyers of Windows 7 OEM licenses have vs large Tier 1 vendors, that would price a volume key for Office Home & Student 2010 at about $40 for Dell et al. It may be off by a bit, but that is the right general ballpark according to SemiAccurate’s sources.
So, if you take $40 for volume Office Home & Student 2010 (OHS), add it to the $35 for Windows 7 Premium OEM volume license, you get $75. If Dell, Lenovo, HP, and other massive players pay $75 for the two separately, you can be damn sure that the smaller Taiwanese vendors, let's call them Tier 1.5s, pay a bit of a premium over that. Anyone think 10-20% would be out of line? While it is just speculation, that puts the two licenses dead on top of the rumored $80-90 cost for a WART license with the attendant forced OHS-RT 2013 bundle. What a strange coincidence, don't you think?
So basically, Microsoft is at it again. They can't compete on a fair playing field, so they are leveraging their monopoly to force vendors into excluding, or at least paying 'full' price for their software. Anything the vendors want to include, should there be any left by the launch date, has to ride on top of that. Actually, it has to ride on top of the WART bundle AND can only be sold through Microsoft's store, for which they get a 30% share, not to mention the ability to shut down competitive products on a whim. Monopolies are a nice job if you can get them, or is it keep them?
Why does Microsoft bother? Tablets are basically always online, either via 3/4G or WiFi; they are pretty close to doorstops without network connectivity. Once you have a network, services like Google Docs and other attendant office-suites-as-a-service options are quite compelling. Free vs paying $125 per device isn't a tough choice for the overwhelming majority of mobile users. If you look at phones, the only ones that have Office, the Microsoft version of Office that is, are WinCE/Windows Phone 7. With their mobile OS sales in the low single digits and dropping rapidly, Microsoft can't afford a repeat on tablets.
Because Office is Microsoft’s last remaining monopoly leverage point, they aren’t shy about abusing it. They won’t make Office for the iPad or Android because then there would be no reason for anyone to even consider buying a WART device. Instead, they are waging a massive PR war about how iPads and Android tablets are not suitable for business, only WART is. Or will be someday, but until then, don’t buy the competition if you even consider Office necessary. MS Office once again, not those pretenders that cost far less and don’t lock you in.
So they are once again abusing their monopoly to make sure that if you want Office, you HAVE to buy a Microsoft WART tablet. Only Microsoft controls what platforms Office goes on, and however much sense it makes to put it on OSes other than WART, and however much money that would bring in, there is no way they will allow it to happen. Monopolies are not only for shutting out potential competition, they are also about self-protection, and Microsoft is quite aware of this.
Convenient, eh? And don't forget, the last time they tried this it was quite illegal too. This time, it may or may not be legal, but the monopoly leveraging tactics have not changed a bit. Microsoft management may be incompetent and unable to comprehend the mobile space, but they sure don't change. WART plus a forced Office bundle is nothing more than Windows 98 plus Internet Explorer, only this time you are paying full price for both. If you want proof that Microsoft is absolutely lost in the mobile space, now you have it. **S|A**
33,324,748 | https://medium.com/backchannel/how-steve-jobs-fleeced-carly-fiorina-79d1380663de | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
16,328,142 | http://chartsbin.com/view/45327 | Top 10 Countries by Robot Density | null | This chart shows
**Top 10 Countries by Robot Density.**

**A robot** is a machine, especially one programmable by a computer, capable of carrying out a complex series of actions automatically. Robots can be guided by an external control device or the control may be embedded within. Robots may be constructed to take on human form, but most robots are machines designed to perform a task with no regard to how they look.

**Robots can be autonomous or semi-autonomous** and range from humanoids such as Honda's Advanced Step in Innovative Mobility (ASIMO) and TOSY's TOSY Ping Pong Playing Robot (TOPIO) to industrial robots, medical operating robots, patient assist robots, dog therapy robots, collectively programmed swarm robots, UAV drones such as the General Atomics MQ-1 Predator, and even microscopic nano robots.

The branch of technology that deals with the **design, construction, operation, and application of robots**, as well as computer systems for their control, sensory feedback, and information processing, is robotics. These technologies deal with automated machines that can take the place of humans in dangerous environments or manufacturing processes, or resemble humans in appearance, behavior, and/or cognition.

Many of today's robots are inspired by nature, contributing to the field of **bio-inspired robotics**. These robots have also created a newer branch of robotics: soft robotics.

From the time of ancient civilization there have been many accounts of user-configurable automated devices and even automata resembling animals and humans, designed primarily as entertainment.

Robots have replaced humans in performing repetitive and **dangerous tasks** which humans prefer not to do, or are unable to do because of size limitations, or which take place in extreme environments such as outer space or the bottom of the sea.

Robotic **characters, androids** (artificial men/women) or **gynoids** (artificial women), and **cyborgs** (also "bionic men/women", or humans with significant mechanical enhancements) have become a staple of science fiction.
8,562,629 | http://infospectives.co.uk/2014/11/05/govt-vs-tech-companies-will-we-have-gchq-rummaging-in-our-draws/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
2,297,421 | http://splitsider.com/2011/01/why-nielsen-ratings-are-inaccurate-and-why-theyll-stay-that-way/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
6,550,351 | http://klaig.blogspot.com/2013/10/taking-over-resized-tmux-session.html | Taking over a resized tmux session. | null | The thing is, when you come from a low-res screen to a high-res screen you can land on a resized session like this:
Tmux is simply trying to accommodate all the clients currently connected. If you left your session open on your laptop at home, or simply closed it quickly, you might be stuck in this situation for a while.
Here is how I unstuck myself from within tmux...
`tmux detach -a`

Or in your .zshrc/.bashrc as an alias:

`alias takeover="tmux detach -a"`

Fixed:
## No comments:
## Post a Comment | true | true | true | So I use tmux quite intensively and I reuse my long lived session from my workstation and my laptop. The thing is when you come from a low ... | 2024-10-12 00:00:00 | 2013-10-14 00:00:00 | null | blogspot.com | klaig.blogspot.com | null | null |
|
9,899,503 | http://www.theguardian.com/world/2015/jul/16/why-chinas-stock-market-bubble-was-always-bound-burst | Why China’s stock market bubble was always bound to burst | Orville Schell | Orville Schell | Over the past few weeks, punters in China underwent a near-death experience when their country’s two stock exchanges entered freefall. The rapidly inflating bubble that had driven share prices to dizzying heights had suddenly burst. By this spring, the stock markets in Shanghai, with 831 listed companies, and Shenzhen, with 1,700, boasted a combined market capitalisation of $9.5tn, which made them – along with the much older Hong Kong exchange – the second-largest financial market in the world.
After languishing for the past four years, these two Chinese stock markets suddenly took off last summer, becoming a cauldron of voracious buying, selling and spectacular profit-taking. Shares of newly listed companies soared thousands of percentage points within months of their initial public offerings, driven upward by a new and growing cadre of relatively unsophisticated private investors that included tens of millions of ordinary workers, farmers, housewives and pensioners. According to one widely cited survey of these new investors, 67% of them have less than a high-school education.
The fact that Chinese stocks were climbing ever higher while the Chinese economy was cooling should have been an unmistakable warning of a bubble, but it caused surprisingly little concern. (Another reason to worry might have been the disparity in prices between so-called “A-shares”, which can only be purchased by investors inside China to keep the domestic market shielded from outside foreign manipulation, and stakes in the same companies available to foreign investors through the Hong Kong exchange, known as “H-Shares”. This disparity suggested Chinese investors were bidding up prices well beyond any reasonable approximation of their value.) In fact, drawn by the casino-like profits to be made in the boom, more and more small investors flocked to the thousands of brokerage houses that are now proliferating in every Chinese city in order to buy and sell while staring up at flickering electronic data boards charting the rise and fall of equity prices.
With markets rising in straight lines on graphs plotting their progress – the Shanghai exchange had shot up some 135%, and the Shenzhen exchange had gone even higher at 150% in less than a year – stocks had begun to seem like a sure bet for Chinese investors with fevered dreams of quick wealth. They promised a much higher rate of return than traditional low-interest bank savings accounts, which have paltry annual yields of barely 2%. At the time of the crash, 9% of Chinese households – some report the figure as high as 200 million people – had bought into the booming market. Steadily rising prices seemed to be delivering on both Deng Xiaoping’s promise of “a relatively well-off society” (*xiaokang shehui*) and the current president Xi Jinping’s rhetoric of a full-blown “Chinese dream” (*zhongguo meng*) – a fuzzy notion that promises wealth, wellbeing, and power to individuals and the nation as a whole.
China had already experienced a dangerous bubble in its residential housing market, but in that case the government had succeeded in engineering a relatively soft landing by raising interest rates, limiting the number of residences one owner could buy in such cities as Beijing and Shanghai and levying a new tax. Accordingly, it was all too easy for small investors to assume that the bull market was implicitly backed by a kind of unwritten government guarantee – that the good times were only beginning to roll, and the state would step in to take care of things before the bottom fell out. In fact, the government itself had become bedazzled by the seemingly invincible rise in stock prices. Instead of dedicating its energy to regulating the markets, the Chinese Communist party began to see an unprecedented opportunity in further inflating the bubble – a chance to sell equity stakes in dangerously debt-burdened state enterprises and help clean up some very messy balance sheets. If the planting of two stock markets on soil long ploughed by Maoist sloganeering about “capitalist roaders” was a mild surprise, it was mind-bending to witness the party embrace the bull market so ardently that even its official voice, the People’s Daily, began to flog stocks as a golden risk-free opportunity.
When the Shanghai index reached a new pinnacle of 4,000 in April, a column in the People’s Daily effused that this “was only the start of a bull market”. “What’s a bubble?” it asked insouciantly. “Tulips and bitcoins are bubbles … But if A-shares are seen as the bearer of the Chinese dream, then they contain massive investment opportunities.”
With the party cheerleading the market’s inexorable rise, it became even easier to imagine that the government was, in effect, declaring an informal debt obligation on the stock market’s future – essentially covering any bets with its own considerable assets. How could you lose? And so the bubble grew and grew: price-to-earnings ratios for Chinese stocks averaged an astounding 70-to-1, against a worldwide average of 18.5 to 1; the value of the A-shares inside China grew to be nearly double the equivalent shares of the same companies on the Hong Kong exchange. Ordinary Chinese people had become so intoxicated by bull-market euphoria that stories began to proliferate about people leaving their jobs, and even their families, to become day traders, often using funds borrowed from high-interest rate “shadow banks” or loans taken out against their homes. (By last weekend, margin borrowing on both exchanges had surpassed $320bn, representing almost 10% of the total market capitalisation of all stocks being traded.)
Of course, some observers could see the inherent weakness in this seemingly irrepressible market, but with so much money to be made, it was easy to ignore or disparage warnings of anomalies and irregularities. Few wanted to ruin the heady ride with naysaying and pessimism, particularly as some 7% of households had invested 15-20% of all their assets in shares. When the World Bank released its China Economic Update Report this spring, noting that the state had gone beyond its role as a regulator and guarantor for financial systems and engaged in more active interference, its officials met with strong opposition from the Chinese government.
“Instead of promoting the foundations for sound financial development,” the World Bank’s report cautioned, “the state has interfered extensively and directly in allocating resources through administrative and price controls, guarantees, credit guidelines, pervasive ownership of financial institutions and regulatory policies.”
It was a prescient warning, given what followed. But almost immediately after the report appeared, the chapter containing these cautions suddenly vanished: unspecified Chinese officials had taken umbrage at such direct criticism, and forced the World Bank to redact the offending portion of its analysis. Chinese officialdom has often had difficulty accepting criticism – especially from outside experts in a public setting – and whether this critical shortcoming is a consequence of traditional mores or a Leninist political culture is hard to determine. However, such sensitivities have frequently prevented Chinese officials from identifying and fixing problems before they erupt into crises. In the case of China’s stock market bubble, almost no one wanted to listen to voices of caution – and the whole country was to pay a bitter price for avoiding reality. For what was at stake was not just the integrity of China’s financial system, but the question of the Chinese people’s ongoing confidence in their government. This would be critical as the country continued, in Deng Xiaoping’s words, “to cross the river by feeling its way over the stones” (*mozhe shitou guohe*). Indeed, there is no nation of significance in the world today that is in the process of undergoing a more challenging transition: from Maoist revolution to stakeholder in the modern, marketised global world.
**When the Shanghai stock exchange** composite index plunged from its high of 5,166 to just about 3,700 over the course of a few short weeks in June and early July – wiping out some $3tn worth of market value – its precipitous fall stunned the nation. The collective sense of shock owed something to the fact that over the past 25 years, China had not encountered many economic crises that officials found impossible to handle through state intervention. So, when this one began, the natural reaction was to attempt to "manage" the collapsing share prices through more state intervention, beginning with a surprise interest rate cut on 27 June. As if the stock markets were still just cogs in a socialist command economy, the government then ordered up a whole menu of non-market fixes: first, a moratorium on all new public offerings; then, on 8 July, a six-month prohibition on share sales by company directors or any listed shareholder owning a 5% stake in that company. By this time, more than half of the listed companies on the two Chinese exchanges had suspended trading of their own stocks; state-owned enterprises (SOEs) were instructed not to sell shares.
At the same time, the China Securities Regulatory Commission created a "market stabilisation fund" of $19.3bn and ordered brokerage houses to start buying into the market themselves. The commission also relaxed rules on margin trading so that government pension funds and other SOEs could also answer the party's call to prop up the market themselves by buying shares; it lowered securities trading fees; and even promised to purchase shares through its own proprietary accounts, and to continue doing so for as long as the composite index for both exchanges remained below the new target of 4,500.
To reassure "the people" that the government had not turned bearish on the foundering markets, the People's Daily again rhapsodised about the glories of investment over "the long term": "It is after storms that we encounter rainbows," it wrote. "Looking back at the development of China's capital market, we realise that the road to development has not always been smooth, but has instead been a twisting one with ups and downs. But it is in each lesson learned that the market has matured … So participants in the market should earnestly reflect, collectively sum up their experiences, and then work together to achieve a capital market that is stable and can continue to develop in a healthy manner over the long term."
But such florid rhetoric was too little and too late to soothe the agitated investors who had just lost their shirts. Not surprisingly, the unofficial press and the internet had erupted with the sentiments of people furious about both their financial losses and what they regarded as the Communist party’s poor leadership.
One young investor in Manchuria blamed “the state’s inadequate regulation”, and pointed to the government’s role in inflating the bubble as a reason for it to forcefully intervene to prop up prices. “The bull market was itself a policy-driven one, so only major policies can save it,” he told the New York Times.
Another angry investor – who claimed to have lost most of his savings – posted a complaint that quickly went viral: “This was a stock wipeout that thoroughly damaged middle-classed assets from a decade of striving,” he wrote. “For us the ‘Chinese dream’ is just a dream.”
This was a particularly pointed attack on Xi’s own roseate notion of the “Chinese dream” – which the president has relied upon to inspire his people forward on what he calls “the road to rejuvenation”, and to galvanise their nationalism against the various “hostile foreign forces” (*jingwai didui shili*) that he regards as a threat to China’s continuing rise.
Even China’s feared Public Security Bureau was summoned to meet the challenge of the plunging stock markets. Its charge was to investigate, and possibly arrest, what the New China News Agency called “malicious” short sellers (which in China is not an illegal practice) – a group of malefactors said by some Chinese media outlets to include the American financier George Soros.
The extent of the party’s belated panic may have been best captured by the absurd instruction given by party propaganda officials to students at Tsinghua University School of Economics and Management, who were told to loudly chant, at the start of their graduation ceremony, “Revive the A-shares! Benefit the people!” (After news of this effort went viral, the chant was changed at the last minute to something more anodyne.)
In 1989, four days after the massacre of protesters at Tiananmen Square, a gloomy Deng Xiaoping told his military commanders, “There was this storm that was bound to happen sooner or later.” He was doubtless referring to the way he had opened up Chinese society over the preceding decade to many democratic ideas, which helped precipitate the monumental demonstrations in Beijing that spring. Were Deng still alive to watch the Chinese stock exchanges that he himself had initiated crash some 25 years later, he might have said the same thing. For the rise, and fall, of these markets was a delayed but equally inevitable outcome of another one of his very unlikely and risky experiments.
Upon returning to power in 1978 after Mao’s death, Deng feared that China would fall further behind the rest of the world if it did not quickly initiate radical reforms. Proclaiming a bold new programme of “reform and opening up” (*gaige kaifang*), he called on the Chinese to begin borrowing from other economic and political systems. “We mustn’t fear to adopt the advanced management methods applied in capitalist countries,” he said.
It was a shocking exhortation from a communist leader who had spent his entire life steeped in Maoist revolution and class struggle. But it was only the first of a string of surprising public utterances and policies during these early pioneering years in the 1980s. Part of Deng’s broad regimen of reform involved testing the transplantation of indelibly capitalist ideas and institutions into the host body of China’s hitherto completely Marxist-Leninist system.
One of his boldest experiments began in the late 1980s, when, like a mad scientist playing with the creation of new hybrid species, he called for the creation of two capitalist-style stock exchanges in China. The first was to be established in Shanghai, a city that had been the wellspring of the most virulently leftist form of violent Maoist class struggle during the Cultural Revolution. The second was to be set up in Shenzhen, then a brand new city-in-the-making that was itself another one of Deng’s bold experiments. It involved the creation of four new coastal special economic zones, intended to promote Chinese development by allowing more unfettered interaction with the outside world.
Observers at the time – including myself, then covering these counterintuitive changes in China for the New Yorker – were left scratching our heads in puzzlement. How were these capitalist organs ever going to be transplanted into the living body of Chinese socialism, much less take root and thrive?
Deng, who quaintly dubbed all these experiments “socialism with Chinese characteristics” (*zhongguo you teside shehui zhuyi*), seemed unfazed by all the contradictions inherent in these plans. But, as someone who had first come to China during the Cultural Revolution, I found it difficult not to be sceptical about such projects succeeding – given the clash between what then appeared to be two wildly irreconcilable ideological positions. When the Shanghai exchange opened its first proper trading floor just across the Garden Bridge in what, before 1949, had been the grand old colonial Astor Hotel, it was a mind-bending experience to gaze out over the traders’ desks to the electronic index boards (some of the first I had seen in China!) where the share prices of the few score listed companies were displayed. It was hard not to wonder how Chairman Mao’s vaunted Chinese communist revolution had ever come to this. How had such a reviled capitalist institution fallen into such an apostate land?
However, as Deng’s new amalgam economy began to gather momentum, private businesses did begin to proliferate and the economy did begin to take off. Soon, the sceptics were left to wonder whether Deng might not have stumbled on a new hybrid economic model after all – one that was not only workable, but very dynamic. If so, he had done a masterful end-run around all the old verities of our own western economic development theory, systems and experience.
But as bold, inventive and successful as Deng Xiaoping’s new economy came to seem over the ensuing years, the implanting of such market-oriented entities as stock exchanges into what was still largely an unreconstructed one-party, centrally planned state economy created a situation that seemed destined – as Deng had lamented in 1989 – to trigger a storm sooner or later. In the world of economics, there are few institutions more dependent than stock exchanges on an ability to respond in an unfettered way to market forces. After all, in the world of politics, however, there are few systems more dedicated to maintaining central control and empowering state intervention than that of the People’s Republic of China. In short, it seemed highly unlikely these opposite tendencies could coexist in happy synergy for ever: what we have seen playing out in China over the past few weeks, then, was a kind of delayed autoimmune reaction to having such an alien presence transplanted within it. The host might survive, but the organ was going to have to adapt to be accepted.
**By the end of last week,** Chinese markets had started to respond to the government’s many interventions, and a slight rebound began. But although they were still up 82% over a year ago, they had fallen 28% from their high in June – and this week, they began dipping once again. Furthermore, some half of all listed companies – representing 40% of the market’s total value – had suspended trading, creating a gross market distortion, augmented by the fact most “buyers” in the market were now government-funded surrogates ordered to do so, not value-conscious investors.
It is hard to imagine that all this behind-the-scenes manipulation will not dent the confidence of investors in the future: the already tenuous connection between share prices and actual corporate value will now be even more uncertain, when the government, in effect, has its thumb on the scales. Added to that fear is the real possibility that shareholders who re-enter the market may find themselves in the future holding untradable and therefore illiquid shares if the government again decides to freeze market operations in response to a sharp decline. Almost immediately after the crash, the overseas investors who had bought into Chinese companies through the Hong Kong exchange began pulling their capital out of Chinese stocks – the most prolonged period of net outflow since the programme of trading via Hong Kong began last November.
So it is still far too early to speak of China’s stock markets as having “stablised” or “returned to normalcy”. (To begin with, many company shares are still not trading.) By making the risky choice to move in and prop up share prices as they fell, party leaders effectively took ownership of markets whose proper functioning requires them to remain independent. Evidently, they felt that their credibility as China’s grand air-traffic controllers was being put at risk – that they would lose credibility, respect and even face if they did not confront the plunging market head on and at least give the appearance of being in control.
Here it must be said that, because China is a closed society and the negotiations among its leadership are so opaque, we on the outside rarely know who really makes decisions about important issues – or by what process they are made; all we can see are the results that follow. So when we speak of the wishes of “party leaders”, or even of “President Xi”, we are referring to a world that we can only see through a glass darkly.
But in this case, the consequences of the decision are clear. By acting so intrusively, party leaders have left themselves subject to what Colin Powell memorably called the “Pottery Barn rule” – if you break it, you own it. Suddenly, China’s stock exchanges have become wards of the Chinese Communist party – and their fate hardly bodes well for Xi’s declaration that the nation’s economic salvation will lie in allowing market forces to play a greater role in the allocation of resources.
There is an ancient Chinese aphorism: "Sleeping in the same bed, but dreaming different dreams" (*Tongchuang yimeng*). By initiating so many innovative reforms back in the 1980s, Deng Xiaoping not only created some strange bedfellows, but bequeathed a complex legacy of interbred institutions to later generations. By introducing so many conflicting ideas and institutions from different systems into the heart of what was still the Chinese communist revolution, Deng Xiaoping became the progenitor of what ended up being a virtual counter-revolution against Maoism. And while his "reform and opening up" did infuse his country with significant new dynamism, it also put a series of troublesome institutional and ideological contradictions at the centre of China's whole post-Mao landscape. Stock markets were only one of the most obvious and graphic examples of these contradictions.
When the stock bubble – which had so inflated the listed value of shares that a natural correction was inevitable – finally burst, China did appear to be undergoing a kind of organ failure. In the view of Beijing’s attending neo-Maoist physicians, this failure called for emergency life support. Ever worried about any perturbation in the field that might undermine social stability, China’s “socialist” *apparat* did what it knows how to do best when alarmed. Instead of standing by and waiting while Adam Smith’s “invisible hand” restabilised the markets – albeit at a considerably lower valuation – Xi and his leadership sprung into action. Having evidently decided that this precipitous correction was a dire threat not just to China’s economic health, but also to social and political stability, they sought to levitate share prices back up again through whatever forms of state intervention and manipulation they could muster.
The party might have been excused if it had simply eschewed responsibility for what was happening. After all, markets have a logic of their own that makes them both rise and fall according to their own forces. But, instead of simply saying, “Not our problem”, it launched a massive socialist-style rescue campaign, thereby making the party responsible for everything that happened thereafter.
Why did the party allow itself to become stuck in this quicksand? Leaders evidently felt themselves threatened not only by the collapsing share prices, but by what they also feared would be perceived as an erosion of their own credibility. What they seem to have concluded was at stake was their ability to continue projecting an image of omnipotence – the appearance, at least, of being strong enough to continue guiding and controlling “all under heaven” (*tianxia*).
This is an ancient notion, dating back to imperial times, when an emperor’s reign was believed to be legitimised by a so-called “mandate of heaven,” (*tianming*), that conferred the right on a sovereign to rule. Any untoward sign of heaven’s disfavour, it was believed, would be manifested through such things as earthquakes, rebellions, droughts or other disturbances in the usual order of things. And since such events were invariably viewed as ominous end-of-dynasty symbols, emperors were always strongly allergic to them.
Of course, emperors of old never had to wrangle modern stock markets. But today, a good market crash fits right into these ancient – and still deeply embedded – ideas in the popular mind. Indeed, in the modern context, it is not hard to see how a crashed financial market might be viewed as a powerful suggestion that party leaders are losing heaven’s favour and their own legitimacy, and, worse, that a new dynastic cycle may be in the offing. And, of course, the last people the party wished to alienate were members of its burgeoning middle class, whose quiescence has long depended on the economy moving onward and upward without major interruption.
Since such residual traditional logic remains deep in the bloodstream of the Chinese people, it most certainly played a role in goading party leaders into attempting to bring these two insubordinate “capitalist” markets to heel as quickly as possible to provide proof that they still held the right to rule. Of course, when such modern markets are rising, they *ipso facto* help prove that “the mandate” is still conferring legitimacy on China’s rulers, and so can be left unattended. But when they begin to founder, the ensuing dysfunction creates the suggestion that the mandate is weakening. Such a threat begs urgent intervention.
**The obvious contradiction between** a largely self-regulating financial market and a highly controlled and centralised economy is a graphic representation of China’s divided modern-day self. What characterises China today is that it is in the middle of a process of uncertain change in many ways. Its once purely socialist command economy is now partially socialist and partially capitalist, and it is this collision that helped trigger the drama of clashing systems and values we have been watching play out over the past few weeks.
President Xi, like his predecessor Hu Jintao, speaks often about the Confucian virtues of harmony (*hexie*) and stability (*wending*). Perhaps this is because every time he and the six other members of the ruling politburo standing committee turn around, they run into disaccord and instability. And many of the problems they now confront grow out of the fact that, for the past century, China has been in the process of tectonic self-reinvention, seeking to recreate not only a new political system, economic structure and value system, but a whole new national identity. This long, painful, and complicated process began with the collapse of China’s ancient imperial system and its sustaining Confucian ideology in 1912. Thereafter, China was left with the daunting task of filling the remaining vacuum – of recreating itself from the ground up. In serial fashion it experimented with Sun Yat-sen’s Republicanism, Chiang Kai-shek’s east-west syncretism, Mao Zedong’s sinified communism, Deng Xiaoping’s hybrid capitalist-socialism, Jiang Zemin and Hu Jintao’s consensus authoritarianism; and now … now what? What will the essence of Xi Jinping’s reinvented China be? What exactly is his vision of financial markets in a vibrant new economy? One that does the party’s bidding or one that responds independently to market signals and market value, as other global capital markets do? And what is his larger vision for China’s economy, political system, rule of law, civil society, and system of core values? What, in effect, is his model?
Although we can see a few shadowy outlines of answers emerging as China’s reform odyssey continues, we still do not really know exactly where Xi intends to take the nation. To look into his “Chinese dream” is to see an aspiration for a country that is wealthier, more powerful and better respected. If you look at Xi’s domestic policies it is possible to see an ominously Mao-tinged autocrat whose answer to most problems seems to be more discipline, controls and toughness. But there is little else. And so, because China will almost certainly remain caught between transitions for some time to come, the resolution of crises such as stock market crashes will remain an uncertain and parlous business. The Maoist toolbox into which Xi now seems to reach with increasing frequency when problems occur provides him with few suitable tools for handling many of the complexities of 21st-century economic markets.
China’s future remains unclear, because so much of its agenda is still a work in progress. To have a capitalist stock market being played like a casino by tens of millions of freebooting speculators right in the middle of a society still purporting to be socialist and run by a communist party with a deep affinity for rigid, Leninist, interventionist controls speaks to the contradictory nature of the modern Chinese dilemma. It is a living embodiment of what Mao Zedong liked to refer to as an “antagonistic contradiction” (*diwo maodun*) – an inconsistency that can only be resolved through struggle, even violence.
Alas, such an unenviable predicament is hardly limited to China’s stock markets. In certain telling ways the response of the nation’s leaders to the recent market crash is emblematic of a much larger dilemma – one that sits right at the heart of China’s uneasy fusion of communism and free-market economics, a system with little precedent and no operating manual. In the years to come, with many opposing principles and forces in an unresolved state of contention, it is unlikely, as China’s great experiment in self-recreation goes forward, that its hallmarks will be “harmony” and “stability”.
|
13,447,304 | https://blog.docker.com/2017/01/cpu-management-docker-1-13/ | Docker Blog | Giri Sreenivas | Docker announces significant upgrades to its subscription plans, delivering more value, flexibility, and tools for customers of all sizes.
# Docker Blog
## How to Improve Your DevOps Automation
Learn how to improve your DevOps automation to streamline processes across your software development lifecycle.
## A New Era at Docker: How We’re Investing in Innovation and Customer Relationships
Docker’s Chief Revenue Officer thanks customers for being part of our story, especially as we continue to evolve in a rapidly changing ecosystem.
## Leveraging Testcontainers for Complex Integration Testing in Mattermost Plugins
Learn how Mattermost has embraced Testcontainers to overhaul its testing strategy, achieving greater automation, improved accuracy, and seamless plugin integration.
## Using an AI Assistant to Script Tools
In this Docker Labs GenAI series installment, learn how to use an AI assistant to script a tool based on a specific definition.
## Docker Best Practices: Using Tags and Labels to Manage Docker Image Sprawl
Learn best practices for using tags and labels to manage image sprawl in Docker container workflows.
## Exploring Docker for DevOps: What It Is and How It Works
We explore the use of Docker for DevOps and explain how the combination can help developers create more efficient and powerful workflows.
## 2024 Docker State of Application Development Survey: Share Your Thoughts on Development
Take the 2024 Docker State of Application Development Survey now. The survey is open from September 23rd, 2024 (7AM PST) to November 20, 2024 (11:59PM PST).
## Using an AI Assistant to Read Tool Documentation
Explore how to use Docker and LLMs to streamline workflows for command-line tools to enhance the process of reading docs, troubleshooting errors, and running commands. | true | true | true | Read our blog to find the latest Docker updates, news, technical breakdowns, and lifestyle content. | 2024-10-12 00:00:00 | 2021-09-23 00:00:00 | article | docker.com | Docker | null | null |
|
7,184,471 | http://community.spiceworks.com/topic/440257-spiceworks-closes-57m-financing | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
31,451,660 | https://doordash.engineering/2022/05/18/enabling-faster-financial-partnership-integrations-using-cadence/ | Enabling Faster Financial Partnership Integrations Using Cadence | Lev Neiman; Wenhan Shen | Financial partnerships are tricky to manage, which is why DoorDash needed the right technology stack to quickly onboard new DashPass partners. The challenge was that each partner brought with them a diverse set of conditions and rules that our system needed to be able to accommodate without skipping a beat. To ensure that these integrations could be carried out quickly, we needed to choose a technology stack that would let us manage all our partner considerations and onboard them to the platform in a timely manner.
After a thorough technology review of the leading task processing technologies, we chose Cadence as the task processing engine and opted to follow the separation of concerns (SoC) design principle in order to gain reliability and visibility and to encapsulate the details. Below we will explain the challenges of ensuring faster DashPass partner integrations and how we conducted a technology review to select Cadence as the best technology to help us speed up integrations.
## Background: How DashPass partnerships work
DashPass partners with several banks, including Chase and RBC, to offer credit card customers a free DashPass for limited periods of time. To provide this benefit, each partner must be integrated into the DashPass system for card eligibility checks and reconciliation. But integrating each partner with our systems took an extended amount of time — RBC integration took several quarters — because of a variety of challenges, including:
- Different business requirements for each financial partner
- Varying synchronous and asynchronous reconciliation processes
- Race conditions resulting in data corruption and unexpected behavior
- Unclear ownership that reduces overall engineering efficiency and creates confusion in team collaboration.
We were able to resolve each of these challenges by building a more coherent platform that speeds up the onboarding process considerably.
### Challenge 1: Integration logic varies between financial institutions
Each partner has established different rules around how customers will be allowed to enjoy the DashPass benefit. These differences can be related to factors like how long the customer gets the benefit, when the benefit kicks in or lapses and more.
Such complexities lead to multiple branches in the decision tree, causing our code base to grow more complex as more partners come on board. If we fail to build solutions to contend with this branching, our code becomes more difficult to read, maintain, and scale.
### Challenge 2: Each institution handles reconciliation differently
Reconciliation is an essential part of dealing with transactions on cards we cannot yet verify, a process known as multi-match. But each institution deals with reconciliation differently. For example, some conduct reconciliation synchronously, while others require asynchronous reconciliation over multiple days. To enable a good user experience in multi-match cases, we may have to compensate after a certain period of time has passed.
### Challenge 3: Lack of visibility, reliability, and control
The workflow of claiming DashPass benefits involves multiple steps and branches. Without some mechanism to control what is happening at the corresponding steps, it is difficult to retry failed steps, gain visibility into how far the customer has progressed at each step, and recover from infrastructure failures (e.g. coroutines that are "fire and forget" could be lost) and server timeouts.
### Challenge 4: Race conditions and idempotency
Write requests can take some time in certain cases, causing the client to commit a retry, which can result in data corruption because there are two write requests for the same user and the same operation. For example, we use Redis locks for a few endpoints like “subscribe” to protect against users receiving two active subscriptions, but this is not an ideal solution.
### Challenge 5: No clear ownership separation
DashPass backend evolved organically as a line-for-line rewrite of DSJ, our legacy Django monolith application. Multiple teams subsequently have worked on DSJ without clear separation of concerns. Core business logic flow — which intercepts payment methods being added and creates links that make users eligible for a partnership DashPass — is intermingled with integration logic specific to particular partners.
This highly imperative code hurts our development velocity and operational excellence. Debugging regressions and supporting customers can become time-consuming because of limited observability. Because it's hard for new developers from other teams to build new integrations, team collaboration becomes complicated, and it's easy to introduce bugs. We use Kotlin coroutines that spawn from the main gRPC requests to drive much of the logic, but that is both error-prone — the gRPC server can die at any moment — and hard to debug.
## Key objectives to achieve with improved integrations
In addition to resolving complexity issues, improving visibility, reducing potential infrastructure failure, centralizing control, and clarifying ownership separation, we are pursuing several key objectives with the DashPass partner integration platform, including:
- Reducing the engineering time and complexity in onboarding new partners
- Introducing an interface that assigns core flow to the platform team and institution-specific integration logic to collaborating teams, allowing them to implement a well-defined interface to integrate a new DashPass partner while minimizing effort and the surface area for regressions
- Gaining visibility into what step each customer has reached as they progress alongside related customer information, card information, and financial response information
- Making the partner subscription flow immune to infrastructure failures by allowing the server to recover and retry at the last successful step after interruptions
- Creating centralized control of the workflow to allow query, history look-up history, and previous behavior analysis
Our solution is to build a platform with flexible workflows to allow fast integration of future financial partners. There are, however, many choices of technology stack for workflow management. Here is an overview of our technology selection process and why we ultimately chose Cadence.
## Selecting the right technology stack
Among the technology stacks we considered were Cadence, Netflix Conductor, AWS Step Functions, and in-house solutions such as Kafka and Postgres. To assess the choices, we considered the following features:
- *Language* used in the client library.
- *Ease of use* in implementing our codebase and whether we needed to change our infrastructure to accommodate features.
- *Easy querying* in both synchronous and asynchronous workflow states.
- *Easy look-ups* to search workflows based on, for example, customer ID.
- Historical check to verify results.
- Testable to confirm integrations.
- *Backwards compatibility* to support future workflow changes.
- High performance in the face of additional layers of complexity.
- Reliability in the event of failure, including allowing server-side retries following recovery.
## Our technology review
Ultimately, we took deep dives into four options for our technology stack: Cadence, Netflix Conductor, AWS Step Functions, and building an in-house solution.
### Cadence
Cadence made it onto our shortlist because it's flexible, easy to integrate, and provides the unique workflow IDs that would address our use case.
#### Pros
- Easy and fast to integrate
- Open source, so no infrastructure restrictions
- Guarantees exactly-once job execution with a unique id that cannot be executed concurrently, solving race conditions that currently require locks
- Allows failed jobs to retry, creating a valuable recovery mechanism
- Provides a way to wait for job completion and result retrieval
- Popular language libraries already built-in
- Small performance penalties
- Scales horizontally with ease
- Supports multi-region availability
- Offers thorough documentation and already familiar to our team
- No reliance on specific infrastructure
- No limits on workflow and execution duration
- Easy search function
- Simplified test setup for integration tests
#### Cons
- Configuration not as flexible as an in-house solution
- Long-lived actors are consciously thrown out for backward compatibility
- History storage must be done manually, limiting search
### Netflix Conductor
Netflix Conductor came highly recommended because it supports multiple languages, has been tested in production, is open source, and is widely used.
#### Pros
- Open source, so no infrastructure restrictions
- Supports Java and Python clients
- Supports parallel task executions
- Supports reset of tasks
#### Cons
- DSL-based workflow definition, while starting simple, can become complicated as workflow becomes more complex
### An In-house solution
While it was certainly an option to select an open source technology, we also had the option of building something ourselves (e.g. Kafka + Postgres).
#### Pros
- We dictate the workflow control mechanism
- Allows implementation of TCC instead of SAGA for transaction compensation
#### Cons
- Building an in-house solution requires significant engineering effort
- Extra complexity because message queue solution would have to poll for result completion
### AWS Step Functions
AWS Step Functions was added to our shortlist because it also provides workflow solutions with failure retries and observability.
#### Pros
- Offers Java client library
- Provides a retry mechanism for each step
#### Cons
- Tight throttling limits
- Requires infrastructure change, engendering extra work
- Difficult integration testing
- Offers state-machine/flow chart instead of procedure code
- Inflexible tagging limits elastic search
## Why we chose Cadence to power our workflows
Ultimately, we chose Cadence because of its flexibility, easy scaling, visibility, fast iterations, small performance penalty, and retry mechanism. Unlike AWS Step Functions or a similar DSL-based approach, Cadence allows flexible development of a complex workflow. In addition to allowing synchronous waiting for job completions, Cadence scales well and is available across multiple regions.
Workflow visibility is key to solving customer issues, and Cadence's Elasticsearch integration allows for that. Additionally, easy integration tests through the Cadence client library allow fast iteration and confidence in our code. With a low round-trip time for a synchronous workflow (p50: 30ms, p99: 50ms), Cadence imposes little performance penalty and little degradation in latency. To avoid overloading downstream services during downtime, Cadence provides easy configuration for retries and exponential backoff.
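To illustrate how lightweight that retry configuration can be, here is a minimal Kotlin sketch using the Cadence Java client. The option values and the activity interface are hypothetical stand-ins rather than our production settings, and builder method names can differ slightly between client versions.

```kotlin
import com.uber.cadence.activity.ActivityMethod
import com.uber.cadence.activity.ActivityOptions
import com.uber.cadence.common.RetryOptions
import com.uber.cadence.workflow.Workflow
import java.time.Duration

// Hypothetical activity interface standing in for a partner eligibility call.
interface PartnerEligibilityActivities {
    @ActivityMethod
    fun checkEligibility(cardFingerprint: String): String
}

// Retries with exponential backoff: 1s, 2s, 4s, ... capped at 30s, at most 5 attempts.
// This keeps a flaky partner API from failing the whole workflow while bounding downstream load.
private val retryOptions = RetryOptions.Builder()
    .setInitialInterval(Duration.ofSeconds(1))
    .setBackoffCoefficient(2.0)
    .setMaximumInterval(Duration.ofSeconds(30))
    .setMaximumAttempts(5)
    .build()

private val activityOptions = ActivityOptions.Builder()
    .setScheduleToCloseTimeout(Duration.ofMinutes(2)) // total budget for the call, retries included
    .setRetryOptions(retryOptions)
    .build()

// Called from inside workflow code; every call through this stub is retried per the options above.
fun eligibilityActivitiesStub(): PartnerEligibilityActivities =
    Workflow.newActivityStub(PartnerEligibilityActivities::class.java, activityOptions)
```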
## Setting up Cadence workflows
Our sample workflow links customer credit cards to DashPass partners. When a customer adds a card, the card's information is validated against payment-method eligibility rules, including such things as country validity and BIN number. If these checks pass, the partner API is called for further eligibility checks, whose results are sorted as eligible, not eligible, and multi-match. Multi-match, in particular, triggers a fallback check as a follow-up action.
In Figure 1, the workflow is diagrammed with green boxes indicating where specific integrations can deviate and where the core flow calls out to the corresponding integration. The core flow receives integrations via Guice injection, asks each for eligibility checks, and follows up accordingly. Eligibility checks run inside a Cadence activity, which calls out to the appropriate partner implementation. If fallback checks are required, a separate workflow is spun up.
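To make that shape concrete, here is a minimal Kotlin sketch of what such a workflow and activity pair could look like using the Cadence Java client's annotation style. Every name in it (`PartnerLinkWorkflow`, `LinkCardRequest`, the eligibility statuses, the task list) is a hypothetical stand-in rather than DoorDash's actual interface, and the multi-match fallback is only indicated with a comment.

```kotlin
import com.uber.cadence.activity.ActivityMethod
import com.uber.cadence.workflow.Workflow
import com.uber.cadence.workflow.WorkflowMethod

// Hypothetical request/result types.
data class LinkCardRequest(val consumerId: String, val cardFingerprint: String, val binNumber: String)
enum class Eligibility { ELIGIBLE, NOT_ELIGIBLE, MULTI_MATCH }

interface PartnerLinkActivities {
    // Calls out to the partner-specific implementation (the "green boxes" in Figure 1).
    @ActivityMethod(scheduleToCloseTimeoutSeconds = 60)
    fun checkPartnerEligibility(request: LinkCardRequest): Eligibility

    @ActivityMethod(scheduleToCloseTimeoutSeconds = 60)
    fun createMembershipLink(request: LinkCardRequest)
}

interface PartnerLinkWorkflow {
    // One workflow execution per card-link attempt; a deterministic workflow ID (e.g. derived from
    // the consumer and card) means the same link cannot be processed twice concurrently.
    @WorkflowMethod(executionStartToCloseTimeoutSeconds = 300, taskList = "dashpass-partner-link")
    fun linkCard(request: LinkCardRequest): Eligibility
}

class PartnerLinkWorkflowImpl : PartnerLinkWorkflow {
    private val activities = Workflow.newActivityStub(PartnerLinkActivities::class.java)

    override fun linkCard(request: LinkCardRequest): Eligibility {
        // Each activity result is durably recorded, so the workflow can resume at the last
        // successful step after a worker crash instead of starting over.
        val eligibility = activities.checkPartnerEligibility(request)
        when (eligibility) {
            Eligibility.ELIGIBLE -> activities.createMembershipLink(request)
            Eligibility.MULTI_MATCH -> {
                // In the real flow a separate fallback workflow would be spun up here,
                // e.g. via a child workflow stub, to re-check eligibility later.
            }
            Eligibility.NOT_ELIGIBLE -> Unit
        }
        return eligibility
    }
}
```

Because the workflow method is plain code, the branching for each partner's rules stays readable instead of being encoded in a DSL, which was one of the reasons we preferred Cadence over DSL-based engines.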
We set up integration tests covering the old paths and the new paths (which use Cadence) to verify they have the same outputs — meaning the same gRPC response and the same database creations/updates.
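On the Cadence side, the Java client's in-memory test environment makes it possible to exercise the workflow above without a real Cadence cluster. The sketch below reuses the hypothetical names from the previous snippet with a fake activity implementation; the exact bootstrap calls can vary a little between client versions.

```kotlin
import com.uber.cadence.testing.TestWorkflowEnvironment

// Fake activities so the test never touches a real partner API or database.
class FakePartnerLinkActivities(private val result: Eligibility) : PartnerLinkActivities {
    val linksCreated = mutableListOf<LinkCardRequest>()
    override fun checkPartnerEligibility(request: LinkCardRequest) = result
    override fun createMembershipLink(request: LinkCardRequest) { linksCreated += request }
}

fun linkCardHappyPathTest() {
    val testEnv = TestWorkflowEnvironment.newInstance()
    val worker = testEnv.newWorker("dashpass-partner-link")
    val fakeActivities = FakePartnerLinkActivities(Eligibility.ELIGIBLE)
    worker.registerWorkflowImplementationTypes(PartnerLinkWorkflowImpl::class.java)
    worker.registerActivitiesImplementations(fakeActivities)
    testEnv.start()

    // The annotation on the workflow interface supplies the task list and timeout.
    val workflow = testEnv.newWorkflowClient()
        .newWorkflowStub(PartnerLinkWorkflow::class.java)

    val result = workflow.linkCard(LinkCardRequest("consumer-1", "card-abc", "411111"))

    check(result == Eligibility.ELIGIBLE)
    check(fakeActivities.linksCreated.size == 1)
    testEnv.close()
}
```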
We also used shadowing in the rollout to validate the process. In shadow mode, gRPC outputs are compared asynchronously, and dry mode is enabled so we pretend to create a membership link (a membership link here means the credit card has been linked to a financial institution successfully) in the subscription database and check whether it matches the original one.
It is also worth mentioning that the core flow is decoupled from plan integrations this way, following the separation of concerns pattern. We have developed interfaces for new partners that abstract away the implementation details represented by the green boxes shown in Figure 1 above. The core flow calls into the particular implementation's methods to execute the green-box logic. Integrations are injected into the core flow at startup time using Guice dependency injection.
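A sketch of what that interface-plus-injection pattern can look like with standard Guice multibindings is below; `PartnerIntegration` and the concrete partner class are again hypothetical illustrations of the pattern rather than our real API.

```kotlin
import com.google.inject.AbstractModule
import com.google.inject.Inject
import com.google.inject.multibindings.Multibinder

// The surface each partner team implements; the core flow never contains partner-specific logic.
interface PartnerIntegration {
    fun supports(request: LinkCardRequest): Boolean
    fun checkEligibility(request: LinkCardRequest): Eligibility
}

// One class per partner; adding a partner means adding a class and one binding, not touching core flow.
class ExamplePartnerIntegration : PartnerIntegration {
    override fun supports(request: LinkCardRequest) = request.binNumber.startsWith("4") // placeholder rule
    override fun checkEligibility(request: LinkCardRequest) = Eligibility.ELIGIBLE      // placeholder rule
}

class PartnerIntegrationModule : AbstractModule() {
    override fun configure() {
        val integrations = Multibinder.newSetBinder(binder(), PartnerIntegration::class.java)
        integrations.addBinding().to(ExamplePartnerIntegration::class.java)
        // integrations.addBinding().to(AnotherPartnerIntegration::class.java)
    }
}

// The core flow receives every registered integration at startup and dispatches to the matching one.
class CoreLinkFlow @Inject constructor(
    private val integrations: Set<@JvmSuppressWildcards PartnerIntegration>
) {
    fun checkEligibility(request: LinkCardRequest): Eligibility =
        integrations.firstOrNull { it.supports(request) }?.checkEligibility(request)
            ?: Eligibility.NOT_ELIGIBLE
}
```

With this shape, bringing on a new partner is mostly a matter of implementing the interface and registering one binding, which is consistent with the one-or-two-pull-request result described in the results below.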
## Results
In the months since rollout, there have been no major issues. While integrating RBC took several quarters before we introduced the new integration platform, our integration of Afterpay following the platform's rollout was completed within a single quarter. Under the new process, creating a new partner link integration requires only one or two pull requests. Additionally, Cadence has allowed us to address ownership separation concerns and speed future integrations.
Cadence Workflow could be used to resolve similar visibility and reliability challenges in other situations with minimal effort. Among its benefits are increased visibility into workflow and activity, a simplified retry process, free locks via workflow ID, and guaranteed exactly-once-per-task execution.
## Acknowledgement
Special shoutouts to Yaozhong Song, Jon Whiteaker, and Jai Sandhu for their contributions to this work. | true | true | true | Read the technology review we conducted to find the right task management technology for Dashpass onboarding. Learn why we chose Cadence | 2024-10-12 00:00:00 | 2022-05-18 00:00:00 | article | doordash.com | DoorDash | null | null |
|
5,778,223 | http://www.remailproject.com/2013/05/inbox-queuing.html | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
13,355,616 | http://www.bbc.co.uk/news/business-38552828 | VW chiefs 'hushed up emission cheating' | null | # VW chiefs 'hushed up emission cheating'
**VW executives knew about emissions cheating two months before the scandal broke, but chose not to tell US regulators, according to court papers.**
The bosses involved include Oliver Schmidt, who was in charge of VW's US environmental regulatory compliance office from 2012 until March 2015.
On Monday he was charged with conspiracy to defraud and has been detained pending a hearing on Thursday.
He was arrested on Saturday in Florida, where he was on holiday.
Volkswagen said it could not comment on an "ongoing" legal matter.
A complaint to the US District Court for the Eastern District of Michigan, filed by the Federal Bureau of Investigation (FBI) against VW at the end of last year, accuses the carmaker of deliberately misleading regulators about cheating US pollution tests by means of so-called "defeat devices".
The complaint said Mr Schmidt and others gave a presentation to VW's executive management on or about 27 July 2015.
"In the presentation, VW employees assured VW executive management that US regulators were not aware of the defeat device," the complaint said.
"Rather than advocate for disclosure of the defeat device to US regulators, VW executive management authorised its continued concealment."
Separately, VW owners in the UK are seeking several thousand pounds in compensation over the scandal.
## Deceit
By the summer of 2015, the complaint from the FBI said US regulators knew that emissions from VW diesel vehicles were "substantially higher" when they were being driven on the road than when being tested.
The affidavit said Mr Schmidt - who was the general manager in charge of VW's Environmental and Engineering Office between 2012 and March 2015 - knew the discrepancy was because VW had "intentionally installed software in the diesel vehicles it sold in the US from 2009 through 2015 designed to detect and cheat US emissions tests".
In 2015, Mr Schmidt travelled to the US to talk to US regulators about the discrepancy. The filing says that during these talks, Mr Schmidt "intended to, and did, deceive and mislead US regulators" by saying the difference in the emission levels was not because of deliberate cheating.
The affidavit cites two VW employees who said that in a presentation to VW's executive management in Germany, "VW employees [including Mr Schmidt] assured VW executive management that US regulators were not aware of the defeat devices - that is the engines' ability to distinguish between the dynamometer and road mode.
"Rather than advocate for disclosure of the defeat device to US regulators, VW executive management authorised its continued concealment. "
VW said it would not be "appropriate to comment on any ongoing investigations or to discuss personnel matters".
"Volkswagen continues to cooperate with the Department of Justice as we work to resolve remaining matters in the United States," it said.
## Group litigation
Meanwhile, in the UK, lawyers said 10,000 VW owners had already expressed an interest in suing VW. They estimated that owners could get "several thousand" pounds in compensation.
Harcus Sinclair is applying for a group litigation order - which is similar to a US class action lawsuit - in the High Court later this month.
The legal action is aimed at securing compensation for people who own or have previously owned one of the vehicles.
In the UK around 1.2 million diesel engine cars are affected by the emissions scandal.
Harcus Sinclair said it was basing its estimate of the level of compensation owners could get on the €5,000 (£4,300) per owner awarded in Spain and the $8-10,000 awarded in the US.
"The key allegation is that the affected cars should not have been certified as fit for sale because it is alleged that they produced higher levels of nitrogen oxide and nitrogen dioxide emissions than the rules allowed," it said in a statement.
## Ombudsman approach
Seventy-seven current or former VW owners have put their names to Harcus Sinclair's application for a group litigation order which will be heard in the High Court on 30 January.
The firm hopes that the marketing and publicity surrounding Monday's launch will encourage more drivers to sign up to the action.
It added it was also talking to other law firms about joining forces with them, in an effort to avoid cost duplication.
If the High Court gives the action the go-ahead, a pre-trial hearing will follow and then the trial itself in about 18 months.
In a statement, VW said: "We have been notified that Harcus Sinclair intends to bring proceedings against Volkswagen on behalf of 77 claimants in the English High Court.
"We intend to defend such claims robustly," it added.
Another law firm, Leigh Day, said it had been approached by about 10,000 VW owners regarding the emissions issue.
However, the company said it wanted to avoid the "cost risk associated with pursuing the matter through the courts".
Instead, it had submitted some test cases to the dispute resolution body, the Motor Ombudsman.
# Banks pumped more than $150bn in to companies running 'carbon bomb' projects in 2022

Banks pumped more than $150bn last year into companies whose giant "carbon bomb" projects could destroy the last chance of stopping the planet heating to dangerous levels, the Guardian can reveal.
The carbon bombs – 425 extraction projects that can each pump more than one gigaton of carbon dioxide into the atmosphere – cumulatively hold enough coal, oil and gas to burn through the rapidly dwindling carbon budget four times over. Between 2016 and 2022, banks mainly in the US, China and Europe gave $1.8tn in financing to the companies running them, new research shows.
The climate rhetoric did not match up with what was happening on the books, said Shruti Shukla, an energy campaigner at the National Resources Defense Council, which was not involved in the investigation. “We need to rapidly decline our production of fossil fuels and support for fossil fuels, whether that’s regulatory or financial.”
The carbon bombs, which were first identified in an academic database by the Guardian and partners last year, are the single biggest sources of fuels that release planet-heating gas when burned. Data for Good and Éclaircies, two French non-profits, and several European media outlets have now used publicly available data to map out the companies that operate the carbon bombs and the banks that finance them.
For some projects, the datasets did not match up, were out of date or had an unclear operation status. But the researchers are confident that at least 20 of the 425 have started running since 2020, most of which are coalmines in China, while three projects have been stopped. In total, the researchers estimate that there are now as many as 294 projects running and at least 128 that are yet to start.
Between 2016 and 2022, the research shows, banks in the US alone were responsible for more than half a trillion dollars of finance to companies planning or operating carbon bombs. The single biggest financier was JPMorgan Chase, providing more than $141bn, followed by Citi, with $119bn, and Bank of America, with $92bn. Wells Fargo was the seventh-biggest financier, with $62bn.
Also in the top 10 were three Chinese banks – ICBC, Bank of China and Industrial Bank (China) – and three European ones – BNP Paribas, HSBC and Barclays.
The bulk of the money they provided was general corporate financing to operators, rather than direct loans for projects to dig up fossil fuels. In 2022, direct and indirect financing of carbon bombs came to an estimated $161bn.
Bringing planned carbon bombs into action would run counter to increasingly stark warnings from doctors, energy experts and climate scientists about the urgent need to swap to cleaner sources of energy.
In 2021 the International Energy Agency found no room for continued expansion of fossil fuel extraction projects in its net zero emissions scenario. A recent Nature study reassessed the amount of fossil fuels that could be burned if realistic levels of carbon dioxide removal are assumed. It found that between 2020 and 2050, the supply of coal must fall by 99%, oil by 70%, and gas by 84% to keep the planet from heating by 1.5C above preindustrial levels.
If those targets are not met, extreme weather will continue to grow increasingly violent, experts have warned. If they are met, experts said many carbon bombs will become stranded assets that need to be written off, which some fear will shock the financial system.
“If that happens rapidly, we could have another financial crisis,” said Jan Fichtner, a sustainable finance research fellow at the University of Witten-Herdecke, who was not involved in the research.
To avoid this, the profitability of oil and gas must be tackled, he added. “In a capitalist system, profitability is the most important current. You can try to swim against the current, it’s possible, but it’s very, very difficult.”
In response to the findings, a JPMorgan Chase spokesperson said: “We provide financing all across the energy sector: supporting energy security, helping clients accelerate their low-carbon transitions and increasing clean energy financing with a target of $1tn for green initiatives by 2030. We are taking pragmatic steps to meet our 2030 emission intensity reduction targets in the six sectors that account for the majority of global emissions, while helping the world meet its energy needs securely and affordably.”
A spokesperson for HSBC said: “Supporting the transition to net zero and engaging with clients to help them diversify and decarbonise is a key priority for us. We are working to align our financed emissions to net zero by 2050.”
Barclays said it had set 2030 targets to reduce the emissions it finances in five high-emitting sectors, including energy, where it has achieved a 32% reduction since 2020. “Aligned to our ambition to be a net zero bank by 2050, we believe we can make the greatest difference by working with our clients as they transition to a low-carbon business model, reducing their carbon-intensive activity while scaling low-carbon technologies, infrastructure and capacity,” a spokesperson said.
BNP Paribas said that in 2021 it “strongly reinforced” its withdrawal trajectory from fossil fuels and aims to further shift its energy-based financing to 80% for low-carbon sources by 2030. A spokesperson said: “BNP Paribas is turning the page on fossil fuels and is focused on mobilising its resources to low-carbon energies. Analyses covering the period between 2016 and 2022 do not reflect the dynamic of BNP Paribas in terms of financing the energy sector. Indeed, BNP Paribas updated in 2023 its oil and gas policy with this engagement: BNP Paribas will no longer provide any financing (loans and bonds) dedicated to the development of new oil and gas fields regardless of the financing methods.”
Wells Fargo, ICBC, Bank of America and Citi declined to comment. Bank of China and Industrial Bank (China) did not respond to a request for comment.
When the Guardian revealed the carbon bombs last year, scientists thought the remaining carbon budget to give a half chance of keeping global heating to 1.5C was about 500 gigatons of carbon dioxide. But on Monday, leading climate scientists published an update that put the figure at just 250 gigatons. The carbon bombs could release more than 1,000 gigatons over their lifetime. | true | true | true | Projects that risk 1.5C heating target operated by companies receiving financing from European, Chinese and US banks | 2024-10-12 00:00:00 | 2023-10-31 00:00:00 | article | theguardian.com | The Guardian | null | null |
# On 2.00Gokart; Or, Designing a Design Class to Disrupt Design Classes as We Know It; Or, How to Make MIT Undergraduates Build Silly Go-Karts so You Don’t Have To

I think I’ve promised a 2.00gokart “total recap post” after every session of it so far. This is a piece that is long overdue on this site, and in all honesty, probably also way past its time to present in a more formal venue. **Edit: It’s now on Make Blog! Thanks Make!** For the past now two years and four sessions, what I consider to be my longest and most extensive project has been developing quietly in the halls here at MIT – that is, as quiet as the high-pitched whine of square-wave commutated brushless airplane motors can get you, anyway, interrupted periodically by the interdiction of concrete-backed drywall upon metal; facilities and my space directors will never let me live that down.
I’m two months late and running from when I first said to expect a wrapup of the summer SUTD special session I ran for their visiting students. What this post will be is a *ton* of writing. Interspersed with as many photos and references as I can manage, of course, in my usual style of discourse, but most of it will be me waxing poetic – and perhaps polemic at times – regarding my own motivations to start this course, experiences in running it, and ultimately what my end goals are and where I want to see this class end up.
I anticipate this post being extremely long. In fact, so long that I’m going to split it into multiple sections ahead of time. What will be presented from here on is basically a much more concise, casual, and perhaps more profane and offensive version of my original Master’s Thesis, minus most of the graphs and tables (because every Master’s Thesis in engineering needs graphs and tables, for whatever reason), and with more pictures and videos. The first two parts essentially amount to me ranting, and the second half is the productive info, followed by more despotic proselytizing.
Here’s the table of contents: First, a summary of my motivation for making the class. Next,
- A brief rundown of my own history with engineering projects and how that both aided and hindered my academic performance at MIT
- How I took an interest in teaching and why I saw issues with the current system of design classes
- A history of my involvement and leadership in the electric vehicle design realm
- Recap of the 2012 class “2.00scooter”
- The changes made for the 2013 class “2.00gokart”
- The 2013 summer special session and the changes made for it
- Where the class stands now; content, procedural, and logistics.
- What I think the class brings to the world of design classes that is different or novel.
- More about the resource base of the class and the cost of running
- An SAE Asston (like an ISO/DIN Arsetonne) of lecture notes, resources, and links I have built up so you can run your own silly go-kart class
# about 2.00gokart
Here’s the short story: **“2.00gokart”** is my shorthand project code referring to the undergraduate sophomore/freshman-level Mechanical Engineering lab class I have been developing for the past two years at MIT, both as a graduate student and as a full-time instructor, that takes a step back from the traditional guided structure of an engineering design class and instead pushes students to learn engineering skills more akin to what they will find in real-world projects, including: being given open-ended design goals, defining their own scope and design constraints, discovering and analyzing their own parts and resources, and validating the final product through physical demonstration.
In other words, I’m going to make you build a go-kart – I’m not even saying what kind, or how; I’m going to make you find your own damned parts and tell me why you think they will work, and then I will make you ride it… and you *can’t* pretend or make up that last part. The emphasis of the class is on the first two ‘bullet points’ – self-directed goals and constraints, and seeking resources, including parts. You can find myriad engineering project classes that will tell you “adhere to this set of demands or constraints and show that your product meets them physically”, but few, if any, that load the emphasis instead on how to define a project given almost no demands, hunt for the resources to execute it, and compromise and change your design to accept what resources you have.
I’ll be blunt: These tenets are based strongly off my own experiences in building *dozens* (like *actually dozens*) of admittedly sometimes half-baked engineering projects before, during, and after my career at MIT as a student. My lack of thoroughness in the engineering science portion of these projects is completely in the open and not up for dispute. What I learned by reading between the lines during my undergrad career was that there was a seemingly inexorable disconnect between the goals of a *design* class and the goals of an *engineering* class. Here’s how I see it: An engineering class teaches you to use a theoretical and analytical approach to solve a well-defined problem, and a *design* class teaches you to use a practical approach backed by engineering science to solve an ill-defined or open-ended problem. It sounds like the theory should come first and then jack into the practice, but I’ll talk about how I do not think that is the case shortly.
I have seen it happen over and *over* again. A Certain Scientific Department becoming uneasy with how ‘theoretical’ it has become, and with news of other universities’ more ‘hands-on’ approach to engineering making a comeback, tries to add a lab or design component to a class that in recent history has been entirely on paper. The students have never been exposed to real parts and processes before; maybe a few of them have, from their own personal experience in years past. The groups form, there’s much struggle and frustration at how stuff just doesn’t work as well in real life as they do in Solidworks or Matlab, and in the end, maybe one or two projects “work” and everyone else leaves with a really dim view of project classes in general, and some start resenting the Department for ‘not teaching them anything useful’. Does this sound familiar to you?
This is where my personal bias from past experience comes in – and again, I am completely upfront about it: This class is about doing it my way, and you should keep that in mind as you read. **The way I learned to do this was the complete opposite**. I began messing with things, I built things that may not have worked, I came and learned about why they didn’t work so I could build better things. This is, superficially, what the story of an engineering degree should be. But my *years* of experience *prior to* even coming to school to learn about it meant that when it came time to act, I not only acted because it was damn near second nature to do so, but was a resource for everyone else who had never needed to “act” – or perhaps, just flat out did not know how to. I don’t mean that in the derogatory sense – you literally could have never built anything in your life before, and that was totally fine, because let’s be really honest here and admit first that the vast, vast majority of engineering majors do not come into school already being engineering majors – the school has to teach them from start to finish, everything they need to know to be competent engineers.
My whole thesis, if it had to be boiled and distilled and refined down, is this: **Give students the tools of practicality and channel their creativity first, and supplement them with the knowledge of theory and science later**. When you’re trapped on the proverbial deserted tropical island, you occupy yourself with survival and make-do first, then figure out how to get off the damn island. Right now, it is my contention that we teach people ocean currents, wind patterns, and sailing first, and assume they know how to build the boat to use it already. The boat will most likely sink, and it will **not** be the Arduino’s fault this time.
(Acknowledgements: I’d like to thank my parents, and my… uhh, bottle of Tap Magic Aluminum here… and, like, this motor controller? *rummages around* Oh, and Hatsune Miku)
(Serious acknowledgements: The MIT Mechanical Engineering Department, the MIT-SUTD Collaboration, the Pappalardo Laboratory staff and director R. Fenner, and Profs. Frey, Wallace, Hunter, Slocum, Sarma, and any other Mechanical Engineering professor and administrative staff, for basically just putting up with my bullshit for 5 and some years *and counting*.)
# 1. My History in Building Engineering Projects for The Lulz; or, A Brief Biography of Charles
I’m loving these 1800s era compound titles.
Here’s something about me that surprises most people for some reason: I’m the only engineering blood in my family. I do not come from a long line of auto mechanics or machine shop owners, nor do my parents both work in an engineering, architectural, or design setting. In fact, one does something with finance and the other does something with IT for a major home furnishing and clothing department store, and that was only *after* I took off playing with silly robots. My grandparents used to play in a Chinese state orchestra, or were fish farmers – that half of the family I’ve never met in person so I’m going to assume they were fish farmers, but maybe I have a cousin in Northeastern China somewhere who builds sweet DIY robots from scrap metal. Which would explain everything, really.
My own history with engineering begins like that of any 11-year-old who was totally *not* supposed to be watching South Park in mid-2000 (MILLENIAL!) but was doing so anyway. After South Park one day, a strange show called Battlebots suddenly came on. That link is *old* – the current Battlebots.com website isn’t really much to look at – the sport has generally declined to the point where it is wholly decentralized and grassroots, and I don’t really have a good central link to point people to any more besides perhaps the Robot Fighting League forum or the Facebook group. Or my fleet page… Shameless plug over.
The robots themselves aren’t the focus here. What the intervening seven years between that first episode of Battlebots and when I left home for MIT taught me was that robots, or really building anything hands-on, was a silly hobby. My parents were supportive of my hobby, but they continually reminded me that I had to do well in classes, especially my computer science ones, and get into a good college and get a good job and a house and a wife and… hey, a shiny thing.
I do not blame them or want to shit on their efforts to make me successful. The takeaway here instead was that **I found engineering to be a fun and nonserious distraction** for longer than I have been told it was something more than that. My earliest robots… ~~were complete and utter horseshit~~ didn’t really work, to keep it politically correct. I rebuilt them, adding their failures to my knowledge base. I took things apart, I bought things *just for taking apart *(to my parents’ chagrin – aren’t I going to try using that first, maybe?) to incorporate into my latest death machine.
*I still buy things just for taking them apart* – we just call that Beyond Unboxing now.
In those seven years what I built up almost by accident was an extremely expansive lookup table of problems and solutions – stuff I’ve done which worked but I didn’t really know why yet. And I documented them furiously. Why? Because I knew I was going to have to come back to them later and finally understand why something happened the way it did, and because even back then I realized someone else besides me could find the information useful at some point in the future. Much of my learning during my earliest years was through hours and hours spent on the websites of builders who were gracious, or perhaps detail-oriented enough, to post their builds online back in an era when that was not just a one-click upload to a social network.
This is why I labor so intensively in blogging **everything** I build or do. I not only feel indebted to the old guard of Battlebots veterans for providing me with such valuable resources during my dawn, but also that it is my obligation to mark the trail for the current and forthcoming generations of new builders and hackers and makers, many of whom have already let me know how a single picture or post on this website has saved their asses or enlightened them. That’s fucking worth it.
I digress, but only slightly. I’ll explain how and why I incorporate this kind of public and open documentation into my “curriculum” in its relevant section.
Back to the story. My years of engineering practice prior to MIT admittedly made me a bit cocky when it came to building anything. That, combined with my attitude towards engineering as something fun and hilarious (instead of SRS BIDNESS) meant that I spent most of my undergrad career not paying attention to classes. I openly admit to and am proud to point out cherrypicking only what I found relevant to some project I had going on at the time and shutting out the rest. I spent way too much time at my UROP in the Media Lab’s Smart Cities (now Changing Places) research group, exercising my skillset in constructing silly vehicles and learning new skills while I was at it, and irritating the shop managers with my incessant building of something which was ~~not~~ ~~research-related at all~~ definitely for the group, guys, I swear!
My best performance in classes came when I had some project that was directly or indirectly relevant to them. I conceived Segfault during my early controls classes, but only got it right at the end when I took an intensive control systems design lab. *It languished for an entire year in between*, with multiple attempts at trying to make the balance controller work. LOLrioKart was a lesson in blowing up semiconductors over and over until a power electronics lab finally gave me the confidence to attack it correctly. The classes I did the worst in were the ones that I couldn’t connect to anything I was building. The drudgery. Anyone who knows me will know how many times it took me to pass our ‘measurement and instrumentation’ class, which was basically heaps of statistical analysis and technical paper writing that I found completely irrelevant to anything I was interested in. Fluid mechanics was mostly a wash, as was pretty much all of solid mechanics and materials – you are hard pressed to get me to remember anything from those classes at all – they’re just distant memories at this point. Don’t even get me started on the Mechanical Engineering senior product design class.
Some people have said, in words or in meaning, that I avoided all the “hard” classes at MIT. Sometimes “hard” is replaced with “good”. I don’t deny this – the aversion to classes is not an independent trait that I can trace start to finish and capture all the details of to transmit here. In fact, it was very much tied to my engineering identity at MIT, one that I was intentionally or otherwise cultivating amongst my peers at the time with these “non-serious, just for fun, really!” engineering projects, and it is incredibly relevant to my thesis that theory follows practice, not vice versa, for better results. The combination of these and other factors was ultimately what led me into this path of teaching and mentorship I am on today.
# 2. MITERS, the Relevance of Hivemind Building, and the Real Story of 2.007
No mention of my history would be complete without mentioning MITERS, the on-campus “hackerspace” or “makerspace” or “facespace” or whatever term is now the hot word to describe a communal shared space full of interesting tools and even more interesting people. I often downplay my role in the “MITERS revival” that took place after 2007, when I joined up, but the fact is that I immediately found MITERS to be my niche at MIT and took control of as much as I could. I and a core group of 3 or 4 people spent most of 2008 ‘building shop’ – cleaning out junk, opening up space, purchasing new tooling and tools, etc. all in an effort to make the space that much more useful to everyone else. Part of the shop building was just myself or someone else needing to make something using a new weird process and we bought tools ad-hoc for it.
And I built stuff and paraded projects around like a *maniac* while advertising for MITERS (This site holds almost all of that history, for those interested) – Until 2009, we had trouble filling officer positions (all 4 or 5 of them) because the regular members were just us. But gradually, through many Orientation demos and hovering through the hallways and other publicity events, new members started trickling in. I think the best years for that so far have been 2010 and 2011, when we probably added dozens of regulars and a handful more dedicated users to the roster. MITERS was starting to become a resource for people who just needed to do one or two little things and were walk-in referrals by their friends, basically, so we all began to take on the role of mentors and teachers to some degree.
Even back then, a few of the freshmen and sophomores had seen our work on the Internet, especially mine since I both blogged extremely often and began to attract the attention of tech blogs and maker-oriented sites – LOLrioKart was pretty big during this time and I became known, for better or worse, pretty much *for it*. Whether I liked it or not, I was slowly emerging as a face for “engineering weird shit at MIT”. Sometimes, my acts of public bravado (such as the infamous LOLrioKart speeding ticket incident, which was really an “operating a 4-wheel vehicle in a restricted [bike] lane” ticket) attained local mythical status. It’s often said that successful organizations tend to be built around one personality, and for a time I was doubtlessly feeding into that – intentionally or accidentally. Just about everything I did was somewhat related to MITERS or promoting MITERS or attracting more buildy-frosh to MITERS. Hence my previous mention of my “engineering identity”. By sometime mid-Junior year, I was pretty much convinced that The System was not supportive of my building things for fun, so I felt the need to make sure there was a way around it – by showing what can be done in a nonstructured environment, away from professors’ demands and shop managers’ shoulder-looking, and publicizing it. Those who know me well would know I do not take Systems at face value, and part of that attitude was cultivated during my “regime” at MITERS.
The influx of members during that 2009-2011 timeframe began to feed into a new dynamic. **People were building things because their friends had built something and they thought it was cool and also wanted to try**, and MITERS went through (and continues to go through) “phases” of projects where many students and members get interested in some device and then all build something like that. It’s the only reason pictures like this from Maker Faire are possible:
I think we noticed electric scooters as the first major “wave” of hivemind-building in 2011. Then tesla coils and go-karts and quadrotors and DIY audio amplifiers and mopeds ~~and vans~~… The list goes on, and changes depending on who you ask.
Here’s the thing about the “hivemind” though. **Nobody was just copying or imitating someone**. You wanted to build a scooter, so you might have asked me (or Shane, Adrian, etc.) what parts to use and where to get them. But you found your own piece of scrap aluminum and built it all on top of that. I might have given you a part or two, left over from other projects. There was no “ISO Standard MITERS Derpy Electric Scooter”, but numerous takes on the same general idea. Not all of these projects got finished, and some in fact still hang from the MITERS ceiling like olden-time criminals made examples of in public, warning newcomers against the hazard that is getting way too excited about something and then taking too many classes to finish it. Hey, you all better finish your damned go-karts soon.
I would even wager to say that even if you were just copying someone else, it’s still productive if you’re honestly into your project for its own sake. Build it anyway for your own enjoyment and learning regardless. I bring this up because of the contrast between the MITERS “hivemind” and my own experiences in the Department of Mechanical Engineering. There’s this conversation I’ve had numerous times that seems to indicate a certain disturbing (to me, anyway) attitude in the student population. It always occurs in a public or semipublic setting, such as a departmental gathering or just outside various professors’ offices, and I probably got it the most in the 2009-2010 era when several of my projects were “Internet famous”. It goes something like:
“Hey, you’re Charles, right?”
“Yeah.”
“I’ve seen your projects and stuff on the Internet [my usual sign to try and escape] – they’re pretty cool!”
“Thanks – I do all of it through MITERS [or other obligatory MITERS plug]”
“I’ve heard of that place, and really wish I could build stuff so I can hang out there more”
“Why don’t you? Lots of people just come in and hang out too, and you can learn stuff from people there”
“I just don’t think I could build anything new”
“Why does it need to be new? Why not just build it for fun [or for your own learning, or something equally…]?”
“I dunno, I just don’t want to copy someone else if they’ve already made something”
I realized *that’s what academia does to you*. In academic engineering, everything you do is under the shadow of research. The pressure to be “new” or “innovative” or “groundbreaking” blocks out skill building by doing it for your own sake. Ideas fight for the right to be justified, analyzed, and scrutinized, instead of just to exist. This would be fine if it were limited to grad students, since by design you’re a research machine and you should already know what’s coming. However, there’s definitely this unspoken sentiment that if you can’t make something “new” or innovative or beyond the norm right away, it’s not worth doing. It has been a concern of the MITERS officers recently with regards to our “image” on campus of this super hardcore group of hackers who won’t stop for any newbie questions. Perhaps it’s just a consequence of MIT being a giant amoebic agglomeration of overachievers that people make this assumption immediately. I felt this exact same pressure when I entered graduate school in the department and it was one of the factors which ultimately caused me to suspend my master’s studies and instead focus on the then-nascent 2.00gokart project full time.
That past spring, I dug a little deeper with my involvement in the Mechanical Engineering sophomore design class 2.007 – the one which, years ago, spawned off FIRST Robotics. My “story of 2.007” is a tragic recount of my own build season with the class, but what’s missing from that is all the time I spent immediately jumping in and aiding fellow students.
2.007 is a class well known for its chaos and intensity. Many students who had never built mechanical systems before were suddenly faced with parts choices, design decisions, and figuring out how to make a certain part they designed. And people stumbled hard – if you weren’t already well-versed in building, or at least had seen and used machine tools before, and fell behind, recovery was difficult. The class expected students to know mechanical engineering parts already, but what class had taught them that beforehand? There were lectures about what gears and bearings and linkages were, but none about how to connect them to one another. People do figure it out, but they still do stumble to this day, and that’s something I’m not sure how to remedy without restructuring the entire class content and pacing of the department (which I HAVE thought about many times, but that’s for a separate ~~thesis~~ opinions column)
This was what I was faced with – trying to inject other students with just a little bit of the knowledge needed to carry them through their design. I showed a lot of folks my dirty tricks such as caliper abuse (codified in 2010), and also in 2010 I first put together “How to Build your Robot Really Really Fast” – these days codified into How to Build your Everything Really Really Fast. Starting in 2010, I also held “secret ninja robot lectures”, as I called them, after lab hours were over which were basically open question consulting hours. If anything, the spring of 2009 when I took 2.007 was the wake-up call to me that I could contribute positively to how people learn mechanical engineering. By which I mean “I think ya’ll are doing it wrong… lemme try.” My position as legitimate undergraduate lab assistant in both 2010 and 2011 only strengthened that desire.
2.007 isn’t the only design class in the department, but it’s the one I rag on (and praise) the most because I’ve been more deeply involved in it. MIT actually does have quite a selection of design classes, but I witnessed the same kinds of patterns in either taking them or knowing people who have. The majority of ours in Mechanical Engineering are catered towards seniors and juniors, the students work in (sometimes very large) teams, and invariably the people who produce the best work in those classes have picked up their experience before MIT or by doing something that was not just taking classes – they did research (such as UROPing) or worked on one of the engineering competition teams or for a startup company or on their own projects or something, where they were able to get the hardware knowledge. And the final product is usually the concerted effort of just a few of said people. Teams without one or more of those people tended to bodge together something that passed the requirements and could be dressed up for the final big show so the class didn’t look bad.
There are freshman level introductory design classes, too, don’t get me wrong – but you seem to spend a lot of time messing with foamcore and cardboard and “prototyping”. They put emphasis on the sketchbook and the Pugh chart and the ‘sketch model’ made of wooden dowels and blocks of foam. So on one end, you learn the ‘design process’ and the other end you produce working hardware, but where do you learn about how to design with and around the hardware or to use the tools available to you? The answer if you asked around enough is either 2.007 (in which we’re expected to teach everything from Solidworks to electronics for mechanical engineers to *remedial machining* already), or research & urops and clubs. **It seemed absurd to me that to excel in the Department of Mechanical Engineering as a student, you had to basically already know mechanical engineering, or you were prone to feeling left out, frustrated, and bogged down.**
That’s basically the root of the ‘agenda’ behind 2.00gokart. To create a design class for freshmen and sophomores (and other newbies) that gave people **meaningful** hardware experience right away. To introduce them to tools of creating working mechanical devices early, and to expose them to the idea that at some point down the line your designs aren’t going to come from a box of parts in the lab cabinet – you have to source parts and appraise their usability yourself. Yet make it such that it is fun and enjoyable with minimal ‘grunt work’ deliverables and contrived process structures, such that people focus on their project instead of just trying to satisfy requirements, and the takeaways are realized in their other classes after it’s all done. I believe this is all entirely possible to do – I’ve already done it three times.
# 3. 2.00EVT and 2.00Scooter: The Early Years
By which I mean “2010 or something”.
The class of 2011 was some kind of record enrollment year for MechE, and 2.007 in 2009 (my year) was extremely crowded. To alleviate that, the class began soliciting “special section” proposals from, for example, various competitive engineering teams, or from Ocean Engineering, etc. In 2010, the Electric Vehicle Team, Solar Car, and Formula SAE all took on several students each and had them work on alternate projects and assignments. I knew the leadership of the EVT very well during that time, and Shane and I became quasi-TAs for the section in addition to our other 2.007 obligations (and my own other academic obligations).
In the 2010 session, I primarily stayed back teaching-wise and acted as more of a design consult, just like in mainstream 2.007. That year’s session saw the creation of three vehicle(-shaped-things), including this gem, Longbike:
It was during this EV section that I began to really consolidate all the resources that I’d gathered and had provided ad-hoc to people who asked. My first compilation of ‘where to buy stuff’ lecture notes dates to this era, as does my first in-person lecture (to like 4 people, but hey, gotta start somewhere) on the topics; for instance, for people who wanted to use commercial motor controllers and wanted to know how to interface to them and design for their limits. This was stuff I had been doing already for years, but fellow students found it genuinely interesting, which continued to stoke my belief in needing to create a design class that pushed people to develop skills first.
*Some examples from the 2011 EV Team class*
The 2011 session was still run by the EV Team and saw a total of five students and four vehicles produced, but only three of them ended up working. The emphasis was directly on scooters and all of the students got to keep their parts or vehicles at the end. The limitations were a $300 per vehicle budget, and you could get a “subsidized” Razor scooter to pluck apart for folding hinges and handlebars for $25 (typical retail value $40).
However, by then, the EVT captains were trying to do this thing called *graduate* – and they had limited time to devote to running the class. The organizational structure was fairly weak, the scheduling became broken-down, and Shane & I ended up practically running the class near the end. EVT was not planning on running the section again since, by the next year, their core leadership would have moved on elsewhere.
I found the class desperately in need of improvement, and at that point I was ready to take it on. It aligned with my interests – electric vehicle design and technology, specifically on the small end of things. By its nature it was very open ended – one student chose to modify and re-engineer a commercial electric scooter, which was accepted and ended up working out great, while others went various routes of custom frames and drivetrains. And it conformed to my design class agenda – we assigned each student a $300 (plus or minus) budget and they were allowed to pick and choose parts from Hobbyking, etc., frame materials from McMaster and other vendors, and so on. The basic foundations of what is 2.00gokart today were laid then.
The 2012 class, the first “2.00scooter” that I ran as the lead instructor, is recounted in full detail in its own post. Gee, that lead sentence sounds familiar…
# 4. “2.00scooter” or Electric Vehicle Design 2012
To produce 2.00scooter, I took the rules that the section was supposed to run under in 2011, and made them more concrete. I also wrote a one-for-one map between the mainstream 2.007’s “milestone” weekly submission requirements and my own section, such that students still had a weekly progress check, but departed significantly from the pacing of mainstream 2.007 which spent 4 or 5 weeks just in designing and prototyping. I replaced those instead with a design period of roughly 2 weeks, followed by compilation of a bill of materials and development of the vehicle’s CAD model. The version of the class rules and schedule as run in 2012 will be available at the bottom.
The summary of the schedule was basically:
- Week 1: Come up with concepts, sketch models, etc.
- Week 2: Design calculations and analysis. Top speeds, accelerations, efficiency/power usage estimates, etc.
- Week 3: Compile a first bill of materials to order, and start CADing the vehicle. BOMs are compiled and ordered every week from here on out.
- Week 4: Continue CAD modeling and design refinement, begin fabrication
- Week 5: Finish CAD models, fabrication week.
- Week 6: Fabrication week, prepare vehicle for mechanical inspection.
- Week 7: Fabrication week, vehicle “rolling frame” inspection. The infamous “Milestone 7”!
- Week 8: Address problems found during MS7 inspection; begin electrical assembly.
- Week 9: Electrical assembly and testing.
- Week 10: Electrical inspection; final vehicle inspection
- Week 11: Last fabrication/changes, the final contest.
- Week 12: Reflection/learning summary, final presentations.
It’s important to note that these were not very concrete dividers between weeks. In fact, between weeks 1 and 4, students would often change their designs for a new part or run calculations on-the-fly while shopping for parts. This is fully intentional, and I in fact question students who settle on something very early on about whether or not they have thought their design choice fully through. This kind of back-and-forth approach is in fact something I did, and still do all the time. I never just pick a pile of parts and then try to optimize a design around them – if I’m not forced to use something in particular, the design is fluid until I find a combination I’m satisfied with. Some people needed to redesign a subsystem completely after finding out that a part wasn’t quite what they had imagined. Fabrication, too, was not limited to the weeks that said “fabrication week”. In fact, students could start building and prototyping right away if they wanted to, but few chose to do so – in part because that year, I did not provide free “starter materials” to anyone. In short, the “milestones” were good ideas and following them would mean you would get done on time, but the actual levels of progress were a wide spectrum during each week.
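To give a sense of what the Week 2 “design calculations” step amounts to, here’s a minimal back-of-the-envelope sketch of the kind of arithmetic involved – estimating top speed and launch acceleration from a motor’s Kv rating, the gear reduction, and the wheel size. Every number below is an invented illustration, not a spec from any actual student vehicle:

```python
import math

# All numbers below are hypothetical illustrations, not any student's actual specs.
battery_voltage = 33.0     # V, nominal pack voltage
motor_kv = 190.0           # rpm per volt, from the motor's listing
gear_reduction = 8.0       # overall motor-to-wheel reduction
wheel_diameter_m = 0.26    # m, roughly a 10" pneumatic wheel
total_mass_kg = 90.0       # rider plus vehicle
peak_current_a = 60.0      # controller current limit

# Torque constant follows directly from Kv (N*m per amp).
kt = 60.0 / (2.0 * math.pi * motor_kv)

# Top speed: unloaded motor speed, through the reduction, at the wheel.
motor_rpm = motor_kv * battery_voltage
wheel_rps = motor_rpm / gear_reduction / 60.0
top_speed_ms = wheel_rps * math.pi * wheel_diameter_m

# Rough launch acceleration from peak motor torque delivered at the wheel.
wheel_torque = kt * peak_current_a * gear_reduction
accel_ms2 = (wheel_torque / (wheel_diameter_m / 2.0)) / total_mass_kg

print(f"estimated top speed: {top_speed_ms:.1f} m/s ({top_speed_ms * 2.237:.1f} mph)")
print(f"estimated launch acceleration: {accel_ms2:.2f} m/s^2")
```

The same handful of numbers feeds the power and range estimates, and changing the gear ratio or wheel size in the sketch immediately shows the speed-versus-acceleration tradeoff the students end up negotiating.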
Purchasing was handled only by myself – so even though I say the students “buy their own parts”, what that meant in practice was students submitting a ‘bill of materials’ to me each week, then I compiled the orders into one set spanning several vendors. In the later weeks, as students started knowing exactly what they needed to finish and the timeline became more urgent, I upped the frequency to twice per week, on Mondays and Wednesdays. This allowed very quick work if you stuck with McMaster-Carr or another vendor which was in the northeast area and could turn parts around in 1 or 2 days. For example, McMaster-Carr for us is almost always next-day turnaround. Orders placed on Monday through scooter parts vendors typically came back on Thursday or Friday.
Centralizing the purchasing through one channel meant that I could also sanity-check and monitor what people were buying. Sometimes, I substituted parts from one vendor with a better price (e.g. a Surplus Center sprocket) for an identical one with another vendor (e.g. McMaster) and allowed the students to keep the ‘original’ price for the budgeting. This was entirely an effort on my end to reduce the shipping and handling overhead of placing many small orders for parts which were all generally available at one place.
The one “twist” I added to the section from 2011, besides a functional framework, was the encouragement of a ‘grounding point’ for student designs. In 2011, one of the frustrations expressed was that people didn’t know where to start. You had free rein of every type of part that could end up on your vehicle, which was great if you knew you wanted a certain motor or something, but if not, then it was a sudden infinite-dimensional problem to solve. Here’s what that twist entailed.
As discussed in the dedicated post, I gave people a free pneumatic wheel they could use in their designs. I didn’t say for what – luckily everyone used it as, you know, a wheel. Nor did I even require its use. It could be a freely traded item – one student could have built an 8-wheeled scooter just from everyone else’s free wheels. But this little twist I think contributed strongly to the success of at least a few students. Oftentimes, trying to start the design somewhere is the hardest part – even for me, and there is no consistent place I end up starting to design a device – whether it be robot or vehicle. The addition of this optional grounding point meant that those who knew what they wanted out of their vehicles could ignore it, but the less experienced and newbies could start designing around it immediately and not fall behind. For those that used it, it was also budgetary alleviation and could help them afford a higher current controller or gaudy lighting or what-have-ye.
There were two final contests – one was a fairly standard drag race of 50 meters length, and the other was “The Garage”. That contest, in my opinion, was fairly innovative and I have not seen it done before. Starting around summer 2011, we began doing something with the vehicles we called “garaging” – not sticking them in a garage, but running them up all the decks of a very conveniently shaped on-campus spiral parking garage. At first, it was for amusement and because the garage was about the only place where you could go wide open and not, say, cause expensive property damage or be run over by a taxi – for instance, there are very few other places you can test the likes of tinykart and bentrike around here. The building is about 150 feet wide (50 meters, whatever) and about 400 feet long (130-ish meters?), and had 4 total floors which were ‘interleaved’, so to speak, so you continually went upwards. And those spirally ramps at the end.
We began recording these runs for time and energy consumption later on in 2011 and discovered that the product of the two, time (s) × energy (Wh), yielded a score that was highly sensitive to several factors. Throttle for one; how much you gunned it. But if you didn’t push it as fast and took a longer time, you could still come away with a high Wh * s score anyway. The “line” you took also influenced the score – sweeping, wide, constant velocity vs. short dashes and slowing down for the end turns, but then having to use more energy to accelerate again. (Of course driver weight had to be normalized for since that was by far the biggest influence…)
We began to make ‘maps’ of all our project vehicles courtesy of a Matlab script Shane whipped up:
Garaging continued to be a fun activity, but I decided to make it the central event for 2012 because it was such an unexpected game of optimizing your variables. Students would be armed with a wattmeter which was zeroed at the beginning of their run, then their stopwatch time between start and finish was totaled up and the wattmeter’s energy consumption value in Wh inspected at the finish line. Here’s some of the 2012 vehicles on the same map:
The two axes of the graph represent constant time and constant energy consumption. It shouldn’t be much of a surprise that the only kart of this season that ran, Melonkart, used up more energy because it was that much heavier, had twice the number of wheels, but a controller that wasn’t much more powerful than that of the eventual overall winner, Cruscooter. The karts here are clearly in the upper left – low time, high energy, but the scooters that adhered to the good old sensored brushless combo all sort of clustered together at the local minimum. It’s not really reasonable to draw solid conclusions without a normalization for total mass of driver and vehicle, but you get the idea. For the same driver and vehicle, you can move the score point around on the graph by changing your approach or other vehicle characteristics, and that was the fun part.
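For the curious, the map itself is easy to reproduce. The original was a Matlab script (Shane’s); the snippet below is a rough Python equivalent of the same idea, and the run data in it is invented purely for illustration rather than taken from any real contest results:

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented (run time in seconds, energy used in Wh) data, one point per vehicle.
runs = {
    "scooter A": (95.0, 14.0),
    "scooter B": (110.0, 11.0),
    "kart A": (80.0, 28.0),
}

# Background family of iso-score curves: every point on one curve has the
# same time * energy product, so lower-left is better.
t = np.linspace(40.0, 160.0, 200)
e = np.linspace(5.0, 40.0, 200)
T, E = np.meshgrid(t, e)
plt.contour(T, E, T * E, levels=15, colors="lightgray")

for name, (time_s, energy_wh) in runs.items():
    plt.scatter(time_s, energy_wh)
    plt.annotate(f"{name}: {time_s * energy_wh:.0f} Wh*s", (time_s, energy_wh))

plt.xlabel("run time (s)")
plt.ylabel("energy used (Wh)")
plt.title("Garage runs on iso-score (time x energy) curves")
plt.show()
```

Each vehicle lands somewhere on the family of iso-score curves, and you can watch its point migrate as the driver changes throttle strategy, line, or gearing.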
Finally, if you’ve not seen it before, the 2012 highlights video.
Procedurally, the 2012 contest went extremely smoothly, way better than I was expecting (basically something between a flash mob and a lecture when the professor doesn’t show up). The contest procedures, where to meet, what to bring, etc. were communicated to the students in the final lecture session beforehand. This contest’s procedures have been the reference model for all of the other ones so far, for myself. Before the event, I purchased a set of 2-way radios for the staff, including myself, to communicate with up and down the levels of the garage. The 50 meter drag race was “yellable”, but the loud ventilation fans in the building underpass meant radios made everything way easier. What I was short on was camera and media personnel since I had to divert people to being track marshals and, in the garage, the “per floor” progress checkers. Hence, 2012 has less media than I would have liked, far less than what I’d devote to a project anyway.
2012 did show me some shortcomings of my original idealistic vision. To summarize the summary post (…) whose conclusion I still consider valid, keeping mental track of 10 people’s designs was a nightmare for me – I think that semester was my busiest yet, since I hadn’t even figured out the workflow of managing a bunch of students for real yet. In the summary post, I also said that having different vehicle ‘types’ made it hard to compare results between the students, and this is true to a degree – I think if the contest is going to be single-driver time trials anyway, the necessity of having a ‘class’ of design is lessened compared to if I were going to run everyone together.
Overall, I considered 2012 a resounding success. The students who took this class are now all mechanical engineering seniors themselves, and languishing in their senior product design classes. *Muahahahahhaa*.
During the summer of 2012, I built Chibikart 2 as an exercise of designing around only commercial (COTS) parts and accessible (online orderable) machining services. It was a success, and it formed the baseline of 2.00gokart – besides the waterjet cost (to be detailed in Section 9), it otherwise completely fits within the class budget and resource base.
# 5. 2.00gokart, the 2013 edition
Melonkart, to the right, was the inspiration for ‘2.00gokart’ as we know it. The two builders Jackie and David proposed that they team up and combine their budgets of $300 each to construct a larger and more complex vehicle. I was back and forth on allowing it, but hey, experimental section. My decision to make 2.00gokart 2013 a team-of-two effort was swayed also by the fact that there was a 1-person go-kart attempt, but it ended up very tenuously complete and to the great frustration of the student during the term. I decided that a $300 budget was just too low to make a good kart (which has more components and more materials usage), and that it was too much to expect a large sample crowd of sophomore MechEs to complete, in 80% of one semester, and with other class commitments. I didn’t finish LOLrioKart in 1 semester – in fact I think it took me like 1.5 years to get it ‘right’. It is possible to do if it’s the only thing you are doing, I think, but in a class at MIT you have to be compatible with your other obligations.
After collecting some reviews from the students and fellow EV’ers at MIT, I took the Melonkart Theorem and made it into a set of rule addenda to reflect the new class. In the bottom of the 2012 recap post, I gave my ideas for what would become Spring 2013’s 2.00gokart. In short, the rules stay basically the same, but everyone gets more free stuff! I backpedaled a bit on the originally very optimistic new rules. As-run that spring, I used the following:
- Instead of individual projects, students work in teams of two
- I give them 3 sticks of 80/20 extrusion (6 foot lengths), two plates of aluminum, and the pneumatic wheel.
- Up to 3 A123 batteries were allowed for free, with a fourth at a $75.00 budget deduction
- The limited ‘basket’ of other parts was removed and a full $500 allowed to purchase any parts needed.
- The “noise floor” of the budget is 50 cents – you could buy a whole box of screws to use 2 of them, and it would be free unless each screw individually was over 50 cents.
- I added the ‘statically stable’ clause to the rulebook.
With regards to each of those changes, here’s my reasoning.
The biggest difference was, of course, the complete elimination of the ‘basket of parts and materials’ idea, because that sort of went against my whole philosophy for the class. I’m not sure why I even thought of including it – probably from burnout from ordering the same thing 17 different times from McMaster, but much of the point of the class was to show people how to search for parts and resources, and I didn’t want to dilute that after coming back to my own proposal months later. Pursuant to this, I upped the “drivetrain parts budget” from $150, basically the price of a motor and controller of reasonable quality, to a full $500 for all parts I did not provide. The amount of $500 was roughly what Team Melonkart spent on their vehicle discounting stuff I would eventually provide – a wheel, some materials, and a handful of hardware.
It also happened to mesh well with the then-up-and-coming Power Racing Series. I was on the hunt for examples of other racing series or college/high school courses of a similar budget line, and what I found was that there weren’t really any equivalents. Quite a few schools have electric go-kart or electric car racing teams, but those are usually large team efforts, and my class as-designed is individual or very small teams. Two examples of the ‘big event’ kind of electric kart racing are Purdue EVGP (check out their *very* detailed reference document – this needs to make it into my own lecture notes and class references) in the U.S. and e-Kart, a national championship of many technical schools and universities in France – Shane and I met one of the university organizers at a conference in Monaco in 2011.
*An e-Kart race. The karts are… bigger.*
Where 2.00gokart differs from these events is in the smaller vehicle size/speed class and the focus on individual student efforts instead of a massive team. I leave the latter for the already established senior design classes and team sports to deal with.
I settled upon the 80/20 extrusion method of building after spectating the build of tinykart, then embarking on my own builds of Chibikart and Chibikart2 (nee DPRC). I thought that the system sped up the design process significantly, and it was easy to work with and easy to connect together. The big enabler for this method is our available on-campus waterjet machining, such that students could make the interconnecting plates and structures quickly. Furthermore, by the start of this term, my How to Build Your Everything Really Really Fast instructable was up and running, and it served as a major resource for the students all through the semester.
I provided each team a 1/8″ aluminum plate and 1/4″ aluminum plate in 2 x 2 foot size, which they were allowed to use up over the course of the semester at no budgetary cost. If they needed other materials, then those had to be specified and purchased. Most vehicles were able to stay within the free materials allowance.
Waterjet machining was provided by the Hobby Shop because of the additional degree of freedom I needed for this class – I taught the students how to tile and arrange waterjettable parts on a plate, so I had to run the machine myself. The main 2.007 section uses a shop where the shop staff cut your parts out for you, usually one at a time with no pre-tiling allowed (i.e. “send one part file, request n copies”, with no control given to you over which orientation or tiling you wanted), and the machine is also busy enough during the term that the much longer cuts needed for my section, in much thicker material, would have been hard to squeeze in. I don’t blame the main shop at all for being inflexible – in fact I think it’s the only way they can process 150+ students who all need to waterjet something for their robots now that the technology is in the open.
One addition to the lab which aided the students’ vehicle development very much was the provision of free birch plywood and high density particleboard (hardboard) to prototype their frame joists and other parts with, using our 36 x 24″ laser cutter. That way, you could iterate quickly through several different designs, test fits and clearances physically, and actually make functional prototypes because the wood material is quite strong on its own (don’t try to ride it though…) before “committing to aluminum”.
The “static stability” clause was a compromise between requiring 4-wheeled vehicles (true go-kart style) and having a complete design free-for-all. It was a shortcut way of making sure nobody built a scooter or recumbent bike or similar. While there’s nothing explicitly wrong with that, this clause was one of my deliberate ‘screw tightening’ changes to the class to mix it up. The final wording of the rule can be found in the example rules sheets in Section 10, and it only required at least 3 *rolling* points of contact with the ground. This explicitly allowed trikes and even scooter or bike-like objects which had a ‘training wheel’. I didn’t know why the hell you would do that, but figured somebody might figure out a creative way to make a lighter and faster vehicle that fit the requirement. For instance, sidecar racers.
There was also no requirement that the training wheel touch the ground under operation – only *static stability* mattered.
That essentially sums up the deltas from the 2012 class. The scheduling also remained unchanged, since the only things which were different were vehicles and construction methods.
I put together a cabinet of “free to use” parts, primarily electrical accessories, that included the following:
- 80/20 slide-in nuts
- 3/8″ long and 1/2″ long button-head screws
- Electrical supplies such as 12 gauge red and black wire
- Electrical crimp spade connectors for terminal blocks, ring terminals for motors, and Quick-Disconnect terminals for the battery tabs.
- Bullet connectors, Deans, and XT-60 connectors – three popular R/C type connectors.
Regarding the “50 cent limit” for the budget, it was to offer students some flexibility in not having to literally list every screw and bolt they used. It was a ceiling chosen because I thought $1.00 was too high (many small but important parts, like bushings and some larger individual hardware, are under $1 each) – that was the limit I remembered dealing with in FIRST Robotics. 50 cents means there existed many more choices on McMaster between, say, a mil-spec or otherwise high grade part, and a “generic” one, something that I did want the students to figure out the difference between.
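To make the accounting concrete, here’s a minimal sketch of how that rule could be tallied against a team’s bill of materials. The line items are made up for illustration; the only real rule encoded is the 50-cent noise floor described above.

```python
# Hypothetical BOM tally illustrating the 50-cent "noise floor":
# a part only counts against the $500 budget if its individual unit price
# is over $0.50; otherwise it's treated as free public hardware.
NOISE_FLOOR = 0.50  # dollars per individual part

def budgeted_cost(unit_price, quantity):
    """How much a line item counts against the team budget."""
    if unit_price <= NOISE_FLOOR:
        return 0.0      # whole box of screws to use two of them: free
    return unit_price * quantity

bom = [  # (item, price per unit, quantity) -- made-up examples
    ("button head screw", 0.08, 40),
    ("bronze bushing",    0.45, 4),
    ("ball bearing",      3.20, 8),
    ("motor controller",  119.00, 1),
]
total = sum(budgeted_cost(price, qty) for _, price, qty in bom)
print(f"counts against budget: ${total:.2f}")  # only the bearings and controller count
```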
The 2013 contest ran almost exactly like the 2012 contest procedure-wise, since the same venues and scoring methods were used. I paid more attention to media this time, so there’s more video and picture of peoples’ runs. They are not currently all up in a publicly viewable cloud app, but here are some of the better previews taken from the 2013 cleanup post.
*Pictures by Dane Kouttron or Mark Jeunette*
And, of course, the 2013 video.
Finally, we did tally everyone again during the garage race, so the 2013 results:
Notice how there was still a great span of times, but the whole graph is generally shifted upwards towards more energy usage. Bigger motors, more wheels, heavier frames, all of these contribute.
My assessment of the inaugural run of “2.00gokart” is that it, too, was a success. The biggest issues that stuck out to me with this class related not to the technical content and the students’ ability to execute it, but more towards future scalability.
There are two prongs of scalability pertaining to the class I want to put forth. First is literal *scalability* – from two runs only, I could tell that the class in its current form would have a difficult time scaling up. It was pretty overwhelming handling 10 different designs mentally in 2012, and in 2013, I had 16 students and 8 vehicles. I think (and the SUTD summer session’s 28 students reinforced this) that my working limit is somewhere around there. The issue is that a class like this right now still takes one or two “gurus” who know the area in and out both theoretically and practically – you have to both convey the technical knowledge AND debug a motor controller’s blinking lights. You can’t just assign an army of TAs to the task for this reason. I was lucky to have the help of Banks, who also helped me develop the “2.00gokart prototype” the previous fall, which resulted in some of the rule changes discussed above. My overall conclusion for scalability is that **it’s better to have a few highly knowledgeable instructors** than one instructor and many TAs who don’t really know the details.
I also was ready to deal with “team dynamics” – or people coming to hate their partners – but because the teams were so small and the class generally knew each other already, there was very little of this. I see team dynamics becoming an issue if the groups got larger or the demographic were less self-intimate. With two people a group, everyone had oversight over the whole project, which is small enough in scope to allow this, and I would be “the tiebreaker” if there was a conflict of ideologies, but there were never any – issues were resolved on their own, generally, and maybe a few times involving one party scientifically proving to the other why he was full of it, to the mutual benefit and education of both. I anticipated some “You’re the mechanical guy and I only do electronics” to occur, but absolutely nobody tried to skimp out on their tasks. I think the students understood fully what they were getting into and what was expected of them; this, combined with my firm aversion to hardline-graded requirements, meant they could more readily focus on their project.
During 2.00gokart, I deviated a bit from the Mechanical Engineering department’s requirement that 2.007 students maintain a classic written “lab notebook” and allowed (and encouraged) students to start their own build logging websites. I accepted entries on these sites as an equivalent of the notebook entry for the week, but because I was still working within the constructs of 2.007, there was a minimum notebook requirement each week – I just made a note that the meat of the entry is online. It was not required, and many decided against it. The links to these student websites are found on an index post I made for the purpose. It’s important to remember that some of these posts are also on behalf of their teammates.
I think this sentence from the cleanup post is a great summation of the class, capturing its intent, spirit, and scale in one:
The two top placers in the drag race were a hundredth of a second apart yet represented opposite ends of the traditional EV design spectrum. One was huge, had giant balloon tires, and two massive DC motors. The other was small, lightweight, and had a single brushless motor.
# 6. The SUTD Summer Session 2013
While the 2013 class was running, I was requested by SUTD (the people who fund the center whose fabrication spaces I run, so I better damn listen, eh?) to run a special edition of the class for 28 students enrolled in an MIT-immersion summer program. For those not in the know, SUTD first started taking students in 2011 and they’re still very small. Basically, I was faced with a chance to reflash like half of their sophomore class to build silly go-karts! Who would turn that down?
Coming hot off the spring 2013 2.00gokart section, I was confident it could happen again. The time constraints of summer posed a unique challenge: I had only 8 weeks to do all this, compared to 13 for a standard academic semester here, and for 28 students working in teams of three (with 2 teams of 2). I had two TAs whom I knew well to deputize tasks to, so it wasn’t *total* death. I could no longer assume the students were all uniformly processed through the Mechanical Engineering department beforehand, so I had to prepare to teach literally everything I needed them to know for the class. And I was also interested in the cultural difference: it’s a common conception in education that Eastern students are taught more to follow orders and not overstep their bounds, compared with Western students who are taught to be more creative and individualistic – so how differently would my near-constraintless-on-purpose design task be accepted?
The most pressing matter to me was the timeline, both in class pacing and the amount of work the students could do in that time. I only had these students from June 10th to August 9th, so I had to condense the 2.00gokart schedule down by about a third. Many of the weeks labeled “Fabrication Week” in the 2.00gokart outline were condensed. I sent orders out twice per week as a default, since waiting one week in this round was a much tougher hit on the production schedule than before, and air shipping & 3-daying were the default. This did increase the cost overhead of the class (Section 9 will discuss more fully what is involved cost-wise for the three sessions).
Tempering the scheduling constraint was the fact that I had the undivided attention of these students, instead of having to fight for it because they are taking 5 other classes, working a UROP, running a startup company, and doing things with their frats. Furthermore, it was three students a team. Therefore, I figured the overall workload per person-week was actually lower.
The compressed schedule ended up in the following format:
- Week 1: Team formation and concept sketching – settle on a concept by the end of the first week.
- Week 2: Design calculations, analysis, formation of the first Bill of Materials as you settled.
- Week 3: CAD and prototyping
- Week 4 – 6: Fabrication, with the Mechanical Inspection occurring at the end of week 6
- Week 7 – 8: Electrical fabrication and testing, with final inspection occurring at the end of Week 8
- Weekend 8: Final contest
I also ‘buffered’ lots of parts. To avoid shipping delays of anything over a few days, I pre-stocked a selection of motors from Hobbyking, controllers from Kelly, and about half the warehouse of Monster Scooter Parts. I ordered parts based on a good guess of what students would land on and specify, extracted from the 2.00gokart and 2.00scooter master purchasing list. This isn’t to say that students could only use those parts – for the most part I kept the selection hidden in an Instructors’ Only closet, but if someone “ordered” a part that was in the stock, it was available after 1 or 2 days of artificial delay to keep things fair; I had a queuing area for shipments as they came in, and the parts would magically appear in it.
More parts were also made available in the general supplies cabinet. On top of the previous list of electrical components and 80/20 hardware, I also put in:
- FR-8 (1/2″ bore with flange) ball bearings and G-10/PFR-2214 (5/8″ bore with flange) ball bearings
- #25 chain, connecting links, and sprockets
- Cable brake supplement hardware, including adjuster barrels (example) and pinch bolts (example).
Essentially, some of the small annoying things I would let a 2.007 student forget about and then point out later were filled in for the SUTD class due to time constraints.
I took the gap between the end of 2.00gokart and the start of the summer session to ~~work on my van~~ create a more comprehensive set of lecture slides that covered more of the topics in the class. This will also be a resource for future 2.00gokart sessions. I wanted a fuller set of references for the students to refer to if they had minor questions, since the greatly increased student count meant I needed to pre-emptively load-shed. Not to say I didn’t spend all my time answering questions *anyway*, but hey, I’m the chief instructor for a reason. The lecture subjects formalized much of what I had just drawn on the board or sent out e-mails before. Furthermore, they covered specific mechanical engineering subjects such as using fasteners:
Other topics included designing for “our” manufacturing (DFOM), i.e. laser cutting & waterjetting, and overview of the vehicle electrical system.
Procedurally, the greatest challenge of the class was keeping the TAs on the same page as me. I’ll be straightforward: None of the TAs had as much experience in mechanical design and EV systems as I do, so there were many briefings in which I gave them the gist of the task in case students had questions. Nonetheless, we still faced times when students were being “bounced” between the TAs and myself. A student would ask a TA about some possibly obscure controller bug, they would be bounced to me, but I would be in the middle of helping out someone else and ask the student if the TAs had seen the problem yet. This was one of the “themes of complaints” on the end-of-class evaluation, and shows again why this class might not be scalable unless you had multiple EV hacker gurus.
Another time-consumer for myself and the TAs was having to babysit the waterjet cutter. The students do not use the machine themselves since we in the IDC do not have our own waterjet – instead, they queued up their cuts on DXF files which were submitted to us weekly. Cutting was done by borrowing the Architecture Department’s waterjet for the summer. In part due to the schedule being so crunched, and also because there was just more machining to do than in the spring, in the last 3 weeks I think we spent 9 or 10 hours each time in the waterjet room. I did do a lot of waterjet babysitting for 2.00gokart, too, but because that schedule was much more relaxed, the sessions weren’t as intense.
The end-of-term challenge was changed up completely this time around. Instead of everyone taking individual runs up the garage, I decided to go all-out for fun (since who’s getting graded on this anyway?) and with the other instructors, set up a road track in the parking lot adjacent to the garage of yore. The layout of the track was Scientifically Determined (read: weekend of Instructor go-kart hoonage) and fixed at a nominal length of 150 meters, taking up only half of the parking lot so we could have an easier time with getting it closed off by the MIT Police.
We made up the course to have only two really long straight areas. The rest of it was fairly tight, especially the chicane, to encourage the vehicles to be Designed for Turning. One thing that 2.00gokart taught me is that the garage turns were so wide that people didn’t have to design their vehicles’ steering very well at all, and we wanted to head that one off for the summer session. On average, we ran 25 to 30 second lap times, but the student vehicles tended to take a little longer.
The format of the contest was derived from the garaging runs. Student vehicles were rigged up with the Wattmeters, and were allowed 2 laps to warm up and familiarize, after which the wattmeters were reset and they started from standstill for 3 laps. The total Wh consumed and seconds taken to run the track were recorded. Because the transient that is the first acceleration is very short in time compared to the rest of the race, and everyone used the same procedure, we could still compare between vehicles.
Sadly, the drag race was dropped due to time reasons – if everybody wanted a turn on the vehicles, then the events would run on very long. By time calculations:
- with each lap taking about 30 seconds and students running for 5 total laps
- a minute between each driver to swap vehicles
- 30 seconds
- 1 hour of ‘charge time’ during the lunch break
The event would seemingly run only 3 hours. This is how many people schedule and manage the time for “design contest” style events, and this is why they always run overtime. We actually took up the entire 5 hour timespan, since the vehicles sometimes broke down on the track, people had to restart and re-run, and repairs had to be made. If everything were mechanically perfect and reliable, then we would have been closer to the calculated time, and I would quit my job. Really, part of the fun for the students was doing that in-the-field servicing, watching people rig fixes and, on one occasion, entirely rebuild their front-end after bending a steering component.
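For the curious, the 3-hour figure falls straight out of those numbers, assuming every one of the 28 students takes a turn driving:

```python
# Naive event-length estimate from the figures above.
drivers = 28
laps_per_driver = 5      # 2 warm-up + 3 scored
lap_time = 30            # seconds, nominal
swap_time = 60           # seconds to swap drivers/vehicles
misc_time = 30           # the extra 30 seconds per driver listed above
lunch_charge = 60 * 60   # 1 hour of charge time over lunch

per_driver = laps_per_driver * lap_time + swap_time + misc_time   # 240 s
total_hours = (drivers * per_driver + lunch_charge) / 3600.0
print(total_hours)   # ~2.9 hours... which is why everyone plans for 3 and runs 5
```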
The TAs and random friends that showed up the day of to volunteer were crucial to the event going smoothly. While I may wish for TAs with more in-depth EV knowledge during the class itself, at the end of the semester when all the labor is done and it’s time to have fun, everyone plays a role in organizing, marshalling, taking pictures, resetting cones, etc. It actually is running a miniature racing event, after all.
I don’t yet have a comprehensive video of the event. Most of the runs were recorded, but they weren’t very exciting to watch. What was exciting, though, was the “grand prix” races we had at the end with the still-running vehicles. They pretty much capture the gist of the event:
After the students left back for Singapore, we hired a shipping company to box up all the vehicles and parts and ship it all back to them. So hopefully, this is the start of SUTD’s own silly electric rideable thing culture!
Here’s the cool part. Because I was not running this class under the auspices of an ABET-accredited department any more, I decided to let the students go full-blog in their documentation. The requirements were reversed: A minimum entry on their website (set up in the first week of class) was mandatory, and then they could put whatever they needed in their physical notebooks. Everyone took me up on it this time, and the websites still exist, variously updated. The class got intense enough that I didn’t enforce the updates very hard – also influenced by my agenda that the class should have as few actual requirements as possible.
- Kart Laputa
- Celeris
- tohzilla speaks
- ORD V3
- WE ARE THE ANSWER
- Team 16.5
- Bear Can Build
- Gold-kart
- Jocose
- ZJatgy’s
# 7. Current state of the class
After three iterations, I think the class has reached a stability plateau in terms of procedures and technical content. I now know what needs to be done to prepare before the session, what needs to happen every week during it, and what to do in terms of organizing the event (either garage or parking lot). My hope – something I’m not actively working towards but feel like someone might take me up on – is a Purdue EVGP or e-Kart style, multiple schools and teams kind of event with karts built to similar specifications as what I have outlined. In the meantime, I intend to run the class and make semesterly little changes and tunings as often as I can, for the length of my, umm, regime at MIT. So, these next two sections will conclude the story and hopefully leave everyone with enough information to understand what kind of shenanigans are happening at MIT with all of our electric rideables here, and how to organize your own league of go-kart legends locally.
**Before the session:**
- Secure a budget for the class and someone who will be the central purchaser. Section 9 will have more information on the resources and budget info. The purchaser should have unfettered access to a purchasing card or requisition system and must be prepared to make an order every week at the least, or preferably twice per week, from several vendors at once. There should not be 17 layers of administration between the instructor and the purchasing / approving staff. Preferably, an instructor **is** the purchaser and has direct contact with the approver (if applicable) every week. This is how I have been handling the purchasing for the class – I have an MIT procurement card and the ability to make requisitions from preferred vendors such as McMaster-Carr and DigiKey and MSC, et al. through an internal MIT requisition system, charging directly to a class account.
- The class as I run it depends heavily on rapid prototyping machine access – waterjets, laser cutters, the odd 3D printer or two. Negotiate access to these for individual students or have a system where you queue student jobs and run them on the machines. If these machines have a cost element, they need to be factored into the budget (refer to Section 9 for estimates of how much we busted on waterjetting!)
- Negotiate any sponsorships well beforehand. I depended on the good graces of A123 for donations of batteries in 2012 and 2013, one of the last things they did before imploding back to a small condensed matter state. For the summer session, I had to buy a set of the batteries at retail price, and that was a hefty addition to the budget. For the coming 2.00gokart year, I will need to do that again or hitch up with another battery sponsor. Batteries are easily the most expensive portion of the class materials – unless you’re insane and want to use Hobbyking lipoly batteries on everything.
- A reasonably equipped machine shop is preferred. Everyone can do it up “Chibikart Style” with nothing more than garage tools, but a small metal lathe and mill are valuable additions. The students, as I run the class, have 24/7 access to our fabrication space (pursuant to safety rules such as required personal protective equipment and a strict no-working-alone policy in my shop). It’s probably also possible with fixed lab hours, but that’s against my philosophy.
- “Messy lab” space is absolutely essential – for 2.00gokart and the summer session, I assigned each team 1 table that was on wheels and they could roll in and out of the shop. At the end of their working session, all tools from the shop must be replaced and all of their parts, assemblies, and hardware must be only on their table. For those keeping track, that’s like 10 big lab tables.
- Pin down the design of the final contest and secure the facilities if needed, including any safety office or regional governing authority paperwork. This is key – the first round of 2.00scooter safety meetings took 2 months spanning 3 sessions, but it has been significantly easier afterwards because now everyone has an idea of what the event entails. Feel free to show your own authority the highlights videos linked in this post.
**During the session:**
- The lead instructor(s) should have a very in-depth knowledge of electromechanical systems, both theoretically and practically. Theoretically speaking, the content is not difficult and is generally first-order linear differential equations or closed-form linear constitutive equations (if you understood that, then you can handle teaching everything needed – a small worked example of the kind of estimate involved is sketched just after this list). Practically speaking, a strong debugging sense, experience with past build projects where stuff didn’t work, and strong design experience of mechanical systems are pretty much required. Electrically, most of the systems I use are plug-and-play, but debugging circuitry, bad solder joints (*MY GOD THE BAD SOLDER JOINTS PEOPLE MAKE*), and other wiring demons should be in your skillset.
- TAs are helpful only if they are essentially of the same caliber as instructors, or are handling mostly non-design tasks, such as staffing a shop (e.g. only there to answer machining questions), or grading student work if the class structure dictates it.
- If your class has a milestone or checkpoint format, then you should write these beforehand so everyone knows immediately what is expected of them each week – Section 10 has my example milestones from all 3 sessions so you can also get a feel of how the class evolved. Lectures, if applicable, should also be prepared beforehand since leftover lecture slides are a good resource for people. They should not be made available publicly (in my opinion) until *after* the lecture, to encourage everyone to come. I reserved the right to refer people to lecture slides if they didn’t come and then expected me to answer very simple questions.
- **Instructors must be available.** This is **not** a lecture-and-go-away class; you are all-in, 100%. During these sessions, it was not unusual for me to be “in lab” from noon until midnight – my schedule is a little “off” by most peoples’ standards, but regardless, you are the mother duck to a pack of ducklings. You have to be on top of all parties’ progress and see that nobody is falling too far behind. Many engineering students would prefer not to ask questions and tough/puzzle it out, when it is far more productive to grab a hint and move on, and you should be on the lookout for people who are too stuck trying to optimize in the wrong direction. You must also be able to resolve team conflicts if teams are the format.
- Purchasing (if separate) or you (if you *are* purchasing) must be organized and deadlines for BOM submission made non-lenient. I routinely did cut people off when they swore that they needed 5 more minutes to add a few more parts. The answer is you had 3.5 hours of lab to add these parts. Purchases must be prompt and in the best case should occur during the daytime so most vendors can ship out same-day – if you wait until night or after business hours to order, it most likely will add an extra day of transit.
- An order queueing area for students to receive their parts should be reserved. I think opening the boxes of their own parts contributes greatly to student excitement.
- Nearing the end of the term, be aware of how close the event is and any logistics you need to perform to set it up. In the ‘cleanup posts’, I go over what we had to do to set up for the events.
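To give a feel for the “closed-form linear constitutive equations” mentioned in the first bullet of this list, here is a minimal sketch of the kind of back-of-envelope drivetrain estimate the class revolves around. Every number in it is a placeholder made up for illustration – it is not any particular motor or kart from the class.

```python
# Back-of-envelope drivetrain estimate from DC motor constants.
# All values below are placeholder assumptions, not class hardware.
V_batt  = 33.0     # pack voltage, volts
Kv_rpm  = 190.0    # motor speed constant, rpm per volt
I_limit = 40.0     # controller/battery current limit, amps
N       = 8.0      # overall gear reduction
r_wheel = 0.125    # wheel radius, meters
mass    = 110.0    # kart plus rider, kg

Kt = 9.549 / Kv_rpm                            # torque constant, N*m per amp
w_noload = V_batt * Kv_rpm * 2 * 3.1416 / 60   # no-load motor speed, rad/s
v_top = w_noload / N * r_wheel                 # ideal (unloaded) top speed, m/s
F_launch = Kt * I_limit * N / r_wheel          # wheel thrust at the current limit, N
a_launch = F_launch / mass                     # launch acceleration, m/s^2

print(f"top speed ~ {v_top:.1f} m/s, launch ~ {a_launch:.2f} m/s^2")
```

That ten-minute estimate – does the top speed sound sane, can it actually haul a rider up the garage ramp – is about the level of analysis a team needs before committing to a motor and sprocket order.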
**Contest setup**
This section, of course, is highly variable dependent on what your exact circumstances are, so I can’t really say that much besides give data points on how it went down for us:
- In 2012, the safety office was adamant that we set up some kind of ‘catch netting’ to prevent people from running into the concrete walls of the garage. It became obvious to me that they did not really think the implications and details of this through, nor did I think it was a proper response to people riding scooters around gingerly, but without the signoff from them we would not have gotten the venue. Therefore, I and 3 other folks spent 12 hours on the Saturday before the event rigging up steel cables through construction debris netting, and about 1.5 hours setting it up in the garage. The cost for this ‘feature’ was about $1500 total. The way the nets were set up, it was a reasonable way to stop a person standing up on a scooter (you’d probably just get epicly clotheslined) but the go-karts would have shot right through the two steel cables.
- You are going to need way more time than you anticipate. I, luckily, have some years of event planning and running experience in the sphere of robot contests, so I immediately budgeted 6 hours for the contests. They always needed that or even more, because you’re never going to run vehicles back to back continuously.
- In 2013, I piled the garage full of low bricks of fiber insulation (chopped up recycled cloth and paper material) (example link – it comes in bricks). It was far faster to set up, more effective than the netting for go-karts because each row of five weighs 100+ pounds, and if you joined them together they are a flexible soft wall. Luckily none were actually needed. They made a return protecting the curbsides in the summer race. Overall setup and teardown for each of those events were under 30 minutes each and no labor was expended beforehand. Therefore, I approve of these things immensely.
- You are going to need way more time than you anticipate.
- There should be a driver meeting to go over procedures, safety expectations, and starting order, and everyone who is driving should be there.
- I held a separate track marshal/instructors meeting to make sure all the event staff were also on the same page about who is standing where, who’s timing, who is reading wattmeters, etc. All volunteers should know their roles at the start.
- **You are going to need way more time than you anticipate.**
- Two-or-more-way radios are an immense help to reduce arm flailing and shouting, especially if half your venue is located next to the air intake of a very large biolab building.
- If you actually have a circuit race with multiple racers, a flag system is most likely needed to communicate track issues or people who blew up their motor around the bend.
- **You are going to need way more time than you anticipate.**
**Lecture content**
I held weekly mini-lectures on various EV-relevant topics. These were concentrated mainly in the beginning of the semester when people were still in the design phase, and covered topics that were directly relevant to the design task. Here’s the example 2.00gokart (Spring) lecture schedule:
For the summer session, since we had 8 weeks but two sessions per week, I doubled up the content:
I’m going to include the full set of lecture slides from these sessions in Section 10 for reference. The “week one” website lecture there was handled by one of the TAs – we agreed that giving people a primer of what we preferred to see online documentation-wise would help people out if they haven’t had a website to manage before.
The electrical lecture was split up into two sessions since for the most part, the first half of the class is concentrated on mechanical design, so the first Electrical lecture is more of a ‘heads up, these are things you need to pick and include in the CAD model of the vehicle, but don’t worry about putting it together yet’ sort of affair, and the second one is where I have more wiring tips and other details.
**Changes to the class I’m intending on making next Spring (2014)**
I foresee running the 2014 contest in the format of spring 2013 with only a few possible changes:
- I’m debating switching the spring class over to the road course, too, since it offers that much more excitement, especially if multiple vehicles are running at once. However, the garage runs offered a reasonably challenging, but still straightforward simulation & prediction element to the design, something that I think should be part of the class if it is running as a departmental lab section – much of MechE still has everything to do with analysis, and it would be beneficial to train up those skills early. I would need to come up with a framework for students to estimate their energy usage and speed, etc. on a track type environment if I go that route – a rough sketch of one possible starting point is below, after this list. (Feel free to suggest good links in the comments…)
- I’m planning on introducing more wildcard components. One of these under heavy consideration is cracking open a hybrid car’s battery pack, typically made of NiMH cell modules. I can get more watt-hours for cheaper by salvaging them (A grand rundown on the one I already opened up is forthcoming). Students may be offered a choice between the A123 bricks and NiMH cell modules.
- I want to start encouraging R/C part based drivetrains – odd, I know, since I’ve spent 2 years trying to stamp them out in the interest of science, but to balance this with the introduction of students to a wide variety of resources means demonstrating how they may be used within the rules of the class. The biggest challenge is getting an R/C system to stay under the maximum amperage of the batteries, and I’m thinking of methods of doing average-current control with a microcontroller frontend – if this bears fruit, it will, of course, be documented here.
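Regarding the energy/speed estimation framework mentioned two items up – not something the class hands out today, but here’s a rough sketch of the kind of starting point I have in mind. Every constant in it (cruise speed, rolling resistance, efficiency, number of hard corners) is a placeholder guess, not measured data.

```python
# Very rough per-lap energy estimate for a road-course run: treat a lap as a
# few hard accelerations back up to cruise speed plus rolling drag over the
# lap distance. All constants are placeholder assumptions.
g = 9.81
mass     = 110.0   # kart + rider, kg
v_cruise = 8.0     # m/s, typical speed between corners
lap_len  = 150.0   # m, nominal track length
n_accels = 4       # times per lap the kart slows way down and speeds back up
c_rr     = 0.015   # rolling resistance coefficient, guess
eff      = 0.7     # battery-to-wheel efficiency, guess

kinetic = n_accels * 0.5 * mass * v_cruise**2   # J spent re-accelerating
rolling = c_rr * mass * g * lap_len             # J lost to rolling drag
wh_per_lap = (kinetic + rolling) / eff / 3600.0

print(f"~{wh_per_lap:.1f} Wh per lap, ~{3 * wh_per_lap:.1f} Wh for a 3-lap run")
```

The point wouldn’t be accuracy so much as making students commit to a prediction they can compare against the wattmeter afterwards, the same way the garage runs did.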
# 8. Reasons why I think 2.00gokart is innovative for the space of design-oriented classes
The goal of this section is to explain why I think the electric vehicle design class is different from, and better than, most traditional engineering ‘design classes’ in how it engages students and prepares them for further study or progress in an engineering curriculum. What’s an engineering design class? I take it as any class where the principal objective is to create a functioning product while applying appropriate engineering principles and theory in support of that goal – it doesn’t have to be “Senior Machine Design” or “Automotive Design” specifically. Recall the quote from the start:
a *design* class teaches you to use a practical approach backed by engineering science to solve an ill-defined or open-ended problem
There are things which I call design classes that others might call a “lab class”, where the focus isn’t entirely on the project. I think in either case the points I will express are applicable. It’s also important to keep in mind that I don’t consider building a silly go-kart to be a panacea for all engineering school ills, but as long as I am here at MIT, I’m going to use that as a way to enhance learning for as many students as I can manage, and hope others also find the resources engaging.
In a non-ordered list, since I can’t bring myself to conclude which point is the most important:
**It builds appreciable engineering and design skills right away.** The class is purposefully designed as a skill-builder for freshmen and sophomore students. I don’t construe this remotely to be a giant senior year engineering showcase. I explicitly keep it lightweight and fun-oriented so you can focus on becoming familiar with basic electromechanical parts, fabrication skills both electrical and mechanical, and using modern design software to visualize your designs before committing to physical materials. You build a robust system that is large enough that misapplied theory, I Saw It On The Internet (ISIOTI) syndrome, and ignored flaws tend to show themselves quickly.
The classic example I give is shafts running in aligned bearings – on a little robot that has a wimpy sheet metal frame, you can really goof up the fabrication or the placement of parts, etc. but just bend it into shape. It doesn’t take much force to encourage 1/32″ aluminum to go where you want. The same cannot be said about 1″ square frame bars. You appreciate the principles of mechanical design and structures more viscerally as your entire frame sags and your wheels cant inwards because the structure is not rigid enough. Whereas almost any amount of gluing and set screwing will transmit the torque you need to lift a little robot arm, the same joints will rupture instantly when you put the torque of a drive motor behind it. The steering linkage supported by a single bronze bushing will bind up instantly (“but it worked in Solidworks!”). Things have to actually be designed carefully – the fudge factor is lowered. *Trust me, though, you can still fudge the hell out of one of these*.
In the class the way I run it, students learn modern rapid prototyping equipment like waterjetting, laser cutting and 3D printing. A lot of people around here snicker at such new-fangled technologies and stick to their CNC guns, but tools are tools. Not introducing students to these tools – which they will most likely be working with in the field – does not make the tool nonexistent; it just makes you dated. I try to teach *effective use* of these tools, which is why students generally do not use any of these machines (besides the laser cutter for quick prototyping) on-demand – I screen the parts for make-ability using something else we’re both standing immediately next to. I pretty much refuse to 3D print rectangular blocks with holes in them and taunt them mercilessly about rectangles with holes on the waterjet, and ask if it could be made in other ways. For instance, could you have just bought a more precise and better made square with a hole in it, because…
**It introduces students to resources beyond a class supplies cabinet and to the limitations of those resources**. In many design classes, you work around parts that have already been given to you. Caveat: This is completely okay if the problem is entirely solvable within the scope of the parts given. I’m not saying all “lab supply closet” classes are bad – they serve their purpose to teach a specific method or produce a specific end goal. But in the more general case, you’re given “a task” – to make a product, to design a machine, to optimize this toothbrush, and the professor says Go. What do you do?
If you are a traditional engineering student, you kind of sit blankly on Google trying to decipher the cryptic Chinglish descriptions of some part you’re looking for, because you haven’t actually done this before. Neither have your friends. The first time many MechE students *see* shopping for parts here *is* in our senior classes. I have counted more instances of people producing very convoluted solutions to a problem that 5 minutes of rummaging through the McMaster-Carr catalog could have avoided, than I care to disclose. I have done the same, but years ago, so now I know a little better.
In the class, I introduce students to a sample plate of places to shop for mechanically oriented projects as part of the mini-lecture series (See lecture material in Section 10). You learn how several vendors can offer the same part in minor variations, and how to discern the differences from some cryptic specifications. For people who always deal in precision industrial or medical parts, everything comes with a full-out datasheet, but not everything is like that; I show people the back roads and shadetree shops of parts buying in case they ever need to use them. You learn that ordering from China without an express post option is a horrible, horrible idea because the class is gonna be **over** before your parts come. This has happened.
But the physical examples aside, the key theme is more like…
**It encourages students to seek information out on their own and discourages ‘thinking in terms of classes’**. There have been many instances of a student presenting me with a part from a place I’ve never heard of because it was the only place they could find the part to their specifications. Or they found it lower priced on some other website, or even found it locally. What it comes down to is the class stops being its own little universe where all issues are resolved at office hours with the instructor. It doesn’t stop at physical products you can buy either – discovering that a part exists on one site, finding the CAD file of it on another, getting the best price from yet another is more an example of what I mean.
That’s how I shop for bearings.
I shamelessly tell people to research it if I get asked a very open-ended or poorly formed design question. Good example: “How do I make my steering linkage?” or “Which of these motors do you think will work?” – notice how those questions are phrased, such that if you answer them, you basically give the correct answer. I’ve received feedback a few times that the class was really ‘you teaching yourself’, both expressing that positively *and* negatively. The positive folks discover a world of go-kart design websites and records of prior work by their classmates and grow their bill of materials by 150% the day after.
The people who don’t take to that as well? I hope I have just introduced them to another side of the engineering dice. It comes down to being *resourceful* and discovering ways to help yourself (and your team) through the project, and reserving raising your hand and asking the teacher for very specific moments.
**It exposes students to engineering management issues like supply chains, project schedules, and project scoping on a personal level.** If I did have to pick favorites, this might be it. A very important skill that I learned through building many, *many* small crawly or rolley objects is how engineering projects tend to go. Many times, the issue with design classes isn’t that people *produce bad work* or *are total noobs*, but that nobody has a sense of the organization required to successfully execute all the plans that a brainstorming committee might produce. The plans are way too ambitious or complex for everyone’s collective skill level and the technical demonstration is like, two weeks away, guys…
One example of how I accomplish this in the class is the make-versus-buy tradeoff that is the weekly waterjetting. Waterjet machining is not on demand – I queue up the parts and run them once a week. In that week of waiting for your parts, could you have made that part in the machine shop 15 times over? How much did that impact your fabrication schedule to have a week of deadtime? Were you able to fill that week attacking a different part of the vehicle? The best case I’ve seen is students sending me the truly weirdly shaped combination motor mount and Pizza Hut and Taco Bell to be waterjetted, then spending the intervening time cutting out frame pieces and prototyping assemblies, finishing it just in time for me to deliver the piece on Friday. The same general pattern – do this now, or wait for someone else to do it so you can focus on another aspect – is present all over the world of engineering.
Once you have been through the cycle, you start to understand that on a more profound level than an engineering management lecturer telling you about timelines and delegation and Gantt charts. I throw all of this at you in a project that’s just complex enough to require planning and ahead-of-time integration (I do **not** allow controllers and batteries zip tied to anything), but small enough that you still know the state of every aspect of it. I don’t fudge deadlines or try to bail people out. It’s a veritable microcosm of a larger, team-based, big-budget show. Once you see the parts of a small project, you understand better how the same structures apply to larger ones.
**It commits you to standing behind your work instead of whipping something up just to pass a requirement**. I make you ride it. In front of all your friends.
This ties into the reason why I try to keep the class as free of hard turn-in requirements as possible. Too many requirements, milestones, deadlines, and evaluations means people just try to get something together so they can get the next 10 points. The project becomes a mishmash of bad hacks and we’ll-fix-this-laters. My schedule is purposefully very fluid with the exception of two or three hard deadlines: the vehicle inspections. If you rig something for that, it’s pretty much staying rigged and being raced that way. I think the 2.00gokart schedule in particular worked out immensely well – people completed far ahead of time and I had to schedule several drive testing nights because they were excited to run their semester-babies. Trading go-karts in particular is a hilariously chaotic part of the last week.
Overall, I think the message is worth repeating, from the “introduction”: **Give students the tools of practicality and channel their creativity first.** The department has plenty of ways to back student knowledge with theory and science, and my hope is that when the two worlds crash back together again, my students will be extra-prepared.
*Going hard in the week before the mechanical inspection…*
# 9. Resource base and budget discussion
I left out one more reason. In my opinion, for the value you get out of a group of students who take the class and can go on to do bigger and better engineering projects with their skills, the class is relatively inexpensive. Let’s be frank: building things, especially large things, costs money. I don’t want to contrast with the build-a-little-robot sort of class, since the scale of the parts costs differs by almost an order of magnitude, if not more. A little robot drive motor costs single dollars and a nice small-EV motor can be over a hundred dollars. I also do not want to draw comparisons to the large organized kart racing events – they can spend tens upon tens of thousands of dollars on just a build, not to mention getting a team to competition and the associated support equipment costs.
I contend that a 2.00gokart style class is in sort of its own middle ground league with respect to cost. The specific reason I call it “relatively” inexpensive is the level of in-depth exposure the student gets to so many facets of modern engineering – mechanical, electrical (wiring), electronics (programming, sensing circuitry, controllers), fabrication and machining, CAD, etc. It’s all embodied in one person. Contrast this with a large team build where one student might have only one role in a much larger, costlier project scope and I argue the overall value of the class is decreased *if it’s to teach engineering and engineering management skills, *like what I describe above – if the role is to exercise them (such as in a sr. design class) then the comparison should not be made. One MIT professor terms this the “engineering Renaissance Man” effect.
I originally made it the goal of 2.00scooter to keep the budget to around $500 per student *inclusive* of any overhead such as shipping. Section 10 holds the actual accounting spreadsheet I used to tally costs for both years. Shipping was considered a fully separate category to keep its cost ratio calculable. I included the cost of “free” stuff that the students received from me in their budgets, including that year’s “semi-free” Razor scooters. I also kept a category of “consumables”, considered to be stuff like public hardware. The findings of the semester were:
- Net cost per student including shipping and free stuff overhead of $503
- Total money spent of about $6100 with consumable costs of $1100
- The parts cost overhead % was right around 33% – the hidden costs of each vehicle added 33% to their values
It’s important to note that this does NOT include the following:
- Access to a really nice and finished shop. I don’t have a *list of literally everything you need to run the class* like, say, MAS.863 does!
- Batteries, which were donated for free. The cost of each A123 brick is $120 – so if you had to buy all the batteries and gave them out 3-for-free like I did, then the class cost might double!
The 2.00gokart session in 2013 broke down like so:
- Net cost per student including overhead of $632
- Total money spent of $10,200 with consumables cost of $3500
- Overhead of 45% of parts cost.
Something about that seems off, right? I included $1500 of MIT waterjetting (done through the Hobby Shop) and $1000 of Big Blue Saw waterjetting for the sessions before my machine charge approval went through (that got it through REALLY QUICK!), and I initially put those in with the Consumables.
If the waterjet charge was moved to the one-time costs bracket, then the overhead drops to only 24.1%. This puts it in line with the trend: grouping people together in twos means fewer total orders and less shipping cost per part ordered, hence why the net cost was only an increase of about $100 for an increase in hilarity of 10,000%. But, for those without free waterjet access, the cost will be very real, so I would say the fairer comparison is the original.
For the SUTD summer session, I did not keep an equivalent accounting doc since it was too chaotic; this time I had the accounting tracked by one of the financial administrators in the IDC who was working with the program (Oh, dear, I now have admin staff…). I know the final numbers, however, enough to draw some comparisons:
- $21,000 spent total, everything included, for 28 students, which comes out to be $750 per student
- $3090 of that was waterjetting costs alone!
The extra increase is explained by the fact that I bought something like $5000 of parts beforehand in order to make sure we wouldn’t suffer from shipping gaps. Many were included into the vehicles, but ultimately that skewed the cost per vehicle because a good percentage, maybe 40%, ended up unused. If I run with the 40% fallout rate estimate, it becomes $670 per student. But that doesn’t mean Singapore didn’t get its money’s worth – I bunched all of that up and shipped it back with the karts. Good luck over there.
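Spelled out, with the 40% fallout number being just my rough estimate from above:

```python
# Summer session per-student cost arithmetic.
total_spent = 21000.0
students = 28
prebuy = 5000.0     # parts bought ahead of the term
fallout = 0.40      # rough fraction of the pre-buy that went unused

print(total_spent / students)                        # ~$750 per student, everything included
print((total_spent - fallout * prebuy) / students)   # ~$680 -- roughly the ~$670 figure above
```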
The general idea is a Few Hundred Dollars Per Student. Remember again to not compare to a giant team effort – spending, say, $6000 on a project but with a 17-person team might mean a low per-person cost, but everybody may not get the same level of immersion in the content. I’m unsure of how much 2.007 proper spends per-student not including facility costs (i.e. just parts) – I’ve heard $250-300 floated around before.
Lastly, here’s a summary of what resources I have in my ‘sampler basket’ of places to shop. If you have seen my Instructables or been in one of my resource lectures at any venue (e.g. Dragon*Con’s Maker Resources panel) then you have seen the whole gamut.
A slide from one of the resources presentations showing a website and a part of interest
In terms of parts, we concentrated mostly on the usual suspects for small electric vehicles:
- HobbyKing for motors, primarily. For class consumables, I ordered wiring, connectors (XTs, Deans, bullets), heat shrink tubing, and 5-pin JST-XH harnesses for people who wanted to easily create sensor rigs
- Monster Scooter Parts, TNC Scooters, PartsForScooters, ElectricScooterParts… the list goes on. These houses deal specifically with the kind of generic Chinese scooter and e-bike whose parts we repurpose for our own applications. Specifically worth mentioning: TNC stocks cheap go-kart throttle pedals when other people don’t.
- McMaster-Carr for maybe 75% of the non-motor-and-ESC mechanical hardware on any one of these vehicles. I also purchased large amounts of (less expensive) wiring
- Kelly Controller‘s KDS and KBS lines were the most popular for motor controller choices.
- Burden Surplus Center was popular for some things which were found *much* cheaper – vehicles with large amounts of sprockets, for instance. Seats and steering parts also tended to come from here, as did some tires.
- Harbor Freight is worth a mention since a majority of the IDC shop tools come from here. ~~Don’t laugh~~. Several teams opted to use their $8-10 wheelbarrow tires for their main drive wheels, which saved money over a scooter-grade one but had to be “re-engineered” slightly.
- SDP-SI was the supplier of belt drives and timing belts almost exclusively. Most of their stuff is very small and precise – timing belt parts are about the only thing I get from them that’s EV-relevant.
- We ordered 80/20 rails directly from 80/20-the-company – they have local dealers which cut us severe discounts. McMaster has the extrusion material at a slightly higher price.
- Online Metals took the brunt of my massive 500 pound aluminum plate order
- Speedy Metals was the go-to for students who wanted small metal orders thereafter. McMaster was acceptable for some alloys, but expensive for others, and not all students took advantage of this – some decided to go ahead with metal orders from McM for speed of delivery.
- Our batteries came from A123. Yours could, hypothetically, too.
- I ordered bulk ball bearings for the summer session from USA Bearings and Belts. Else, during 2.00gokart, people usually ordered straight from McMaster or VXB, another vendor of Inexpensive Chinese Ball Bearings.
- I designed these Hall Sensor conversion kits for brushless motors and sell them through my “company”, Equals Zero Designs. For the class, to avoid conflicts of interest, I supplied students with the sensor rigs for free (produced using lab tools, of course – I wasn’t literally giving away free stuff).
- Quite a few students found parts on eBay they wanted to substitute for a more expensive but similar part on one of these websites. My credit card approvers tend to dislike eBay/Paypal transactions, so I often just offered to replace those items for no cost penalty to their budget.
I’d say 99% of any one vehicle on average came from those vendors. Occasionally, the odd hardware store nut or bike-shop brake cable were used – these are also great resources if you have competent ones in your area. We made very little use of Home Depot, which seemed to be a staple of the garage go-kart builder at one point. Maybe now that we have the Shopbot, everyone’s karts will be made of MDF and OSB next year.
Resource- and reading-wise, the students primarily used lecture content by myself and the TAs. However, I did curate for them a good amount of readin’ material:
- How to Build Your Everything Really Really Fast was essentially “the class textbook” for 2.00gokart and the summer session.
- Electric Scooter Power Systems was… I don’t know, the second volume of the textbook? I could be making *so much money* as a professor with a captive book-buying audience right now.
- Chibikart also was a reference for many folks – you can see it if you look at everyone’s steering kingpin and upright assemblies very carefully…
- Gizmology and RoyMech.co.uk (oldies but goodies) were two websites I sent people to for some general mechanical engineering knowledge, particularly their notes on spur gears, bearings, shafts and coupling thereto, beams and structures, bolted joints… you get the gist.
- Fundamentals of Design is an invaluable resource that’s basically all of MIT Mechanical Engineering in one epic 7 course dinner you have to puke in the middle of to keep eating. It dives deeper into the why of mechanical engineering than I do. I refer people to its sections regularly, especially those focusing on structures, fasteners, bearings, and power transmission.
- Many builders’ and makers’ websites and build blogs. An acutely toxic dose of them are in my own left sidebar – I’m not going to list them all out because everyone who builds and documents is a resource to others.
There are ones I’m probably forgetting at this moment, so this list may be updated without notice. This list of resources is not comprehensive – remember, I don’t have a “class in a box” list because the whole point of the class is not to have a damned box. You may feel free to supplement any of your own resources.
# 10. Reference document section
In this section will be reference documents and full lecture note sets from 2012 and 2013, including any other resources I see fit.
The terms of use: These documents are freely available for you to use as purely a reference for your own project build or for basing your own class, seminar, or other learny-activity involving electric vehicle design. They are provided as-is and I will not answer questions or offer support pertinent to their use. You are not forbidden from using the material verbatim in your own lecture notes with attribution; however, remember that the class is purposefully unbounded and this may not capture the extent of the information. If you redistribute them, you must include the license file in the zipped folders that says basically this.
- 2012 “2.00scooter” reference lectures
- 2012 “2.00scooter” reference milestone documents
- 2012 example accounting sheet
- 2013 “2.00gokart” reference lectures
- 2013 “2.00gokart reference milestones
- 2013 example accounting sheet
- 2012 safety office document
- 2013 safety office document
- 2013 summer session reference milestones
- 2013 summer session reference lectures
It’s not my hope for this class to spin off a thousand copies of itself. I rather hope for this document-article-post-thesis-novel to make people think about the goals of design classes and the role of individual learning in engineering education. I saw a condition at MIT which I found unsatisfactory, so I used what tools and connections I had in order to try and craft an alternative. Your circumstances will most likely not be the same. 2.00gokart is not an attempt to start some sort of design revolution, nor is it meant to overturn the foundations of a modern engineering education. It is simply one other path, out of myriads, that motivated and spirited people have built in the interest of advancing the quality and accessibility of education for everyone. After all, even grizzled old professors at one point used to be young and boisterous. I don’t intend to stop running the class after this – *hell* no, so don’t think this is some kind of final mass ejection, but have written this summary as yet another exercise of my drive to document as many things I do as possible. In my opinion, you can never have access to too much information if you are determined to search for your own knowledge’s sake, and the more high quality and comprehensive information there is, the easier that search becomes.
Thanks for your time, now go spawn a go-kart.
I’m an engineering student at UVic, and I wish we had a class like this one, basically everything is theory for us.
Bravo,
This is an intense writeup. The detail is awesome and everything is in one place. Just putting something like this together takes time, and, I may be speaking for the internet here, but,
Thanks From The HIVEMIND.
-Dane
Not sure you want +10 points of efame, but if you had someone shoot it, this would be a great addition to the open courseware. I’m sure many people will find the lecture slides useful, but if you could get
~~all~~some of those hours of Q&A on film, there's probably massive amounts of information that isn't captured anywhere else. Many neckbeards throughout the internet would be highly entertained. Either way, thanks for the contribution.
I agree with you entirely Charles. I love how you pretty much encompassed the idea of engineering competitions in this course.
I learned a lot more from 6 weeks in FIRST robotics than probably all the design classes I've had combined. I am also part of a Formula Hybrid team and it's clearly true that everyone that is on the team and builds a significant portion of the car will learn the concepts of their courses way ahead of time, making their lectures both more interesting and more effective in creating good engineers.
I wish you taught everywhere so that you will have a larger impact on the way engineers are trained in schools.
You should write a book. Seriously. | true | true | true | null | 2024-10-12 00:00:00 | 2013-10-01 00:00:00 | null | null | etotheipiplusone.net | On 2.00Gokart; Or, Designing a Design Class to Disrupt Design Classes as We Know It; Or, How to Make MIT Undergraduates Build Silly Go-Karts so You Don’t Have To | null | null |
7,434,720 | http://speakingjs.com/es5/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
26,403,250 | https://brid.gy/ | Bridgy | null | Bridgy connects your web site to social media.
Likes, reposts, mentions, cross-posting, and more...
Looking for Bridgy Fed instead? (What's the difference?)
Already signed up? Find your user page here. | true | true | true | null | 2024-10-12 00:00:00 | null | null | null | null | null | null | null |
30,621,187 | https://en.wikipedia.org/wiki/Seigniorage | Seigniorage - Wikipedia | Authority control databases | # Seigniorage
**Seigniorage** /ˈseɪnjərɪdʒ/, also spelled **seignorage** or **seigneurage** (from Old French *seigneuriage*, 'right of the lord (*seigneur*) to mint money'), is the difference between the value of money and the cost to produce and distribute it. The term can be applied in two ways:
- Seigniorage derived from specie (metal coins) is a tax added to the total cost of a coin (metal content and production costs) that a customer of the mint had to pay, and which was sent to the sovereign of the political region.[1]
- Seigniorage derived from notes is more indirect; it is the difference between interest earned on securities acquired in exchange for banknotes and the cost of printing and distributing the notes.[2]
"Monetary seigniorage" is where sovereign-issued securities are exchanged for newly printed banknotes by a central bank, allowing the sovereign to "borrow" without needing to repay.[3] Monetary seigniorage is sovereign revenue obtained through routine debt monetization, including expansion of the money supply during GDP growth and meeting yearly inflation targets.[3]
Seigniorage can be a convenient source of revenue for a government. By providing the government with increased purchasing power at the expense of public purchasing power, it imposes what is metaphorically known as an inflation tax on the public.
## Examples
Seigniorage is the positive return, or carry, on issued notes and coins (money in circulation). Demurrage, the opposite, is the cost of holding currency.
An example of an exchange of gold for "paper" where no seigniorage occurs is when a person has one ounce of gold, trades it for a government-issued gold certificate (providing for redemption in one ounce of gold), keeps that certificate for a year, and redeems it in gold. That person began with and ends up with exactly one ounce of gold.
In another scenario, instead of issuing gold certificates a government converts gold into non-gold standard based currency at the market rate by printing paper notes. A person exchanges one ounce of gold for its value in that currency, keeps the currency for one year, and exchanges it for an amount of gold at the new market value. If the value of the currency relative to gold has changed in the interim, the second exchange will yield less (or more) than one ounce of gold (assuming that the value, or purchasing power, of one ounce of gold remains constant through the year). If the value of the currency relative to gold has decreased, the person receives less than one ounce of gold and seigniorage occurred. If the value of the currency relative to gold has increased, the person receives more than one ounce of gold and demurrage occurred; seigniorage did not occur.
## Ordinary seigniorage
Ordinarily, seigniorage is an interest-free loan (of gold, for example) to the issuer of the coin or banknote. When the currency is worn out the issuer buys it back at face value, balancing the revenue received when it was put into circulation without any additional amount for the interest value of what the issuer received.
Historically, seigniorage was the profit resulting from producing coins. Silver and gold were mixed with base metals to make durable coins. The British pound sterling was 92.5 percent silver; the base metal added (and the pure silver retained by the government mint) was, less costs, the profit – the seigniorage. Before 1933, United States gold coins were 90 percent gold and 10 percent copper. To make up for the lack of gold, the coins were over-weighted.[4] A one-ounce Gold American Eagle will have as much of the alloy as needed to contain a total of one ounce of gold (which will be over one ounce). Seigniorage is earned by selling the coins above the melt value in exchange for guaranteeing the weight of the coin.
Under the rules governing the monetary operations of major central banks (including the United States Federal Reserve), seigniorage on banknotes is the interest payments received by central banks on the total amount of currency issued. This usually takes the form of interest payments on treasury bonds purchased by central banks, putting more money into circulation. If the currency is collected, or is otherwise taken permanently out of circulation, the currency is never returned to the central bank; the issuer of the currency keeps the seigniorage profit by not having to buy back worn-out currency at face value.
### Solvency constraints of central banks
The solvency constraint of a standard central bank requires that the present discounted value of its net non-monetary liabilities (separate from monetary liabilities accrued through seigniorage attempts) be zero or negative in the long run. Its monetary liabilities are liabilities in name only, since they are irredeemable. The holder of base money cannot insist on the redemption of a given amount into anything other than the same amount of itself, unless the holder of the base money is another central bank reclaiming the value of its original interest-free loan.
## Seigniorage as a tax
Economists regard seigniorage as a form of **inflation tax**, returning resources to the currency issuer. Issuing new currency, rather than collecting taxes paid with existing money, is considered a tax on holders of existing currency.[5] Inflation of the money supply causes a general rise in prices, due to the currency's reduced purchasing power.
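As a hedged aside (this formalization is not in the article itself; it is the standard textbook identity for the inflation tax), real seigniorage revenue S in a period is the new money issued deflated by the price level:

```latex
S \;=\; \frac{\Delta M}{P}
  \;=\; \underbrace{\frac{\Delta M}{M}}_{\text{money growth rate}}
  \times \underbrace{\frac{M}{P}}_{\text{real money balances}}
```

When real balances M/P are roughly stable, the money growth rate moves with the inflation rate, so what the issuer gains is matched by the purchasing power lost by existing money holders; that is the sense in which seigniorage acts as a tax.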
This is a reason offered in support of free banking, a gold or silver standard, or (at a minimum) the reduction of political control of central banks, which could then ensure currency stability by controlling monetary expansion (limiting inflation). Hard-money advocates argue that central banks have failed to attain a stable currency. Orthodox economists counter that deflation is difficult to control once it sets in, and its effects are more damaging than modest, consistent inflation.
Banks (or governments) relying heavily on seigniorage and fractional reserve sources of revenue may find them counterproductive.[6] Rational expectations of inflation take into account a bank's seigniorage strategy, and inflationary expectations can maintain high inflation. Instead of accruing seigniorage from fiat money and credit, most governments opt to raise revenue primarily through formal taxation and other means.
## Contemporary use
The 50 State Quarters series of quarters (25-cent coins) began in 1999. The U.S. government thought that many people, collecting each new quarter as it rolled out of the United States Mint, would remove the coins from circulation.[7] Each complete set of quarters (the 50 states, the five inhabited U.S. territories, and the District of Columbia) is worth $14.00. Since it costs the mint about five cents to produce one quarter, the government made a profit when someone collected a coin.[8] The Treasury Department estimates that it earned about $6.3 billion in seigniorage from the quarters during the program.[9]
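As a rough back-of-the-envelope check on those figures (my own sketch in Python, not official Mint accounting; the five-cent production cost is the approximate number quoted above):

```python
# Approximate seigniorage arithmetic for the 50 State Quarters program,
# using only the round numbers quoted in the article above.

FACE_VALUE = 0.25        # dollars per quarter
PRODUCTION_COST = 0.05   # approximate cost to mint one quarter
COINS_PER_SET = 56       # 50 states + 5 inhabited territories + District of Columbia

seigniorage_per_coin = FACE_VALUE - PRODUCTION_COST
set_face_value = COINS_PER_SET * FACE_VALUE
set_seigniorage = COINS_PER_SET * seigniorage_per_coin

print(f"Face value of a complete set:  ${set_face_value:.2f}")        # $14.00, as stated above
print(f"Seigniorage per quarter:       ${seigniorage_per_coin:.2f}")  # $0.20
print(f"Seigniorage per complete set:  ${set_seigniorage:.2f}")       # $11.20
```

Quarters are unusually profitable; averaged over all denominations (including pennies and nickels, which cost more relative to face value), the Treasury figure quoted below is closer to 45 cents of seigniorage per dollar of coins issued.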
Some countries' national mints report the amount of seigniorage provided to their governments; the Royal Canadian Mint reported that in 2006 it generated $93 million in seigniorage for the government of Canada.[10] The U.S. government, the largest beneficiary of seigniorage, earned about $25 billion in 2000.[11] For coins only, the U.S. Treasury received 45 cents per dollar issued in seigniorage for the 2011 fiscal year.[12]
Occasionally, central banks have issued limited quantities of higher-value banknotes in unusual denominations for collecting; the denomination will usually coincide with an anniversary of national significance. The potential seigniorage from such printings has been limited, since the unusual denomination makes the notes more difficult to circulate and only a relatively-small number of people collect higher-value notes.
Over half of Zimbabwe's government revenue in 2008 was reportedly seigniorage.[13] The country has experienced hyperinflation ever since, with an annualized rate of about 24,000 percent in July 2008 (prices doubling every 46 days).[14]
## International circulation
The international circulation of banknotes is a profitable form of seigniorage. Although the cost of printing banknotes is minimal, the foreign entity must provide goods and services at the note's face value. The banknote is retained as a store of value, since the entity values it more than the local currency. Foreign circulation generally involves large-value banknotes, and can be used for private transactions (some of which are illegal).
American currency has been circulating globally for most of the 20th century, and the amount of currency in circulation increased several-fold during World War II. Large-scale printing of the United States one-hundred-dollar bill began when the Soviet Union dissolved in 1991; production quadrupled, with the first trillion-dollar printing of the bill. At the end of 2008, U.S. currency in public circulation amounted to $824 billion and 76 percent of the currency supply was in the form of $100 bills (twenty $100 bills per U.S. citizen).[15] The amount of U.S. currency circulating abroad is controversial. According to Porter and Judson,[16] 53 to 67 percent was overseas during the mid-1990s. Feige[17] estimates that about 40 percent is abroad. In a New York Federal Reserve publication, Goldberg[18] writes that "about 65 percent ($580 billion) of all banknotes are in circulation outside of the country". These figures are largely contradicted by Federal Reserve Board of Governors Flow of Funds statistics,[19][20] which indicate that $313 billion (36.7 percent) of U.S. currency was held abroad at the end of March 2009. Feige calculates that since 1964, "the cumulative seigniorage earnings accruing to the U.S. by virtue of the currency held by foreigners amounted to $167–$185 billion and over the past two decades seigniorage revenues from foreigners have averaged $6–$7 billion dollars per year".[21]
The American $100 bill has competition from the €500 note, which facilitates the transport of larger amounts of money. One million dollars in $100 bills weighs 22 pounds (10 kg), and it is difficult to carry this much money without a briefcase and physical security. The same amount in €500 notes would weigh less than three pounds (1.4 kg), which could be dispersed in clothing and luggage without attracting attention or alerting security devices. In illegal operations, transporting currency is logistically more difficult than transporting cocaine because of its size and weight, and the ease of transporting its banknotes makes the euro attractive to Latin American drug cartels.[22]
The Swiss 1,000-franc note, worth slightly more than $1,000, is probably the only other banknote in circulation outside its home country. However, it does not have a significant advantage over the €500 note to the non-Swiss; there are 20 times as many €500 notes in circulation, and they are more widely recognized. As a reserve currency, it makes up about 0.1% of the currency composition of official foreign-exchange reserves.[citation needed]
Governments vary in their issuance of large banknotes; in August 2009, the number of Fr. 1,000 notes in circulation was over three times the population of Switzerland. For comparison, the number of circulating £50 banknotes is slightly less than three times the population of the United Kingdom; the Fr. 1,000 franc note is worth about £600. The British government has been wary of large banknotes since the counterfeiting Operation Bernhard during World War II, which caused the Bank of England to withdraw all notes larger than £5 from circulation. The bank did not reintroduce other denominations until the early 1960s (£10), 1970 (£20) and March 20, 1981 (£50).
## See also
## References
1. "Quarterly Review" (PDF). Minneapolisfed.org. 1997. Retrieved 14 January 2019.
2. Bank of Canada (March 2012). "Backgrounders: Seigniorage" (PDF). Retrieved 2 January 2013.
3. Neumann, Manfred J.M. "Seigniorage in the United States: How Much Does the U.S. Government Make from Money Production?" (PDF). Federal Reserve Bank of St. Louis. Retrieved 17 June 2014.
4. Friedman, Milton (1992). "Franklin D. Roosevelt, Silver, and China". Journal of Political Economy. 100 (1): 62–83. doi:10.1086/261807. ISSN 0022-3808. JSTOR 2138806. S2CID 153937120.
5. Snowdon, Brian; Vane, Howard R. (11 April 2018). An Encyclopedia of Macroeconomics. Edward Elgar. ISBN 9781840643879. Retrieved 11 April 2018 – via Google Books.
6. McIndoe-Calder, Tara (May 1, 2009). Hyperinflation in Zimbabwe: Money Demand, Seigniorage and Aid Shocks. Central Bank of Ireland; Trinity College Dublin – Institute for International Integration Studies.
7. United States Mint 50 State Quarters® Design Use Policy. Usmint.gov. Archived 2010-04-20 at the Wayback Machine. Retrieved December 5, 2013.
8. "Frequently Asked Questions". The 50 State Quarters Program of the United States Mint. United States Mint. Archived from the original on 2007-07-13. Retrieved 2009-10-18.
9. "50 State Quarters Program Earned $6.3 Billion in Seigniorage". Coin Update. news.coinupdate.com. Archived from the original on 8 July 2011. Retrieved 11 April 2018.
10. "Canadian Mint Annual Report 2006" (PDF). Royal Canadian Mint. Retrieved 6 September 2023.
11. "Citizen's Guide to Dollarization". Archived from the original on 2009-11-04. Retrieved 2009-10-31.
12. United States Mint FY 2013 President's Budget Submission. United States Treasury.
13. Gerson, Michael (2008-02-20). "Dying Silently In Zimbabwe". The Washington Post. Retrieved 2009-05-29.
14. "How Zimbabwe lost control of inflation". Archived from the original on 2014-06-17. Retrieved 2010-01-10.
15. Feige, Edgar L. (September 2009). "New estimates of overseas U.S. currency holdings, the Underground economy and the 'Tax Gap'". MPRA Paper; forthcoming in Crime, Law and Social Change.
16. Porter, R. D.; Judson, R. A. (1996). "The location of U.S. currency: How much is abroad?". Federal Reserve Bulletin 82, pp. 883–903.
17. Feige, E. L. (1997). "Revised estimates of the underground economy: Implications of U.S. currency held abroad". In O. Lippert and M. Walker (eds.), The Underground Economy: Global Evidence of its Size and Impact, pp. 151–208.
18. Goldberg, L. S. (2010). "Is the International Role of the Dollar Changing?". Federal Reserve Bank of New York, Current Issues in Economics and Finance, 16(1), pp. 1–7.
19. "The Fed – Financial Accounts of the United States – Z.1 – Current Release". Federalreserve.gov. Retrieved 14 January 2019.
20. Feige, Edgar L. (September 2009). "New estimates of overseas U.S. currency holdings, the Underground economy and the 'Tax Gap'". MPRA Paper.
21. Feige, Edgar L. (1996). "Overseas Holdings of U.S. Currency and the Underground Economy". In Exploring the Underground Economy: Studies of Illegal and Unreported Activity. W.E. Upjohn Institute, pp. 5–62. doi:10.17848/9780880994279.ch2. ISBN 9780880994279.
22. "Latin American drug cartels find home in West Africa". CNN. September 21, 2009.
## External links
- "A better way to account for fiat money at the Central Bank" by Thomas Colignatus, December 31, 2005
- Creating New Money: A Monetary Reform for the Information Age (PDF), by Joseph Huber and James Robertson
- Extensive discussion
- Information about Seigniorage
- Sovereignty & Seignorage (PDF)
- "The temptation of dollar seigniorage", by Kosuke Takahashi of Asia Times Online, January 23, 2009.
- Dollar notes to be replaced by coins – The Royal Mint view By The Royal Mint, January 16, 2013 | true | true | true | null | 2024-10-12 00:00:00 | 2002-08-18 00:00:00 | null | website | wikipedia.org | Wikimedia Foundation, Inc. | null | null |
40,801,638 | https://docs.google.com/document/d/1Ar4M11mQj_jlsSj68BoUfnU_XzF3Lh72Wk0vAFCeDec/mobilebasic | PUBLIC losing the idea of progress (2024-03-14) | null | I'm studying physics at MIT right now. My two favorite professors are David Pritchard and Alan Guth. They're 84 and 76 years old, respectively.
At one of the dinners I attended with a big-shot Harvard Professor he shared his scientific journey to particle physics.
He started doing physics research in undergrad, then went to the Institute for Advanced Study for PhD in physics. He spent 10 years doing physics. Then he came to Berkeley, met some experimentalists and realized that he wasn't doing physics, he was just doing math. 10 years.
He told us that he was coming up with fun extensions of string theory that had "interesting correspondence with the standard model" and now thinks it was all "bullshit".
What I conclude is:
Most seminars I go to contain no physics content. They're engineering, data fitting, endless complicated extensions of existing models, things somehow inspired by physical phenomena that devolve into just doing math, and so on.
As far as I can tell, nobody finds this problematic. Moreover most of the physicists at Harvard and MIT[b] don't even realize that what they're doing is not physics at all, not even science, really.
How come?
People naturally gravitate towards the most fun research, which is math, not physics. Real research is punishing, usually doesn't work, and makes it much more difficult to get tenure. Research will be about fun math and not about science, unless it must. Glass Bead Game [c]was about that.
The current generation of theoretical physicists never made a real physical discovery and neither did their mentors. They're working on external data, never talk to experimentalists, or just do math.
The Center for Theoretical Physics Department at MIT literally has its own building, removing them from even accidentally interacting with experimentalists.
When I go to experimentalist seminars, there are no theoretical physicists there. When I go to theoretical seminars, there are no experimentalists there.
Experimentalists don't usually think about science -- they do engineering and without theoretical guidance produce no progress (market competition with more technologically advanced competitor could help them, but this is academia).
Unless something changes, theoretical physics might forever remain extensions to the Standard Model (they very misleadingly call this "Beyond Standard Model" research -- it's not) and fun math.
Bottom line is: if you never had the experience of discovering new knowledge about the world OR had a mentor who had it, you literally won't know what science is or how to do it. After 2-3 generations, this experience simply gets lost, and people doing "science" don't know what science is. Again, this has already happened at IAS and is happening very quickly at Harvard and MIT.
So what happens if we run out of frontier on Earth and either get no major scientific discoveries for 2-3 generations or get majorly disruptive, as happened to civilization multiple times over the last 5,000 years?
I think we might very well not recover the idea of progress for a very long time (I wrote 1,000 years in my notes but who knows). We've already seen this happen with space 1970-2000, until Musk came along – technology was literally declining.[d]
I only would have appreciated more how Rajeev and I hadn’t come any closer to The Grail. This was yet another surprise: that not every part of the frontier is equally earthshaking and that some are sort of trivial. Despite how knotty our project seemed to me, our conclusions were of extraordinarily limited scope.
“Quantum Gravity on a Circle …”
The paper we wrote did indeed flesh out a consistent theory of quantum gravity, one in which the concept of distance is defined by the phase of a quantum mechanical wavefunction. It even predicted black holes. But—and this is a very big but—the theory could only be true in a hypothetical one-dimensional universe that’s shaped like a ring, in other words a world nothing at all like the three-dimensional one in which you and I live, pay taxes, and die.
What we’d created is called a “toy model”: an exact solution to an approximate version of an actual problem. This, I learned, is what becomes of a colossal conundrum like quantum gravity after 70-plus years of failed attempts to solve it. All the frontal attacks and obvious ideas have been tried. Every imaginable path has hit a dead end. Therefore researchers retreat, set up camp, and start building tools to help them attempt more indirect routes. Toy models are such tools. Rajeev’s river almost certainly didn’t run to The Grail. The hope was that some side stream of one of its many tributaries (Virasoro, Yamabe …) would.
Actually that was my hope, not Rajeev’s. Rajeev, I believe, just liked doing the math. The thing was a puzzle he could solve, so he solved it. For him that was enough.
[a](as far as I can tell, the work of the Professor described above is a fad and has no real physical content either, although he's now working with data somehow)
[b]I have no idea if the highlighted statement is true or not, but at a more personal and wider societal level: I was never taught directly about the scientific method during my time at school. Despite all the scientific theories and mathematical relationships that are learnt the concept of what we're doing when making those statements isn't focused on.
On this idea specifically. Could research into the other mathematical implications of the mathematics of physics generate unexpected hypotheses that can be empirically tested? Possibly. Is it possible to expend a lot of effort in an area which never could be? Probably.
1 total reaction
Alexey Guzey reacted with 🤔 at 2024-06-12 15:28 PM
[c]Great book
[d](very tentative and I don't know what i'm talking about, but i'm saying the rest at maximum confidence because i think that's 10x more useful than being handwavy)
I've wondered why teach people physics if they want to go into finance and work at a hedgefund, and people often say "problem solving" or something. I mean they do teach finance, but a physics background is supposed to give you that general problem solving ability. But then why not teach that? Why keep up this proxy if people aren't gonna use physics or math or whatever they teach in CS classes that doesn't apply to software eng jobs specifically? Everyone who's worked with Musk notices he has a distinct way of getting things done, he sees things in a different way that generates just wildly divergent ideas on what to do and how. These ideas don't occur to high schoolers in their basement without exposure to that way of thinking, which is why the frontier isn't being claimed that way. Maybe some people are born with it, but I doubt it, they just learned it a different way. (Mark Zuckerberg's dad had his own dental practice, etc.) We have poor methods of optimizing things that way and how to represent optimization progress for complex things like companies, or beyond single variable "transistor count" or "drag coefficient" type measures. idk what GDP is but do people actually think the economy – like everything people in a country are up to at all times – is one number? That's why Moore's law doesn't exist so for everything good, even if improvement like that is possible. If you can't see early trends, you can't bet on them continuing.
I found this write-up inspirational bc if the smartest people in the world are getting stuck in ways they can't see and don't make progress, at least without competition, it means what they have is not what's necessary. What's necessary is being able to see better, to get better composites, and to what transfer what gets you Musk's composites to other people. and that can be learned, and is available to everyone. | true | true | true | Intro I'm studying physics at MIT right now. My two favorite professors are David Pritchard and Alan Guth. They're 84 and 76 years old, respectively. At one of the dinners I attended with a big-shot Harvard Professor he shared his scientific journey to particle physics. He started doing physics ... | 2024-10-12 00:00:00 | 2024-06-12 00:00:00 | https://lh7-us.googleusercontent.com/docs/AHkbwyKa2HcpDZ9h5AWkfVEIbQnoVUhP-L6N7gxdK5B-1zS4DICHb4mLhKQlc5KvpWQyQ5ThKrJHFHS7rr0GDps7iAPIMBPxghDShWoDIzS_FWSOaCuYpoUW=w1200-h630-p | article | google.com | Google Docs | null | null |
17,485,445 | https://spin.atomicobject.com/2018/07/07/better-dev-slowly/ | Three Habits That Will Slowly Make You a Better Developer | William Shawn | There are lots of ways to improve as a developer quickly—keeping track of what’s going on in the industry, reading books, maintaining outside projects, and watching talks are some of the obvious ones. But there are a few habits you can adopt in your day-to-day work that will slowly improve a different set of skills over time.
## 1. Keep a Development Journal
When we run into issues in the course of software development, we frequently get lucky and find the answer to our problem in the top result on the first page of a Google search. Other times, though, it’s not so easy, and we can spend an hour or more following links and going down rabbit holes until we finally find a solution in a downvoted Stack Overflow answer on the fifth page of our fifth Google search.
The thought process that led to the answer is usually pretty intricate and full of a lot of small insights. As you try things and they fail, you’re learning something, but the garbage collector in our brains tends to throw these small lessons away as soon as the problem is solved. It’s worth taking a few moments to document these situations.
Not only will you have a documented solution to a problem that might save you or a teammate an hour or two in the future–you’ll also have a miniature detective story that might help you next time you find yourself going past the first page of search results. Learning how to follow leads and make sense of disparate and conflicting pieces of information is an incredibly valuable skill that can only be developed through experience.
Make it a point to read through your notes once a month or so to remind yourself of past successes and internalize some of the small lessons.
## 2. Review Your Own Code
Everyone writes code that’s less than ideal. As uncomfortable as it might be to go back and read your code from six months ago, it’s a good habit. It will expose blind spots you didn’t know you had, make it easier for you to review other people’s code, and improve the overall health of the codebase.
Every developer has had the experience of writing something complicated that made sense in the moment, and then revisiting it months later and being completely baffled by it. What’s even worse is when someone else questions you about it, and you’re unable to provide answers.
Finding these spots is instructive because it teaches you what kinds of things you tend to overcomplicate. Everyone has a different style and writes messy code in different ways. But we often can’t see our own ways until we get some distance from the code and revisit it. Try to root out these areas and fix them before you confuse yourself or someone else down the line.
It’s also good to occasionally dip your foot back into a part of the codebase that maybe you haven’t seen in a while. Doing this will remind you how it works, and it might also reveal other opportunities to refactor.
When you’re looking at old code, ask yourself how you would structure it if you were starting it from scratch today, given the lessons you’ve learned since it was written and the domain knowledge you’ve accumulated. You probably aren’t ever going to be able to throw it out and rewrite it unless it’s a personal project. Nevertheless, it’s a good thought exercise, and it might give you some ideas for less invasive ways to improve the codebase.
## 3. Learn a New Shortcut; Try a New Tool
The old cliche is that when all you have is a hammer, everything looks like a nail. Think about the tools and shortcuts that you love and couldn’t live without. You can probably remember a time before they were part of your repertoire when things seemed just fine. For example, before you learned that you could move the cursor forward and backward by word, you moved it forward and backward by character and had no idea what you were missing.
The editors and tools we use have so much functionality that there are probably hundreds of things out there that would become indispensable if you only knew what they were. Make it a point to seek out and explore shortcuts, configuration options, and plugins for your tools, and you will start building up an impressive bag of tricks.
With regards to point 1 I wrote an article in a similar vein. See, https://sionwilliams.com/post/making-ticketing-systems-useful/
Some developers may wish to add this to their journal too, so they can take it with them. This was born from the frustration of constantly being told “oh, we tried that, we got the same result” but never finding any record.
Great ideas, I’ll start implementing them starting now! :) | true | true | true | Experience is a great teacher—but only if we remember what we learn. These day-to-day habits will keep you learning and growing in little ways that really add up. | 2024-10-12 00:00:00 | 2018-07-07 00:00:00 | article | atomicobject.com | Atomic Object | null | null |
|
11,492,456 | https://www.uscis.gov/news/alerts/uscis-completes-h-1b-cap-random-selection-process-fy-2017 | USCIS Completes the H-1B Cap Random Selection Process for FY 2017 | null | # USCIS Completes the H-1B Cap Random Selection Process for FY 2017
U.S. Citizenship and Immigration Services (USCIS) announced on April 7, 2016, that it has received enough H-1B petitions to reach the statutory cap of 65,000 visas for fiscal year (FY) 2017. USCIS has also received more than the limit of 20,000 H-1B petitions filed under the advanced degree exemption, also known as the master’s cap.
USCIS received over 236,000 H-1B petitions during the filing period, which began April 1, including petitions filed for the advanced degree exemption. On April 9, USCIS used a computer-generated random selection process, or lottery, to select enough petitions to meet the 65,000 general-category cap and the 20,000 cap under the advanced degree exemption. USCIS will reject and return all unselected petitions with their filing fees, unless the petition is found to be a duplicate filing.
The agency conducted the selection process for the advanced degree exemption first. All unselected advanced degree petitions then became part of the random selection process for the 65,000 limit.
As announced on March 16, 2016, USCIS will begin premium processing for H-1B cap cases no later than May 16, 2016.
USCIS will continue to accept and process petitions that are otherwise exempt from the cap. Petitions filed on behalf of current H-1B workers who have been counted previously against the cap will also not be counted towards the congressionally mandated FY 2017 H-1B cap. USCIS will continue to accept and process petitions filed to:
- Extend the amount of time a current H-1B worker may remain in the United States;
- Change the terms of employment for current H-1B workers;
- Allow current H-1B workers to change employers; and
- Allow current H-1B workers to work concurrently in a second H-1B position.
U.S. businesses use the H-1B program to employ foreign workers in occupations that require highly specialized knowledge in fields such as science, engineering, and computer programming.
For more information on USCIS and its programs, please visit uscis.gov or follow us on Facebook (/uscis), Twitter (@uscis), YouTube (/uscis) and the USCIS blog *The Beacon*. | true | true | true | USCIS used a computer-generated random selection process, or lottery, to select enough petitions to meet the 65,000 general-category cap and the 20,000 cap under the advanced degree exemption. | 2024-10-12 00:00:00 | 2016-04-12 00:00:00 | article | uscis.gov | USCIS | null | null |
|
33,733,513 | https://www.ft.com/content/5f081f77-ed30-4a06-864e-7e4cc3204017 | UK households face largest fall in living standards in six decades | null | UK households face largest fall in living standards in six decades
See why over a million readers pay to read the Financial Times. | true | true | true | null | 2024-10-12 00:00:00 | 2024-01-01 00:00:00 | null | website | null | Financial Times | null | null |
14,722,691 | http://www.superted.io/business/2017/07/07/publicly-credit-your-employees.html | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
23,949,414 | https://www.getrevolv.com/architectures | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
12,532,555 | http://www.bbc.com/travel/story/20160916-the-kung-fu-nuns-of-nepal | The Kung Fu nuns of Nepal | Swati Jain | # The Kung Fu nuns of Nepal
**Dressed in traditional maroon robes modified in the style of karate uniforms, the nuns’ smiling faces conceal an incredible energy and strength.**
It was barely 5am, but at Druk Gawa Khilwa nunnery in Kathmandu, Nepal, the nuns were already practicing Kung Fu.
With one leg folded forward and the other one stretched out backward, they lunged in the air repeatedly, striving for perfection in a series of impeccable kicks. Cries of energy punctuated each movement, a shrill accompaniment to the booming drums. Dressed in traditional maroon robes modified in the style of karate uniforms, the women’s smiling faces concealed an incredible energy and strength.
These are the Kung-Fu nuns: Nepal’s only female order to practice the deadly martial art made famous by Bruce Lee. In the inherently patriarchal Buddhist monastic system, women are considered inferior to men. Monks usually occupy all positions of leadership, leaving nuns to the household duties and other tedious chores. But in 2008, the leader of the 1,000-year-old Drukpa lineage, His Holiness The Gyalwang Drukpa, changed all that.
After a visit to Vietnam where he saw nuns receiving combat training, he decided to bring the idea back to Nepal by encouraging his nuns to learn self-defence.
His simple motive: to promote gender equality and empower the young women, who mostly come from poor backgrounds in India and Tibet.
Every day, 350 nuns, aged between 10 and 25, take part in three intense training sessions where they practice the exercises taught to them by their teacher, who visits twice a year from Vietnam.
As well as perfecting their postures, they handle traditional weapons, such as the *ki am* (sword), small *dao* (sabre), big *dao* (halberd), *tong* (lance) and *nunchaku* (chain attached to two metal bars).
Those with exceptional physical and mental strength are taught the brick-breaking technique, made famous in countless martial arts movies, which is only performed on special occasions like His Holiness’ birthday.
The nuns, most of them with black belts, agree that Kung Fu helps them feel safe, develops self-confidence, gets them strong and keeps them fit. But an added bonus is the benefit of concentration, which allows them to sit and meditate for longer periods of time.
Jigme Konchok, a nun in her early 20s who has been practicing Kung Fu for more than five years, explained the process:
“I need to be constantly aware of my movement, know whether it is right or not, and correct it immediately if necessary. I must focus my attention on the sequence of movements that I have memorized and on each movement at once. If the mind wanders, then the movement is not right or the stick falls. It is the same in meditation.”
In the name of gender equality, The Gyalwang Drukpa also encourages his nuns to learn traditionally masculine skills, such as plumbing, electrical fitting, typing, cycling and English. Under his guidance, they’re taught to lead prayers and are given basic business skills – typically work done by monks – and they run the nunnery’s guesthouse and coffee shop. The progressive women even drive 4X4s down Druk Amitabha mountain to Kathmandu, about 30km away, to get supplies.
Imbued with a new confidence, they are starting to use their skills and energy in community development.
When Nepal was hit with a massive earthquake in April 2015, the nuns refused to move to a safer area and instead trekked to nearby villages to help remove rubble and clear pathways. They distributed food to the survivors and helped pitch tents for shelter.
Early this year these nuns – led by His Holiness himself – cycled 2,200km from Kathmandu to Delhi to spread the message of environmental awareness and encourage people to use bicycles instead of cars.
And when the nuns visit areas plagued by violence, like Kashmir, they deliver lectures on the importance of diversity and tolerance.
Foremost on the nuns’ agenda, however, is the promotion of female empowerment.
“Kung Fu helps us to develop a certain kind of confidence to take care of ourselves and others in times of need.” Konchok explained.
* If you liked this story, **sign up for the weekly bbc.com features newsletter**, called “If You Only Read 6 Things This Week”. A handpicked selection of stories from BBC Future, Earth, Culture, Capital, Travel and Autos, delivered to your inbox every Friday.* | true | true | true | Dressed in traditional maroon robes modified in the style of karate uniforms, the nuns’ smiling faces conceal an incredible energy and strength. | 2024-10-12 00:00:00 | 2016-09-19 00:00:00 | newsarticle | bbc.com | BBC | null | null |
|
17,323,371 | https://makecode.com/blog/maker/hello | Maker: A MakeCode editor for breadboarding | null | **Beta zone** The maker is still in beta and evolving, join the fun!
# Maker: A MakeCode editor for breadboarding
Many devices supported by MakeCode, such as the micro:bit and the Adafruit Circuit Playground Express, have a set of built-in sensors and outputs. But Arduino-style boards require wiring of sensors and actuators to the board’s header pins. The user selects a set of parts, wires them up to the board and then codes the system they have made.
## Code first
In MakeCode for makers, we turn this paradigm on its head: MakeCode’s simulator selects basic parts and generates wiring for them from the user’s program. That is, the user expresses the behavior that they want with code, and MakeCode uses that code to configure the simulator, as well as to generate the make instructions that can be printed out. This experience is great for beginners to the Arduino style of making.
Most tutorials and kits out there have you wire everything together before you can experience the behavior. MakeCode requires no knowledge of how breadboards work or how the individual components are wired. Users can rapidly prototype many different behaviors and the hardware follows along. A process that would be much more cumbersome if users had to manually assemble the hardware. Users also don’t need to own the parts to see it work.
## Example: Play a tune
Above is a simple example: the user creates a two-block program to play a tune when a button is pressed. MakeCode detects the hardware requirements from the two blocks: an audio player and a button are needed. MakeCode then automatically chooses hardware, lays it out, wires it, and provides a simulation. The button can be clicked with a mouse to play the tune in the browser.
# Breadboard Simulator
The simulator provides an interactive experience: the buttons are clickable, servos are animated, and audio comes out of the web app. There’s a lot of detail and learning opportunities available in the simulator.
Hovering over the breadboard shows you how it’s connected internally, while hovering over wires shows how the component connects.
Users might notice that the speaker and button don’t require a connection to positive voltage, while the servo, knob, and LEDs do. MakeCode isn’t explicitly teaching this (today), but users can make connections on their own. They experience hardware in a way that is usually only achievable by having the hardware in front of you.
The breadboard simulator is useful to more people than just beginners: debugging program behavior is much quicker in a simulator, so the “inner loop” of development is rapidly sped up.
# Assembly Instructions
For every project, MakeCode can generate a PDF file with step-by-step instructions that correspond to the parts and wiring shown in the breadboard simulator. This tailored file lists the set of parts required, guides the user step-by-step and part-by-part to build the final system.
This on-demand instruction generation is great for use in the education and can support teachers in rapidly developing and modifying projects for the classroom. There’s no need to wait for the next version of a kit - you can just change the code and print new instructions.
As in every aspect of MakeCode, there are opportunities to learn here. A completed project can look like a daunting mess of wires. The assembly instructions let you learn about a project one step at a time. Some users might feel intimidated working with batteries. It’s not obvious what the rules are: what is allowed to connect to what? What can be damaged? The assembly instructions take users on a safe route and include printed warnings if there is something tricky or easy to make a mistake on.
# Help needed!
We welcome pull requests! Go to https://github.com/Microsoft/pxt-maker to add your board or learn more about the project. | true | true | true | Many devices supported by MakeCode, such as the <a target="_blank" rel="nofollow noopener" href="https://makecode.microbit.org/">micro:bit</a> and the <a target="_blank" rel="nofollow noopener" href="https://makecode.adafruit.com/">Adafruit Circuit Playground Express</a>, have a set of built-in sensors and outputs. But Arduino-style boards require wiring of sensors and actuators to the board’s header pins. The user selects a set of parts, wires them up to the board and then codes the system they have made. | 2024-10-12 00:00:00 | 2022-01-01 00:00:00 | website | null | Microsoft MakeCode | null | null |
|
6,430,173 | http://blog.getblimp.com/2013/09/lets-share-the-love/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
35,052,142 | https://github.com/torappinfo/uweb | GitHub - torappinfo/uweb: uweb browser: minimal suckless android web browser for geeks | Torappinfo | Amazon appstore Galaxy.Store Downloads
Uweb browser: downloads, plugins and tips
(Mirrors: gitlab frama codeberg repo fleek netlify surge kinsta zeabur deno bitbucket pages vercel render glitch More...)
- Powerful: any native functionality with html5 enhancement and still secure; any urls to host website; javascript and shell scripting for general processing.
- Customizable: user-defined menus, (new) buttons and gestures for user agents, bookmarklets, url services, shell commands, internal functionality links and text processing etc.
- Convenient: Any AI chatbot/book (pdf/djvu)/dictionary (mdict)/txt/command line/app/webapp (web extensions) can be search engine.
- Tiny: less than 250k.
- Fast: run fast, even with thousands of user provided css/scripts/htmls.
- Efficient: less touches, one click to reach any number of search engines without repeated input; automate online services.
- URL bar command line support ("!" and .js file as command).
- Site-specific JS/CSS/HTML/preprocessing.
- Online play/preview/preprocess for downloadable resources.
- Multiple type profiles: switch any data including website logins, user configurations orthogonally.
- Supports enhanced user "hosts" file. Empty IP address to lift all server-imposed limitations.
- Website test automation scripting. crontab support (alarm clock and more).
Custom paper size PDF export and long vector screenshot, TTS, text reflow, resource sniffer, translation, reader's mode, user-defined url redirection, webdav/http backup & restore, auto next page, sending/receiving msg/file(s), site config (UA, no JS, no image, no 3rd party script/resource,active script, global scripts), http(s)/socks proxy, enabling html5 apps for local files (pdf/djvu/epub viewer, mdict dictionary lookup etc.).
- Bookmarklets (works for CSP sites and with option to auto apply to similar sites)
- AD blocking (block whole root domain trees etc.)
- Serverless local sites: PWA-kind web extension (chrome .crx & firefox .xpi) support.
- Resizable floating video support.
#### Ebrowser for Windows, MacOS and Linux
Ebrowser is a simple version of uweb browser on the desktop.
- Fully open source.
- Capture long screenshot as vector graphics.
- Enabling web tech for vector designing to replace Adobe Illustrator/Inkscape.
We encourage everyone to help with localization. The following is how to do it.
- Fork this repository
- Copy res/values/strings.xml to path like res/values-%(lang)/, replace %(lang) with the ISO 639-1 language code.
- Translate res/values-%(lang)/strings.xml
- Translate assets/help_%(lang).html from assets/help_en.html
- Make a Pull Request | true | true | true | uweb browser: minimal suckless android web browser for geeks - torappinfo/uweb | 2024-10-12 00:00:00 | 2020-10-08 00:00:00 | https://opengraph.githubassets.com/3d0b5411480017acae16a9db7fc825408d3e6045ef4bd0d210e95529b9c9a2a4/torappinfo/uweb | object | github.com | GitHub | null | null |
18,309,498 | https://blog.acolyer.org/2018/10/26/robinhood-tail-latency-aware-caching-dynamic-reallocation-from-cache-rich-to-cache-poor/ | RobinHood: tail latency aware caching – dynamic reallocation from cache-rich to cache-poor | Adrian Colyer | RobinHood: tail latency aware caching – dynamic reallocation from cache-rich to cache-poor Berger et al., *OSDI’18*
It’s time to rethink everything you thought you knew about caching! My mental model goes something like this: we have a set of items that probably follow a power-law of popularity.
We have a certain finite cache capacity, and we use it to cache the most frequently requested items, speeding up request processing.
Now, there’s a long tail of less frequently requested items, and if we request one of these that’s not in the cache the request is going to take longer (higher latency). But it makes no sense whatsoever to try and improve the latency for these requests by ‘shifting our cache to the right.’
Hence the received wisdom that unless the full working set fits entirely in the cache, then a caching layer doesn’t address tail latency.
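To make that received wisdom concrete, here is a small self-contained simulation (my own sketch, not from the paper): items follow a Zipf-like popularity distribution, the cache pins the most popular items, and the hit ratio looks healthy, yet P99 latency still sits at the backend (miss) latency because more than 1% of requests fall outside the cached head.

```python
import random

random.seed(0)

N_ITEMS = 100_000
N_REQUESTS = 200_000
HIT_MS, MISS_MS = 1.0, 100.0

# Zipf-like popularity: item i is requested with weight 1 / (i + 1).
weights = [1.0 / (i + 1) for i in range(N_ITEMS)]
cached = set(range(N_ITEMS // 10))   # cache the top 10% most popular items

requests = random.choices(range(N_ITEMS), weights=weights, k=N_REQUESTS)
latencies = sorted(HIT_MS if item in cached else MISS_MS for item in requests)

hit_ratio = latencies.count(HIT_MS) / N_REQUESTS
p99 = latencies[int(0.99 * N_REQUESTS)]
print(f"hit ratio ≈ {hit_ratio:.1%}, P99 ≈ {p99:.0f} ms")
# Typical output: a hit ratio around 80%, but P99 = 100 ms, i.e. the tail
# is untouched unless essentially the whole working set fits in the cache.
```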
So far we’ve been talking about one uniform cache. But in a typical web application one incoming request might fan out to many back-end service requests processed in parallel. The OneRF page rendering framework at Microsoft (which serves msn.com, microsoft.com and xbox.com among others) relies on more than 20 backend systems for example.
The cache is shared across these back-end requests, either with a static allocation per back-end that has been empirically tuned, or perhaps with dynamic allocation so that more popular back-ends get a bigger share of the cache.
The thing about this common pattern is that we need to wait for all of these back-end requests to complete before returning to the user. So improving the *average* latency of these requests doesn’t help us one little bit.
Since each request must wait for all of its queries to complete, the overall request latency is defined to be the latency of the request’s slowest query. Even if almost all backends have low tail latencies, the tail latency of the maximum of several queries could be high.
(See ‘The Tail at Scale’).
The user can easily see P99 latency or greater.
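A quick simulation makes the fan-out effect concrete (again my own sketch, with made-up numbers rather than OneRF data): even if each backend is slow on only 1% of queries, a request that waits on 20 of them in parallel is slow almost 20% of the time.

```python
import random

random.seed(0)

N_BACKENDS = 20      # queries issued in parallel per page request
N_REQUESTS = 100_000

def query_latency_ms() -> float:
    """Each backend query: usually 5 ms, but 1% of the time it takes 100 ms."""
    return 100.0 if random.random() < 0.01 else 5.0

request_latency = [
    max(query_latency_ms() for _ in range(N_BACKENDS))
    for _ in range(N_REQUESTS)
]

slow_share = sum(l >= 100.0 for l in request_latency) / N_REQUESTS
print(f"requests blocked by at least one slow query: {slow_share:.1%}")
# Expected ≈ 1 - 0.99**20 ≈ 18%, so the request-level P99 (and even P90)
# is dictated by the slowest backend query, not by the average.
```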
Techniques to mitigate tail latencies include making redundant requests, clever use of scheduling, auto-scaling and capacity provisioning, and approximate computing. Robin Hood takes a different (complementary) approach: use the cache to improve tail latency!
Robin Hood doesn’t necessarily allocate caching resources to the most popular back-ends, instead, it allocates caching resources to the backends (currently) responsible for the highest tail latency.
…RobinHood dynamically allocates cache space to those backends responsible for high request tail latency (cache-poor) backends, while stealing space from backends that do not affect the request tail latency (cache-rich backends). In doing so, Robin Hood makes compromises that may seem counter-intuitive (e.g., significantly increasing the tail latencies of certain backends).
If you’re still not yet a believer that caching can help with tail latencies, the evaluation results should do the trick. RobinHood is evaluated with production traces from a 50-server cluster with 20 different backend systems. It’s able to address tail latency even when working sets are much larger than the cache size.
In the presence of load spikes, RobinHood meets a 150ms P99 goal 99.7% of the time, whereas the next best policy meets this goal only 70% of the time.
Look at that beautiful blue line!
When RobinHood allocates extra cache space to a backend experiencing high tail latency, the hit ratio for that backend typically improves. We get a double benefit:
- Since backend query latency is highly variable in practice, decreasing the number of queries to a backend will decrease the number of high-latency queries observed, improving the P99 request latency.
- The backend system will see fewer requests. As we’ve studied before on The Morning Paper, small reductions in resource congestion can have an outsized impact on backend latency once a system has started degrading.
### Caching challenges
Why can't we just figure out which backends contribute the most to tail latency and just statically assign more cache space to them? Because the latencies of different backends tend to vary wildly over time: they are complex distributed systems in their own right. The backends are often shared across several customers too (either within the company, or perhaps you're calling an external service). So the changing demands from other consumers can impact the latency you see.
Most existing cache systems implicitly assume that latency is balanced. They focus on optimizing cache-centric metrics (e.g., hit ratio), which can be a poor representation of overall performance if latency is imbalanced.
Query latency is not correlated with query *popularity*, but instead reflects a more holistic state of the backend system at some point in time.
An analysis of OneRF traces over a 24 hour period shows that the seventh most queried backend receives only about 0.06x as many queries as the most queried backend, but has 3x the query latency. Yet shared caching systems inherently favour backends with higher query rates (they have more shots at getting something in the cache).
### The RobinHood caching system
RobinHood operates in 5 second time windows, repeatedly taxing every backend by reclaiming 1% of its cache space and redistributing the wealth to cache-poor backends. Within each window RobinHood tracks the latency of each request, and chooses a small interval (P98.5 to P99.5) around P99 to focus on, since the goal is to minimise the P99 latency. For each request that falls within this interval, RobinHood tracks the ID of the backend corresponding to the slowest query in the request. At the end of the window RobinHood calculates the *request blocking count* (RBC) of each backend – the number of times it was responsible for the slowest query.
Backends with a high RBC are frequently the bottleneck in slow requests. RobinHood thus considers a backend’s RBC as a measure of how cache-poor it is, and distributes the pooled tax to each backend in proportion to its RBC.
RBC neatly encapsulates the dual considerations of how likely a backend is to have high latency, and how many times that backend is queried during request processing.
Since some backends are slow to make use of additional cache space (e.g., if their hit ratios are already high), RobinHood monitors the gap between the allocated and used cache capacity for each backend, and temporarily ignores the RBC of any backend with more than a 30% gap.
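Pulling those pieces together, one RobinHood window can be sketched roughly as below. This is a paraphrase of the description above, not the authors' implementation; the backend names and numbers are purely illustrative, and the even redistribution in the no-signal corner case is my own assumption.

```python
def robinhood_window(alloc, rbc, used, tax_rate=0.01, gap_threshold=0.30):
    """One allocation window: tax every backend, redistribute by RBC.

    alloc: dict backend -> current cache allocation (e.g., in GB)
    rbc:   dict backend -> request blocking count over the last window
    used:  dict backend -> cache space the backend actually filled
    """
    # 1. Tax: reclaim a small fixed fraction of every backend's allocation.
    pool = 0.0
    for b in alloc:
        tax = tax_rate * alloc[b]
        alloc[b] -= tax
        pool += tax

    # 2. Ignore (for this window) backends that are not using the space
    #    they already have, i.e., whose allocated-vs-used gap exceeds 30%.
    eligible = {
        b: rbc[b] for b in alloc
        if alloc[b] == 0 or (alloc[b] - used[b]) / alloc[b] <= gap_threshold
    }

    # 3. Redistribute the pooled space in proportion to RBC.
    total_rbc = sum(eligible.values())
    if total_rbc == 0:
        for b in alloc:                      # no tail-latency signal this window
            alloc[b] += pool / len(alloc)    # (assumption: return the tax evenly)
        return alloc
    for b, count in eligible.items():
        alloc[b] += pool * count / total_rbc
    return alloc


# Illustrative only: three backends sharing a 30 GB cache; "ads" is the one
# most often responsible for the slowest query in P99-range requests.
alloc = {"search": 10.0, "ads": 10.0, "reviews": 10.0}
rbc = {"search": 5, "ads": 80, "reviews": 15}
used = {"search": 9.5, "ads": 10.0, "reviews": 9.0}
print(robinhood_window(alloc, rbc, used))
```

Over many such windows the cache-rich backends keep leaking 1% per window while the cache-poor ones accumulate space, which is exactly the dynamic reallocation from cache-rich to cache-poor of the title.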
When load balancing across a set of servers RobinHood makes allocation decisions locally on each server. To avoid divergence of cache allocations over time, RobinHood controllers exchange RBC data. With a time window of 5 seconds, RobinHood caches converge to the average allocation within about 30 minutes.
The RobinHood implementation uses off-the-shelf memcached instances to form the caching layer in each application server. A lightweight cache controller at each node implements the RobinHood algorithm and issues resize commands to the local cache partitions. A centralised RBC server is used for exchange of RBC information. RBC components store only soft state (aggregated RBC for the last one million requests, in a ring buffer), so can quickly recover after a crash or restart.
### Key evaluation results
The RobinHood evaluation is based on detailed statistics of production traffic in the OneRF system for several days in 2018. The dataset describes queries to more than 40 distinct backend systems. RobinHood is compared against the existing OneRF policy, the policy from Facebook’s TAO, and three research systems Cliffhanger, FAIR, and LAMA. Here are the key results:
- RobinHood brings SLO violations down to 0.3%, compared to 30% SLO violations under the next best policy.
- For quickly increasing backend load imbalances, RobinHood maintains SLO violations below 1.5%, compared to 38% SLO violations under the next best policy.
- Under simultaneous latency spikes, RobinHood maintains less than 5% SLO violations, while other policies do significantly worse.
- Compared to the maximum allocation for each backend under RobinHood, even a perfectly clairvoyant static allocation would need 73% more cache space.
- RobinHood introduces negligible overhead on network, CPU, and memory usage.
Our evaluation shows that RobinHood can reduce SLO violations from 30% to 0.3% for highly variable workloads such as OneRF. RobinHood is also lightweight, scalable, and can be deployed on top of an off-the-shelf software stack… RobinHood shows that, contrary to popular belief, a properly designed caching layer can be used to reduce higher percentiles of request latency.
11,494,800 http://www.pbs.org/newshour/updates/paralyzed-man-moves-fingers-plays-guitar-hero-with-brain-implant-milestone/ Paralyzed man moves fingers, plays Guitar Hero with brain implant milestone

By Nsikan Akpan — Science, Apr 13, 2016 2:12 PM EDT

Nick Annetta, right, of Battelle, watches as Ian Burkhart, 24, plays a guitar video game using his paralyzed hand. A computer chip in Burkhart’s brain reads his thoughts, decodes them, then sends signals to a sleeve on his arm that allows him to move his hand. Photo by Ohio State University Wexner Medical Center/Battelle

Six years ago, while swimming in the ocean surf, 24-year-old Ian Burkhart lost the ability to control his hands and legs. He dove under the surf, and the waves shoved him into a sandbar. But today, thanks to a computer chip implanted into the motor cortex of his brain that decodes his brain activity, he can move his fingers again. He can pick up a glass and pour from it. He can even play Guitar Hero.

It’s the first time a brain-machine interface has restored muscle control to a paralyzed human being. But Burkhart can only perform these activities while hooked up in a lab. While this represents a tremendous advance, it is still a far cry from restoring everyday movement to the millions of disabled across the globe. The project’s findings are published today in the journal Nature.

### Not a cyborg

Burkhart’s situation differs from previous computerized neuroprosthetics, which have allowed patients to manipulate cursors on a computer screen or guide robotic arms. Rather than manipulate a digital or mechanical prosthetic, his implant bypasses his spinal injury by feeding his brain signals directly into an electronic sleeve, which in turn allows movement of his natural arm.

“There’s technology that links brain signals to complex forms of exoskeletons,” said Ali Rezai, study co-author and director of Ohio State University’s Center for Neuromodulation. “Our approach wanted to move toward as minimally invasive as possible, the least number of surgeries, and to provide…in this case, a wearable, garment-like device that can be worn by the patient.”

Burkhart’s implant consists of a chip, smaller and thinner than a dime, that projects 96 wires — electrodes — into the outer layers of the brain. Picture a microscopic hairbrush with tiny metal teeth. These electrodes scan impulses from hundreds of neurons in the same brain area that would control his hand. A metal casing protects the computer chip when Burkhart isn’t connected to the sleeve.

“I don’t really notice it that much anymore. I might notice when I’m having someone help me get dressed in the morning, pulling my T-shirt over my head or something like that,” Burkhart said. “It’s about the size of a few coins stacked on top of your head that’s sticking out that has a protective cap over it. So it’s really not too obtrusive to my everyday life.”

Swiping a credit card was something Ian Burkhart, 24, never thought he would do again. Burkhart was paralyzed from the shoulders down after a diving accident in 2010, but regained functional use of his hand through the use of neural bypass technology.
Photo by Ohio State University Wexner Medical Center/Battelle

In the lab, scientists plug a gray cigarette-box-sized device into the chip in Burkhart’s brain. This box contains electronic amplifiers that decode the neural signals picked up by the fine electrodes in the brain. A computer uses machine-learning algorithms, which decipher and remember nerve patterns, in order to control an electronic sleeve. The sleeve beams electric pulses onto 130 spots on Burkhart’s arm, forcing his muscles to move in a fluid manner.

“The machine and the person are actually learning together,” said bioengineer Chad Bouton, who co-developed the new implant while working for Battelle Memorial Institute in Ohio. “There are millions of different ways, millions of combinations you could come up with in connecting the neurons and reconnecting those neurons to the muscles.” The researchers used machine-learning algorithms to unpack these patterns, which are unique to each person.

“You’re not going to be looked on as, ‘Oh, I’m a cyborg now because I have this big, huge prosthetic on the side of my arm,’” Burkhart said. “It’s something a lot more natural and intuitive to learn, because I can see my own hand reacting when I think of something, and it’s just your normal thoughts.”

### Hands are supreme

In 2004, a survey asked 681 people with paralysis — quadriplegics (like Burkhart) and paraplegics (those with leg paralysis) — to rank the functions that they would most like to have restored. Recovering motion to the hands and arms topped the list.

“At the top of that list for people who were unable to use their arms and legs was arm and hand function,” said neurologist and engineer Leigh Hochberg of Brown University, who wasn’t directly involved with the Burkhart study. “Being able to restore those abilities with one’s own limbs is really in many ways the dream for the research and the hope for millions of people with paralysis.”

He should know. Hochberg is one of the principal investigators for BrainGate, a 20-year-old neuroengineering effort that developed the technology behind Burkhart’s implant. Hochberg spoke highly of the study, but also cautioned that the research is preliminary: “This is really early research. It is not something that turns around tomorrow and becomes a useful clinical product that would help people with paralysis.”

That’s true for a number of reasons. For one, Burkhart can only use the electronic sleeve and the rest of its hardware inside a lab. Once there, he has to retrain his brain to operate the sleeve.

“Initially we would do a short session, and I would feel like I was completely and mentally fatigued and exhausted. Right along the lines of taking a six or seven-hour exam,” Burkhart said. The reintroduction period has gotten much easier, he said, so it takes less time to get re-acquainted and learn new tasks. The machine-learning algorithms aid this recalibration by adapting as his mind adjusts.

However, his body to some degree is working against the foreign electrodes in his head. Their ability to pick up nerve signals has faded over time.

“We’ve had to really develop new ways to continue to acquire a good signal, and one that can really provide this ability to have natural movement,” said Bouton, who is now with Feinstein Institute for Medical Research in New York. “What we’ve done is listen to larger groups of neurons.
These neurons that are in and around this electrode array in the brain. That’s worked out very well. Those types of signals are looking still very good over the almost two-year period for this study.”

Patient Ian Burkhart, seated, poses with members of the research team (from left) Dr. Ali Rezai and Dr. Marcie Bockbrader of The Ohio State University Wexner Medical Center and Nick Annetta of Battelle during a neural bypass training session. Photo by Ohio State University Wexner Medical Center/Battelle

Another barrier to widespread use is the physical limit of the hardware. Plugs and wires are bulky and somewhat dangerous if they bump into something else. One of Hochberg’s colleagues at Brown University — Arto Nurmikko — has developed a wireless prototype of the BrainGate implant, but so far, it’s only been tested in nonhuman primates and pigs.

However, this WiFi-based device has gotten past a major hurdle: bandwidth. Implants like Burkhart’s beam a gigabyte of data every three minutes, according to Battelle electrical engineer Nick Annetta. That’s a tremendous amount of data being tossed back and forth, and it needs to happen in the split seconds required to move an appendage. Nurmikko’s WiFi implant can transmit neural data at 24 Megabits per second, which is only about twice as fast as the best LTE network for a smartphone.

“There’s still years of neuroscience, of engineering and of clinical research that needs to be done to get to the point that we all hope this gets to — being a valuable device that helps people with paralysis,” Hochberg said.

But in the meantime, Hochberg believes tremendous credit is due to the participants in this research. He said patients enroll in these trials, whether BrainGate research or others, not because they’re hoping for personal benefit, but because they want to help develop and test a system that will help other people with paralysis in the future.

Rezai agreed: “We’re here because of Ian and millions of patients like Ian who have physical disabilities. And the success of the study is due to Ian. He’s the rock star here.”

Nsikan Akpan is the digital science producer for PBS NewsHour and co-creator of the award-winning, NewsHour digital series ScienceScope. @MoNscience
11,437,989 http://arc-team-open-research.blogspot.com/2016/02/digital-archaeological-drawing-on-field.html Digital archaeological drawing on the field with QGIS — Luca Bezzi

I should have written this post long ago, but time is always missing... The topic is digital archaeological (vector) drawing on the field.
During the CAA conference of 2015, held in Siena (Italy), I participated, among others, in session 9A (*Towards a Theory of Practice in Applied Digital Field Methods*), moderated by +nicolò dell'unto (Lund University) and James Stuart Taylor. After my speech I was asked whether we (Arc-Team), as a professional archaeological society, were really able to perform the digital documentation in real time during an ordinary excavation. I answered that, at least in Italy, this point is very important for a professional society and that in normal conditions (but this can also happen during most emergency excavations) we complete the digital archaeological documentation directly on the field. The reason is simple and becomes more evident every year: money for cultural heritage matters keeps shrinking as time goes by; at least this has been the trend of the last decade. For this reason, if on the one hand we have to try to counter this phenomenon, on the other we have to adapt our methodology to the current reality, and this means using the economic resources of the excavation also to produce the related documentation (without counting on a post-excavation budget).
The old video below (2014) shows how we manage the digital archaeological drawing on the field with QGIS.
The vector layers can be related to a georeferenced photomosaic (bidimensional photomapping) or to a georeferenced orthophoto (coming from 3D operations based on SfM/MVSR techniques). Of course orthophotos are the best solution, but currently the 3D work-flow with standard hardware is pretty slow. This is the reason why for almost all the palimpsestic documentation we still work with both systems, 2D photomapping and 3D SfM; depending on the time-table we have on the field, we choose the post-processing operations.
Within QGIS it is possible to draw vector layers in different ways. Based on our experience (as you can see in the video), the two best solutions are:
1. to use the Freehand Editing plugin (if you want an experience really similar to the old traditional methodology, with pencil and paper)
2. to use the standard vector drawing tools (if you want to avoid too complex shapes, like polygons with too many nodes)
IMHO, the Freehand Editing plugin is a perfect solution for field operations, so I am planning to add it to qgis-archeos-plugin for ArcheOS Hypatia.
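For readers who like to script the setup, a minimal PyQGIS sketch (QGIS 3 API, to be run in the QGIS Python console) of the workflow described above — loading a georeferenced orthophoto and creating an editable vector layer to digitise stratigraphic units over it — might look like this; the file name, CRS, and fields are purely illustrative assumptions:

```python
from qgis.core import QgsProject, QgsRasterLayer, QgsVectorLayer

# Georeferenced orthophoto (or 2D photomosaic) produced from the SfM/MVSR step.
ortho = QgsRasterLayer("us_102_orthophoto.tif", "US 102 orthophoto")
assert ortho.isValid(), "raster failed to load"

# In-memory polygon layer to trace the unit limits; in a real project
# you would save it as a Shapefile or GeoPackage instead.
units = QgsVectorLayer(
    "Polygon?crs=EPSG:32632&field=us:integer&field=note:string",
    "stratigraphic_units", "memory")

project = QgsProject.instance()
project.addMapLayer(ortho)
project.addMapLayer(units)

# Toggle editing, then digitise with the standard tools
# or with the Freehand Editing plugin.
units.startEditing()
```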
28,412,925 https://www.youtube.com/watch?v=AaZ_RSt0KP8