| column | dtype | values |
|---|---|---|
| id | int64 | 3 to 41.8M |
| url | string | lengths 1 to 1.84k |
| title | string | lengths 1 to 9.99k |
| author | string | lengths 1 to 10k |
| markdown | string | lengths 1 to 4.36M |
| downloaded | bool | 2 classes |
| meta_extracted | bool | 2 classes |
| parsed | bool | 2 classes |
| description | string | lengths 1 to 10k |
| filedate | string | 2 classes |
| date | string | lengths 9 to 19 |
| image | string | lengths 1 to 10k |
| pagetype | string | 365 classes |
| hostname | string | lengths 4 to 84 |
| sitename | string | lengths 1 to 1.6k |
| tags | string | 0 classes |
| categories | string | 0 classes |
37,413,644
https://www.patternblockhead.com/4444/onefour.htm
Numbers larger than four
null
Four Fours problem: Any number with One Four

The four fours puzzle is a recreational puzzle that asks: can you create an expression using four fours and some arithmetic operations that evaluates to some number n? The problem often specifies some operations such as addition, subtraction, multiplication, division, exponentiation, concatenation (e.g. 44), negation, decimal point, repeating decimal, square root, nth root, and factorial. To get numbers larger than 100 (for example 113) you must use some additional operations. Different people like to introduce operations such as percent (see David Bevan’s site) or the gamma function (see Dave Wheeler’s site). Pete Karsanow’s site has a number of good links to other solution sites.

Some people have demonstrated that it is possible to get any number with three fours if you introduce the logarithm function (for example see here). (Thus the logarithm function is generally disallowed as an operation in the four fours problem.) Paul Bourke has a contribution by “whetstone” that shows how to get any number with two fours if you use the logarithm function and the percent sign. This page shows that it is possible to create any number with one four using trigonometric functions. In short:

n = sec(atan(…(sec(atan(4))…)) (n^2 - 4^2 times), for n > 4
n = tan(asec(…(tan(asec(4))…)) (4^2 - n^2 times), for n = 0, 1, 2, 3
n = 4, for n = 4

Using the trigonometric relation for the tangent and secant functions

tan^2(u) + 1 = sec^2(u)

you can then derive that

sec(atan(x)) = sqrt(x^2 + 1)

(This does not depend on whether you are using degrees or radians!) Therefore

sec(atan(sec(atan(x)))) = sqrt((sqrt(x^2 + 1))^2 + 1) = sqrt(x^2 + 2)
sec(atan(…(sec(atan(x))…)) (k times) = sqrt(x^2 + k)

Thus repeated application of sec(atan()) will produce the next largest square root in a sequence. For example, the table shows a formula that evaluates to 5 using just one 4 and the secant and arctangent functions.
| x | sec(atan(x)) | expression |
|---|---|---|
| 4 | 4.123105626 | sec(atan(4)) |
| 4.123105626 | 4.242640687 | sec(atan(sec(atan(4)))) |
| 4.242640687 | 4.358898944 | sec(atan(sec(atan(sec(atan(4)))))) |
| 4.358898944 | 4.472135955 | sec(atan(…(sec(atan(4))…)) (4 times) |
| 4.472135955 | 4.582575695 | sec(atan(…(sec(atan(4))…)) (5 times) |
| 4.582575695 | 4.69041576 | sec(atan(…(sec(atan(4))…)) (6 times) |
| 4.69041576 | 4.795831523 | sec(atan(…(sec(atan(4))…)) (7 times) |
| 4.795831523 | 4.898979486 | sec(atan(…(sec(atan(4))…)) (8 times) |
| 4.898979486 | 5 | sec(atan(…(sec(atan(4))…)) (9 times) |

The last line shows the expression for 5 that uses just one four. For any integer n greater than 4, there is a k that satisfies the equation above (specifically k = n^2 - 4^2). This means that the number of times you need to apply sec(atan()) is n^2 - 4^2 for any n greater than 4. (It takes a lot of operations, but it is possible!)

n = sec(atan(…(sec(atan(4))…)) (n^2 - 4^2 times), for n > 4

To create an expression for the numbers 3, 2, 1, and 0 using one four, the same identity, rearranged as tan^2(u) = sec^2(u) - 1, yields the formula

tan(asec(x)) = sqrt(x^2 - 1)

So repeated application of tan(asec()) gives the next lowest square root in a sequence.

n = tan(asec(…(tan(asec(4))…)) (4^2 - n^2 times), for n = 0, 1, 2, 3

The following table shows the expressions for 3, 2, 1, and 0.
| x | tan(asec(x)) | expression |
|---|---|---|
| 4 | 3.872983346 | tan(asec(4)) |
| 3.872983346 | 3.741657387 | tan(asec(tan(asec(4)))) |
| 3.741657387 | 3.605551275 | tan(asec(tan(asec(tan(asec(4)))))) |
| 3.605551275 | 3.464101615 | tan(asec(…(tan(asec(4))…)) (4 times) |
| 3.464101615 | 3.31662479 | tan(asec(…(tan(asec(4))…)) (5 times) |
| 3.31662479 | 3.16227766 | tan(asec(…(tan(asec(4))…)) (6 times) |
| 3.16227766 | 3 | tan(asec(…(tan(asec(4))…)) (7 times) |
| 3 | 2.828427125 | tan(asec(…(tan(asec(4))…)) (8 times) |
| 2.828427125 | 2.645751311 | tan(asec(…(tan(asec(4))…)) (9 times) |
| 2.645751311 | 2.449489743 | tan(asec(…(tan(asec(4))…)) (10 times) |
| 2.449489743 | 2.236067977 | tan(asec(…(tan(asec(4))…)) (11 times) |
| 2.236067977 | 2 | tan(asec(…(tan(asec(4))…)) (12 times) |
| 2 | 1.732050808 | tan(asec(…(tan(asec(4))…)) (13 times) |
| 1.732050808 | 1.414213562 | tan(asec(…(tan(asec(4))…)) (14 times) |
| 1.414213562 | 1 | tan(asec(…(tan(asec(4))…)) (15 times) |
| 1 | 0 | tan(asec(…(tan(asec(4))…)) (16 times) |

Some people like to do the problem using combinations
of different numbers besides four, for example, four zeros. Using the approach above, you can create any number using one of any digit, for example zero. The table below shows the expressions for 0, 1, 2, and 3 using one zero.

| x | sec(atan(x)) | expression |
|---|---|---|
| 0 | 1 | sec(atan(0)) |
| 1 | 1.414213562 | sec(atan(sec(atan(0)))) |
| 1.414213562 | 1.732050808 | sec(atan(sec(atan(sec(atan(0)))))) |
| 1.732050808 | 2 | sec(atan(…(sec(atan(0))…)) (4 times) |
| 2 | 2.236067977 | sec(atan(…(sec(atan(0))…)) (5 times) |
| 2.236067977 | 2.449489743 | sec(atan(…(sec(atan(0))…)) (6 times) |
| 2.449489743 | 2.645751311 | sec(atan(…(sec(atan(0))…)) (7 times) |
| 2.645751311 | 2.828427125 | sec(atan(…(sec(atan(0))…)) (8 times) |
| 2.828427125 | 3 | sec(atan(…(sec(atan(0))…)) (9 times) |

The number of applications of sec(atan()) you need to create an expression that evaluates to n is n^2.

n = sec(atan(…(sec(atan(0))…)) (n^2 times), for n > 0

The same approach is possible using the cosecant and cotangent functions. From

cot^2(u) + 1 = csc^2(u)

it follows that

csc(acot(x)) = sqrt(x^2 + 1)
cot(acsc(x)) = sqrt(x^2 - 1)

So repeated application of csc(acot()) gives the next largest square root in a sequence, similar to sec(atan()) above.

Back to fractal designs using pattern blocks.

*Copyright 2006 by Jim Millar*
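The whole construction can be checked numerically. A minimal Python sketch (function names are mine, not from the original page; the tan(asec()) step uses its algebraic form sqrt(x^2 - 1) to avoid a domain error when rounding pushes x slightly below 1):

```python
import math

def sec_atan(x):
    # sec(atan(x)) = 1/cos(atan(x)) = sqrt(x^2 + 1),
    # independent of whether you work in degrees or radians
    return 1.0 / math.cos(math.atan(x))

def tan_asec(x):
    # tan(asec(x)) = sqrt(x^2 - 1); clamp to 0 so floating-point
    # drift just below x = 1 cannot produce sqrt of a negative
    return math.sqrt(max(x * x - 1.0, 0.0))

def one_four(n):
    """Build the integer n >= 0 from a single 4 by repeated
    application of sec(atan()) (for n > 4) or tan(asec()) (for n < 4)."""
    x = 4.0
    if n > 4:
        for _ in range(n * n - 4 * 4):   # n^2 - 4^2 applications
            x = sec_atan(x)
    elif n < 4:
        for _ in range(4 * 4 - n * n):   # 4^2 - n^2 applications
            x = tan_asec(x)
    return x
```

Each `sec_atan` call turns sqrt(x^2 + k) into sqrt(x^2 + k + 1), so starting from 4 = sqrt(16) the loop walks through consecutive square roots until it reaches sqrt(n^2) = n, exactly as in the tables above.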
true
true
true
null
2024-10-12 00:00:00
2006-01-01 00:00:00
null
null
null
null
null
null
4,133,956
http://thomaslarock.com/2012/06/knowledge-vs-applied-skills-cant-have-one-without-the-other/
Knowledge vs. Applied Skills: Can't Have One Without the Other
Thomas LaRock
Asians are better at math due to their language and because they live in rice paddies. But don’t just take my word for it, take Malcolm Gladwell’s. That’s one of his assertions in “Outliers: The Story of Success.” I’ve written before about that book and I found myself thinking about it again the other day while at TechEd. My thought was this: Assume that the author is correct, and that Asians are better at math because of language and rice paddies. Does that matter? Who cares how quickly one can count to forty? What matters more is the *concept* of forty and, most important, *how that knowledge can be applied*. When I was teaching at Washington State we had discussions about the use of calculators in the classroom. Some of us (myself included) felt that they shouldn’t be allowed, that students should be able to do the work without any such aid. Other instructors felt they should be allowed because such aids were going to be allowed in the real world, so students needed to learn how to use them and we should focus on testing the application of concepts. Looking back I can see how I was wrong. Having a calculator doesn’t matter. What matters is knowing how to use it. The same is true for technology in general. Knowing when to apply the right piece of technology is what matters most. Like Buck Woody (blog | @buckwoody) would say, “use what works”. There are times when SQL Server makes the most sense. Other times it could be Oracle. You may be better served by a Linux box for some things. In some cases Powershell could be what is needed. Perhaps NoSQL is the right solution for that web project. Just knowing isn’t enough. You have to be able to apply that knowledge in order to come up with the right solution. I agree that knowing what tool to use and how to use it is important, and that *gasp* SQL may not be the best solution for a DB every time.
The conflict I’ve encountered more and more often is that there are so many different solutions available that the options are to:

1. Focus on one area and get really good at it while knowing something about the others.
2. Spread your knowledge amongst most, if not all, without really mastering any.

Then you run the real risk of either pigeonholing yourself into a specialty or becoming the victim of “Jack of all trades, master of none” syndrome.
true
true
true
It isn't what you know, it's how you apply what you know.
2024-10-12 00:00:00
2012-06-19 00:00:00
https://thomaslarock.com…012/06/asian.jpg
article
thomaslarock.com
Thomas LaRock
null
null
27,549,070
https://medicalxpress.com/news/2021-06-mrna-vaccine-yields-full-malaria.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
2,823,139
https://github.com/mozilla/playdoh
GitHub - mozilla/playdoh: PROJECT DEPRECATED (WAS: "Mozilla's Web application base template. Half Django, half awesomeness, half not good at math.")
Mozilla
# Use sugardough instead!

*playdoh* is Mozilla's old Django app template, but it has been replaced by sugardough, which you should use instead. The old documentation for playdoh is available if you need it. This software is licensed under the New BSD License. For more information, read the file `LICENSE`.
true
true
true
PROJECT DEPRECATED (WAS: "Mozilla's Web application base template. Half Django, half awesomeness, half not good at math.") - mozilla/playdoh
2024-10-12 00:00:00
2010-12-28 00:00:00
https://opengraph.githubassets.com/96086a6f96829810577ba09cfcd927f165765dbc602955f9830c06e1f350797e/mozilla/playdoh
object
github.com
GitHub
null
null
34,350,436
https://kristall.random-projects.net/
Kristall Small-Internet Browser
null
Kristall is a browser without support for css/js/wasm or graphical websites. It can display user-styled documents in several formats, including gemini, html, markdown, … provided by a server via gemini, gopher, http, finger, … I've never heard these words before, where can I learn more? Non-exhaustive feature list: Kristall is available and tested regularly on several operating systems, including: Note that Kristall may require the Microsoft Visual C++ 2010 Redistributable Package installed on Windows. Otherwise it will fail with a message that MSVCR100.dll is missing. Most Windows systems have this already installed. Currently no stable release build is available, but there are packages for Kristall 0.3 available. The project source code is managed on GitHub: https://github.com/MasterQ32/kristall If you want to participate in development or file issues, please use GitHub or write me a mail. You can also contact me (xq) for support or feature proposals on IRC.
true
true
true
null
2024-10-12 00:00:00
2010-01-01 00:00:00
null
null
null
null
null
null
146,057
http://ilounge.com/index.php/news/comments/apple-receives-patent-for-ipod-scroll-wheel/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
5,944,595
http://fissionlink.com/blog/happy-wednesday-5-things-to-do-for-the-weekend/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
18,329,977
https://news.efinancialcareers.com/us-en/326810/open-source-algo-platforms-quants
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
7,632,354
http://flux7.com/blogs/benchmarks/benchmarking-cpu-performance-analysis-of-c3-instances-using-coremark/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
1,558,547
http://www.networkworld.com/community/node/64307
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
27,815,416
https://twitter.com/briandavidearp/status/1079164114784714752
x.com
null
null
true
true
false
null
2024-10-12 00:00:00
null
null
null
null
X (formerly Twitter)
null
null
40,077,348
https://thegrayzone.com/2024/04/17/uk-insurers-refuse-pay-nord-stream/
UK insurers refuse to pay Nord Stream because blasts were 'government' backed - The Grayzone
Wyatt Reed
**The legal team representing high-powered insurers Lloyd’s and Arch says that since the Nord Stream explosions were “more likely than not to have been inflicted by… a government,” they have no responsibility to pay for damages to the pipelines. To succeed with that defense, the companies will presumably be compelled to prove, in court, who carried out those attacks.**

British insurers are arguing that they have no obligation to honor their coverage of the Nord Stream pipelines, which were blown up in September 2022, because the unprecedented act of industrial sabotage was likely carried out by a national government. The insurers’ filing contradicts reports by the Washington Post and other legacy media publications asserting that a private Ukrainian team was responsible for the massive act of industrial sabotage.

A legal brief filed on behalf of UK-based firms Lloyd’s Insurance Company and Arch Insurance states that the “defendants will rely on, inter alia, the fact that the explosion Damage could only have (or, at least, was more likely than not to have) been inflicted by or under the order of a government.” As a result, they argue, the explosion damage was “directly or indirectly occasioned by, happening through, or in consequence of” the conflict between Russia and Ukraine, and falls under an exclusion relating to military conflicts.

BREAKING: The "defense" of Nord Stream AG's insurance companies has been filed. LLoyds and Arch argue that the damage was inflicted by, or under order of, a GOVERNMENT, and therefore they don't need pay. –> pic.twitter.com/Unyh6Dtqqa — Erik Andersson 🐘 (@Erkperk) April 16, 2024

The brief comes a month after Switzerland-based Nord Stream AG filed a lawsuit against the insurers for their refusal to compensate the company. Nord Stream, which estimated the cost incurred by the attack at between €1.2 billion and €1.35 billion, is seeking to recoup over €400 million in damages.
Swedish engineer Erik Andersson, who led the first private investigative expedition to the blast sites of the Nord Stream pipelines, describes the insurers’ legal strategy as a desperate attempt to find an excuse to avoid honoring their indemnity obligations. “If it’s an act of war and ordered by a government, that’s the only way they can escape their responsibility to pay,” Andersson told The Grayzone. Following a report by Pulitzer Prize-winning journalist Seymour Hersh which alleged that the US government was responsible for the Nord Stream explosion, Western governments quickly spun out a narrative placing blame on a team of rogue Ukrainian operatives. Given the lack of conclusive evidence, however, proving that the explosions were “inflicted by or under the order of a government” would be a major challenge for defense lawyers. Even if the plaintiffs in the case are able to wrest back the funds in court, they are likely to face other serious hurdles. Later in the brief, lawyers for Lloyd’s and Arch suggest that even if they were required to pay up, anti-Russian sanctions would leave their hands tied. “In the event that the Defendants are found to be liable to pay an indemnity and/or damages to the Claimant,” the brief states, “the Defendants reserve their position as to whether any such payment would be prohibited by any applicable economic sanctions that may be in force at the time any such payment is required to be made.” After they were threatened with sanctions by the US government, in 2021 Lloyd’s and Arch both withdrew from their agreement to cover damages to the second of the pipelines, Nord Stream 2. But though they remain on the hook for damages to the first line, the language used by the insurers’ lawyers seems to be alluding to a possible future sanctions package that would release them from their financial obligations. 
“Nord Stream 1 was not affected by those sanctions, but apparently sanctions might work retroactively to the benefit of insurers,” observes Andersson. The plaintiffs may face an uphill battle at the British High Court in London, the city where Lloyd’s has been headquartered since its creation in 1689. As former State Department cybersecurity official Mike Benz observed, “Lloyd’s of London is the prize of the London banking establishment,” and “London is the driving force behind the transatlantic side of the Blob’s “Seize Eurasia” designs on Russia.” Incredible. Lloyd's of London is the prize of the London banking establishment. London is the driving force behind the transatlantic side of the Blob's "Seize Eurasia" designs on Russia. If anyone were in position to know the role of "a government" in Nordstream bombing… https://t.co/Tui4TwffGM — Mike Benz (@MikeBenzCyber) April 16, 2024 But if their arguments are enough to convince a court in London, a decision in favor of the insurers would likely be a double-edged sword. Following Lloyd’s submission to US sanctions and its refusal to insure ships carrying Iranian oil, Western insurance underwriters (like their colleagues in the banking sector) are increasingly in danger of losing their global reputation for relative independence from the state. Should the West ultimately lose its grip on the global insurance market — or its reputation as a safe haven for foreign assets — €400 million will be unlikely to buy it back.
true
true
true
The legal team representing high-powered insurers Lloyd’s and Arch says that since the Nord Stream explosions were “more likely than not to have been inflicted by… a government,” they have no responsibility to pay for damages to the pipelines. To succeed with that defense, the companies will presumably be compelled to prove, in court, who carried out those attacks. British insurers are arguing that they have no obligation to honor their coverage of the Nord Stream pipelines, which were blown […]
2024-10-12 00:00:00
2024-04-17 00:00:00
https://thegrayzone.com/…/Gasutslapp2.jpg
article
thegrayzone.com
The Grayzone
null
null
593,345
http://www.guardian.co.uk/technology/blog/2009/may/04/google-chrome-mac-alpha
Google Chrome on the Mac: what's the holdup? (Updated)
Charles Arthur
Want Google Chrome for Mac? You can have it - though note that there's plenty that's not actually, um, *working* just at the moment. It's odd how many months it's taking Google to do this port (and how the shine seems to have come off Chrome, which arrived in such a blaze of light back in September). Manu J, an independent Ruby on Rails developer, has a page where you can get the updated Google Chrome downloads for Mac (Intel processor, OSX 10.5/Leopard only). Why his page? Because the official Google Chrome for Mac page is just a signup for an email. Huh. One has to say that it's hard to feel enthused by the list of "what does work" and "what doesn't work" in this one (which is officially version 0.1, build 15170 from May 1st):

**What Works**
- Basic Websites (Gmail works sometimes)
- Bookmark pages
- Most visited sites
- Open link in new tab
- Open new tabs
- Omnibox
- Back, Forward, Reload
- Full Screen Browsing!!
- Open link in new window
- Drag a tab to make a window
- Launch new tab
- Cut, Copy, Paste
- Keyboard shortcuts
- about:version, about:dns, about:crash, about:histograms

**What Doesn't Work**
- Open link in new tab *fixed in Rev 13759*
- Plugins (No Flash -> No YouTube)
- History (You can view it through this link chrome-ui://history/ You will also be able to do a full text search there)
- Omnibox *fixed in Rev 13759*
- Bookmarks Bar
- Find
- about:network, about:memory
- Web Inspector
- Input methods such as Kotoeri (Japanese)
- Preferences

But even so, we suspect we're going to give it a try from time to time, and bookmark the page. Can you ever have too many browsers? We may find out. So far, though, Chrome on the Mac seems… OK; though this version doesn't have the tab-by-tab viewing of how much processing is being sucked up. (Ah, just got my first spinning pizza of death, trying to scroll up in a window.) Onwards and upwards!

**Update:** interestingly, Chrome on the Mac does indeed give you per-tab process control.
You have to view it in the Activity Monitor program, which is like Task Manager on Windows. So far the problem is that it seems to think that every tab is "not responding" (ie stuck), but it's nice - initially - to be able to choose per tab which one you want to kill. See the picture below. However, given my own tendency to have literally 100 tabs open across dozens of windows, I think that the processes might need slightly more useful names - or a tab for viewing them inside Chrome itself. At present, choosing which one to kill would be a lottery.
true
true
true
Want Google Chrome for Mac? You can have it - though note that there's plenty that's not actually, um, working just at the moment. It's odd how many months it's taking Google to do this port (and how the shine seems to have come off Chrome, which arrived in such a blaze of light back in September)
2024-10-12 00:00:00
2009-05-04 00:00:00
https://assets.guim.co.u…allback-logo.png
article
theguardian.com
The Guardian
null
null
23,473,288
https://www.wired2fish.com/fishing-tips/introducing-the-chicken-rig-for-bass-fishing/
Introducing the Chicken Rig for Bass Fishing
TJ Maglio
For most bass fishermen, tinkering with baits and rigs is just part of the job. Add some suspend dots to a jerkbait; dye the tail of your worm chartreuse. For legendary bait designer Gary Yamamoto, tinkering with tackle is not only a passion, it’s his livelihood. That’s what ultimately led him to create the **Chicken Rig for bass fishing**. Yamamoto sees the fishing world a little differently, a fact that has resulted in some of the most innovative baits and bass fishing rigs in the game. He looked at a Bic pen and saw a Senko, turned a day of short-striking fish into the Kut Tail worm, and introduced everyone to the bizarre but effective Double-Tailed Hula Grub – all staples in bass fishing now. In addition to baits, Yamamoto has also long been an innovator when it comes to rigging his creations. In 2011, he broke the FLW Outdoors single-day weight record for Lake Champlain with a 24-pound, 4-ounce stringer on his way to a second-place finish in that year’s final FLW Tour Open. He caught the entire bag skipping docks with a Senko tail-weighted with a No. 7 finishing screw. Fast forward to 2014, and he did it again. This time he rode his newest creation to a 5th-place finish at the Rayovac FLW Series event held on the Upper Mississippi in September. The rig is a modification of the Neko rig (nail-weighted Senko), but instead uses the new Yamamoto 7.75-inch Kut Tail worm. And we’re calling it the Chicken rig. That name isn’t intended to insinuate that the Chicken rig is a cowardly bait; it’s because “backward wacky weighted Kut Tail” is a tongue twister, and it condenses down to BWWK, which any child under the age of four will tell you is the sound a chicken makes.

## The rig

“In Japan, anglers are always trying new things,” Yamamoto said.
“You have to use a lot of finesse to catch bass there, so most fishermen are very good at customizing their baits.” His appreciation for the Neko rig’s bizarre fall and fish-catching prowess led him to try it with a Kut Tail worm, and several months prior to the Rayovac event he had a friend from Japan in his boat for a day of fishing that truly demonstrated the rig’s potential. “We were fishing rip rap, and I was fishing with a regular plastic,” Yamamoto said. “My friend picked up the Kut Tail rigged backwards and started catching all the fish behind me. That was with the little 6.5-inch worm. In tournaments we are specifically targeting 3-5 pound fish, so when we were developing the new 7.75-inch version of the worm, I upsized the rig and it works great.” Rigging the Chicken rig is fairly simple. The key is to use a 4/0 or 5/0 straight-shank worm hook and insert the point about an inch behind the egg sack of the Kut Tail. Once there, thread it on past the egg sack, pop the point out and flip it around just like you’re rigging a normal Texas rig. When done properly, the hook will be about two thirds of the way down the bait, leaving an inch and a half of the meatiest part of the bait; insert a standard drywall screw in that portion. “I like to find a drywall screw with a big head so that most of the weight is near the end, but generally any old drywall screw will work,” he said.

## How to fish it

Yamamoto generally fishes the Chicken rig on medium-heavy spinning tackle, using 15- to 20-pound braid as a mainline with a 16- or 20-pound Sugoi Fluorocarbon leader. “The rig can be fished on a fairly heavy setup,” Yamamoto said. “The 7.75-inch Kut Tail is a big bait, so you need to have enough rod to handle it.
Lots of people will probably fish it on baitcasting tackle as well, but I have always preferred spinning tackle.” In the Rayovac event, Yamamoto was fishing the Chicken rig around bridge pilings, barge tieups, docks, rip rap, and other hard cover found right in the city of La Crosse. These are the types of places where the rig excels. “I like to cast it out there and let it fall on slack line,” Yamamoto said. “Once it hits bottom, I’ll lift it up a couple feet and let it drop again. Once I do that, if I haven’t gotten bit, I’ll reel it back up and throw it out again. It’s not really finesse fishing where you’ll let your bait sit for a long time. I use it a lot more like a power fishing tool than anything else.”

## Why it works

In many places across the country, bass are bombarded almost daily with a steady dose of lures. Finesse fishing has become so popular that bass sometimes even turn up their noses at a shakey head or drop shot. It’s almost a guarantee that they haven’t seen something like the Chicken rig before. “We as anglers need to show the bass something different if we really want to maximize our success while fishing,” he said. “When that backwards-weighted Kut-Tail falls in front of them, it presents an image that they haven’t seen before that looks really natural.
They have no choice but to strike.” The Chicken rig also offers several other advantages over similar wacky or backwards-weighted setups:

**It’s weedless** – The Neko rig is great, but it’s not weedless, and it can be frustrating to fish around cover because the wacky hook tends to hang up. The Chicken rig is weedless, so it can be fished through, over, and around brush, rocks, and docks that you couldn’t with a Neko rig.

**It has big bass appeal** – Many of the wacky or weirdly weighted innovations are designed for finesse fishing. The Chicken rig is not. Although only 7.75 inches long, the big Kut-Tail is just that, and it provides big-fish appeal in an action that most bass haven’t seen.

**It’s durable** – Since learning about the rig, I’ve spent a fair amount of time experimenting with it, and unlike many worm rigs, you can usually get quite a few fish out of each worm. Something about how the rig is hooked seems to make it slide up the line when you set the hook, and the worm usually comes back relatively unscathed.
true
true
true
For most bass fishermen, tinkering with baits and rigs is just part of the job. Add some suspend dots to a jerkbait; dye the tail of your worm chartreuse. For legendary bait designed Gary Yamamoto, tinkering with tackle is not only a passion, it’s his livelihood. That’s what ultimately led him to create the Chicken [...]
2024-10-12 00:00:00
2015-02-24 00:00:00
https://assets.wired2fis…5a5933180602.jpg
article
wired2fish.com
Wired2Fish
null
null
1,035,148
http://trenchant.org/daily/2010/1/6/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
34,397,039
https://phys.org/news/2023-01-chemists-cook-brand-new-kind-nanomaterial.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
34,832,143
https://www.theguardian.com/lifeandstyle/2023/feb/17/humans-may-need-more-sleep-in-winter-study-finds
Humans ‘may need more sleep in winter’, study finds
Jane Clinton
For those of us who struggle to leave our beds in the winter, taunts of “lazy” could well be misplaced. New research suggests that while humans do not hibernate, we may need more sleep during the colder months. Analysis of people undergoing sleep studies found that people get more REM (rapid eye movement) sleep in the winter. While total sleep time appeared to be about an hour longer in the winter than the summer, this result was not considered statistically significant. However, REM sleep – known to be directly linked to the circadian clock, which is affected by changing light – was 30 minutes longer in the winter than in summer. The research suggests that even in an urban population experiencing disrupted sleep, humans experience longer REM sleep in winter than summer and less deep sleep in autumn. Researchers say if the study’s findings can be replicated in people with healthy sleep, this would provide the first evidence for a need to adjust sleep habits to season – perhaps by going to sleep earlier in the darker and colder months. Dr Dieter Kunz, corresponding author of the study, based at the Clinic for Sleep & Chronomedicine at the St Hedwig hospital, Germany, said: “Seasonality is ubiquitous in any living being on this planet. “Even though we still perform unchanged over the winter, human physiology is down-regulated, with a sensation of ‘running-on-empty’ in February or March. “In general, societies need to adjust sleep habits including length and timing to season, or adjust school and working schedules to seasonal sleep needs.” During REM sleep, brain activity increases and people may dream. Normal sleep starts with three stages of non-REM sleep at first, followed by a short period of REM sleep. While the researchers acknowledge the results would need to be validated in people with no sleep difficulties, the seasonal changes may be even greater in a healthy population. 
In the study, a team of scientists recruited 292 patients who had undergone sleep studies called polysomnographies. These are regularly carried out on patients who experience sleep-related difficulties. They are asked to sleep naturally in a special laboratory without an alarm clock, and the quality and type of sleep can be monitored as well as the length of sleep. After exclusions were made for people taking sleep-affecting medication, technical errors, and for those who may have skipped the first REM stage, 188 patients remained in the new study. The findings are published in the journal Frontiers in Neuroscience.
true
true
true
Research shows people get more REM sleep in winter than in summer, and may need to adjust habits to season
2024-10-12 00:00:00
2023-02-17 00:00:00
https://i.guim.co.uk/img…29a1ecd10353beb7
article
theguardian.com
The Guardian
null
null
25,585,315
https://beyang.org/time-to-build-dev-tools.html
It's time to build developer tools · 2020.12.29
null
## It's time to build developer tools · 2020.12.29 We are living through a golden age in developer tools. Every company is now a software company. Every software company needs developers—and developer tools. The ecosystem of companies building tools for developers is booming. If you are a developer who loves the art and craft of programming, if you like working on technical problems beyond scaling CRUD, or if you love the idea of accelerating technological progress by building tools for people like yourself, you should think about working on dev tools! ## Historic forces, immense opportunity Every company that now writes code has discovered (or is discovering) that software development is really tricky. The world needs more software, but it’s not as simple as hiring more developers. Building software doesn’t scale or parallelize easily. Serial dependencies coupled with unforeseen complexity lead to gross underestimates of timeline and budget, which in turn lead to bad business outcomes and even existential crises. To address the challenges of developing software quickly and efficiently at scale, more and more companies are building and buying developer tools. It’s no longer just the Googles, Facebooks, Microsofts, and Amazons of the world spending serious money doing so. To satisfy this increasingly apparent need, a rapidly growing new industry of developer tool companies has arisen in recent years. Individual names like GitHub, GitLab, and HashiCorp are now well known and the emergence of the field has not gone unnoticed. It’s difficult to overstate how quickly the market for developer tools has grown in the past decade. When Quinn and I started Sourcegraph back in 2013, we quickly discovered that “dev tool” was a dirty word among investors. 
Developer tools, it was thought, were super valuable, but most of them were single player tools that didn’t make enough money (e.g., JetBrains, Sublime Text), internal tools built for the specific needs of a particular big tech company (e.g., Google Code Search), or open-source projects that no one paid for and which you could only make money from by selling services and support. Investors told us there was just one billion-dollar company that sold to developers—Red Hat—and it was the exception that proved the rule. Developers were building lots of valuable software that was sold to other folks—salespeople, marketers, designers, etc.—but when it came to their own work, the common refrain was, “the cobbler’s children have no shoes”. An article in TechCrunch lamented, “Will Developer Tools Startups Ever Find Investors?” But then, somewhere along the line of the past seven years, something changed. Evidence of this shift is in the valuations: GitHub acquired by Microsoft for $7.5 billion, GitLab valued at $6 billion, HashiCorp at $5 billion, JetBrains estimated at $7 billion, the list goes on. But though the big numbers with lots of zeroes raise eyebrows and make the headlines, they are just the byproduct of the underlying factors driving big changes in where and how software is built. To list a few of these: - **Big Tech:** Competition from Big Tech and tech-enabled startups moving into every industry is driving every company to prioritize software as a core competency. - **Big Code:** The amount of code in the world has been growing rapidly, and more and more companies have reached a tipping point previously reached only by the largest tech companies. - **Enterprise OSS:** The maturation of “the cloud” as a software platform built on top of open-source software (Linux and Kubernetes), in contrast to older software platforms, which were vertically integrated proprietary ecosystems (e.g., Windows + .NET + Visual Studio).
There are many other trends at play here, but one thing is certain: almost every valuable or growing company is now building software and most have realized they need great developer tools to compete. This is fantastic news. “Tech” has traditionally been a distinct sector of the economy, and the ability to build software effectively at scale was confined to a handful of big technology companies. Now, companies in every sector understand that code has to become a core part of their DNA. And this means the impact of code and software is amplified. No longer does the value proposition of software have to flow through some product or service offered by a “tech company”—companies across every sector of the economy are now learning how to build great software themselves. As the broader economy learns to build software, progress will accelerate. All the promises that the future holds—cures for cancer, life-saving medicine, mass individualized transportation, rocketships, and more—will arrive sooner in large part thanks to code. Software has long been recognized as a technological accelerant. But developer tools are now a *second-order accelerant* on technological progress. This is an immense opportunity. How best to pursue it? ## Large companies, open source, and startups There are three places where you can work on developer tools. (There may be more, but for brevity’s sake, we’ll focus on these.) The first is inside a large non-developer-tool company. Many large companies that build software are investing heavily in internal tools. There have been a lot of great dev tools created inside large companies, and many such tools have inspired the creation of similar tools outside the company. Blaze, the build system of Google, inspired other build systems like Pants and Facebook’s Buck and was later itself open-sourced as Bazel. Large companies also offer the benefit of a large, stable salary. 
The downside is that the direct impact of your work will likely be limited to a single organization. It’s possible that your work will be open-sourced down the road, but there is no guarantee, and it can often take years to get the necessary legal and bureaucratic approvals to do so. And, of course, you don’t have a direct piece of the financial upside if the tool becomes super widely used. The second place you can build dev tools is on an open-source project. Working in the open guarantees you will receive recognition for your work, and you also have the benefit of not having any pesky pricing considerations standing in between your users and your product. If mass adoption is your primary goal, then open source holds great appeal. Many of the most widely used developer tools—Git, Linux, Emacs, Vim, etc.—are open source projects. The downside, of course, is lack of revenue. Most open-source authors and maintainers have an alternate source of income (often working at a large tech company). Patreon and GitHub Sponsors are great, but it’s likely that only a small fraction of open-source creators will ever earn enough to make a comfortable living through sponsorship alone. The number of hours you can devote to working on open source will be constrained by where you can find sources of income. The third place is at a developer tools company. Most such companies will be startups, because of the rapid growth in market size for dev tools over the past few years. Dev tools companies offer the benefit of aligned incentives: your users, your customers, and even your fellow engineering teammates are often the same people. If you build a great tool, then your users are happy, you get paid so you are also happy, and you and your teammates can also make use of the same tool, so that should make you doubly happy! Dev tool startups have the additional benefit of financial upside (assuming part of your compensation is an equity stake in the business). 
This upside could be substantial. Personally, I think we are in the early days of the developer tools market. I believe the impact of high-quality, broadly useful dev tools will someday far exceed the impact of ads-driven web search, PC operating systems, and social media. Which is to say, I think that the value of developer tool companies will someday exceed the combined value of the most valuable tech companies today. Of course, startups also carry a downside risk. There is a high chance that the company will fail to meet its lofty goals, or may need to lay people off, or fail completely. There is no one-size-fits-all prescription for the best place to work. Personally, the principle I follow is, “Maximize expected utility subject to minimizing the risk of ruin,” where the definition of “utility” and “ruin” is up to you to define. Regardless of whether you opt for large companies, open source, or dev tool startups, there is still the question of how to evaluate which specific opportunities and tools are worthy of your time. For that, you’ll need to rely on a combination of intuition and worldview, which we’ll discuss next. ## Scratch your own itch, but understand the big picture Jamie Zawinski once said, “The best motivator in the world is scratching your own itch.” He was talking about developer tools, and he was right. Most developer tools start with a programmer noticing they have a problem, imagining a way to fix the problem through software automation, and then writing the code that implements that automation. Building for yourself means you get to wear the hat of product manager, engineer, and customer simultaneously. This is a fantastic recipe for building something truly useful, not just for yourself, but for other people who feel the same pain. As you evaluate the landscape of enterprise developer tools, you should rely heavily on your own intuition for where pain points exist and which products offer effective solutions. 
However, you’ll also want to combine your direct intuition with a broader view of how software development works—what is common across companies and sectors and what is different and specific to the segment of the market you are building for. It’s important to understand where your itch fits into the overall picture, how others experience that itch, and how *their* itch fits into *their* picture. The way software is developed varies widely from organization to organization and even individual to individual, but there is a general “software development lifecycle” template that is fairly universal: - Plan and describe what the software should do (e.g., implement a feature or fix a bug) - Read and understand the code being modified - Write, run, and debug the new code - Test the code - Review the code - Deploy the code - Monitor the code in production and react to incidents There are many variations on this lifecycle: - An individual programmer working on a personal project may use their editor for reading and writing code, a simple unit test framework for testing, distribute the application as a single binary, and receive feedback and bug reports through a small issue tracker. - A team of developers might use a more sophisticated issue tracker for project planning, a code search tool for understanding existing code, a variety of different editors for writing code, a CI service like Buildkite or CircleCI, Docker on top of AWS or Google Cloud for deployment, and a simple log aggregator for monitoring and error detection. - A large engineering organization may have entire teams or departments responsible for the development environment, CI/CD, provisioning compute resources, deploying to production, and monitoring and routing critical production issues to the proper first responder. This general process and its specific instantiation by your customers is important to understand.
It’s also important to understand whose lifecycle you’re accelerating—is it the individual’s, the team’s, the organization’s, or maybe some combination of the above? There’s nothing inherently wrong with selling just to individual developers, but many of the most successful developer startups sell to teams and organizations. Selling to a team means making a case to the representative of the team’s interest—perhaps someone with the title of “engineering manager”, “director”, or “head of developer productivity”. This individual may not code day-to-day. When evaluating which dev tools and dev tool companies are worthy of your time, you should ask whether they sell to teams or individual developers, how they articulate their value proposition to different customer stakeholders, and how the tool you will help build fits into the customer’s software development lifecycle. ## A brief sampling of dev tools startups Let’s apply the software development lifecycle framework to a few dev tool startups. The following are some companies that I’ve had the good fortune of getting to know over the years and which I’ve spoken to directly on The Sourcegraph Podcast. (Incidentally, these companies are also where I’d start my job search if I were looking to join a dev tools startup—I think all of them are doing fantastic work.) - Sentry alerts developers to errors in production and helps you quickly identify the point in code where the fix should be made. It focuses on making Stage 7 of the software development lifecycle (“monitor and react”) more accessible to application developers who often spend most of their time in Stages 1-5. It also helps surface issues in staging environments, catching issues before they reach Stage 7. - Honeycomb detects production errors and anomalies and lets you drill down into an “infinitely wide data table” that provides enough context to identify the root cause of any issue. 
Its value prop is anchored to the concept of Observability, which encapsulates a new school of thought for how to effectively manage Stage 7, in contrast to other Stage 7 tools which use the label “monitoring” or “APM”. - Pulumi lets you describe your infrastructure as code in your favorite programming language. You define your deployment state by instantiating objects in, say, TypeScript, and the system takes care of reflecting this into production state. It makes Stage 6 more accessible to developers whose area of competence is Stages 1-5. - YourBase is a test-running service that intelligently figures out how to parallelize and optimize your builds. It inspects syscalls and performs language analysis to infer the build dependency structure so it can make choices that yield far lower build times for large codebases. It speeds the heck out of Stage 4, which can otherwise become a critical bottleneck for many teams. - Tilt is building the first-class developer environment for multi-service applications. Technologies like Kubernetes have made multi-service applications much easier to deploy, but multi-service development environments are still largely roll-your-own and janky. It resolves pain points in Stage 3 that have arisen due to recent innovation in Stage 6. - Caddy is a web server and reverse proxy that emphasizes great developer experience, extensibility, and good defaults like automatic HTTPS. It is used in Stage 6, but much more accessible to developers who spend most of their time in Stages 1-5. - Wasmer is building a WebAssembly virtual machine that runs on the server (outside the web browser). It has the potential to impact Stages 2 and 3 (Wapm raises the possibility of inter-language source code dependencies), but its most obvious value is to Stage 6 by providing a performant, easy-to-use, and secure deployment environment for a wide range of server-side applications. 
- Codestream is a code discussion tool that facilitates communication and information exchange around code. It aims to “shift left” a lot of the communication that takes place in code review (Stage 5) to conversations that organically happen in Stages 2 and 3. - Sourcegraph (the company I co-founded) is a code search tool that lets you find patterns, anti-patterns, symbols, references, and error messages across your codebase. It also makes it easy to dive into any piece of unfamiliar code and build a working understanding of how things work and how they relate to other parts of the code. Our core product targets Stage 2, but we have integrations with editors (Stage 3), code review tools (Stage 5), code coverage tools (Stage 4), and monitoring tools (Stage 7), because finding and reading code is something you do throughout the software development lifecycle. A lot of buzzwords in software engineering can also be thought of in terms of what they mean for the software development lifecycle. Understanding them in this way helps me think past the hype and clarify what they actually mean: - **DevOps** is about 2 things: The first is making ops more accessible to developers (making Stages 6-7 accessible to people who spend most of their time in Stages 1-5). The second is automating ops with software so less manual work from sysadmins is required and the sysadmin role becomes more like the developer role (automating Stages 6-7). - **Infrastructure as Code** can be thought of as one facet of DevOps that involves moving the definition of deployment state into a developer-friendly language, thereby facilitating automation of the deployment process. - **Shift left** means catching bugs and issues in earlier stages of the software lifecycle. The generally accepted rule is that bugs become 10x more expensive to fix per stage in the process.
If your tool helps catch issues in Stage 4 rather than Stage 7, that means they were 100-1000x cheaper to fix in terms of time, effort, and money. - **Microservices** address bottlenecks in Stages 1-5 by adjusting how software is deployed in Stage 6. You have lower complexity at the source code level, but likely greater complexity in deployment (but hopefully not so much more complexity that it outweighs the simplicity gains in earlier stages). A lot of the contention between microservices and monoliths comes down to where people feel comfortable dealing with complexity. There are general statements that can be made about which is better, but a lot also depends on the domain competency of the software team members, how much of their experience is in ops (Stages 6-7) versus dev (Stages 1-5), and their familiarity with tools to manage the complexities of any given stage. I’ll caveat all of this by saying this analysis is my own point of view. The companies discussed above might have a different articulation of their value proposition and how they impact the software development lifecycle. So might you if you do the homework of learning more about them, trying out their products, and fitting them into your own developer worldview. And who knows? Perhaps investigating enough such companies and reflecting on personal experience will reveal to you gaps and opportunities that a new tool or company could fill. You may conclude that your best course of action is not to join any existing dev tools startup, but to create your own—but that is a subject for another post. ## Parting thoughts The past few years have witnessed an inflection point in the market for developer tools. Software development now permeates every sector of the economy and even “non-tech” companies today employ millions of programmers working on ever larger codebases that depend on an ever growing open-source universe. The opportunities awaiting builders of dev tools are immense. So, scratch that itch.
Advance the state of our craft. Become a second-order accelerator of technological progress. The cadence and drumbeat of the global economy moving forward will be “developers, developers, developers”. For makers of developer tools, that means *it’s time to build.* ## Get in touch If you found this post interesting or thought-provoking and would like to chat, please shoot me an email or reach out on social media. I enjoy learning from and helping others, especially those who are working on or would like to work on great dev tools.
true
true
true
null
2024-10-12 00:00:00
2020-12-29 00:00:00
null
null
null
null
null
null
8,070,182
http://kentwilliam.com/articles/saving-time-staying-sane-pros-cons-of-react-js
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
10,271,208
http://www.bbc.com/news/science-environment-34338775
'Something weird' in European car emissions tests, say analysts
Matt McGrath
# 'Something weird' in European car emissions tests, say analysts **Environmental campaigners say they believe that cheating similar to what happened at VW is going on in Europe.** The German car manufacturer admitted to using a "defeat device" to rig emissions from its cars in the US. But green group researchers told BBC News that their analysis of some European diesel cars pointed to a "different sort of defeat device being used". Industry bodies deny there is any deliberate deception in European tests. While there has been widespread indignation at the scale of VW's manipulation in the US, the number of diesel cars being used across the states means the impact on the environment and air quality is likely to be limited. But in Europe it is a different story. Over half the cars purchased in the EU in 2014 were diesel powered. Several European cities, including London and Paris, have had significant issues with nitrogen dioxide, a harmful gas produced as a result of diesel use and linked to increased deaths from heart attacks and asthma. But according to the latest research from Brussels-based Transport & Environment, only one in 10 new diesel cars sold in Europe meets the emissions standards for nitrogen oxides (NOx). "New diesel cars should be achieving 80mg of NOx per kilometre, typically an average diesel is producing five times more than that at the present time," said Greg Archer, a former UK government adviser and now head of clean vehicles at Transport & Environment. ## Extreme lengths? Testing in Europe is said to be more open to manipulation than in the US because the evaluations are carried out by companies paid for by the manufacturers, and they are generally done before the cars go into full production. Many researchers acknowledge there is widespread "gaming" of the system. "There is a widespread appreciation that there has been gaming going on," Prof Alastair Lewis, from the University of York, told BBC News.
"But what VW shows is the extreme lengths to which manufacturers are going, way beyond what a reasonable person would appreciate was an appropriate level of gaming." According to Greg Archer at Transport & Environment, there is much more than gaming the system going on in European tests. "There are car models out there which are 50-60% difference between tests and real world performance," he said. "We think that gaming will give you about a 25% difference, we can't explain how these vehicles are achieving real world performances 50% higher, unless something weird is going on in the way they are being tested, that would point to something similar (to VW), a different sort of defeat device being used." Mr Archer refused to name the manufacturers that he believes are cheating the system in Europe. Motor manufacturers were quick to deny that any organised or deliberate cheating was going on in EU car tests. "The EU operates a fundamentally different system to the US - with all European tests performed in strict conditions as required by EU law and witnessed by a government-approved independent approval agency," said the chief executive of the UK's Society of Motor Manufacturers and Traders, Mike Hawes. "There is no evidence that manufacturers cheat the cycle." Be it gaming the system or outright cheating, researchers including Prof Alastair Lewis say the impact of dodgy data on UK government attempts to clean the air is significant. The UK is currently being prosecuted by the European Commission because of its inability to reduce levels of nitrogen dioxide in many locations to safe levels. "The UK has terrible problems with nitrogen dioxide in city centres but the government can only work with the data provided by manufacturers and that has proved to be highly unreliable," said Prof Lewis. Follow Matt on Twitter.
true
true
true
Environmental campaigners say they believe that manipulation similar to what happened at VW is going on in Europe.
2024-10-12 00:00:00
2015-09-23 00:00:00
https://ichef.bbci.co.uk…4_img_1061-1.jpg
article
bbc.com
BBC News
null
null
6,279,587
http://happymonster.co/2013/08/22/im-going-to-ask-you-to-go-ahead-and-come-in-on-saturday/
null
null
null
true
true
false
null
2024-10-12 00:00:00
null
null
null
null
null
null
null
11,742,389
https://medium.com/desktop-apps/welcome-to-mediumdesk-8f8aa3e90545#.xe8sh2c09
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
18,583,722
https://osvoyager.wordpress.com/2018/11/30/what-makes-beos-and-haiku-unique/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
3,493,387
http://www.evolus.vn/Pencil/Home.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
1,107,536
http://www.nytimes.com/2010/02/07/business/07digi.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
32,101,057
https://www.copypassword.com/
Strong Password Generator
null
Why Copy Password? Because we never store your password in a database, send it across the internet, or risk someone stealing it. The simplicity of this tool is a feature in itself. Whether you are generating temporary login creds, a unique API key or just need a solid random string - we hope you enjoy it as much as we do. Example Use Cases: 1. The most obvious: a secure password to be used for an application login. 2. An API key for a customer. 3. A password to secure a file that you are sharing (pdf, image, etc.). 4. Setting your home network password to something unique. 5. Generating a temporary password for a shared login. 6. A unique identifier for a database row.
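The page does not publish its implementation, but a client-side generator of the kind it describes can be sketched in a few lines of Python; the function name and defaults below are illustrative, not Copy Password's actual code. The key design choice is drawing characters from the `secrets` module (a cryptographically secure RNG) rather than `random`, so the result is suitable for credentials and never leaves the machine.

```python
import secrets
import string

# Hypothetical sketch of client-side generation as the tool describes:
# the password is built locally with a CSPRNG and never sent anywhere.
def generate_password(length=20,
                      alphabet=string.ascii_letters + string.digits + string.punctuation):
    """Return a random password drawn uniformly from `alphabet`."""
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different on every run
```

The same function covers the listed use cases by varying `length` and `alphabet`, e.g. `generate_password(32, string.ascii_lowercase + string.digits)` for a URL-safe API key.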
true
true
true
A strong password generator. Passwords are never stored or sent across the internet.
2024-10-12 00:00:00
2022-01-01 00:00:00
null
null
null
null
null
null
1,042,081
http://www.pcmag.com/article2/0,2817,2357924,00.asp
Teardown Prices Google Nexus One at $175
PCMag Staff January 9
# Teardown Prices Google Nexus One at $175 Google's upcoming Nexus One smartphone costs approximately $175 to build, according to a Friday teardown released by iSuppli. LAS VEGAS - Google's upcoming Nexus One smartphone costs approximately $175 to build, according to a Friday teardown released by iSuppli. Hardware and components for the Nexus One cost $174.15, a total that does not include expenses like manufacturing, software, box contents, accessories, or royalties. Google will sell the Nexus One for $179 with a two-year contract from T-Mobile or unsubsidized for $529. "Items like the durable unibody construction, the blazingly fast Snapdragon baseband processor and the bright and sharp Active-Matrix Organic Light Emitting Diode (AM-OLED) display all have been seen in previous phones, but never before combined into a single design," Kevin Keller, a senior analyst for iSuppli, said in a statement. "This gives the Nexus One the most advanced features of any smartphone ever dissected by iSuppli's teardown analysis service, a remarkable feat given the product's [price] is similar to comparable products introduced during the past year." The phone's unibody design, which means that it is enclosed in a single part, makes it the most "Apple-like" phone on the market, Keller said. That design gives the phone more structural rigidity and protection from the elements, but it also drives up manufacturing costs. Nonetheless, Keller expects other manufacturers to follow this design route. A unique element within the Nexus One is a dual microphone that cancels background noise, achieved via an audio voice processor chip from Audience Semiconductor. iSuppli said the Nexus One is the first phone it has seen with a part from Audience Semiconductor.
Overall, there are 17 components within the Nexus One, the most expensive of which is the Qualcomm baseband processor at $30.50, or 20.4 percent of the cost. The processor, which has a 1 GHz clock speed, is known as the Snapdragon. This is also featured in the Windows Mobile-based Toshiba TG01, but iSuppli found that the Android OS is better able to capitalize on the Snapdragon's fast performance. "This processing muscle also gives the Nexus One some advanced capabilities, most notably high-definition 720p video playback," Keller said. The Samsung OLED display, meanwhile, costs $23.50. The only other phone to feature an OLED screen is Samsung's I7500, though that has a 3.2-inch screen to the Nexus One's 3.7-inch screen. In addition, the 4Gbit memory from Samsung Semiconductor runs $20.40 per phone. Most comparable smartphones include 1Gbit or 2Gbit of DRAM, but 4Gbit are needed with the Nexus One to support the Snapdragon processor, iSuppli said. The Nexus One's $8.50 MicroSD card, however, only holds 4GB, compared to 16GB on the Droid or the iPhone. Google made this choice to keep costs down, though there is the option to switch out the SD card for a larger one. All of the other pieces are under $20, including the Synaptics touchscreen at $17.50, the camera for $12.50, and a $5.25 battery. Earlier this week, a separate iSuppli report said that the Nexus One will help Google better market its Android OS. "iSuppli believes the Nexus One allows Google to demonstrate all the capabilities of its operating system more effectively than other phones that employ customized versions of Android," said Tina Teng, iSuppli senior analyst for wireless communications. "The Nexus One also gives Google direct access to end customers, yielding key information on how users interact with applications and utilize data." In addition, the phone will provide Google with more data on customer usage.
"Wireless carriers have gathered a great deal of useful information about smart-phone usage in recent years," Teng said. "However, all this information relates to data traffic generated during online activities, and doesn't cover offline actions, such as how users interact with applications and how customers make use of information derived from applications. Such information is invaluable for application and user interface developers as they try to create next-generation software and services." Google has the ability to embed an applet into the Nexus One that can send reports on user behavior back to the company's database for analysis. "Android fanatics provide the best forum for a group study on usage patterns, making the Nexus One a potential information goldmine," Teng said.
true
true
true
Google's upcoming Nexus One smartphone costs approximately $175 to build, according to a Friday teardown released by iSuppli.
2024-10-12 00:00:00
2010-01-09 00:00:00
https://www.pcmag.com/im…social-share.png
website
pcmag.com
PCMAG
null
null
2,419,114
http://timesofindia.indiatimes.com/tech/news/internet/Google-to-overhaul-YouTube/articleshow/7897923.cms
Google to overhaul YouTube - Times of India
PTI; Updated Apr 7
HOUSTON: Google's popular video sharing website YouTube will soon be overhauled and turned into a premium content competitor with organised channels and professionally produced video. YouTube is trying to position itself to better handle the age of Internet-connected televisions, a report said, citing people familiar with the matter. YouTube is reorganising its home page around "channels," or topics, such as sports and arts. The website is working to include about 20 "premium channels" that would showcase five to 10 hours of professionally produced, original programming each week. The changes, which reportedly will cost YouTube about USD 100 million, should start to be phased in by the end of the year. YouTube is looking to align itself with the growing trend of Internet-connected television. YouTube designers are working to develop channels that would make it easier for users -- who would be viewing the site on their computers and on their TVs -- to find the content that they want to watch. Analysts say that this is the time when YouTube needs to make changes, as not much has changed since it started. The report says that YouTube will now have categorised channels ("such as arts and sports"), and that premium-grade content is on its way. "YouTube is looking to introduce 20 or so 'premium channels' that would feature five to 10 hours of professionally-produced original programming a week," an insider told WSJ. These changes are slated to roll in by the end of the year, and Google is looking for new recruits to work on the project. And Google isn't simply working on adding organisation to YouTube; meetings with reputable Hollywood talent firms would suggest it's serious about nabbing some famous personnel, be they behind or in front of the camera. YouTube certainly is enough of a household name to command at least some attention, and if it plans to keep and grow its user base and make the type of money it wants off advertising, this is a natural step.
With such massive change, there's always the possibility of isolating passionate YouTubers who have been there since its start, but it's probably worth it for Google to further break into our living rooms.
2011-04-07
Times Of India (indiatimes.com)
http://mvanier.livejournal.com/2897.html?
The Y Combinator (Slight Return)
### The Y Combinator (Slight Return)

or: __How to Succeed at Recursion Without Really Recursing__

> Tiger got to hunt,
> Bird got to fly;
> Lisper got to sit and wonder, (Y (Y Y))?
>
> Tiger got to sleep,
> Bird got to land;
> Lisper got to tell himself he understand.
>
> — Kurt Vonnegut, modified by Darius Bacon

## Introduction

I recently wrote a blog post about the Y combinator. Since then, I've received so many useful comments that I thought it was appropriate to expand the post into a more complete article. This article will go into greater depth on the subject, but I hope it'll be more comprehensible as well. You don't need to have read the previous post to understand this one (in fact, it's probably better if you haven't.) The only background knowledge I require is a tiny knowledge of the Scheme programming language including recursion and first-class functions, which I will review. Comments are (again) welcome.

## Why Y?

Before I get into the details of what Y actually is, I'd like to address the question of why you, as a programmer, should bother to learn about it. To be honest, there aren't a lot of good nuts-and-bolts practical reasons for learning about Y. Even though it does have a few practical applications, for the most part it's mainly of interest to computer language theorists. Nevertheless, I do think it's worth your while to know something about Y for the following reasons: It's one of the most beautiful ideas in all of programming. If you have any sense of programming aesthetics, you're sure to be delighted by Y. It shows in a very stark way how amazingly powerful the simple ideas of functional programming are. In 1959, the British scientist C. P. Snow gave a famous lecture called The Two Cultures where he bemoaned the fact that many intelligent and well-educated people of the time had almost no knowledge of science.
He used knowledge of the Second Law of Thermodynamics as a kind of dividing line between those who were scientifically literate and those who weren't. I think we can similarly use knowledge of the Y combinator as a dividing line between programmers who are "functionally literate" (*i.e.* have a reasonably deep knowledge of functional programming) and those who aren't. There are other topics that could serve just as well as Y (notably monads), but Y will do nicely. So if you aspire to have the True Lambda-Nature, read on. By the way, Paul Graham (the Lisp hacker, Lisp book author, essayist, and now venture capitalist) apparently thinks so highly of Y that he named his startup incubator company Y Combinator. Paul got rich from his knowledge of ideas like these; maybe someone else will too. Maybe even you. ## A puzzle ### Factorials We'll start our exploration of the Y combinator by defining some functions to compute factorials. The factorial of a non-negative integer `n` is the product of all integers starting from `1` and going up to and including `n` . Thus we have: factorial 1 = 1 factorial 2 = 2 * 1 = 2 factorial 3 = 3 * 2 * 1 = 6 factorial 4 = 4 * 3 * 2 * 1 = 24 and so on. (I'm using a function notation without parentheses here, so `factorial 3` is the same as what is usually written as `factorial(3)` . Humor me.) Factorials increase very rapidly with increasing `n` ; the factorial of `20` is `2432902008176640000` . The factorial of `0` is defined to be `1` ; this turns out to be the appropriate definition for the kinds of things factorials are actually used for (like solving problems in combinatorics). ### Recursive definitions of the factorial function It's easy to write a function in a programming language to compute factorials using some kind of a looping control construct like a `while` or `for` loop (*e.g.* in C or Java). 
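For concreteness, the looping approach might look like this in Python (my own illustration -- the article's code is all Scheme):

```python
# Iterative factorial using an explicit loop, the style the article
# contrasts with the recursive definitions that follow.
def factorial(n):
    result = 1
    for i in range(1, n + 1):
        result *= i
    return result

print(factorial(4))   # 24
print(factorial(20))  # 2432902008176640000
```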
However, it's also easy to write a recursive function to compute factorials, because factorials have a very natural recursive definition: factorial 0 = 1 factorial n = n * factorial (n - 1) where the second line applies for all `n` greater than zero. In fact, in the computer language Haskell, that's the way you actually define the factorial function. In Scheme, the language we'll be using here, this function would be written like this: (define (factorial n) (if (= n 0) 1 (* n (factorial (- n 1))))) Scheme uses a parenthesized prefix notation for everything, so something like ` (- n 1) ` represents what is usually written ` n - 1 ` in most programming languages. The reasons for this are beyond the scope of this article, but getting used to this notation isn't very hard. In fact, the above definition of the factorial function in Scheme could also be written in a slightly more explicit way as follows: (define factorial (lambda (n) (if (= n 0) 1 (* n (factorial (- n 1)))))) The keyword `lambda` simply indicates that the thing we're defining (*i.e.* whatever is enclosed by the open parenthesis to the immediate left of the `lambda` and its corresponding close parenthesis) is a function. What comes immediately after the word `lambda` , in parentheses, are the *formal arguments* of the function; here there is just one argument, which is `n` . The *body* of the function comes after the formal arguments, and here consists of the expression ``` (if (= n 0) 1 (* n (factorial (- n 1)))) ``` . This kind of function is an *anonymous function*. Here you do give the anonymous function the name `factorial` after you've defined it, but you don't have to, and often it's handy not to if you're only going to be using it once. In Scheme and some other languages, anonymous functions are also called *lambda expressions*. 
Many programming languages besides Scheme allow you to define anonymous functions, including Python, Ruby, Javascript, Ocaml, and Haskell (but not C, C++, or Java, unfortunately). We'll be using lambda expressions a lot below. In the Scheme language, the definition of `factorial` just given is identical to the one before it; Scheme simply translates the first definition into the second one before evaluating it. So all functions in Scheme are really lambda expressions. Note that the body of the function has a call to the `factorial` function (which we're in the process of defining) inside it, which makes this a recursive definition. I will call this kind of definition, where the name of the function being defined is used in the body of the function, an *explicitly recursive definition*. (You might wonder what an "implicitly recursive" function would be. I'm not going to use that expression, but the notion I have in mind is a recursive function which is generated through non-recursive means — keep reading!) For the sake of argument, we're going to assume that our version of Scheme doesn't have the equivalent of `for` or `while` loops in C or Java (although in fact, real Scheme implementations do have such constructs, but under a different name), so that in order to define a function like `factorial` , we pretty much have to use recursion. Scheme is often used as a teaching language partly for this reason: it forces students to learn to think recursively. ### Functions as data and higher-order functions Scheme is a cool language for many reasons, but one that is relevant to us here is that it allows you to use functions as "first class" data objects (this is often expressed by saying that Scheme supports *first-class functions*). 
This means that in Scheme, we can pass a function to another function as an argument, we can return a function as the result of evaluating another function applied to its arguments, and we can create functions on-the-fly as we need them (using the `lambda` notation shown above). This is the essence of functional programming, and it will feature prominently in the ensuing discussion. Functions which take other functions as arguments, and/or which return other functions as their results, are usually referred to as *higher-order functions*. ### Eliminating explicit recursion Now, here's the puzzle: what if you were asked to define the `factorial` function in Scheme, but were told that you could not use recursive function calls in the definition (for instance, in the `factorial` function given above you cannot use the word `factorial` anywhere in the body of the function). However, you *are* allowed to use first-class functions and higher-order functions any way you see fit. With this knowledge, can you define the `factorial` function? The answer to this question is yes, and it will lead us directly to the Y combinator. ## What the Y combinator is and what it does The Y combinator is a higher-order function. It takes a single argument, which is a function that isn't recursive. It returns a version of the function which is recursive. We will walk through this process of generating recursive functions from non-recursive ones using Y in great detail below, but that's the basic idea. More generally, Y gives us a way to get recursion in a programming language that supports first-class functions but that doesn't have recursion built in to it. So what Y shows us is that such a language already allows us to define recursive functions, even though the language definition itself says nothing about recursion. 
This is a Beautiful Thing: it shows us that functional programming alone can allow us to do things that we would never expect to be able to do (and it's not the only example of this). ### Lazy or strict evaluation? We will be looking at two broad classes of computer languages: those that use *lazy evaluation* and those that use *strict evaluation*. Lazy evaluation means that in order to evaluate an expression in the language, you only evaluate as much of the expression as is needed to get the final result. So (for instance) if there is a part of the expression that doesn't need to get evaluated (because the result will not depend on it) it won't be evaluated. In contrast, strict evaluation means that all parts of an evaluation will be evaluated completely before the value of the expression as a whole is determined (with some necessary exceptions, such as `if` expressions, which have to be lazy to work properly). In practice, lazy evaluation is more general, but strict evaluation is more predictable and often more efficient. Most programming languages use strict evaluation. The programming language Haskell uses lazy evaluation, and this is one of the most interesting things about that language. We will use both kinds of evaluation in what follows. ### One Y combinator or many? Even though we often refer to Y as "the" Y combinator, in actual fact there are an infinite number of Y combinators. We will only be concerned with two of these, one lazy and one strict. We need two Y combinators because the Y combinator we define for lazy languages will not work for strict languages. The lazy Y combinator is often referred to as the *normal-order Y combinator* and the strict one is referred to as the *applicative-order Y combinator*. Basically, *normal-order* is another way of saying "lazy" and *applicative-order* is another way of saying "strict". ### Static or dynamic typing? Another big dividing line in programming languages is between *static typing* and *dynamic typing*. 
A statically-typed language is one where the types of all expressions are determined at compile time, and any type errors cause the compilation to fail. A dynamically-typed language doesn't do any type checking until run time, and if a function is applied to arguments of invalid types (*e.g.* by trying to add together an integer and a string), then an error is reported. Among commonly-used programming languages, C, C++ and Java are statically typed, and Perl, Python and Ruby are dynamically typed. Scheme (the language we'll be using for our examples) is also dynamically typed. (There are also languages that straddle the border between statically-typed and dynamically-typed, but I won't discuss this further.) One often hears static typing referred to as *strong typing* and dynamic typing referred to as *weak typing*, but this is an abuse of terminology. Strong typing simply means that every value in the language has one and only one type, whereas weak typing means that some values can have multiple types. So Scheme, which is dynamically typed, is also strongly typed, while C, which is statically typed, is weakly typed (because you can cast a pointer to one kind of object into a pointer to another type of object without altering the pointer's value). I will only be concerned with strongly typed languages here. It turns out to be much simpler to define the Y combinator in dynamically typed languages, so that's what I'll do. It is possible to define a Y combinator in many statically typed languages, but (at least in the examples I've seen) such definitions usually require some non-obvious type hackery, because the Y combinator itself doesn't have a straightforward static type. That's beyond the scope of this article, so I won't mention it further. ### What a "combinator" is A combinator is just a *lambda expression* with no *free variables*. We saw above what lambda expressions are (they're just anonymous functions), but what's a free variable? It's a variable (*i.e. 
* a name or identifier in the language) which isn't a *bound variable*. Happy now? No? OK, let me explain. A bound variable is simply a variable which is contained inside the body of a lambda expression that has that variable name as one of its arguments. Let's look at some examples of lambda expressions and free and bound variables: `(lambda (x) x)` `(lambda (x) y)` `(lambda (x) (lambda (y) x))` `(lambda (x) (lambda (y) (x y)))` `(x (lambda (y) y))` `((lambda (x) x) y)` Are the variables in the body of these lambda expressions free variables or bound variables? We'll ignore the formal arguments of the lambda expressions, because only variables in the body of the lambda expression can be considered free or bound. As for the other variables, here are the answers: - The `x` in the body of the lambda expression is a bound variable, because the formal argument of the lambda expression is also`x` . This lambda expression has no other variables, therefore it has no free variables, therefore it's a combinator. - The `y` in the lambda body is a free variable. This lambda expression is therefore not a combinator. - Aside from the formal arguments of the lambda expression, there is only one variable, the final `x` , which is a bound variable (it's bound by the formal argument of the outer lambda expression). Therefore, this lambda expression as a whole has no free variables, so this is a combinator. - Aside from the formal arguments of the lambda expression, there are two variables, the final `x` and`y` , both bound variables. This is a combinator. - The entire expression is not a lambda expression, so it's by definition not a combinator. Nevertheless, the `x` is a free variable and the final`y` is a bound variable. - Again, the entire expression isn't a lambda expression (it's a function application), so this isn't a combinator either. The second `x` is a bound variable while the`y` is a free variable. 
When you're wondering if a recursive function like `factorial` : (define factorial (lambda (n) (if (= n 0) 1 (* n (factorial (- n 1)))))) is a combinator, you don't consider the `define` part, so what you're really asking is if (lambda (n) (if (= n 0) 1 (* n (factorial (- n 1))))) is a combinator. Since in this lambda expression, the name `factorial` represents a free variable (the name `factorial` is not a formal argument of the lambda expression), this is not a combinator. This will be important below. In fact, the names `=` , `*` , and `-` are also free variables, so even without the name `factorial` this would not be a combinator (to say nothing of the numbers!). ## Back to the puzzle ### Abstracting out the recursive function call Recall the factorial function we had previously: (define factorial (lambda (n) (if (= n 0) 1 (* n (factorial (- n 1)))))) What we want to do is to come up with a version of this that does the same thing but doesn't have that pesky recursive call to `factorial` in the body of the function. Where do we start? It would be nice if you could save all of the function except for the offending recursive call, and put something else there. That might look like this: (define sort-of-factorial (lambda (n) (if (= n 0) 1 (* n (<???> (- n 1)))))) This still leaves us with the problem of what to put in the place marked `<???>` . It's a tried-and-true principle of functional programming that if you don't know exactly what you want to put somewhere in a piece of code, just abstract it out and make it a parameter of a function. The easiest way to do this is as follows: (define almost-factorial (lambda (f) (lambda (n) (if (= n 0) 1 (* n (f (- n 1))))))) What we've done here is to rename the recursive call to `factorial` to `f` , and to make `f` an argument to a function which we're calling `almost-factorial` . Notice that `almost-factorial` is not at all the factorial function. 
Instead, it's a higher-order function which takes a single argument `f` , which had better be a function (or else ``` (f (- n 1)) ``` won't make sense), and *returns* another function (the `(lambda (n) ...)` part) which (hopefully) will be a factorial function if we choose the right value for `f` . It's important to realize that this trick is not in any way specific to the `factorial` function. We can do exactly the same trick with any recursive function. For instance, consider a recursive function to compute fibonacci numbers. The recursive definition of fibonacci numbers is as follows: fibonacci 0 = 0 fibonacci 1 = 1 fibonacci n = fibonacci (n - 1) + fibonacci (n - 2) (In fact, that's the definition of the fibonacci function in Haskell.) In Scheme, we can write the function this way: (define fibonacci (lambda (n) (cond ((= n 0) 0) ((= n 1) 1) (else (+ (fibonacci (- n 1)) (fibonacci (- n 2))))))) (where `cond` is just a shorthand expression for nested `if` expressions). We can then remove the explicit recursion just like we did for `factorial` : (define almost-fibonacci (lambda (f) (lambda (n) (cond ((= n 0) 0) ((= n 1) 1) (else (+ (f (- n 1)) (f (- n 2)))))))) As you can see, the transformation from a recursive function to a non-recursive `almost-` equivalent function is a purely mechanical one: you rename the name of the recursive function inside the body of the function to `f` and you wrap a ``` (lambda (f) ...) ``` around the body. If you've followed what I just did (never mind *why* I did it; we'll see that later), then congratulations! As Yoda says, you've just taken the first step into a larger world. ### Sneak preview I probably shouldn't do this yet, but I'm going to give you a sneak preview of where we're going. Once we define the Y combinator, we'll be able to define the factorial function using `almost-factorial` as follows: (define factorial (Y almost-factorial)) where `Y` is the Y combinator. 
Note that this definition of `factorial` doesn't have any explicit recursion in it. Similarly, we can define the `fibonacci` function using `almost-fibonacci` in the same way: (define fibonacci (Y almost-fibonacci)) So the Y combinator will give us recursion wherever we need it as long as we have the appropriate `almost-` function available (*i.e.* the non-recursive function derived from the recursive one by abstracting out the recursive function calls). Read on to see what's really going on here and why this will work. ### Recovering `factorial` from `almost-factorial` Let's assume, for the sake of argument, that we already had a working factorial function lying around (recursive or not, we don't care). We'll call that hypothetical factorial function `factorialA` . Now let's consider the following: (define factorialB (almost-factorial factorialA)) Question: does `factorialB` actually compute factorials? To answer this, it's helpful to expand out the definition of `almost-factorial` : (define factorialB ((lambda (f) (lambda (n) (if (= n 0) 1 (* n (f (- n 1)))))) factorialA)) Now, by substituting `factorialA` for `f` inside the body of the lambda expression we get: (define factorialB (lambda (n) (if (= n 0) 1 (* n (factorialA (- n 1)))))) This looks a lot like the recursive factorial function, but it isn't: `factorialA` is not the same function as `factorialB` . So it's a non-recursive function that depends on a hypothetical `factorialA` function to work. Does it actually work? Well, it's pretty obvious that it should work for `n = 0` , since `(factorialB 0)` will just return `1` (the factorial of `0` ). If `n > 0` , then the value of ``` (factorialB n) ``` will be `(* n (factorialA (- n 1)))` . 
Now, we assumed that `factorialA` would correctly compute factorials, so `(factorialA (- n 1))` is the factorial of `n - 1` , and therefore `(* n (factorialA (- n 1)))` is the factorial of `n` (by the definition of factorial), thus proving that `factorialB` computes the factorial function correctly as long as `factorialA` does. So this works. The only problem is that we don't actually have a `factorialA` lying around. Now, if you're really clever, you might be asking yourself whether we can just do this: (define factorialA (almost-factorial factorialA)) The idea is this: let's assume that `factorialA` is a valid factorial function. Then if we pass it as an argument to `almost-factorial` , the resulting function will have to be a valid factorial function, so why not just name that function `factorialA` ? It looks like you've created a perpetual-motion machine (or perhaps I should say a perpetual-calculation machine), and there must be *something* wrong with this definition... mustn't there? In fact, this definition will work fine as long as the Scheme language you're using uses lazy evaluation! Standard Scheme uses strict evaluation, so it won't work (it'll go into an infinite loop). If you use DrScheme as your Scheme interpreter (which you should), then you can use the "lazy Scheme" language level, and the above code will actually work (huzzah!). We'll see why below, but for now I want to stick to standard (strict) Scheme and approach the problem in a slightly different way. Let's define a couple of functions: (define identity (lambda (x) x)) (define factorial0 (almost-factorial identity)) The `identity` function is pretty simple: it takes in a single argument and returns it unchanged (it's also a combinator, as I hope you can tell). We're basically going to use it as a placeholder when we need to pass a function as an argument and we don't know what function we should pass. `factorial0` is more interesting. 
It's a function that can compute *some*, but not *all* factorials. Specifically, it can compute the factorials up to and including the factorial of zero (which means that it can only compute the factorial of zero, but you'll soon see why I describe it this way). Let's verify that:

```
(factorial0 0)
==> ((almost-factorial identity) 0)
==> (((lambda (f)
        (lambda (n)
          (if (= n 0)
              1
              (* n (f (- n 1))))))
      identity)
     0)
==> ((lambda (n)
       (if (= n 0)
           1
           (* n (identity (- n 1)))))
     0)
==> (if (= 0 0) 1 (* 0 (identity (- 0 1))))
==> (if #t 1 (* 0 (identity (- 0 1))))
==> 1
```

OK, so it works. Unfortunately, it won't work for `n > 0`. For instance, if `n = 1` then we'll have (skipping a few obvious steps):

```
(factorial0 1)
==> (* 1 (identity (- 1 1)))
==> (* 1 (identity 0))
==> (* 1 0)
==> 0
```

which is not the correct answer. Now consider this spiffed-up version of `factorial0`:

```
(define factorial1 (almost-factorial factorial0))
```

which is the same thing as:

```
(define factorial1
  (almost-factorial
    (almost-factorial identity)))
```

This will correctly compute the factorials of `0` and `1`, but it will be incorrect for any `n > 1`. Let's verify this as well, again skipping some obvious steps:

```
(factorial1 0)
==> ((almost-factorial factorial0) 0)
==> 1   ; via essentially the same derivation we showed above

(factorial1 1)
==> ((almost-factorial factorial0) 1)
==> (((lambda (f)
        (lambda (n)
          (if (= n 0)
              1
              (* n (f (- n 1))))))
      factorial0)
     1)
==> ((lambda (n)
       (if (= n 0)
           1
           (* n (factorial0 (- n 1)))))
     1)
==> (if (= 1 0) 1 (* 1 (factorial0 (- 1 1))))
==> (if #f 1 (* 1 (factorial0 (- 1 1))))
==> (* 1 (factorial0 (- 1 1)))
==> (* 1 (factorial0 0))
==> (* 1 1)
==> 1
```

which is the correct answer. So `factorial1` can compute factorials for `n = 0` and `n = 1`. You can verify, though, that it won't be correct for `n > 1`.
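These hand evaluations are easy to check mechanically. A sketch in Python (my translation of the Scheme; the names `almost_factorial` and `identity` mirror the article's):

```python
# Each application of almost_factorial handles exactly one more input
# value correctly than the function it was given.
almost_factorial = lambda f: lambda n: 1 if n == 0 else n * f(n - 1)
identity = lambda x: x

factorial0 = almost_factorial(identity)
factorial1 = almost_factorial(factorial0)

print(factorial0(0))  # 1  (correct)
print(factorial0(1))  # 0  (wrong)
print(factorial1(1))  # 1  (correct)
print(factorial1(2))  # 0  (wrong: the chain bottoms out in identity)
```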
We can keep going, and define functions which can compute factorials up to any particular limit: (define factorial2 (almost-factorial factorial1)) (define factorial3 (almost-factorial factorial2)) (define factorial4 (almost-factorial factorial3)) (define factorial5 (almost-factorial factorial4)) etc. `factorial2` will compute correct factorials for inputs between `0` and `2` , `factorial3` will compute correct factorials for inputs between `0` and `3` , and so on. You should be able to verify this for yourself using the above derivations as models, though you probably won't be able to do it in your head (at least, *I* can't do it in my head). One interesting way of looking at this is that `almost-factorial` takes in a crappy factorial function and outputs a factorial function that is slightly less crappy, in that it will handle exactly one extra value of the input correctly. Note that you can again rewrite the definitions of the factorial functions like this: (define factorial0 (almost-factorial identity)) (define factorial1 (almost-factorial (almost-factorial identity))) (define factorial2 (almost-factorial (almost-factorial (almost-factorial identity)))) (define factorial3 (almost-factorial (almost-factorial (almost-factorial (almost-factorial identity))))) (define factorial4 (almost-factorial (almost-factorial (almost-factorial (almost-factorial (almost-factorial identity)))))) (define factorial5 (almost-factorial (almost-factorial (almost-factorial (almost-factorial (almost-factorial (almost-factorial identity))))))) and so on. Again, if you're very clever you might wonder if you could do this: (define factorial-infinity (almost-factorial (almost-factorial (almost-factorial ...)))) where the `...` means that you're repeating the chain of `almost-factorials` an infinite number of times. If you did wonder this, go to the head of the class! Unfortunately, we can't write this out directly, but we can define the equivalent of this. 
Note also that `factorial-infinity` is just the `factorial` function we want: it works on all integers greater than or equal to zero. What we have shown is that if we could define an infinite chain of `almost-factorials` , that would give us the factorial function. Another way of saying this is that the factorial function is the *fixpoint* of `almost-factorial` , which is what I will explain next. ### Fixpoints of functions The notion of a fixpoint should be familiar to anyone who has amused themselves playing with a pocket calculator. You start with `0` and hit the `cos` (cosine) key repeatedly. What you find is that the answer rapidly converges to a number which is (approximately) `0.73908513321516067` ; hitting the `cos` key again doesn't change anything because ``` cos(0.73908513321516067) = 0.73908513321516067 ``` . We say that the number `0.73908513321516067` is a *fixpoint* of the cosine function. The cosine function takes a single input value (a real number) and produces a single output value (also a real number). The fact that the input and output of the function are the same type is what allows you to apply it repeatedly, so that if `x` is a real number, we can calculate what `cos(x)` is, and since that will also be a real number, we can calculate what `cos(cos(x))` is, and then what `cos(cos(cos(x)))` is, and so on. The fixpoint is the value `x` where `cos(x) = x` . Fixpoints don't have to be real numbers. In fact, they can be any type of thing, as long as the function that generates them can take the same type of thing as input as it produces as output. Most importantly for our discussion, fixpoints can be functions. 
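The pocket-calculator experiment is easy to reproduce (an illustration in Python, not part of the original article):

```python
import math

# Repeatedly applying cos from any starting point converges on the
# fixpoint: the value x where cos(x) = x.
x = 0.0
for _ in range(100):
    x = math.cos(x)

print(x)                     # ~0.7390851332151607
print(abs(math.cos(x) - x))  # effectively zero: cos(x) = x
```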
If you have a higher-order function like `almost-factorial` that takes in a function as its input and produces a function as its output (with both input and output functions taking a single integer argument as input and producing a single integer as output), then it should be possible to compute its fixpoint (which will, naturally, be a function which takes a single integer argument as input and produces a single integer as output). That fixpoint function will be the function for which fixpoint-function = (almost-factorial fixpoint-function) By repeatedly substituting the right-hand side of that equation into the `fixpoint-function` on the right, we get: fixpoint-function = (almost-factorial (almost-factorial fixpoint-function)) = (almost-factorial (almost-factorial (almost-factorial fixpoint-function))) = ... = (almost-factorial (almost-factorial (almost-factorial (almost-factorial (almost-factorial ...))))) As we saw above, this will be the factorial function we want. Thus, the fixpoint of `almost-factorial` will be the `factorial` function: factorial = (almost-factorial factorial) = (almost-factorial (almost-factorial (almost-factorial (almost-factorial (almost-factorial ...))))) That's all well and good, but just *knowing* that `factorial` is the fixpoint of `almost-factorial` doesn't tell us how to compute it. Wouldn't it be nice if there was some magical higher-order function that would take as its input a function like `almost-factorial` , and would output its fixpoint function, which in that case would be `factorial` ? Wouldn't that be really freakin' sweet? That function exists, and it's the Y combinator. Y is also known as the *fixpoint combinator*: it takes in a function and returns its fixpoint. ### Eliminating (most) explicit recursion (lazy version) OK, it's time to derive Y. Let's start by specifying what Y does: (Y f) = fixpoint-of-f What do we know about the fixpoint of `f` ? 
We know that (f fixpoint-of-f) = fixpoint-of-f by the definition of what a fixpoint of a function is. Therefore, we have: (Y f) = fixpoint-of-f = (f fixpoint-of-f) and we can substitute `(Y f)` for `fixpoint-of-f` to get: (Y f) = (f (Y f)) Voila! We've just defined Y. If we want it to be expressed as a Scheme function, we would have to write it like this: (define (Y f) (f (Y f))) or, using an explicit `lambda` expression, as: (define Y (lambda (f) (f (Y f)))) However, there are two caveats regarding this definition of Y: It will only work in a lazy language (see below). It is not a combinator, because the `Y` in the body of the definition is a free variable which is only bound once the definition is complete. In other words, we couldn't just take the body of this version of`Y` and plop it in wherever we needed it, because it requires that the name`Y` be defined somewhere. Nevertheless, if you're using lazy Scheme, you can indeed define factorials like this: (define Y (lambda (f) (f (Y f)))) (define almost-factorial (lambda (f) (lambda (n) (if (= n 0) 1 (* n (f (- n 1))))))) (define factorial (Y almost-factorial)) and it will work correctly. What have we accomplished? We originally wanted to be able to define the factorial function without using any explicitly recursive functions at all. We've *almost* done that. Our definition of `Y` is still explicitly recursive. However, we've taken a giant step, because this is the *only* function in our language that needs to be explicitly recursive in order to define recursive functions. With this version of `Y` we can go ahead and define other recursive functions (for instance, defining `fibonacci` as ``` (Y almost-fibonacci) ``` ). ### Eliminating (most) explicit recursion (strict version) I said above that the definition of Y that we derived wouldn't work in a strict language (like standard Scheme). 
In a strict language, we evaluate all the arguments to a function call before applying the function to its arguments, whether or not those arguments are needed. So if we have a function `f` and we try to evaluate `(Y f)` using the above definition, we get: (Y f) = (f (Y f)) = (f (f (Y f))) = (f (f (f (Y f)))) etc. and so on ad infinitum. The evaluation of `(Y f)` will never terminate, so we will never get a usable function out of it. This definition of Y doesn't work for strict languages. However, there is a clever hack that we can use to save the day and define a version of Y that works in strict languages. The trick is to realize that `(Y f)` is going to become a function of one argument. Therefore, this equality will hold: (Y f) = (lambda (x) ((Y f) x)) Whatever one-argument function `(Y f)` is, ``` (lambda (x) ((Y f) x)) ``` has to be the same function. All you're doing is taking in a single input value `x` and giving it to the function defined by `(Y f)` . In a similar way, this will be true: cos = (lambda (x) (cos x)) It doesn't matter whether you use `cos` or ``` (lambda (x) (cos x)) ``` as your cosine function; they will both do the same thing. However, it turns out that `(lambda (x) ((Y f) x))` has a big advantage when defining Y in a strict language. By the reasoning given above, we should be able to define Y as follows: (define Y (lambda (f) (f (lambda (x) ((Y f) x))))) Since we know that `(lambda (x) ((Y f) x))` is the same function as `(Y f)` , this is a valid version of Y which will work just as well as the previous version, even though it's a bit more complicated (and perhaps a tiny bit slower in practice). We could use this version of Y to define the `factorial` function in lazy Scheme, and it would work fine. The cool thing about *this* version of Y is that it will also work in a strict language (like standard Scheme)! 
The reason for this is that when you give Y a particular `f` to find the fixpoint of, it will return (Y f) = (f (lambda (x) ((Y f) x))) This time, there is no infinite loop, because the inner `(Y f)` is kept inside a `lambda` expression, where it sits until it's needed (since the body of a lambda expression is never evaluated in Scheme until the lambda expression is applied to its arguments). Basically, you're using the lambda to delay the evaluation of `(Y f)` . So if `f` was `almost-factorial` , we would have this: (define almost-factorial (lambda (f) (lambda (n) (if (= n 0) 1 (* n (f (- n 1))))))) (define factorial (Y almost-factorial)) Expanding out the call to Y, we have: (define factorial ((lambda (f) (f (lambda (x) ((Y f) x)))) almost-factorial)) ==> (define factorial (almost-factorial (lambda (x) ((Y almost-factorial) x)))) ==> (define factorial (lambda (n) (if (= n 0) 1 (* n ((lambda (x) ((Y almost-factorial) x)) (- n 1)))))) Here again, `(lambda (x) ((Y almost-factorial) x))` is the same function as `(Y almost-factorial)` , which is the fixpoint of `almost-factorial` , which is just the factorial function. However, the `(Y almost-factorial)` in ``` (lambda (x) ((Y almost-factorial) x)) ``` won't be evaluated until the entire lambda expression is applied to its argument, which won't happen until later (or not at all, for the factorial of zero). Therefore this factorial function will work in a strict language, and the version of Y used to define it will also work in a strict language. I realize that the preceding discussion and derivation is nontrivial, so don't be discouraged if you don't get it right away. Just sleep on it, play with it in your mind and with your trusty DrScheme interpreter, and you'll eventually get it. At this point, we've accomplished everything we've set out to accomplish, except for one tiny little detail: we haven't yet derived the Y combinator itself. 
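As an aside, you can try the same delayed-evaluation trick in Python, which, like standard Scheme, is strict. This is my own translation, a sketch rather than anything from the original derivation; the names are illustrative:

```python
def almost_factorial(f):
    # Non-recursive: computes factorial assuming f handles the smaller case.
    return lambda n: 1 if n == 0 else n * f(n - 1)

def Y(f):
    # The lambda wrapper delays evaluating Y(f) until the resulting
    # function is actually applied to an argument, so a strict language
    # does not loop forever here.
    return f(lambda x: Y(f)(x))

factorial = Y(almost_factorial)
print(factorial(5))  # prints 120
```

Note that this `Y` is still explicitly recursive (it mentions its own name in its body), exactly like the Scheme version at this stage of the derivation.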
## Deriving the Y combinator

### The lazy (normal-order) Y combinator

At this point, we want to define not just Y, but a Y *combinator*. Note that the previous (lazy) definition of Y: (define Y (lambda (f) (f (Y f)))) is a valid definition of Y but is not a Y combinator, since the definition of Y refers to Y itself. In other words, this definition is explicitly recursive. A combinator isn't allowed to be explicitly recursive; it has to be a lambda expression with no free variables (as I mentioned above), which means that it can't refer to its own name (if it even has a name) in its definition. If it did, the name would be a free variable in the definition, as we have in our definition of Y: (lambda (f) (f (Y f))) Note that Y in this definition is free; it isn't the bound variable of any lambda expression. So this is not a combinator. Another way to think about this is that you should be able to replace the name of a combinator with its definition everywhere it's found and have everything still work. (Can you see why this wouldn't work with the explicitly recursive definition of Y? You would get into an infinite loop and you'd never be able to replace all the Ys with their definitions.) So whatever the Y combinator will be, it will not be explicitly recursive. From this non-recursive function we will be able to define whatever recursive functions we want. I'm going to go back a bit to our original problem and derive a Y combinator from the bottom up. After I've done that I'll check to make sure that it is a fixpoint combinator, like the versions of Y we've already seen. In what follows I will borrow (steal) liberally from a very elegant derivation of the Y combinator sent to me by Eli Barzilay (thanks, Eli!), who is one of the DrScheme developers and an all-around Scheme uberstud.
Recall our original recursive `factorial` function: (define (factorial n) (if (= n 0) 1 (* n (factorial (- n 1))))) Recall that we want to define a version of this without the explicit recursion. One way we could do this is to pass the factorial function itself as an extra argument when you call the function: ;; This won't work yet: (define (part-factorial self n) (if (= n 0) 1 (* n (self (- n 1))))) Note that `part-factorial` is not the same as the `almost-factorial` function described above. We would have to call this `part-factorial` function in a different way to get it to compute factorials: (part-factorial part-factorial 5) ==> 120 This is not explicitly recursive because we send along an extra copy of the `part-factorial` function as the `self` argument. However, it won't work unless the point of recursion calls the function the exact same way: (define (part-factorial self n) (if (= n 0) 1 (* n (self self (- n 1))))) ;; note the extra "self" here (part-factorial part-factorial 5) ==> 120 This works, but now we have moved away from our original way of calling the factorial function. We can move back to something closer to our original version by rewriting it like this: (define (part-factorial self) (lambda (n) (if (= n 0) 1 (* n ((self self) (- n 1)))))) ((part-factorial part-factorial) 5) ==> 120 (define factorial (part-factorial part-factorial)) (factorial 5) ==> 120 Pause for a second here. Notice that we've *already* defined a version of the factorial function without using explicit recursion anywhere! This is the most crucial step. Everything else we do will be concerned with packaging what we've already done so that we can easily re-use it with other functions. 
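This crucial step doesn't depend on anything Scheme-specific. As an illustrative aside (my own translation, not part of the original derivation), here is the same self-application trick in Python, another strict language:

```python
def part_factorial(self):
    # self is expected to be part_factorial itself, so self(self) at the
    # point of recursion rebuilds the factorial-computing function on demand.
    return lambda n: 1 if n == 0 else n * self(self)(n - 1)

factorial = part_factorial(part_factorial)
print(factorial(5))  # prints 120
```

There is no explicitly recursive function here either: `part_factorial` never calls itself by name; it only calls whatever function it was handed.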
Now let's try to get back something like our `almost-factorial` function by pulling out the `(self self)` call using a `let` expression outside of a `lambda` : (define (part-factorial self) (let ((f (self self))) (lambda (n) (if (= n 0) 1 (* n (f (- n 1))))))) (define factorial (part-factorial part-factorial)) (factorial 5) ==> 120 This will work fine in a lazy language. In a strict language, the `(self self)` call in the `let` statement will send us into an infinite loop, because in order to calculate ``` (part-factorial part-factorial) ``` (in the definition of `factorial` ) you will first have to calculate `(part-factorial part-factorial)` (in the `let` expression). (For fun: figure out why this wasn't a problem with the previous definition.) I'll let this go for now, because I want to define the lazy Y combinator, but in the next section I'll solve this problem in the same way we solved it before (by wrapping a `lambda` around the `(self self)` call). Note that in a lazy language, the `(self self)` call in the `let` statement will never be evaluated unless `f` is actually needed (for instance, if ``` n = 0 ``` then `f` isn't needed to compute the answer, so `(self self)` won't be evaluated). Understanding how lazy languages evaluate expressions is not trivial, so don't worry if you find this a little confusing. I recommend you experiment with the code using the lazy Scheme language level of DrScheme to get a better feel for what's going on. It turns out that any `let` expression can be converted into an equivalent `lambda` expression using this equation: (let ((x <expr1>)) <expr2>) ==> ((lambda (x) <expr2>) <expr1>) where `<expr1>` and `<expr2>` are arbitrary Scheme expressions. (I'm only considering `let` expressions with a single binding and `lambda` expressions with a single argument, but the principle can easily be generalized to `lets` with multiple bindings and `lambdas` with multiple arguments.) 
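(A quick aside for readers more comfortable with Python: Python has no `let`, but the equivalence is easy to see there too, since a `let` is just a `lambda` applied immediately to the bound expression. The names below are illustrative only.)

```python
# Scheme:  (let ((x (+ 2 3))) (* x x))
# becomes: ((lambda (x) (* x x)) (+ 2 3))
result = (lambda x: x * x)(2 + 3)
print(result)  # prints 25
```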
This leads us to: (define (part-factorial self) ((lambda (f) (lambda (n) (if (= n 0) 1 (* n (f (- n 1)))))) (self self))) (define factorial (part-factorial part-factorial)) (factorial 5) ==> 120 If you look closely, you'll see that we have our old friend the `almost-factorial` function embedded inside the `part-factorial` function. Let's pull it outside: (define almost-factorial (lambda (f) (lambda (n) (if (= n 0) 1 (* n (f (- n 1))))))) (define (part-factorial self) (almost-factorial (self self))) (define factorial (part-factorial part-factorial)) (factorial 5) ==> 120 I don't know about you, but I'm getting pretty fed up with this whole `(part-factorial part-factorial)` thing, and I'm not going to take it anymore! Fortunately, I don't have to; I can first rewrite the `part-factorial` function like this: (define part-factorial (lambda (self) (almost-factorial (self self)))) Then I can rewrite the `factorial` function like this: (define almost-factorial (lambda (f) (lambda (n) (if (= n 0) 1 (* n (f (- n 1))))))) (define factorial (let ((part-factorial (lambda (self) (almost-factorial (self self))))) (part-factorial part-factorial))) (factorial 5) ==> 120 The `factorial` function can be written a little more concisely by changing the name of `part-factorial` to `x` (since we aren't using this name anywhere else now): (define factorial (let ((x (lambda (self) (almost-factorial (self self))))) (x x))) Now let's use the same `let ==> lambda` trick we used above to get: (define almost-factorial (lambda (f) (lambda (n) (if (= n 0) 1 (* n (f (- n 1))))))) (define factorial ((lambda (x) (x x)) (lambda (self) (almost-factorial (self self))))) (factorial 5) ==> 120 And again, to make this definition a little more concise, we can rename `self` to `x` to get: (define almost-factorial (lambda (f) (lambda (n) (if (= n 0) 1 (* n (f (- n 1))))))) (define factorial ((lambda (x) (x x)) (lambda (x) (almost-factorial (x x))))) (factorial 5) ==> 120 Note that the two `lambda` 
expressions in the definition of `factorial` both are functions of `x` , but the two `x` 's don't conflict with each other. In fact, we could have renamed `self` to `y` or almost any other name, but it'll be convenient to use `x` in what follows. We're almost there! This works fine, but it's too specific to the `factorial` function. Let's change it to a generic `make-recursive` function that makes recursive functions from non-recursive ones (sound familiar?): (define almost-factorial (lambda (f) (lambda (n) (if (= n 0) 1 (* n (f (- n 1))))))) (define (make-recursive f) ((lambda (x) (x x)) (lambda (x) (f (x x))))) (define factorial (make-recursive almost-factorial)) (factorial 5) ==> 120 The `make-recursive` function is in fact the long-sought lazy Y combinator, also known as the *normal-order Y combinator*, so let's write it that way: (define almost-factorial (lambda (f) (lambda (n) (if (= n 0) 1 (* n (f (- n 1))))))) (define (Y f) ((lambda (x) (x x)) (lambda (x) (f (x x))))) (define factorial (Y almost-factorial)) I'm going to expand out the definition of Y a little bit: (define Y (lambda (f) ((lambda (x) (x x)) (lambda (x) (f (x x)))))) Note that we can apply the inner `lambda` expression to its argument to get an equivalent version of Y: (define Y (lambda (f) ((lambda (x) (f (x x))) (lambda (x) (f (x x)))))) What this means is that, for a given function `f` (which is a non-recursive function like `almost-factorial` ), the corresponding recursive function can be obtained first by computing `(lambda (x) (f (x x)))` , and then applying this `lambda` expression to itself. This is the usual definition of the normal-order Y combinator. The only thing left to do is to check that this Y combinator is a fixpoint combinator (which it has to be in order to compute the right thing). 
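(An aside you can verify empirically: transcribing this normal-order Y literally into a strict language such as Python shows why it needs lazy evaluation; the self-application `(x x)` is evaluated eagerly and never terminates. This sketch is my own, not part of the derivation.)

```python
def almost_factorial(f):
    return lambda n: 1 if n == 0 else n * f(n - 1)

# The normal-order Y combinator, transcribed literally:
# (lambda (x) (f (x x))) applied to itself.
Y = lambda f: (lambda x: f(x(x)))(lambda x: f(x(x)))

try:
    Y(almost_factorial)  # x(x) is evaluated eagerly, forever
    print("unreachable in a strict language")
except RecursionError:
    print("x(x) diverged: the normal-order Y needs lazy evaluation")
```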
To do this we have to demonstrate that this equation is correct: (Y f) = (f (Y f)) From the definition of the normal-order Y combinator given above, we have: (Y f) = ((lambda (x) (f (x x))) (lambda (x) (f (x x)))) Now apply the first lambda expression to its argument, which is the second lambda expression, to get this: = (f ((lambda (x) (f (x x))) (lambda (x) (f (x x))))) = (f (Y f)) as desired. So, not only is the normal-order Y combinator also a fixpoint combinator, it's just about the most obvious fixpoint combinator there is, in that the proof that it's a fixpoint combinator is so trivial. If you've made it through all of this derivation, you should pat yourself on the back and take a well-deserved break. When you come back, we'll finish off by deriving...

### The strict (applicative-order) Y combinator

Let's pick up the previous derivation just before the point where it failed for strict languages: (define (part-factorial self) (lambda (n) (if (= n 0) 1 (* n ((self self) (- n 1)))))) ((part-factorial part-factorial) 5) ==> 120 (define factorial (part-factorial part-factorial)) (factorial 5) ==> 120 Up to this point, everything works in a strict language. Now if we pull the `(self self)` out into a `let` expression as before, we have: (define (part-factorial self) (let ((f (self self))) (lambda (n) (if (= n 0) 1 (* n (f (- n 1))))))) (define factorial (part-factorial part-factorial)) (factorial 5) ==> 120 As I said above, this will not work in a strict language, because whenever the `factorial` function is called it will evaluate the function call `(part-factorial part-factorial)`, and when that function call is evaluated it will first evaluate `(self self)` as part of the `let` expression, which in this case will be `(part-factorial part-factorial)`, leading to an infinite loop of `(part-factorial part-factorial)` calls.
We saw above that the way around problems like this is to realize that what we are trying to evaluate are functions of one argument. In this case, `(self self)` will be a function of one argument (it's going to be the same as `(part-factorial part-factorial)`, which is just the `factorial` function). We can wrap a lambda expression around this function to get an equivalent function: (define (part-factorial self) (let ((f (lambda (y) ((self self) y)))) (lambda (n) (if (= n 0) 1 (* n (f (- n 1))))))) All we've done here is convert `(self self)`, a function of one argument, to `(lambda (y) ((self self) y))`, an equivalent function of one argument (we saw this trick earlier). I'm using `y` instead of `x` as the variable binding of the new lambda expression so as not to cause name conflicts later on in the derivation when `self` gets renamed to `x`, but I could have chosen another name as well. After we've done this, the `part-factorial` function will now work even in a strict language. That's because once `(part-factorial part-factorial)` is evaluated, as part of evaluating the `let` expression the code `(lambda (y) ((self self) y))` will be evaluated. Unlike before, this will *not* send us into an infinite loop; the lambda expression won't be evaluated further until it's applied to its argument. This lambda wrapper doesn't change the value of the thing it wraps, but it does delay its evaluation, which is all we need to get the definition of `part-factorial` to work in a strict language. And that's the trick. After that, we carry through every other step of the derivation in exactly the same way. We end up with this definition of the strict Y combinator: (define Y (lambda (f) ((lambda (x) (f (lambda (y) ((x x) y)))) (lambda (x) (f (lambda (y) ((x x) y))))))) This can also be written in the equivalent form: (define Y (lambda (f) ((lambda (x) (x x)) (lambda (x) (f (lambda (y) ((x x) y))))))) Hopefully, you can see why this is equivalent.
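As a sanity check that the eta-expansion trick really does tame strict evaluation, here is the applicative-order Y combinator transcribed into Python (my own sketch; Python, like standard Scheme, is strict):

```python
def almost_factorial(f):
    return lambda n: 1 if n == 0 else n * f(n - 1)

# Applicative-order Y: each inner (x x) is wrapped in (lambda (y) ((x x) y))
# to delay its evaluation until the argument y actually arrives.
Y = (lambda f:
     (lambda x: f(lambda y: x(x)(y)))
     (lambda x: f(lambda y: x(x)(y))))

factorial = Y(almost_factorial)
print(factorial(5))  # prints 120
```

Unlike the literal normal-order transcription, this terminates: evaluating `Y(almost_factorial)` only ever evaluates `x(x)` inside a lambda body, which sits unevaluated until someone supplies a `y`.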
Either of these is the strict Y combinator, or as it's called in the technical literature, the *applicative-order Y combinator*. In a strict language (like standard Scheme) you can use this to define the factorial function in the usual way: (define factorial (Y almost-factorial)) I recommend you try this out with DrScheme, and lo! marvel at the awesome power of the applicative-order Y combinator, that which hath created recursion where no recursion hath previously existed.

## Other matters

### Practical applications

This article has (I hope) convinced you that you don't need to have explicit recursion built in to a language in order for that language to allow you to define recursive functions, as long as the language supports first-class functions so that you can define a Y combinator. However, I don't want to leave you with the notion that recursion in real computer languages is implemented this way. In practice, it's far more efficient to just implement recursion directly in a computer language than to use the Y combinator. There are lots of other interesting issues that come up when considering how to implement recursion efficiently, but those issues are beyond the scope of this article. The point is that implementing recursion using the Y combinator is mainly of theoretical interest. That said, in the paper Y in Practical Programs, Bruce McAdams discusses a few ways in which Y can be used to define variants of recursive functions that *e.g.* print traces of their execution or automatically memoize their execution to give greater efficiency (as well as a few more esoteric applications), so Y isn't *just* a theoretical construct.

### Mutual Recursion

Experienced functional programmers and/or particularly astute readers may have noticed that I didn't describe how to use the Y combinator to implement *mutual recursion*, which is where you have two or more functions which all call each other.
The simplest example I can think of to illustrate mutual recursion is the following pair of functions, which determine whether a non-negative integer is even or odd: (define (even? n) (if (= n 0) #t (odd? (- n 1)))) (define (odd? n) (if (= n 0) #f (even? (- n 1)))) Before you start yelling at me, yes, I know that this isn't the most efficient way to compute evenness or oddness — it's just to illustrate what mutual recursion is. Any computer language that supports recursive function definitions has to support mutual recursion as well, but I haven't shown you how to use Y to define mutually-recursive functions. I'm going to cop out here because I think this article is long enough as it is, but rest assured that it is possible to define analogs of Y that can define mutually-recursive functions.

## Further reading

The Wikipedia article on the Y combinator is somewhat difficult reading, but it has some interesting material I didn't cover here. The Little Schemer, 4th ed., by Dan Friedman and Matthias Felleisen: chapter 9 has a derivation of the Y combinator, which is what got me interested in this subject. The article Y in Practical Programs, by Bruce McAdams, which was referred to in the previous section.

## Acknowledgments

I would like to thank the following people: everyone who commented on my first blog post on the Y combinator, and also everyone who comments on this article; Eli Barzilay, for a very interesting email discussion on this subject (the derivation of the normal-order Y combinator is taken directly from Eli, with permission); my friend Darius Bacon for the poem (I'd also like to apologize to the estate of Kurt Vonnegut for abusing his work; the original poem appeared in Vonnegut's brilliant novel Cat's Cradle, and if you haven't read it, you should do so as soon as possible); all the DrScheme implementors for giving me a terrific tool with which to explore this subject; and the authors of the book The Little Schemer, Dan Friedman and Matthias Felleisen.
This article is (in my mind at least) a massive expansion of chapter 9 of their book.
true
true
true
or: How to Succeed at Recursion Without Really Recursing Tiger got to hunt, Bird got to fly; Lisper got to sit and wonder, (Y (Y Y))? Tiger got to sleep, Bird got to land; Lisper got to tell himself he understand. Kurt Vonnegut, modified by Darius Bacon Introduction…
2024-10-12 00:00:00
2016-05-24 00:00:00
https://l-stat.livejourn…net/img/sign.png
article
livejournal.com
mvanier.livejournal.com
null
null
16,151,908
https://github.com/r-raymond/nixos-mailserver
GitHub - r-raymond/nixos-mailserver: A complete and Simple Nixos Mailserver
R-Raymond
**THIS REPO IS NOT UPDATED ANYMORE, IT HAS BEEN MOVED TO GITLAB. PLEASE DON'T OPEN ANY MORE PR'S OR ISSUES ON GITHUB** Subscribe to SNM Announcement List This is a very low volume list where new releases of SNM are announced, so you can stay up to date with bug fixes and updates. All announcements are signed by the gpg key with fingerprint ``` D9FE 4119 F082 6F15 93BD BD36 6162 DBA5 635E A16A ``` - Continous Integration Testing - Multiple Domains - Postfix MTA - smtp on port 25 - submission port 587 - lmtp with dovecot - Dovecot - maildir folders - imap starttls on port 143 - pop3 starttls on port 110 - Certificates - manual certificates - on the fly creation - Let's Encrypt - Spam Filtering - via rspamd - Virus Scanning - via clamav - DKIM Signing - via opendkim - User Management - declarative user management - declarative password management - Sieves - A simple standard script that moves spam - Allow user defined sieve scripts - ManageSieve support - User Aliases - Regular aliases - Catch all aliases - DKIM Signing - Allow a per domain selector See the mailing list archive ``` { config, pkgs, ... }: { imports = [ (builtins.fetchTarball "https://github.com/r-raymond/nixos-mailserver/archive/v2.1.4.tar.gz") ]; mailserver = { enable = true; fqdn = "mail.example.com"; domains = [ "example.com" "example2.com" ]; loginAccounts = { "[email protected]" = { hashedPassword = "$6$/z4n8AQl6K$kiOkBTWlZfBd7PvF5GsJ8PmPgdZsFGN1jPGZufxxr60PoR0oUsrvzm2oQiflyz5ir9fFJ.d/zKm/NgLXNUsNX/"; aliases = [ "[email protected]" "[email protected]" "[email protected]" ]; }; }; }; } ``` For a complete list of options, see `default.nix` . Check out the Complete Setup Guide in the project's wiki. Checkout the Complete Backup Guide. Backups are easy with `SNM` . See the How to Develop SNM wiki page. See the contributor tab - send mail graphic by tnp_dreamingmao from TheNounProject is licensed under CC BY 3.0 - Logo made with Logomakr.com
true
true
true
A complete and Simple Nixos Mailserver. Contribute to r-raymond/nixos-mailserver development by creating an account on GitHub.
2024-10-12 00:00:00
2016-07-21 00:00:00
https://opengraph.githubassets.com/c237471ad09f441e1629d29466d36917827f531d60b8be01abe7a449d2a39a54/r-raymond/nixos-mailserver
object
github.com
GitHub
null
null
6,951,244
http://www.the-free-foundation.org/tst8-12-2013.html
Why Are We At War In Yemen?
null
****Please note: This is the temporary home for my weekly column until my personal web page is up and running.**** # Why Are We At War In Yemen? Most Americans are probably unaware that over the past two weeks the US has launched at least eight drone attacks in Yemen, in which dozens have been killed. It is the largest US escalation of attacks on Yemen in more than a decade. The US claims that everyone killed was a “suspected militant,” but Yemeni citizens have for a long time been outraged over the number of civilians killed in such strikes. The media has reported that of all those killed in these recent US strikes, only one of the dead was on the terrorist “most wanted” list. This significant escalation of US attacks on Yemen coincides with Yemeni President Hadi’s meeting with President Obama in Washington earlier this month. Hadi was installed into power with the help of the US government after a 2011 coup against its long-time ruler, President Saleh. It is in his interest to have the US behind him, as his popularity is very low in Yemen and he faces the constant threat of another coup. In Washington, President Obama praised the cooperation of President Hadi in fighting the Yemen-based al-Qaeda in the Arabian Peninsula. This was just before the US Administration announced that a huge unspecified threat was forcing the closure of nearly two dozen embassies in the area, including in Yemen. According to the Administration, the embassy closings were prompted by an NSA-intercepted conference call at which some 20 al-Qaeda leaders discussed attacking the West. Many remain skeptical about this dramatic claim, which was made just as some in Congress were urging greater scrutiny of NSA domestic spying programs. The US has been involved in Yemen for some time, and the US presence in Yemen is much greater than we are led to believe. 
As the Wall Street Journal reported last week: “At the heart of the U.S.-Yemeni cooperation is a joint command center in Yemen, where officials from the two countries evaluate intelligence gathered by America and other allies, such as Saudi Arabia, say U.S. and Yemeni officials. There, they decide when and how to launch missile strikes against the highly secretive list of alleged al Qaeda operatives approved by the White House for targeted killing, these people say.” Far from solving the problem of extremists in Yemen, however, this US presence in the country seems to be creating more extremism. According to professor Gregory Johnson of Princeton University, an expert on Yemen, the civilian “collateral damage” from US drone strikes on al-Qaeda members actually attracts more al-Qaeda recruits: “There are strikes that kill civilians. There are strikes that kill women and children. And when you kill people in Yemen, these are people who have families. They have clans. And they have tribes. And what we're seeing is that the United States might target a particular individual because they see him as a member of al-Qaeda. But what's happening on the ground is that he's being defended as a tribesman.” The US government is clearly at war in Yemen. It is claimed they are fighting al-Qaeda, but the drone strikes are creating as many or more al-Qaeda members as they are eliminating. Resentment over civilian casualties is building up the danger of blowback, which is a legitimate threat to us that is unfortunately largely ignored. Also, the US is sending mixed signals by attacking al-Qaeda in Yemen while supporting al-Qaeda linked rebels fighting in Syria. This cycle of intervention producing problems that require more intervention to “solve” impoverishes us and makes us more, not less, vulnerable. Can anyone claim this old approach is successful? Has it produced one bit of stability in the region? Does it have one success story? There is an alternative. 
It is called non-interventionism. We should try it. The first step would be pulling out of Yemen.

*Permission to reprint in whole or in part is gladly granted, provided full credit is given.*
true
true
true
Why Are We At War In Yemen?
2024-10-12 00:00:00
2013-08-12 00:00:00
null
null
null
null
null
null
4,633,840
http://posterous.nickbaum.com/a-new-curriculum
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
17,543,730
http://www.pnas.org/content/early/2018/07/05/1716613115
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
959,262
http://www.catonmat.net/blog/mit-linear-algebra-part-one/
MIT Linear Algebra, Lecture 1: The Geometry of Linear Equations
null
This is going to be my summary of the Linear Algebra course from MIT. I watched the lectures of this course in the summer of last year. This was not the first time I had studied linear algebra; I had already read a couple of books and a few tutorials a couple of years ago, but it was not enough for a curious mind like mine.

The reason I am posting these mathematics lectures on my programming blog is that I believe that if you want to be a great programmer and work on the most exciting and world-changing problems, then you have to know linear algebra. Larry and Sergey wouldn't have created Google if they didn't know linear algebra. Take a look at the publication "Linear Algebra Behind Google" if you don't believe me. Linear algebra also has hundreds of other computational applications: data coding and compression, pattern recognition, machine learning, image processing, and computer simulations, to name a few.

The course contains 35 lectures. Each lecture is 40 minutes long, but you can speed them up and watch one in 20 minutes. The course is taught by Gilbert Strang. He's one of the world's leading experts in linear algebra and its applications and helped develop the Matlab mathematics software. The course is based on working out a lot of examples; there are almost no theorems or proofs. The textbook used in this course is Introduction to Linear Algebra by Gilbert Strang.

I'll write the summary in the same style as I did my summary of MIT Introduction to Algorithms. I'll split the whole course into 30 or so posts; each post will contain one or more lectures together with my comments, my scanned notes, embedded video lectures, and a timeline of the topics in the lecture. But, so as not to flood my blog with just mathematics, I will write one or two programming posts in between. You should subscribe to my posts through RSS here.

The whole course is available at MIT's OpenCourseWare: Course 18.06, Linear Algebra. I'll review the first lecture today.
## Lecture 1: The Geometry of Linear Equations

The first lecture starts with Gilbert Strang stating the **fundamental problem of linear algebra**, which is to solve systems of linear equations. He proceeds with an example. The example is a system of two equations in two unknowns:

2x - y = 0
-x + 2y = 3

There are three ways to look at this system. The first is to look at it a row at a time, the second is to look at it a column at a time, and the third is to use the matrix form.

If we look at the system a row at a time, we have two independent equations, 2x - y = 0 and -x + 2y = 3. These are both line equations. If we plot them, we get the row picture. The row picture shows the two lines meeting at a single point (x=1, y=2). That's the solution of the system of equations. If the lines didn't intersect, there would have been no solution.

Now let's look at the columns. The column at the x's is (2, -1), the column at the y's is (-1, 2), and the column on the right-hand side is (0, 3). We can write the system as follows:

x(2, -1) + y(-1, 2) = (0, 3)

This is a linear combination of columns. What this tells us is that we have to combine the right amounts of the vector (2, -1) and the vector (-1, 2) to produce the vector (0, 3). We can plot the vectors in the column picture: if we take one green x vector and two blue y vectors, we get the red result vector. Therefore the solution is again (x=1, y=2).

The third way is to look at this system entirely through matrices, using the matrix form of the equations. The matrix form in general is

A**x** = **b**

where A is the coefficient matrix, **x** is the unknown vector, and **b** is the right-hand side vector. How to solve equations written in matrix form will be discussed in the next lectures, but I can tell you beforehand that the method is called Gauss elimination with back substitution.
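The 2×2 example can also be checked numerically. The lecture itself uses no code, so the following NumPy sketch is my own addition: it solves the matrix form A**x** = **b** directly, then confirms the column picture by rebuilding **b** as a linear combination of the columns of A.

```python
import numpy as np

# The lecture's 2x2 example:
#    2x -  y = 0
#   -x  + 2y = 3
A = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])
b = np.array([0.0, 3.0])

# Matrix form: solve Ax = b directly.
x = np.linalg.solve(A, b)
print(x)  # [1. 2.]

# Column picture: 1 * (2, -1) + 2 * (-1, 2) = (0, 3),
# i.e. the solution gives the combination of columns that equals b.
combo = x[0] * A[:, 0] + x[1] * A[:, 1]
print(np.allclose(combo, b))  # True
```

If the two lines were parallel (a singular A), `np.linalg.solve` would raise `LinAlgError` instead of returning a point, which matches the "no intersection, no solution" case in the row picture.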
For this system of two equations in two unknowns, the matrix equation A**x** = **b** looks like this:

[ 2 -1 ] [x]   [0]
[-1  2 ] [y] = [3]

The next example in the lecture is a system of three equations in three unknowns:

2x - y = 0
-x + 2y - z = -1
-3y + 4z = 4

We can no longer plot it in two dimensions because there are three unknowns; this is going to be a 3D plot. Since the equations are linear in the unknowns x, y, z, we get three planes intersecting at a single point (if there is a solution).

In the row picture of 3 equations in 3 unknowns, the red plane is 2x - y = 0, the green is -x + 2y - z = -1, and the blue is -3y + 4z = 4. Notice how difficult it is to spot the point of intersection? Almost impossible! And all this from going just one dimension higher. Imagine what happens if we go to 4 or more dimensions. (The intersection is at (x=0, y=0, z=1), and I marked it with a small white dot.)

The column picture is almost as difficult to understand as the row picture. The first column (2, -1, 0) is red, the second column (-1, 2, -3) is green, the third column (0, -1, 4) is blue, and the right-hand side (0, -1, 4) is gray. Again, it's pretty hard to visualize how to manipulate these vectors to produce the right-hand side vector (0, -1, 4). But we are lucky in this particular example. Notice that if we take none of the red vector, none of the green vector, and one of the blue vector, we get the gray vector! That is, we didn't even need the red and green vectors!

This is all still tricky, and it gets much more complicated if we go to more equations with more unknowns. Therefore we need better methods for solving systems of equations than drawing plane or column pictures.

The lecture ends with several questions:

- Can A**x** = **b** be solved for any **b**?
- When do the linear combinations of the columns fill the whole space?
- What's the method to solve 9 equations with 9 unknowns?
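The lucky observation in the 3×3 column picture — that the z column already equals the right-hand side — can be verified in a few lines. As before, this NumPy snippet is my own illustration, not part of the lecture:

```python
import numpy as np

# The lecture's 3x3 example:
#    2x -  y      =  0
#   -x  + 2y -  z = -1
#        -3y + 4z =  4
A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -3.0,  4.0]])
b = np.array([0.0, -1.0, 4.0])

x = np.linalg.solve(A, b)
print(x)  # [0. 0. 1.]

# The column-picture shortcut: the third (z) column of A is already b,
# so taking 0 of the first column, 0 of the second, and 1 of the third works.
print(np.allclose(A[:, 2], b))  # True
```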
The examples I analyzed here are also carefully explained in the lecture; you're welcome to watch it.

Topics covered in lecture one:

- [00:20] Information on textbook and course website.
- [01:05] Fundamental problem of linear algebra: solve systems of linear equations.
- [01:15] Nice case: n equations, n unknowns.
- [02:20] Solving 2 equations with 2 unknowns.
- [02:54] Coefficient matrix.
- [03:35] Matrix form of the equations. Ax=b.
- [04:20] Row picture of the equations - lines.
- [08:05] Solution (x=1, y=2) from the row picture.
- [08:40] Column picture of the equations - 2 dimensional vectors.
- [09:50] Linear combination of columns.
- [12:00] Solution from the column picture x=1, y=2.
- [12:05] Plotting the columns to produce the solution vector.
- [15:40] Solving 3 equations with 3 unknowns.
- [16:46] Matrix form for this 3x3 equation.
- [17:30] Row picture - planes.
- [22:00] Column picture - 3 dim vectors.
- [24:00] Solution (x=0, y=0, z=1) from the column picture by noticing that the z vector is equal to the b vector.
- [28:10] Can Ax=b be solved for every b?
- [28:50] Do the linear combinations of columns fill the 3d space?
- [32:30] What if there are 9 equations and 9 unknowns?
- [36:00] How to multiply a matrix by a vector? Two ways.
- [36:40] Ax is a linear combination of columns of A.

Here are my notes of lecture one. Sorry about the handwriting; it seems I hadn't been writing much at that time and my handwriting had gotten really bad. But it gets better with each new lecture. By lectures 5 and 6 it is as good as it gets.

The next post is going to be about a systematic way to find a solution to a system of equations, called elimination.
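The last two timeline items deserve a small illustration: a matrix times a vector can be computed a row at a time (dot products) or a column at a time (a linear combination of the columns of A). This sketch is my own, using the lecture's 3×3 matrix:

```python
import numpy as np

# The 3x3 coefficient matrix from the lecture, and the solution vector.
A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -3.0,  4.0]])
x = np.array([0.0, 0.0, 1.0])

# Way 1: a row at a time -- entry i of Ax is the dot product of row i with x.
by_rows = np.array([A[i, :] @ x for i in range(3)])

# Way 2: a column at a time -- Ax is a linear combination of the columns of A,
# weighted by the entries of x.
by_cols = x[0] * A[:, 0] + x[1] * A[:, 1] + x[2] * A[:, 2]

print(by_rows)                        # [ 0. -1.  4.]
print(np.allclose(by_rows, by_cols))  # True
```

Both routes give the same vector, which is the point of the lecture's closing remark: thinking of Ax as a combination of columns is what makes the column picture work.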
true
true
true
This is going to be my summary of Linear Algebra course from MIT. I watched the lectures of this course in the summer of last year. This was not the first time I'm learning linear algebra. I already read a couple of books and read a few tutorials a couple of years ago but it was not enough for a curious mind like mine.
2024-10-12 00:00:00
2009-11-01 00:00:00
https://catonmat.net/ima…review-image.png
null
catonmat.net
Catonmat
null
null
21,995,372
https://www.essexst.com/justin-sun-tron-marketing/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
22,445,890
https://www.nejm.org/doi/full/10.1056/NEJMp2003762
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
16,684,579
https://www.cnn.com/2018/03/26/opinions/data-company-spying-opinion-schneier/index.html
It’s not just Facebook. Thousands of companies are spying on you | CNN
Bruce Schneier
**Editor’s Note: **Bruce Schneier is the author of “Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World.” The opinions expressed in this commentary are his. In the wake of the Cambridge Analytica scandal, news articles and commentators have focused on what Facebook knows about us. A lot, it turns out. It collects data from our posts, our likes, our photos, things we type and delete without posting, and things we do while not on Facebook and even when we’re offline. It buys data about us from others. And it can infer even more: our sexual orientation, political beliefs, relationship status, drug use, and other personality traits – even if we didn’t take the personality test that Cambridge Analytica developed. But for every article about Facebook’s creepy stalker behavior, thousands of other companies are breathing a collective sigh of relief that it’s Facebook and not them in the spotlight. Because while Facebook is one of the biggest players in this space, there are thousands of other companies that spy on and manipulate us for profit. Harvard Business School professor Shoshana Zuboff calls it “surveillance capitalism.” And as creepy as Facebook is turning out to be, the entire industry is far creepier. It has existed in secret far too long, and it’s up to lawmakers to force these companies into the public spotlight, where we can all decide if this is how we want society to operate and – if not – what to do about it. There are 2,500 to 4,000 data brokers in the United States whose business is buying and selling our personal data. Last year, Equifax was in the news when hackers stole personal information on 150 million people, including Social Security numbers, birth dates, addresses, and driver’s license numbers. You certainly didn’t give it permission to collect any of that information. 
Equifax is one of those thousands of data brokers, most of them you’ve never heard of, selling your personal information without your knowledge or consent to pretty much anyone who will pay for it. Surveillance capitalism takes this one step further. Companies like Facebook and Google offer you free services in exchange for your data. Google’s surveillance isn’t in the news, but it’s startlingly intimate. We never lie to our search engines. Our interests and curiosities, hopes and fears, desires and sexual proclivities, are all collected and saved. Add to that the websites we visit that Google tracks through its advertising network, our Gmail accounts, our movements via Google Maps, and what it can collect from our smartphones. That phone is probably the most intimate surveillance device ever invented. It tracks our location continuously, so it knows where we live, where we work, and where we spend our time. It’s the first and last thing we check in a day, so it knows when we wake up and when we go to sleep. We all have one, so it knows who we sleep with. Uber used just some of that information to detect one-night stands; your smartphone provider and any app you allow to collect location data knows a lot more. Surveillance capitalism drives much of the internet. It’s behind most of the “free” services, and many of the paid ones as well. Its goal is psychological manipulation, in the form of personalized advertising to persuade you to buy something or do something, like vote for a candidate. And while the individualized profile-driven manipulation exposed by Cambridge Analytica feels abhorrent, it’s really no different from what every company wants in the end. This is why all your personal information is collected, and this is why it is so valuable. Companies that can understand it can use it against you. None of this is new. The media has been reporting on surveillance capitalism for years. In 2015, I wrote a book about it. 
Back in 2010, the Wall Street Journal published an award-winning two-year series about how people are tracked both online and offline, titled “What They Know.” Surveillance capitalism is deeply embedded in our increasingly computerized society, and if the extent of it came to light there would be broad demands for limits and regulation. But because this industry can largely operate in secret, only occasionally exposed after a data breach or investigative report, we remain mostly ignorant of its reach. This might change soon. In 2016, the European Union passed the comprehensive General Data Protection Regulation, or GDPR. The details of the law are far too complex to explain here, but some of the things it mandates are that personal data of EU citizens can only be collected and saved for “specific, explicit, and legitimate purposes,” and only with explicit consent of the user. Consent can’t be buried in the terms and conditions, nor can it be assumed unless the user opts in. This law will take effect in May, and companies worldwide are bracing for its enforcement. Because pretty much all surveillance capitalism companies collect data on Europeans, this will expose the industry like nothing else. Here’s just one example. In preparation for this law, PayPal quietly published a list of over 600 companies it might share your personal data with. What will it be like when every company has to publish this sort of information, and explicitly explain how it’s using your personal data? We’re about to find out. In the wake of this scandal, even Mark Zuckerberg said that his industry probably should be regulated, although he’s certainly not wishing for the sorts of comprehensive regulation the GDPR is bringing to Europe. He’s right. Surveillance capitalism has operated without constraints for far too long. And advances in both big data analysis and artificial intelligence will make tomorrow’s applications far creepier than today’s. Regulation is the only answer. 
The first step to any regulation is transparency. Who has our data? Is it accurate? What are they doing with it? Who are they selling it to? How are they securing it? Can we delete it? I don’t see any hope of Congress passing a GDPR-like data protection law anytime soon, but it’s not too far-fetched to demand laws requiring these companies to be more transparent in what they’re doing. One of the responses to the Cambridge Analytica scandal is that people are deleting their Facebook accounts. It’s hard to do right, and doesn’t do anything about the data that Facebook collects about people who don’t use Facebook. But it’s a start. The market can put pressure on these companies to reduce their spying on us, but it can only do that if we force the industry out of its secret shadows.
true
true
true
For every article about Facebook’s creepy stalker behavior, thousands of other companies are breathing a collective sigh of relief that it’s not them in the spotlight, writes Bruce Schneier.
2024-10-12 00:00:00
2018-03-26 00:00:00
https://media.cnn.com/ap…915,c_crop/w_800
article
cnn.com
CNN
null
null
25,358,691
https://lifeandtimes.games/episodes/files/30
The Dragon Speech, and Chris Crawford's improbable dream
null
# 30 - The Dragon Speech, and Chris Crawford's improbable dream Click/tap here to download this episode. It was "the greatest speech he ever gave in his life", and it marked a turning point in his pursuit of his dream, but it had the note of a eulogy. This is the story of how — and why — the legendary designer Chris Crawford left the games industry in an opening-day lecture at the 1993 Game Developers Conference, an event that he had founded just six years prior. Become a Patron! (If you don't see the player above, it means your browser is blocking my podcast host Megaphone. If that's the case you can listen by downloading the mp3, turning off your adblocker, whitelisting all megaphone.com subdomains, or loading it up in your favourite podcast app. I'm sorry about the inconvenience, but there's nothing I can do about it until Megaphone either improves its privacy performance or I switch to a new host — which I'd rather not do just yet.) Chris is still at it, still chasing his dragon, now with a more stripped-back storyworld and storyworld engine. You can read about these — and perhaps have a go at making your own interactive storyworld — at his website, which is full of essays, reflections, development diaries, and educational materials from the past 30+ years of his life. Thank you to my patreon supporters for making this episode possible — especially my producer-level backers Scott Grant, Carey Clanton, Wade Tregaskis, Simon Moss, Seth Robinson, Vivek Mohan, and Rob Eberhardt. To support my work, so that I can uncover more untold stories from video game history, you can make a donation via paypal.me/mossrc or subscribe to my Patreon. (I also accept commissions and the like over email, if you're after something specific.) Thank you also to my sponsor, Richard Bannister, for his support. You can check out his modern reimaginings of classic arcade games at retrogamesformac.com. 
### (Partial) Transcript *[Most episode transcripts/scripts are reserved for my Patreon supporters (at least for the time being), but I like to give you at least a taster here — or in this case, the first half of the episode.]* Welcome to the Life and Times of Video Games, a documentary audio series about video games and the video game industry, as they were in the past and how they've come to be the way they are today. I'm Richard Moss, and this is episode 30, The Dragon Speech, and Chris Crawford's improbable dream. We'll get going in just a moment. *pause for pre-roll ad slot* *** Chris Crawford didn't always have a dream. He wasn't always tormented by a menacing dragon that he could not defeat. it wasn't so very long ago, when I knew the dragon. It was in nineteen seventy five when I first encountered the concept of a computer game. That was a new concept to me. For me, the computer had always been a tool for scientific calculation. Now, the notion of using it to play…well, that was a fascinating and utterly unconventional concept to me. And so I decided to begin to acquaint myself with this technology. I had no dream as yet. For me, the dragon still slept… He was a teacher at the time, parlaying his knowledge of physics to students at a small community college in Nebraska, and he'd just met a man who was attempting to program a computerised version of a boardgame — a strategy wargame called Blitzkrieg. Intrigued, he started to ponder the problem. And before long he made his own computer game, a turn-based tactical wargame he called Wargy 1. You played against a computer-controlled opponent with help from a physical board and some pieces from an Avalon Hill boardgame. You'd make your moves on the board, then input them into the computer, and in return it would print out coordinates for its own moves. 
That game would eventually become a commercial program called Tanktics, which he initially self-published for Commodore-PET computers in 1978 — for a grand total of 150 sales, which was impressive at the time. And in 1981 he would have the game re-published on multiple systems by Avalon Hill. But by then Chris was firmly entrenched in the video game business. He'd left his teaching job to join Atari in 1979, eager to become a part of this exciting video game revolution. At Atari he briefly learnt the basics of programming for the Video Computer System, or the Atari 2600, as we know it today, before he shifted over to the group that really excited him — the group focused on developing games for the Atari 800 home computer, which at the time had the finest graphics and sound capabilities of any home computers on the market. The 800 was so far ahead of everything else. It was the machine to learn on, and I knew nothing about graphics and sound. So I hurled myself into the machine, absorbing everything I could, learning all about graphics and sound. Some of you may find it ironic to learn that for a time there I was known as 'the graphics wizard of Atari'. And indeed, there was a period of time there where my game, Eastern Front, was the most graphically advanced product in the marketplace. Because that was a phase I needed to go through. I needed to understand that. I needed to feel that I had a good grip on it. And so all through this period, from 1975 to 1981, for six years, I was apprenticing myself to this technology. I was turning it over and over in my hands. I was getting the feel of it in my fingertips. And for six years I had no dream at all. All I did was learn. And by 1981 I felt that I understood the technology. I felt that I knew what this medium was about, but I had no dream I could — I couldn't see the dragon, but by 1981 I could hear him thrashing about in the forest. I knew he was out there somewhere. 
I knew he was big, whatever he was, and I wanted to find it. And then in one of those fortuitous circumstances that is so perfectly timed that we can only ascribe its event, its occurrence, to the workings of fate, then Alan Kay came into my life. And here, in 1981 — this is where our story starts. That name, Alan Kay, may sound familiar to you; he's one of the fathers of the personal computer, and of a concept called the Dynabook, which eventually manifested in the form of the iPad. Alan Kay did a PhD in computer science in the late 1960s, where he was mentored by the fathers of computer graphics, Ivan Sutherland and David Evans, and then joined Xerox PARC, the research and development company that created the Xerox Alto computer — which would inspire Apple to create the Macintosh. Alan Kay and his colleagues at Xerox also invented the concept of object-oriented programming, as well as graphical user interfaces and lots of other forward-thinking things that took decades to turn into mainstream technologies. In 1981, Alan Kay became Atari's chief scientist. He was hired to form a corporate research group that would push the boundaries of what's possible with video games. And Chris was asked to join that group, to apprentice himself to this great master of technology — whose mind races decades ahead, who could see and describe revolutions coming 20, 30, 50 years before they actually hit us. Alan Kay had a massive influence on Chris, and he taught Chris myriad lessons. But Chris recalls that one lesson stood out above all else. Chris Crawford: I'd say the most important one was to dream big, or aim high. One of his most useful adages was if you don't fail at least 90 percent of the time, you're not aiming high enough. *** Another way of viewing dreaming is to think in terms of alternate realities. There is, of course, reality, the real reality. But when we fantasise, we create an imaginary, a desired universe.
But we don't care about whether the universe works, whether it's possible. Only when we dream do we create a universe that is actually possible. When Alan Kay taught Chris to dream, and dream big, Chris did exactly that. He dreamed of what games might become, of what games could be. At first his dream was imprecise, unclear, but as he thought upon it more, and as he wrote his first book, The Art of Computer Game Design, slowly it solidified in his mind. And by 1983, I had my dream. I could see the dragon clearly in my mind's eye. Let me tell you about my dream. I dreamed of the day when computer games would be a viable medium of artistic expression, an art form. I dreamed of computer games encompassing the broad range of human experience and emotion, computer games about tragedy or self-sacrifice, games about duty and honour, patriotism, of satirical games about politics or games about human folly. Games about man's relationship to God, or to nature. Games about the passionate love between a boy and a girl, or the serene and mature love between [a] husband and wife of decades. Games about family relationships or death, mortality, a boy becoming a man or a man realising he is no longer young. Games about a man facing truth at high noon on a dusty main street, or boy and his dog, or a prostitute with a heart of gold. All of these things and more were part of this dream, but by themselves they amounted to nothing because all of these things have already been done by other art forms. There's no advantage, no purchase, no — nothing superior about this dream. It's just an old rehash. All we are doing with the computer — if, if, if all we do is just reinvent the wheel with poor-grade materials, well, we don't have a dream worth pursuing. The critical piece of Chris's dream, the part to made it so important to him, that elevated it above the idea of games as an imitation of other art forms, to become a true art form all their own — that was interactivity. 
Games are interactive. They tap into a need to learn through play that's hardwired into our very being. Chris wanted all those things he dreamed of to be presented in a deeply interactive way, in a way that was unique only to video games. So he began to work on turning his dream to reality. This work involved an attempt to lead by example — to make games about social interaction, games about geopolitics (and it's worth noting that his one game there, the so-called "un-war" game Balance of Power, was a very big seller), games about things that matter — but also it involved trying to facilitate high-level discussion about games as an emerging art form. He did this by contributing the occasional essay or letter to Computer Gaming World, a magazine that positioned itself as an unofficial journal of computer games. And he did it by creating his own publication, the Journal of Computer Game Design, a publication he himself edited for its 150 subscribers, with essays from his peers about the theory and practice of designing games. And, most importantly, he did it by founding the still-running Computer Game Developers Conference — CGDC for short, or GDC, as it's known today. Gordon Walton: He literally said, hey, look, why don't we — we're not going to get anywhere unless we have a community. Why don't we do a community? Even though he'd been fine being a kind of a mountain man, a loner, just, you know, that's more who he is. But he said, no, no, we need to make changes. We need to change how people think about this medium. And the only way to make those changes is to get together. This is Gordon Walton, an industry veteran who today is known for his work on MMOs but back then ran a company called Digital Illusions. They specialised in porting games from one platform to another. He was one of 26 people in the room at that first CGDC, which took place in Chris's living room in 1987. 
Gordon Walton: Chris is a guy who's always looking over the horizon and he's not trying to, you know, he's a world changer, right? He wants to change the world and change the way people think about things. And so that's always been his driver. But CGDC wasn't changing much in the way people think about things. It was amazing for fostering a sense of community in the games industry, and it did help the industry advance — by enabling conversations between people working on different genres and platforms and in different parts of the world, but as the conference grew bigger and more successful, it also drifted further and further away from Chris's primary goal. Chris Crawford: Well. Its evolution was natural, and I expected this direction of evolution. I wasn't surprised by it at all. But I was hoping to get in some consideration for art in that. And I failed. I knew that as the industry was growing, it would become more commercial, more focussed on quick profits. Chris Crawford: But I thought that some of the wiser heads in the industry would be looking further down the road. And I was wrong. I mean, people were thinking exclusively in terms of, you know, next quarter. Year after year, his discontent grew. He saw marketing efforts focus on the tried and true, preaching to the converted and settling on a narrow range of established game genres — rather than attempting to expand the gaming audience with new genres. Meanwhile, the idea that games must be "fun" — not just compelling or entertaining or engaging, but specifically "fun" — that idea spread like a virus — a virus that would not have an antidote until the "serious games" movement emerged at the turn of the century. And at the same time, in an amplifying effect, game development budgets climbed ever higher — distorting the economics so that publishers became averse to taking risks on new or different concepts. 
It was the beginning of what we now call triple-A games publishing, where the most successful games are often — though certainly not always — the best-presented, highest-funded, and most-widely-marketed ones, but often also the least innovative, getting by on beautiful graphics and polished sound and whatever themes and mechanics are in trend at the time. And Chris hated it. He saw such games as the antithesis of what games should be. Expensive and expansive but creatively shallow, the beginnings of an obsession with mimicking Hollywood, rather than forging a new path unique to games. The biggest problem we face here is the lack of people in our games. Have you ever noticed computer games? All of our games are about things, not people. We shoot things, we chase things, things shoot us, things chase us. We manipulate things, manoeuvre things, allocate things, manage things. But it's always things, things, things. There are never any people in any of our games. Now, sure, I've seen the pitiful excuses for characters in our games. They are Potemkin villages that — the characters in our games are like a cardboard box with a picture of a face pasted onto the front, but nothing inside. There's no heart and soul. And all I have are a couple of buttons on front. Push one button and he says, I am your friend now. And you push the other button and he says, I am your enemy now. *(For more, including the tale of how Chris concluded his speech, why he felt compelled to give it, and how he reflects on his quixotic quest now, you'll have to either just listen to the episode or sign up as a supporter on Patreon — everyone who pledges $3 or more a month gets access to full episode transcripts [amongst other things].)*
true
true
true
It was 'the greatest speech he ever gave in his life', and it marked a turning point in his pursuit of his dream, but it had the note of a eulogy. This is the story of how — and why — the legendary designer Chris Crawford left the games industry in an opening-day lecture at the 1993 Game Developers Conference, an event that he had founded just six years prior.
2024-10-12 00:00:00
2020-12-09 00:00:00
https://lifeandtimes.gam…VG-logo-2021.png
website
null
The Life & Times of Video Games
null
null
2,488,983
http://mealsnap.com/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
1,094,231
http://www.northwestern.edu/newscenter/stories/2010/01/cartilage.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
19,430,060
https://ebooks.adelaide.edu.au/l/locke/john/l81u/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
8,966,811
http://channel9.msdn.com/Series/C9-Lectures-Erik-Meijer-Functional-Programming-Fundamentals/Lecture-Series-Erik-Meijer-Functional-Programming-Fundamentals-Chapter-2
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
2,673,887
http://m.engadget.com/default/article.do?artUrl=http://www.engadget.com/2011/06/20/fujitsu-k-supercomputer-now-ranked-fastest-in-the-world-dethron/&category=classic&postPage=1
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
160,784
http://www.crunchgear.com/2008/04/10/video-crunchgear-interviews-james-dyson/
TechCrunch | Startup and Technology News
Brian Heater
null
true
true
true
TechCrunch | Reporting on the business of technology, startups, venture capital funding, and Silicon Valley
2024-10-12 00:00:00
2024-10-04 00:00:00
https://techcrunch.com/w…re-reverse2x.png
website
techcrunch.com
TechCrunch
null
null
1,501,920
http://www.nytimes.com/2010/07/10/technology/10broadband.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
10,037,838
https://github.com/AuspeXeu/openvpn-status
GitHub - AuspeXeu/openvpn-status: A web-based application to monitor OpenVPN server client connections
AuspeXeu
A web-based application to monitor (multiple) OpenVPN servers. Features - Multi server support - WebSocket based real-time events - Map view - Disconnect clients remotely - Persistent event log - Mobile friendly - Full material design - NodeJS 10 or higher - npm package manager - Windows 7 is only supported on version `4.2.12` and below Installation comes in two flavours. Either from source as per the following section or you can skip to the docker section. `git clone https://github.com/AuspeXeu/openvpn-status.git` ``` cd openvpn-status npm install ``` The configuration is located in `cfg.json` . Option | Default | Description | ---|---|---| port | `3013` | Port on which the server will be listening. | bind | `127.0.0.1` | Address to which the server will bind. Change to `0.0.0.0` to make available on all interfaces. | servers | `[{"name": "Server","host": "127.0.0.1","man_port": 7656, "man_pwd": "1337", "netmask": "0.0.0.0/0"}]` | Array of servers. `man_pwd` is only needed if a password is set as per the [documentation](https://openvpn.net/community-resources/reference-manual-for-openvpn-2-0/); `netmask` is only needed if connecting networks (to filter entries). | username | `admin` | User for basic HTTP authentication. Change to `''` or `false` to disable. | password | `admin` | Password for basic HTTP authentication. | web.dateFormat | `HH:mm:ss - DD.MM.YY` | DateTime format used in the web frontend (options). | Example: ``` { "port": 3013, "bind": "127.0.0.1", "servers": [ {"id": 0, "name": "Server A", "host": "127.0.0.1","man_port": 7656}, {"id": 1, "name": "Server B", "host": "127.0.0.1","man_port": 6756} ], "username": "admin", "password": "CHANGE THIS - DO NOT USE ANY DEFAULT HERE", "web": { "dateFormat": "HH:mm - DD.MM.YY" } } ``` Add the following line to your configuration file, e.g., `server.conf` . This will start the management console on port `7656` and make it accessible on `127.0.0.1` , i.e. this machine. 
Optionally, a password file can be specified as per the openvpn manual. ``` management 127.0.0.1 7656 // As specified in cfg.json for this server ``` Restart your OpenVPN server. Before the application is ready to run, the frontend needs to be built. This is done using npm. `npm run build` This makes the application available on http://127.0.0.1:3013. ``` node server.js ``` ``` sudo npm install pm2 -g pm2 start pm2.json pm2 save ``` ``` [Unit] Description=OpenVPN Status After=network.target [Service] User=root WorkingDirectory=/home/pi/backend \\ Adjust this path ExecStart=/usr/local/bin/node server.js \\ Adjust this path Restart=on-failure RestartSec=5s [Install] WantedBy=multi-user.target ``` In order to integrate the service into your webserver you might want to use nginx as a reverse proxy. The following configuration assumes that the port is set to `3013` as it is by default. ``` server { listen 80; server_name [domain]; location / { proxy_pass http://127.0.0.1:3013; proxy_set_header X-Forwarded-Proto $scheme; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; proxy_read_timeout 86400; } } ``` As shown in the `docker-compose.yml` below, the `servers` folder will be mounted to the host's file system. Upon boot, `openvpn-status` scans that folder for `.json` files and adds them as servers. An example of such a file is: 
``` {"name": "Server","host": "127.0.0.1","man_port": 7656} ``` **3013** - STATUS_USERNAME - STATUS_PASSWORD - STATUS_WEB_FORMAT ``` # Full example: # https://raw.githubusercontent.com/AuspeXeu/openvpn-status/master/docker-compose.sample.yml version: '2' services: openvpn-status: image: auspexeu/openvpn-status container_name: openvpn-status environment: - STATUS_USERNAME=admin - STATUS_PASSWORD=<CHANGE THIS - DO NOT USE ANY DEFAULT HERE> - STATUS_WEB_FORMAT='HH:mm:ss - DD.MM.YY' volumes: - ./servers:/usr/src/app/servers ports: - 8080:3013 restart: "unless-stopped" ``` Find a list of supported browsers here This product includes GeoLite2 data created by MaxMind, available from https://www.maxmind.com. GoSquared provides the flag icons for this project. The source for the flag icons can be found here.
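The README's configuration points each entry in `servers` at an OpenVPN management port, from which the tool reads client status. As a rough illustration of what such a monitor consumes, here is a small Python sketch that parses OpenVPN's `status 2` CSV-style output into per-client dicts. The function name `parse_status_v2` and the sample dump are my own; real servers may emit different column sets, which is why the parser takes the column names from the `HEADER` line rather than hardcoding them.

```python
import csv
import io

def parse_status_v2(raw: str):
    """Parse OpenVPN 'status 2' output into a list of client dicts.

    The version-2 status format is CSV-like: a HEADER,CLIENT_LIST,...
    line names the columns, and each CLIENT_LIST,... line is one
    connected client. Column names here come from a sample dump and
    are illustrative only.
    """
    header = None
    clients = []
    for row in csv.reader(io.StringIO(raw)):
        if not row:
            continue
        if row[0] == "HEADER" and len(row) > 1 and row[1] == "CLIENT_LIST":
            header = row[2:]          # column names for client rows
        elif row[0] == "CLIENT_LIST" and header:
            clients.append(dict(zip(header, row[1:])))
    return clients

# Hypothetical sample of what the management interface might return.
sample = (
    "TITLE,OpenVPN 2.4.7\n"
    "HEADER,CLIENT_LIST,Common Name,Real Address,Bytes Received,Bytes Sent\n"
    "CLIENT_LIST,alice,203.0.113.5:51262,3907,4043\n"
)
print(parse_status_v2(sample))
```

A monitor like openvpn-status would obtain `raw` by connecting to the configured `man_port` and issuing a status command; the parsing step above is independent of how the text is fetched.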
true
true
true
A web-based application to monitor OpenVPN server client connections - AuspeXeu/openvpn-status
2024-10-12 00:00:00
2015-08-10 00:00:00
https://opengraph.githubassets.com/c17e7a5ec5cfb2886bf53f6b922b3a7f535857f87d1cb4c4fd2872f368906157/AuspeXeu/openvpn-status
object
github.com
GitHub
null
null
7,905,203
https://medium.com/@nicharry/i-am-not-a-commodity-8d147d76c90b
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
3,087,938
http://venturebeat.com/2011/09/30/kindle-blackberry-influence/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
14,779,881
https://issues.apache.org/jira/browse/LEGAL-303?focusedCommentId=16088663&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16088663
RocksDB Integrations
null
#### Details - - **Status:**Resolved - - **Resolution:**Fixed - - None #### Description Hi Legal, There's a hypothetical question on the Apache Cassandra mailing list about potentially expanding Cassandra's storage to be pluggable, specifically using RocksDB. RocksDB has a 3 clause BSD license ( https://github.com/facebook/rocksdb/blob/master/LICENSE ), and a patent grant ( https://github.com/facebook/rocksdb/blob/master/PATENTS ) I know the 3 clause BSD license is fine, but is the wording of the patent grant problematic? cc dikanggu #### Attachments #### Issue Links - incorporates - MARMOTTA-669 Clarify the usage of RocksDB license - Closed
true
true
true
null
2024-10-12 00:00:00
2017-04-20 00:00:00
null
null
null
JIRA
null
null
22,297,262
https://florianwinkelbauer.com/posts/2020-02-08-git-with-chunks/
Dreaming of Git with Chunks
Florian Winkelbauer
# Dreaming of Git with Chunks I see more and more people using content addressable storage together with content defined chunking (CDC) to pull off interesting applications. Here are a few examples: - League Of Legends' patch downloader - Asuran - rdedup - borg - restic I'd love to see a distributed version control system (DVCS) which is based on CDC (with support for encryption and compression). So far, I have only found the Attaca project, which seems to be unmaintained at the moment. I believe that a "CDC based Git clone" would offer some interesting possibilities. We could: - Keep track of our backups on our local system, while still being able to synchronize changes with one or more remote locations - Support large binary data so that game developers (or other developers who have to deal with large assets) could use a DVCS - Track build artifacts (packages, containers, binaries, …) using commits and branches. This would allow us to build update mechanisms for our applications based on `push` ,`pull` and`fetch` operations (similar to the League Of Legends post above) - Use a version control system as an alternative to tools such as Dropbox, NextCloud or Syncthing ## Design Ideas Git uses four components to build its internal data structure: - Blobs to store the actual file content - Trees to create a "snapshot" of a repository - Commits to add meta information to trees and to create the repository history - References (branches and tags) to point to a specific point in the commit graph The major difference between Git and a CDC-based DVCS would be that a single file might be split into one or more chunks. 
This leaves us with two new problems: - We have to keep track of which chunks make up a file - We need to do some additional work so that we re-gain features such as `git diff` **Addressing a File** While Git can use a single hash to find a specific file, we need three pieces of information to do the same: - A unique identifier - A list of hashes to find all current chunks - Information about how to process the data in case of encryption and/or compression Instead of a file name, I believe that a UUID might be even better to uniquely identify a file. This way, we could keep track of a file, even if its name changes over time. In some cases we might even be able to detect a rename operation by identifying the file based on its unchanged chunks. **Construct Files in a Cache** Before we can run operations similar to `git diff` , we have to reconstruct a file based on its chunks. To simplify such operations, we could build an internal cache for a specific commit. Keep in mind that a commit is immutable, which means that such a cache could be operated in an "append-only" fashion. While this approach seems to be pretty straightforward, we would need to implement some form of retention policy in order to keep our overall disk space consumption in line.
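To make the chunking idea concrete, here is a minimal Python sketch of content-defined chunking. It hashes bytes since the last boundary and cuts a chunk whenever the low bits of the hash are zero; this is a deliberate simplification (real implementations such as the ones in borg or restic use a true sliding-window rolling hash like Rabin or Buzhash), and the `WINDOW` and `MASK` parameters are illustrative, not tuned.

```python
# Simplified content-defined chunking: accumulate a hash over the bytes
# since the last boundary and declare a boundary when its low bits are
# all zero. MASK controls the average chunk size (here tiny, for demo).
WINDOW = 16               # minimum chunk length before a cut is allowed
MASK = (1 << 6) - 1       # 6 zero bits -> ~64-byte average chunks

def chunk(data: bytes):
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = (h * 31 + b) & 0xFFFFFFFF
        if i - start + 1 >= WINDOW and (h & MASK) == 0:
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0   # reset at boundary
    if start < len(data):
        chunks.append(data[start:])  # trailing partial chunk
    return chunks
```

Because boundaries depend on content rather than fixed offsets, an edit near the start of a file only perturbs chunking until the hash resynchronizes at the next boundary, which is what lets a CDC store deduplicate large, slightly-changed files.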
true
true
true
null
2024-10-12 00:00:00
2020-02-08 00:00:00
null
null
null
null
null
null
13,318,509
https://medium.com/@trydesignlab/12-uxmas-challenges-part-3-of-4-eda5236b96e2
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
38,801,993
https://hackaday.com/2023/12/26/chinas-nuclear-powered-containership-a-fluke-or-the-future-of-shipping/
China’s Nuclear-Powered Containership: A Fluke Or The Future Of Shipping?
Maya Posch
Since China State Shipbuilding Corporation (CSSC) unveiled its KUN-24AP containership at the Marintec China Expo in Shanghai in early December of 2023, the internet has been abuzz about it. Not just because it’s the world’s largest container ship at a massive 24,000 TEU, but primarily because of the power source that will power this behemoth: a molten salt reactor of Chinese design that is said to use a thorium fuel cycle. Not only would this provide the immense amount of electrical power needed to propel the ship, it would eliminate harmful emissions and allow the ship to travel much faster than other containerships. Meanwhile the Norwegian classification society, DNV, has already issued an approval-in-principle to CSSC Jiangnan Shipbuilding shipyard, which would be a clear sign that we may see the first of this kind of ship being launched. Although the shipping industry is currently struggling with falling demand and too many conventionally-powered ships that it had built when demand surged in 2020, this kind of new container ship might be just the game changer it needs to meet today’s economic reality. That said, although a lot about the KUN-24AP is not public information, we can glean some information about the molten salt reactor design that will be used, along with how this fits into the whole picture of nuclear marine propulsion. ## Not New, Yet Different The idea of nuclear marine propulsion was pretty much coined the moment nuclear reactors were conceived and built. Over the past decades, quite a few have been constructed, with some – like commercial shipping and passenger ships – being met with little success. Meanwhile, nuclear propulsion is literally the only way that a world power can project military might, as diesel-electric submarines and conventionally powered aircraft carriers lack the range and scale to be of much use. 
The primary reason for this is the immense energy density of nuclear fuel, which depending on the reactor configuration can allow the vessel to forego refueling for years, decades, or even its entire service life. For US nuclear-powered aircraft carriers, the refueling is part of their mid-life (~20 years) shipyard period, where the entire reactor module is lifted out through a hole cut in the decks before a fresh module is put in. Because of this abundance of power there never is a need to 'save fuel', leaving the vessel free to 'gun it' in so far as the rest of the ship's structures can take the strain. Theoretically the same advantages could be applied to civilian merchant vessels like tankers, cargo and container ships. But today, only the Soviet-era *Sevmorput* is still in active duty as part of Rosatom's Atomflot that also includes nuclear powered icebreakers. After having been launched in 1986, Sevmorput is currently scheduled to be decommissioned in 2024, after a lengthy career that is perhaps ironically mostly characterized by the fact that very few people today are even aware of its existence, despite making regular trips between various Russian harbors, including those on the Baltic Sea. The KLT-40 nuclear reactor (135 MWth) in the Sevmorput is very similar to the basic reactor design that powers a US aircraft carrier like the *USS Nimitz* (2 times A4W reactor for 550 MWth). Both are pressurized water reactors (PWRs) not unlike the PWRs that make up most of the world's commercial nuclear power stations, differing mostly in how enriched their uranium fuel is, as this determines the refueling cycle. Here the KUN-24AP container ship would be a massive departure with its molten salt reactor. 
Despite this seemingly odd choice, there are a number of reasons for this, including the inherent safety of an MSR, the ability to refuel continuously without shutting down the reactor, and a high burn-up rate, which means very little waste to be filtered out of the molten salt fuel. The roots for the ship's reactor would appear to be found in China's TMSR-LF program, with the TMSR-LF1 reactor having received its operating permit earlier in 2023. This is a fast neutron breeder, meaning that it can breed U-233 from thorium (Th-232) via neutron capture, allowing it to primarily run on much cheaper thorium rather than uranium fuel. ## The Easy And Hard Parts Making a very large container ship is not the hard part, as the rapid increase in the number of New Panamax and larger container ships, like the ~24,000 TEU Evergreen A-class, demonstrates. The main problem ultimately becomes propelling it through the water with any kind of momentum and control. Having a direct drive shaft to a propeller requires that you have enough shaft power, which requires a power plant that can provide the necessary torque directly or via a gearbox. Options include using a big generator and electric propulsion, or using boilers and steam turbines. Yet as great as boilers and steam turbines are for versatility and power, they are expensive to run and maintain, which is why the Evergreen G-series container ships have a 75,570 kW combustion engine, while the Kitty Hawk has 210 MW and the Nimitz has 194 MW of installed power, with the latter having enough steam from its two A4W reactors for 104 MW per pair of propellers, leaving a few hundred MW of electrical power for the ship's systems. This amount of power across four propellers allows these aircraft carriers to travel at 32 knots, while container ships typically travel between 15 to 25 knots, with the increased fuel usage from fast steaming forming a strong incentive to travel at slower speeds, 18-20 knots, when deadlines allow. 
Although fuel usage is also a concern for conventionally powered ships like the Kitty Hawk, the nuclear Nimitz has effectively unlimited fuel for 20-25 years and thus it can go anywhere as fast as the rest of the ship and its crew allows. ## Got To Go Fast Today's shipping industry finds itself, as mentioned earlier, in a bind, even before recent events that caused both the Panama and Suez canals to be more or less off-limits and forced cargo ships to fall back to early 19th century shipping routes around Africa and South America. With faster cargo ships traveling at or over 30 knots rather than about 20, the detour around Africa rather than via the Suez Canal could be massively shortened, providing significantly more flexibility. If this offering also comes at no fuel cost penalty, you suddenly have the attention of every shipping company in the world, and this is where the KUN-24AP's unveiling suddenly makes a lot of sense. Naturally, there is a lot of concern when it comes to anything involving 'nuclear power'. Yet many decades of nuclear propulsion have shown the biggest risk to be the resistance against nuclear marine propulsion, with a range of commercial vessels (Mutsu, Otto Hahn, Savannah) finding themselves decommissioned or converted to diesel propulsion not due to accidents, but rather due to harbors refusing access on grounds of the propulsion, eventually leaving the Sevmorput as the sole survivor of this generation outside of vessels operated by the world's naval forces. These same naval forces have left a number of sunken nuclear-powered submarines scattered on the ocean floor, incidentally with no ill effects. Although there are still many details which we don't know yet about the KUN-24AP and its power plant, the TMSR-LF-derived MSR is likely designed to be highly automated, with the adding of fresh thorium salts and filtering out of gaseous and solid waste products not requiring human intervention or monitoring. 
Since the usual staffing of container ships already features a number of engineering crew members who keep an eye on the combustion engine and the other systems, this arrangement is likely to be maintained, with an unknown amount of (re)training to work with the new propulsion system required. With Samsung Heavy Industries, another heavy-shipping giant, already announcing its interest in 2021 for nuclear power plant technology based around a molten salt reactor, the day when container ships quietly float into harbors around the world with no exhaust gases might be sooner than we think, aided by a lot more acceptance from insurance companies and harbor operators than half a century ago. (Top image: the proposed KUN-24AP container ship, courtesy of CSSC) Good because a small handful of cargo ships outweigh the emissions of every single land vehicle on Earth combined. But they are going to have to have the steel to open up with chainguns on Somali boats approaching them (somehow I doubt the Chinese have any qualms with this) What is the definition of “small handful”? Probably over 10,000 ships (USA only), assuming my math is decent. The graph above shows that 100 tons per day is a reasonable average, so, that gives us about 200,000 lbs, or 33,333 gallons per day, or 12,166,666 gallons per year. The average USA car burns 489 gallons per year, which means that a container ship uses as much fuel as about 25,000 cars. But, because there are over 250 million cars in the USA, it would still take over 10,000 container ships to offset that. (Some numbers were pulled from Google and may not be accurate) There are tons of extra factors, such as ships having dirtier exhaust, or often running outside of the range on the graph, etc. This was just a rough estimation. How many ships are there (USA only) that exist? I thought bunker fuel was (literally) tons dirtier than an equivalent amount of gasoline or diesel? 
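The back-of-the-envelope comparison in the comment above can be reproduced in a few lines. Every input below is the commenter's own ballpark figure (the 6 lb/gallon density is implied by their 200,000 lb to 33,333 gal conversion), not measured data, so the result only checks the arithmetic, not the underlying claim.

```python
# Reproduce the comment's estimate: how many 100-ton/day container
# ships match the fuel burn of the US passenger-car fleet?
# All inputs are the commenter's rough figures.
LBS_PER_TON = 2000
LBS_PER_GALLON = 6          # implied by 200,000 lb -> 33,333 gal

ship_tons_per_day = 100
ship_gal_per_day = ship_tons_per_day * LBS_PER_TON / LBS_PER_GALLON
ship_gal_per_year = ship_gal_per_day * 365

car_gal_per_year = 489      # commenter's average US car figure
cars_per_ship = ship_gal_per_year / car_gal_per_year

us_cars = 250e6
ships_to_match_fleet = us_cars / cars_per_ship

print(round(ship_gal_per_day), round(cars_per_ship), round(ships_to_match_fleet))
```

Running this confirms the comment's chain: roughly 33,333 gallons per ship per day, about 25,000 cars' worth of fuel per ship, and on the order of 10,000 such ships to offset the whole fleet.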
I'd think that's more due to bunker fuel burning with lots of soot, i.e. particulates, and sulfur emissions. It doesn't help that there are way fewer mandates on sea than on land, and therefore companies tend to not filter much (if at all) of their emissions. Modern truck diesels would be sooty too if it weren't for mandatory particulate filters, and emit lots of sulfur dioxide (causing acid rain) if it weren't for mandatory sulfur removal during fuel production. Regulation has come a long way in the past 50 years, at least on land. Bunker oil is a heavy fuel oil (HFO) used by large ships because it is much less expensive than lighter and cleaner fuels. Its viscosity is similar to tar and it poses significant environmental risks. However, shippers pay a price with onboard settling tanks and centrifugal separators for purification. Alfalaval, Mitsubishi and Westfalia are some of the suppliers of this purification equipment. In addition to reducing carbon emissions, these purification systems can significantly improve engine life by removing some abrasive material from the HFO. The cost savings are so large that most shippers will temporarily halt operation of a ship if its purification system has a breakdown. This is despite the fact that cleaner fuels are available. Overnight, worldwide service networks support these systems. HFO is so viscous and dirty that it is banned as a fuel source for ships traveling in the Antarctic https://www.youtube.com/watch?v=2HI_dsnKRtg https://en.wikipedia.org/wiki/Heavy_fuel_oil To be fair, today's cargo ships are relatively efficient when it comes to moving those 25,000 cars (or whatever cargo) across the ocean. Improvements certainly could be made (including finding ways to ship less, of course), but in grams of CO2 per tonne-km, it's hard to beat a large container ship. Rail is probably better though getting stuff across an ocean would make that a challenging choice. Maybe ammonia. 
https://www.canarymedia.com/articles/sea-transport/the-race-is-on-to-build-the-worlds-first-ammonia-powered-ship "Good because a small handful of cargo ships outweigh the emissions of every single land vehicle on Earth combined." Seriously doubt the "small handful" part. BTW, hurray for China. /s China is building six times more new coal plants than other countries, report finds March 2, 2023 https://www.npr.org/2023/03/02/1160441919/china-is-building-six-times-more-new-coal-plants-than-other-countries-report-fin "Everybody else is moving away from coal and China seems to be stepping on the gas," she says. "We saw that China has six times as much plants starting construction as the rest of the world combined." Winston Sterzel? On the one hand something really has to be done about marine emissions – living next to a port, I can taste when a few large ships are in. On the other hand, I'm not convinced anyone needs cheap imported tat enough for a nuclear container ship to be the answer. This is likely just a concept. China has far too many more important issues to focus on than a cargo ship with limited harbor access. Not to mention that a ship of this size is not going to be able to use the two canals mentioned. You run into many logistical issues even after you have it built and running. This feels like Soviet propaganda. We are so forward thinking! We put a reactor in a shipping container. Totally new, never been done before. Pay no mind to the practical implementation it has. This is a wonder of the technological world. Assuming the Somali pirates can catch up to them. Even then, a good pump and water nozzle should clean up nicely. The problem with trying to stop people boarding with a water hose is that a machine gun soon puts off the person with the hose, plus for every port the ship calls into you need a firearm license for each person that uses them. 
With a nuclear reactor you have a lot of steam, easy to dump down the ladder :) "the detour around Africa rather than via the Suez Canal could be massively shortened" It's not a "detour" if a nuclear powered cargo ship cannot even sail in certain waters (i.e., Suez Canal area, due to boomboom) They were referring to when the canal was closed and therefore forced all ships to make the expensive detour. Boomboom isn't really a concern, the issue is the kind of environmental disaster that could happen. Well it may be a concern in some European countries (especially Germany) with a strong Anti-Nuclear movement. And quite some more countries are in some form of trade war with China and may deny these ships access to their harbors/territorial waters for "safety concerns". This may in the end significantly limit the use cases for nuclear ships made in China. Well this is a braindead comment. There is no "boomboom" because it's Thorium-based molten salt reactor. It doesn't make anything used in nuclear weapons. I thought boomboom referred to the Houthi marauders. Yeah, a bit vague. yesyesyes I did mean boomboom of unfriendly people who can reach out to nearby boats. I wouldn't be too down on non-nuclear submarines and carriers – both are still very capable and can operate at vast distances from base, and as the range of a carrier is dictated by the range of the shortest ranged escort vessel anyway… So while a nuclear carrier can go a long way on its own it will never actually do so, and the support vessels are not nuclear powered (largely anyway). And nuclear vessels have a larger minimum size – big is great out in the deep waters, but makes resupply and operating near the coast much harder. 
Which is a problem that may bite this cargo ship too – if it's too big to dock in most places you have to hope there is enough trade between the few ports that are deep enough to take it – (which is already true for many of the larger cargo ships from my understanding, so bigger still may well be too far). yep, I read "Meanwhile, nuclear propulsion is literally the only way that a world power can project military might, as diesel-electric submarines and conventionally powered aircraft carriers lack the range and scale to be of much use." and laughed. nuclear subs are a special use case – and very useful for that – but non nuclear subs are both much cheaper and fine for many many things… Indeed, and at least in training exercises more than a few allied navy and their often diesel electric sub have humiliated a US fleet by sinking the carrier. Not really my area of expertise but pretty sure it has reportedly happened a few times and not just been the UK Royal Navy (which would for me be local news), but the Swedes and French I think the Aussies too have sneaked in, though some of these events are rather old now and some may have been nuclear subs – as I said not really something I care that much about. And on the front of projecting power there have been so many aircraft carriers that are/were not nuclear and have projected power a very long way from home and try telling the German Uboats of WWII they were short range… Can't stay submerged forever, but not like they don't have legs. "Indeed, and at least in training exercises more than a few allied navy and their often diesel electric sub have humiliated a US fleet by sinking the carrier." No, that has literally never happened. Some allied navies have "sank" a US carrier when the US designed the exercise to be so heavily weighted in the submarine's favour that it would be damn near impossible for it to fail to "sink" the carrier. Nobody was embarrassed by the inevitable outcome; it was good training for everyone involved. 
Are you saying the Swedish submarine rented by the US Navy didn't manage to "sink" the carrier in the wargame scenario it participated in? From what I recall, the yanks couldn't even find the thing until the swedes popped up to the surface, 700 yards from the carrier. A suitable thorium reactor will fit into any present-day Panamax or Suezmax sized container ship without issue. "And nuclear vessels have a larger minimum size – big is great out in the deep waters" "Big" doesn't necessarily mean "deep draft" or especially deep draft. They didn't tell us what it would be, but one can even imagine a surface effects craft actually able to come up on land, no harbor required, maybe just a good mudflat or marsh. >"Big" doesn't necessarily mean "deep draft" True, but the two do have a correlation. But even in the case where your draft is unchanged bigger doesn't always work – dock infrastructure is only so big, the inlets and turns you must take to manoeuvre while in closer to the coast become harder or even impossible. If you want to build a nuclear powered landing craft you probably could. But at the same time being bigger there with the usual military context of landing craft isn't a great idea, putting too many eggs in that one basket while also limiting just how many beaches you could land at. 
A not-so-incidental side effect, by the way, would be a line of commercialised modular small thorium reactors for general-purpose electricity generation. Which could make electric cars work, and/or completely overhaul the 'grid'. Probably wind up having 'red power' only for charging cars like we have 'red diesel' only for farming. If they're going to make this thing larger than Panamax then why not make it an ice-breaking cargo ship? It would have the power to go through the Northwest Passage, size would not be a problem and it would be quicker to get from China to Europe or the east coast of America. Probably because the Northwest Passage is only usable during the summer, even for ice breakers. The length of time that it remains usable has been growing over the last couple decades, but it doesn't seem prudent to design your first nuclear cargo ship around a route which is problematic at the best of times. Great thing, especially them using Thorium. I'm just a bit concerned that it's designed, built and maintained in China. They don't have the best of track records for keeping things working and cleaning up after themselves. I wouldn't want a Chinese designed and built floating reactor anywhere near our coasts. The CCP is corrupt and will cut any corners that exist. Look into the collapsed "tofu-dreg" constructed buildings, unless the CCP and their shills have successfully scrubbed the evidence clean from the internet. Nuclear power for Chinese ship is good for planet, nuclear power for mains in USA is bad…why? When you find out please tell the rest of us. Beats me. The logic applied by some members of our society is indeed very confusing, convoluted and contradictory. You got me… Logic escapes me too. Nuclear is good 'everywhere'. The greenies should be more than happy to promote nuclear energy. Meanwhile keep the coal/gas flowing until we finally transition to the clean energy :) . That. Without cheap reliable energy there's no civilization to be worth its name. 
Without cheap reliable energy the transition to cheap reliable green energy will be rather painful. Few politicos seem to understand this.

I am sorry, I really don’t like this, and I am NOT talking stupid politics. How safe is this thing? We have already had two meltdowns (remember your history lessons) of nuclear reactors with catastrophic consequences. And I am really afraid this thing – if it is the first of its kind – will have some problems that have to be sorted out, but we are talking a F*** NUCLEAR REACTOR, not some random device (that when it blows up might be a *local* disaster, but “only” a *local* one). Also I am really afraid some nasty people will get on this thing, mess with some buttons and/or put some explosives somewhere and/or other sabotage. No, please, don’t. Yes, CO2 is a big concern and ships also blow other nasty stuff into the air, but going nuclear? Yeah…

Why can’t we have nice things?

Cause they clash with the drapes.

LOL! (Thanks, that laugh helped clear some lung passages)

A molten salt reactor is incapable of melting down – the nuclear reaction does not produce enough heat on its own to maintain the salt in a molten state. If there’s ever an issue, an emergency freeze plug can have the power cut to it, allowing it to melt and drain the system out into a containment vessel where it will solidify and act to moderate and dampen the nuclear reaction. Even if that fails, the molten salt will solidify in place and won’t go critical.

This ^

There is no secure up/down on a ship…

Please educate us about the catastrophes that occurred. Because last time I checked, the number of people killed by the nuclear industry was dwarfed by the number killed in the fossil fuel industry, and that’s without counting climate change.

As if the number of fatalities were the only metric by which to measure the severity of an accident.
More generally, while it may be possible to design a nuclear reactor that is “safe”, the project to staff them with commensurately “safe” builders and operators has been stuck at Human Being rev 1.0 for a looong time.

Negative health effects among workers around nuclear reactors are also way lower than among those working around fossil fuel plants.

Sorry – it is not the first of its kind. The US DOE built a molten salt reactor at the lab in Tennessee. I suggest you do a little research on “thorium molten salt reactor” before extrapolating whatever you understand about uranium-water reactors. Note that some naval reactors have used liquid metal in the primary coolant loop. Naval reactor systems are not refueled but are removed as sealed units and reprocessed at a special facility.

One obvious failure mode is that the highly radioactive and water-soluble irradiated thorium salts get dumped/leak near the coast and wipe out an entire fishing ecosystem, perhaps affecting 20M people. Given China’s reliance on offshore fisheries this seems like a pretty poor choice for them.

They’ll just make sure the accidental leak happens along some Western country’s coast.

On the upside, if things go tits up with the reactor, all you have to do is pull the plug and sink the damned thing to keep it out of sight and mind.

Quote: “These same naval forces have left a number of sunken nuclear-powered submarines scattered on the ocean floor, incidentally with no ill effects.” Response: Go breathe asbestos dust all day for a week. Test your health a month after that. You are very likely to find no impact. Does that mean that asbestos is safe? A fire on a nuclear submarine under the ocean is a tragedy for the crew and people close to them; a fire on a nuclear container ship in a harbor could be a tragedy for everyone.

A molten salt reactor cannot melt down – it already is molten during operation. The salts used are nonflammable and nonvolatile.
The volatile fission products can be removed from the salt loop online during operation, keeping the inventory low. It’s not a pressurized water reactor that can fart mightily if the coolant loop is broken. But nukular baaaad.

Its main safety feature *is* meltdown: if you just let it sit in the reactor without cooling, the salt will become hot enough to become volatile. Dropping down into containment to spread out and cool quickly will be harder when down can become up. Assuming moderation fails, obviously. The plug method is nice for a static application, but reaching a high level of confidence that you don’t have a pool of salt reaching a very high equilibrium temperature after capsizing will be a lot harder. On a ship you will be forced to put more confidence in active safety.

It’s never too early to panic. Especially when you have a lot of Chicken Littles running around out there. :rolleyes:

Why are we not using airships like the one in Disney’s series TaleSpin? I want to be an air pirate, Arrrrrrr!

Ah yes, now I remember the Hindenburg, there was something, safety standards and so… :-)

Ships like that definitely need one of those guns that shoot missiles that get too close out of the air. It’s purely defensive, so there’s little to complain about. Of course, now thanks to the damn Ukrainians making it a thing, we also need an anti-‘water-drone’ device to complement that, although that can get tricky to ID as hostile or just a small boat sailing along. Maybe just a container. https://youtu.be/kyH47LHgeNc

Where will China dump the “cleaner” nuclear waste? And who will watch over it for the required 10,000 years?

This is not the strawman you’re looking for. As mentioned in the article already, an MSR is capable of very high burn-up due to the fuel being mixed in with the coolant, meaning that most of the transuranics and minor actinides that would normally be stored as ‘spent fuel’ are instead fissioned.
This means that an MSR can not only run on very little fresh fuel, but also produces a negligible amount of radioactive isotopes that it cannot breed into more fuel, such as radioactive xenon gas. This is all, however, very short-lived, on the order of minutes to months before it drops below background radiation levels.

Darn good answer, Maya! (Android substituted “darn” for what I wanted to write)

Does Android _really_ censor “damn” to “darn”? That would be fucking stupid.

The biggest problem with this design is the use of molten salts. At those temperatures most metals will react with the salts. Doesn’t bode well for long-term applications. If this was viable, we would already have molten salt reactors in commercial production.

Anything we do to try and save the planet, a volcanic eruption puts us back a few hundred years. EVs are a tax, pure and simple. AI should be solving self-charging, not self-driving. I am a manual wheelchair user and disabled driver, and electric cars are just not compatible with my life. I have modified a lot in my life.

Russia has a ‘graveyard’ of decaying, nuclear-powered icebreakers. What happens to thousands of decaying, radioactive container ships? Russia has frequently been caught dumping them in the Sea of Japan. Similarly with the pile of unwanted electric vehicles. What happens when they are no longer useful?

I am not qualified, in terms of either technical or economic knowledge, to give educated statements, but… Why don’t we start doing things that WORK**?

** I mean, work not only for a few years or decades, but (as a technology) sustainably.

Obviously we (humanity) found out we cannot continue using fossil fuels forever (of course we can for another 150 years, but already now it has stopped feeling cozy, hasn’t it?). And we are talking about commercial use. Commercial use is always tied to cost. Even now the (true) cost of nuclear power is much higher (and only doable with high subsidies) than using renewables.
Renewables may at the moment be no choice for marine vessels (except if you produce green hydrogen on land and use that). Think about the immediate cost of commercial nuclear-powered vessels. Safety protocols (would it really make you feel better knowing that the Chinese won’t care as much?). Maintenance. I am sure the IAEA would kick in. Don’t compare that to military use, as there: 1) cost is usually not a major factor, 2) possible radiation exposure is more easily defined as a soldier’s “job risk”. So I think for that reason alone it will not happen.

I think it was in the mid-70s that we internationally stopped dumping nuclear waste into the ocean (officially). In that context, the risks of the vessel *in use* have been discussed here. Still the problem with nuclear waste remains. To my knowledge, worldwide NO safe solution for storing nuclear waste (for 100,000s of years) has been found. There is NO company, NO government that can guarantee keeping it safe even only for centuries. Show me one company that has existed for millennia. Has anyone ever calculated the costs of keeping something safe for 100,000 years?

The Germans have a storage place (Gorleben, an evaluation site, cost until now 1.6 billion EUR) where they encased barrels of weakly radioactive waste molten into glass blocks, only to find out that after only 50 years water leaking in (into the supposedly forever-dry salt stock) had corroded the steel barrels.

Do you remember that, after the USSR breakdown, some Russian long-range missile bunkers were found to be guarded by 2 (!) soldiers stationed in a wooden hut? Do you remember that “the dome” built over the remains of the Chernobyl reactor (only affordable with large international financial help) will only last for about 100 years? Are you aware that the Japanese government has decided to dump the contaminated Fukushima waters into the ocean for “cost reasons”?
Three Mile Island (meltdown 1979) current status: today, the process of decommissioning Three Mile Island is still underway and, according to the NRC, will be finished in 2079.

So… it feels to me like fission is a bad loan… to our future. So in the long run, can we afford fossil? Can we afford fission? I hope we will be able to afford fusion in the future. But for now, let’s use what we have: water, wind (container sailing ships?), sun.
true
true
true
Since China State Shipbuilding Corporation (CSSC) unveiled its KUN-24AP containership at the Marintec China Expo in Shanghai in early December of 2023, the internet has been abuzz about it. Not jus…
2024-10-12 00:00:00
2023-12-26 00:00:00
https://hackaday.com/wp-…ntainer_ship.jpg
article
hackaday.com
Hackaday
null
null
24,341,005
https://learn.adacore.com/
learn.adacore.com
null
# LEARN.ADACORE.COM

# What is Ada and SPARK?

Ada is a state-of-the-art programming language that development teams worldwide are using for critical software: from microkernels and small-footprint, real-time embedded systems to large-scale enterprise applications, and everything in between. SPARK is a formally analyzable subset of Ada — and a toolset that brings mathematics-based confidence to software verification.

# Try Ada Now:

```ada
with Ada.Text_IO; use Ada.Text_IO;

procedure Learn is
   subtype Alphabet is Character range 'A' .. 'Z';
begin
   Put_Line ("Learning Ada from " & Alphabet'First & " to " & Alphabet'Last);
end Learn;
```

Check out the interactive courses and labs listed on the left side to learn more about Ada and SPARK.

- Introduction to Ada
- Advanced Journey With Ada
- Introduction to SPARK
- Introduction to Embedded Systems Programming
- What's New in Ada 2022
- Ada for the C++ or Java Developer
- Ada for the Embedded C Developer
- SPARK Ada for the MISRA C Developer
- Introduction to the GNAT Toolchain
- Guidelines for Safe and Secure Ada/SPARK

# E-books

Download the contents of the entire website as an e-book for offline reading. The following formats are available: PDF and EPUB. Alternatively, download individual courses and laboratories as e-books.

# Ada Training

**Get professional Ada training** from AdaCore. Experience has shown that Ada is an extremely learnable language and that programmers with basic knowledge in other languages can quickly get up to speed with Ada. For programmers who already have some Ada experience, AdaCore offers advanced courses in Ada and GNAT Pro/GNAT Studio designed to help developers get the most out of the technology.

# GNAT Academic Program

**Teachers and graduate students** who are interested in teaching or using Ada or SPARK can take advantage of AdaCore's GNAT Academic Program (GAP). GAP's primary objective is to help put Ada and SPARK at the forefront of university study by building a community of academic professionals.
GAP members receive a comprehensive toolset and professional support package specifically designed to provide the tools needed to teach and use Ada and SPARK in an academic setting. Best of all, AdaCore provides the GAP Package to eligible members at no cost. Register for membership today and join over 100 member universities in 35 countries currently teaching Ada and SPARK using GAP.
true
true
true
An interactive learning platform to teach the Ada and SPARK programming languages.
2024-10-12 00:00:00
2018-01-01 00:00:00
https://learn.adacore.co…rn_meta_img.jpeg
null
adacore.com
learn.adacore.com
null
null
22,896,393
https://arielmejia.dev/post/add-tailwindcss-to-vuecli-easy
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
27,607,099
https://www.jeffgeerling.com/blog/2021/monitor-your-internet-raspberry-pi
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
15,397,872
https://www.techinasia.com/uber-board-clips-kalanick-wings-clears-way-for-softbank-and-ipo
Tech in Asia
null
true
true
true
null
2024-10-12 00:00:00
2020-01-01 00:00:00
null
null
null
Tech in Asia
null
null
38,941,858
https://edition.cnn.com/2024/01/10/europe/germany-rail-strikes-farmers-protests-intl/index.html
German rail strikes and farmers protests cause disruption in Europe’s biggest economy | CNN Business
Nadine Schmidt; CNN
German commuters faced chaos on Wednesday as the country was hit with a three-day national rail strike, adding to travel disruption in Europe’s biggest economy where protesting farmers continued to block roads and highways.

Both cargo and passenger trains were affected by the rail strike, which led the main rail operator Deutsche Bahn (DB) to cancel thousands of trains, a press statement by DB said Wednesday. DB said that some 80% of long-distance services will be canceled, while regional lines will be affected to varying extents. During the strike, which will last until Friday, rail services will run on a heavily reduced emergency timetable.

“The strike by the train drivers’ union GDL has had a massive impact on train services in Germany,” DB spokeswoman Anja Broeker said in a video message posted on DB’s website Tuesday night. “We regret the restrictions and hope that many people who were unable to reschedule their journey will get to their destination.”

DB said that strike action would impact the travel plans of millions of commuters and urged people to postpone or cancel all non-essential travel. It is the third and largest strike by the drivers since their union took up negotiations with DB and other carriers in November last year.

Germany’s GDL union is demanding a reduction in working hours from 38 to 35 hours per week for shift workers, in addition to a pay increase of 555 euros ($606.62) per month and a one-off inflation compensation bonus of 3,000 euros. Rail operator DB has offered flexibility on working hours but has refused to reduce working hours without a pay cut.

The nationwide rail strikes come as German farmers vowed to ramp up their nationwide protests against the government’s planned cuts to fuel subsidies. Since the start of the week, farmers have been blocking numerous roads and highway entrances across the country with their tractors and have also held rallies in towns and cities, causing considerable disruption to traffic.
The protests saw several hundred agricultural vehicles descend upon the German government district at Berlin’s iconic Brandenburg Gate on Monday. The German government hopes the cuts announced in December will help save €920 million ($1 billion), according to German public broadcaster Deutschlandfunk. On Monday, a German government spokesperson told a press briefing that the government is not planning on changing its plans despite the protests.

A group of German farmers attempted to convey their fury last week by blocking the Economy Minister Robert Habeck from exiting a ferry in northwest Germany. Habeck, who was traveling privately, was trapped in the ferry for several hours. The incident was condemned by the President of the Farmers’ Association, Joachim Rukwied, who called blockades of this nature a “no-go” in a Friday press release.

A rally organized in conjunction with the German freight industry has been announced for January 15 in Berlin, while multiple protests are planned across the country.

## Far-right concerns

Authorities have voiced concern that Germany’s far-right Alternative for Germany (AfD) party is capitalizing on the farmers’ protests to support its own agenda. CNN has seen footage of convoys of tractors and trucks, some adorned with protest banners and posters from the far-right AfD. Signs of the AfD hanging on tractors taking part in a protest against the cuts of vehicle subsidies read “our farmers first” and “Germany needs new elections.”

On social media, the controversial leader of the AfD in the eastern German state of Thuringia, Björn Höcke, launched an appeal on his Facebook page: “Fellow citizens, we will see you on the roads!” The politician is classified as an extremist by Germany’s Office for the Protection of the Constitution.
The AfD, which has been hitting record highs in polls and is currently scoring consistently above the three governing parties in the German government coalition, is hoping for major gains in three state elections this year. However, the AfD itself advocates the abolition of subsidies for farmers in its own party manifesto, even as it backs the protest, using it as proof of Germans’ dissatisfaction with Chancellor Olaf Scholz’s coalition government.

*CNN’s Inke Kappeler and Niamh Kennedy contributed reporting.*
true
true
true
German commuters faced chaos on Wednesday as the country was hit with a three-day national rail strike, adding to travel disruption in Europe’s biggest economy where protesting farmers continued to block roads and highways.
2024-10-12 00:00:00
2024-01-10 00:00:00
https://media.cnn.com/ap…9&q=w_800,c_fill
article
cnn.com
CNN
null
null
7,967,911
http://andlabs.lostsig.com/blog/2014/06/30/85/an-introduction-to-pointers-for-go-programmers-not-coming-from-c-family-languages
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
395,649
http://igbarb19.wordpress.com/2008/12/12/new-noble-peace-laureate/
New Nobel Peace Laureate
null
## New Nobel Peace Laureate

As I hope you already know: “The Norwegian Nobel Committee has decided to award the Nobel Peace Prize for 2008 to Martti Ahtisaari for his important efforts, on several continents and over more than three decades, to resolve international conflicts.” (Here is the full announcement from the Nobel Peace Prize site)

I would strongly encourage you to have a look at Mr. Ahtisaari’s Nobel lecture, delivered in Oslo on Dec. 10, 2008. There are many important and interesting observations in this address, so I will only mention a few that I found confirming and encouraging.

“…there tends to be too much focus on the mediators. With that we are disempowering the parties to the conflict and creating the wrong impression that peace comes from the outside. The only people that can make peace are the parties to the conflict, and just as they are responsible for the conflict and its consequences, so should they be given responsibility and recognition for the peace.” (NOTE: Mr. Ahtisaari was awarded the Prize for his activities as a mediator)

“Wars and conflicts are not inevitable. They are caused by human beings. There are always interests that are furthered by war. Therefore those who have power and influence can also stop them.”

“Peace is a question of will. All conflicts can be settled, and there are no excuses for allowing them to become eternal.”

I hope these few quotes have enticed you to read the whole lecture.
true
true
true
As I hope you already know: “The Norwegian Nobel Committee has decided to award the Nobel Peace Prize for 2008 to Martti Ahtisaari for his important efforts, on several continents and over mo…
2024-10-12 00:00:00
2008-12-12 00:00:00
https://s0.wp.com/i/blank.jpg
article
wordpress.com
IG's Peace Blog
null
null
23,200,213
https://www.confluent.io/blog/removing-zookeeper-dependency-in-kafka
Kafka Needs No Keeper - Removing ZooKeeper Dependency
Colin McCabe
Currently, Apache Kafka® uses Apache ZooKeeper™ to store its metadata. Data such as the location of partitions and the configuration of topics are stored outside of Kafka itself, in a separate ZooKeeper cluster. In 2019, we outlined a plan to break this dependency and bring metadata management into Kafka itself.

So what is the problem with ZooKeeper? Actually, the problem is not with ZooKeeper itself but with the concept of external metadata management. Having two systems leads to a lot of duplication. Kafka, after all, is a replicated distributed log with a pub/sub API on top. ZooKeeper is a replicated distributed log with a filesystem API on top. Each has its own way of doing network communication, security, monitoring, and configuration. Having two systems roughly doubles the total complexity of the result for the operator. This leads to an unnecessarily steep learning curve and increases the risk of some misconfiguration causing a security breach.

Storing metadata externally is not very efficient. We run at least three additional Java processes, and sometimes more. In fact, we often see Kafka clusters with just as many ZooKeeper nodes as Kafka nodes! Additionally, the data in ZooKeeper also needs to be reflected on the Kafka controller, which leads to double caching.

Worse still, storing metadata externally limits Kafka’s scalability. When a Kafka cluster is starting up, or a new controller is being elected, the controller must load the full state of the cluster from ZooKeeper. As the amount of metadata grows, so does the length of this loading process. This limits the number of partitions that Kafka can store.

Finally, storing metadata externally opens up the possibility of the controller’s in-memory state becoming de-synchronized from the external state. The controller’s view of liveness—which brokers are in the cluster—can also diverge from ZooKeeper’s view.
KIP-500 outlines a better way of handling metadata in Kafka. You can think of this as “Kafka on Kafka,” since it involves storing Kafka’s metadata in Kafka itself rather than in an external system such as ZooKeeper.

In the post-KIP-500 world, metadata will be stored in a partition inside Kafka rather than in ZooKeeper. The controller will be the leader of this partition. There will be no external metadata system to configure and manage, just Kafka itself.

We will treat metadata as a log. Brokers that need the latest updates can read only the tail of the log. This is similar to how consumers that need the latest log entries only need to read the very end of the log, not the entire log. Brokers will also be able to persist their metadata caches across process restarts.

A Kafka cluster elects a controller node to manage partition leaders and cluster metadata. The more partitions and metadata we have, the more important controller scalability becomes. We would like to minimize the number of operations that require a time linearly proportional to the number of topics or partitions.

One such operation is controller failover. Currently, when Kafka elects a new controller, it needs to load the full cluster state before proceeding. As the amount of cluster metadata grows, this process takes longer and longer. In contrast, in the post-KIP-500 world, there will be several standby controllers that are ready to take over whenever the active controller goes away. These standby controllers are simply the other nodes in the Raft quorum of the metadata partition. This design ensures that we never need to go through a lengthy loading process when a new controller is elected.

KIP-500 will speed up topic creation and deletion. Currently, when a topic is created or deleted, the controller must reload the full list of all topic names in the cluster from ZooKeeper.
This is necessary because while ZooKeeper notifies us when the set of topics in the cluster has changed, it doesn’t tell us exactly which topics were added or removed. In contrast, creating or deleting a topic in the post-KIP-500 world will simply involve creating a new entry in the metadata partition, which is an O(1) operation.

Metadata scalability is a key part of scaling Kafka in the future. We expect that a single Kafka cluster will eventually be able to support a million partitions or more.

Several administrative tools shipped as part of the Kafka release still allow direct communication with ZooKeeper. Worse still, there are still one or two operations that can’t be done except through this direct ZooKeeper communication. We have been working hard to close these gaps. Soon, there will be a public Kafka API for every operation that previously required direct ZooKeeper access. We will also disable or remove the unnecessary --zookeeper flags in the next major release of Kafka.

In the post-KIP-500 world, the Kafka controller will store its metadata in a Kafka partition rather than in ZooKeeper. However, because the controller depends on this partition, the partition itself cannot depend on the controller for things like leader election. Instead, the nodes that manage this partition must implement a self-managed Raft quorum.

KIP-595: A Raft Protocol for the Metadata Quorum outlines how we will adapt the Raft protocol to Kafka so that it really feels like a native part of the system. This will involve changing the push-based model described in the Raft paper to a pull-based model, which is consistent with traditional Kafka replication. Rather than pushing out data to other nodes, the other nodes will connect to them. Similarly, we will use terminology consistent with Kafka rather than the original Raft paper—“epochs” instead of “terms,” and so forth.

The initial implementation will be focused on supporting the metadata partition.
It will not support the full range of operations that would be needed to convert regular partitions over to Raft. However, this is a topic we may return to in the future.

The most exciting part of this project, of course, is the ability to run without ZooKeeper, in “KIP-500 mode.” When Kafka is run in this mode, we will use a Raft quorum to store our metadata rather than ZooKeeper.

Initially, KIP-500 mode will be experimental. Most users will continue to use “legacy mode,” in which ZooKeeper is still in use. Partly, this is because KIP-500 mode will not support all possible features at first. Another reason is because we want to gain confidence in KIP-500 mode before making it the default. Finally, we will need time to perfect the upgrade process from legacy mode to KIP-500 mode.

Much of the work to enable KIP-500 mode will be in the controller. We must separate out the part of the controller that interacts with ZooKeeper from the part that implements more general-purpose logic such as replica set management. We need to define and implement more controller APIs to replace the communication mechanisms that currently involve ZooKeeper. One example of this is the new AlterIsr API. This API allows a replica to notify the controller of a change in the in-sync replica set without using ZooKeeper.

KIP-500 introduced the concept of a *bridge release* that can coexist with both pre- and post-KIP-500 versions of Kafka. Bridge releases are important because they enable zero-downtime upgrades to the post-ZooKeeper world. Users on an older version of Kafka simply upgrade to a bridge release. Then, they can perform a second upgrade to a release that lacks ZooKeeper. As its name suggests, the bridge release acts as a bridge into the new world.

So how does this work? Consider a cluster that is in a partially upgraded state, with several brokers on the bridge release and several brokers on a post-KIP-500 release. The controller will always be a post-KIP-500 broker.
In this cluster, brokers cannot rely on directly modifying ZooKeeper to announce changes they are making (such as a configuration change or an ACL change). The post-KIP-500 brokers would not receive such notifications because they are not listening on ZooKeeper. Only the controller is still interacting with ZooKeeper, by mirroring its changes to ZooKeeper. Therefore, in the bridge release, all the brokers except the controller must treat ZooKeeper as read-only (with some very limited exceptions).

For RPCs like IncrementalAlterConfigs, we simply need to ensure that the call is processed by the active controller. This is easy for new clients—they can simply send the calls there directly. For older clients, we need a redirection system running on the brokers that forwards the RPCs to the active controller, no matter which broker they initially end up on.

For RPCs that involve a complex interaction between the broker and the controller, we will need to create new controller APIs. One example is KIP-497, which specifies a new AlterIsrRequest API that allows brokers to request changes to partition in-sync replicas (ISRs). Replacing ad hoc ZooKeeper APIs with well-documented and supported RPCs has many of the same benefits as removing client-side ZooKeeper access did. Maintaining cross-version compatibility will be easier. For the special case of AlterIsrRequest, there will also be benefits to reducing the number of writes to ZooKeeper that a common operation requires.

Kafka is one of the most active Apache projects. It has been amazing to see the evolution in its architecture over the last few years. That evolution is not done yet, as projects like KIP-500 show. What I like most about KIP-500 is that it’s a simplification to the overall architecture—for administrators and developers alike. It will let us use the powerful abstraction of the event log for metadata handling. And it will finally prove that…Kafka needs no keeper.
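The “metadata as a log” idea described above can be sketched with a toy shell example. To be clear, the event names and format below are invented for illustration and are not Kafka's actual metadata record format: the point is only that every metadata change is an O(1) append, and a broker that is catching up reads just the tail of the log rather than reloading the full cluster state.

```shell
# Toy sketch only -- made-up event format, not Kafka's real records.
LOG=$(mktemp)
echo 'CREATE_TOPIC name=orders partitions=3' >> "$LOG"       # O(1) append
echo 'CREATE_TOPIC name=payments partitions=6' >> "$LOG"     # O(1) append
echo 'ALTER_ISR topic=orders partition=0 isr=1,2' >> "$LOG"  # O(1) append
# A broker that has already applied the first two records only needs
# the tail of the log -- here, everything from record 3 onward:
tail -n +3 "$LOG"
```

A fresh controller would replay the whole file once, and every follower then stays current by tailing it — the same access pattern Kafka consumers already use on ordinary partitions.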
To learn about other work that is happening to make Kafka elastically scalable, check out the following:
true
true
true
Say goodbye to Kafka ZooKeeper dependency! KIP-500 introduces a new way of storing data in Kafka itself, rather than in external systems such as ZooKeeper.
2024-10-12 00:00:00
2020-05-15 00:00:00
https://cdn.confluent.io…-logo-meadow.png
website
confluent.io
Confluent
null
null
28,853,800
https://www.embeddedcomputing.com/technology/iot/embedded-toolbox-connect-iot-pocs-in-four-minutes-flat
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
16,245,780
https://blog.alexellis.io/your-instant-kubernetes-cluster/
Your instant Kubernetes cluster
Alex Ellis
This is a condensed and updated version of my previous tutorial Kubernetes in 10 minutes. I've removed just about everything I can so this guide still makes sense. Use it when you want to create a cluster on the cloud or on-premises as fast as possible. ## 1.0 Pick a host We will be using Ubuntu 16.04 for this guide so that you can copy/paste all the instructions. Here are several environments where I've tested this guide. Just pick where you want to run your hosts. - DigitalOcean - developer cloud - Civo - UK developer cloud - Equinix Metal - bare metal cloud - 2x Dell Intel i7 boxes - at home Civo is a relatively new developer cloud and one thing that I really liked was how quickly they can bring up hosts - in about 25 seconds. I'm based in the UK so I also get very low latency. ## 1.1 Provision the machines You can get away with a single host for testing but I'd recommend at least three so we have a single master and two worker nodes. Here are some other guidelines: - Pick dual-core hosts with ideally at least 2GB RAM - If you can pick a custom username when provisioning the host then do that rather than root. For example Civo offers an option of `ubuntu` ,`civo` or`root` . Now run through the following steps on each machine. It should take you less than 5-10 minutes. If that's too slow for you then you can use my utility script kept in a Gist: ``` $ curl -sL https://gist.githubusercontent.com/alexellis/e8bbec45c75ea38da5547746c0ca4b0c/raw/23fc4cd13910eac646b13c4f8812bab3eeebab4c/configure.sh | sh ``` ## 1.2 Login and install Docker Install Docker from the Ubuntu apt repository. This will be an older version of Docker but as Kubernetes is tested with old versions of Docker it will work in our favour. ``` $ sudo apt-get update \ && sudo apt-get install -qy docker.io ``` ## 1.3 Disable the swap file This is now a mandatory step for Kubernetes. The easiest way to do this is to edit `/etc/fstab` and to comment out the line referring to swap. 
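If you'd rather do this non-interactively, a small sed sketch can comment out the swap entry (this assumes the fstab line's type field is literally `swap`; keep a backup in case your layout differs):

```shell
# Back up fstab, then comment out any whitespace-delimited "swap" entries
# so swap stays disabled across reboots.
sudo cp /etc/fstab /etc/fstab.bak
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
```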
To save a reboot then type in `sudo swapoff -a` . Disabling swap memory may appear like a strange requirement at first. If you are curious about this step then read more here. ## 1.4 Install Kubernetes packages ``` $ sudo apt-get update \ && sudo apt-get install -y apt-transport-https \ && curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - $ echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" \ | sudo tee -a /etc/apt/sources.list.d/kubernetes.list \ && sudo apt-get update $ sudo apt-get update \ && sudo apt-get install -y \ kubelet \ kubeadm \ kubernetes-cni ``` ## 1.5 Create the cluster At this point we create the cluster by initiating the master with `kubeadm` . Only do this on the master node. Despite any warnings I have been assured by Weaveworks and Lucas (the maintainer) that `kubeadm` is suitable for production use. ``` $ sudo kubeadm init ``` If you missed a step or there's a problem then `kubeadm` will let you know at this point. Take a copy of the Kube config: ``` mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config ``` Make sure you note down the join token command i.e. ``` $ sudo kubeadm join --token c30633.d178035db2b4bb9a 10.0.0.5:6443 --discovery-token-ca-cert-hash sha256:<hash> ``` ## 2.0 Install networking Many networking providers are available for Kubernetes, but none are included by default, so let's use Weave Net from Weaveworks which is one of the most popular options in the Kubernetes community. It tends to work out of the box without additional configuration. ``` $ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')" ``` If you have private networking enabled on your host then you may need to alter the private subnet that Weavenet uses for allocating IP addresses to Pods (containers). 
Here's an example of how to do that: ``` $ curl -SL "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')&env.IPALLOC_RANGE=172.16.6.64/27" \ | kubectl apply -f - ``` Weave also have a very cool visualisation tool called Weave Cloud. It's free and will show you the path traffic is taking between your Pods. See here for an example with the OpenFaaS project. ## 2.2 Join the worker nodes to the cluster Now you can switch to each of your workers and use the `kubeadm join` command from 1.5. Once you run that, log out of the workers. ## 3.0 Profit That's it - we're done. You have a cluster up and running and can deploy your applications. If you need to set up a dashboard UI then consult the Kubernetes documentation. ``` $ kubectl get nodes NAME STATUS ROLES AGE VERSION openfaas1 Ready master 20m v1.9.2 openfaas2 Ready <none> 19m v1.9.2 openfaas3 Ready <none> 19m v1.9.2 ``` If you want to see me running through creating a cluster step-by-step and showing you how `kubectl` works then check out my video below and make sure you subscribe. You can also get an "instant" Kubernetes cluster on your Mac for development using Minikube or Docker for Mac Edge edition. Read my review and first impressions here. ## 4.0 Keep learning You can keep learning and get ahead with weekly emails from me as part of a GitHub Sponsorship. Sign up below to learn more about Kubernetes, Raspberry Pi and Docker from me.
true
true
true
You're learning Kubernetes or need a cluster fast to test your application. This is my "instant" guide that condenses down and updates my 10 minute guide.
2024-10-12 00:00:00
2018-01-27 00:00:00
https://blog.alexellis.i…-394377--1-.jpeg
article
alexellis.io
Alex Ellis' Blog
null
null
34,993,452
https://www.lastweekinaws.com/blog/aws-is-asleep-at-the-lambda-wheel/
AWS is Asleep at the Lambda Wheel
Corey Quinn
Countless volumes have been written about the various benefits of serverless, a task made even easier by it being such a squishy, nebulous term that’s come to mean basically whatever the author wants it to mean. This has been a boon for AWS’s product teams, who’ve gone from creating services that are clearly serverless such as DynamoDB, Route 53, IAM, and others to instead slapping the “serverless” moniker on things that are clearly not very serverless at all, like OpenSearch and Aurora. One service that is very clearly serverless is Lambda, AWS’s Function as a Service. It epitomizes the best of a moving target with respect to “what defines serverless.” It scales down to zero, you only pay for what you use, it’s massively event driven, and at least in theory AWS manages the care and feeding of the service so the only thing you have to worry about is your own business logic. Unfortunately, as of this writing AWS has apparently gone out for lunch and forgotten to return for several quarters to fulfill their part of the serverless bargain. ## Serverless Abdication AWS Lambda has various supported “runtimes,” which are language-specific function environments that include various versions of supported programming languages. The one I want to talk about today is the language runtime I spend the most time working with: Python. Version 3.10 of the Python programming language was released to General Availability in October of 2021. Now in the closing hours of February of 2023 there have been ten further releases of the 3.10 major version; Python 3.10.10 is the current generally available branch of Python 3.10. Python 3.11 was released to general availability in October of 2022, with its current stable version (as of this writing; I’m not going to go back and maintain this every few weeks!) of 3.11.2 released in early February of 2023. Python 3.12 is the version that’s currently in development. 
I bring this up not because I’ve passed the keyboard over to everyone’s favorite vampire, Sesame Street’s Count Von Count to indulge his love affair with counting programming language version numbers, but rather because the current state of the art of Lambda’s managed Python runtime is version 3.9 and has been since August of 2021. This has gone from “okay, it’s taking a bit of time” to folks having to actively do extra work to make up for AWS’s lack of velocity on keeping current with Python. A GitHub Issue asking for Python 3.10 support started off very politely / reasonably in December of 2021. It got slightly more heated in May of 2022 when Python 3.9 entered the “security only” portion of its lifecycle. In just over a month from when this publishes, Python 3.10 will enter the “security only” lifecycle phase, and is tracking to not have had an officially supported Lambda runtime at all during its moment in the sun. ## What are you doing over there, AWS? I’m not bringing this up because I’m looking to taunt AWS about things out of a misplaced sense of pettiness; if I wanted to play those games I’d simply point out that Amazon Linux 2022 is still in the Release Candidate stage and has been relabeled to “Amazon Linux 2023” in the hopes that nobody notices just how far behind schedule it is. *That* would be petty, because it’s ultimately AWS’s own distribution of Linux, and they can release it when they’re damned well ready to. Lambda is different; it’s an “AWS manages the moving parts for you” service. That’s how it was presented, that’s the explicit contract AWS has made with us in the spirit of the Shared Responsibility Model. Python 3.11 is on average 25% faster than its predecessor, with a host of enhancements and changes to the language. I personally have had to *refactor code* to run on Python 3.9 so that I could turn something into a Lambda function. 
There’s nothing particularly special about Python that would explain this delay; Google Cloud Functions and Azure Functions both support Python 3.10. A number of folks in the GitHub Issue have reported success along with instructions to build a custom runtime that supports modern Python versions as well. It’s clear that there’s no technical blocker preventing AWS from supporting the language, leading me to the conclusion that it’s instead either a lack of leadership, or a lack of will–which effectively reduces down to the same thing. ## This is Important The Go runtime is so old that it doesn’t support Graviton-based Lambdas, and the Ruby runtime is likewise fairly moribund. But AWS announced at re:Invent 2020 that Lambda had over 1 million customers, and the 2022 Datadog State of Serverless report indicates that Python is the single most widely used Lambda language, slightly edging out Node.js. This is a problem that impacts virtually every AWS customer. ## Doing It Ourselves Is Not an Answer We’ve been able to use custom runtimes ourselves for a while, and via the magic of Docker container images we can basically make Lambda run whatever the heck we want it to. Unfortunately, building our own Python runtimes at home isn’t sustainable as a customer; we’d all universally be signing up for a massive pile of work, as we’d be taking on the burden of keeping those runtimes patched and updated ourselves. The LTS version of Ubuntu uses Python 3.10. AWS’s major competitors all support Python 3.10 at a minimum. It’s long past time for AWS to either ship the language runtime customers are (quite reasonably!) clamoring for, or else offer up better transparency than the “we’re working on it” stonewalling that we’ve gotten for over a year. AWS Lambda’s value proposition was and remains that it removes undifferentiated heavy lifting for customers. Please lift this heavy, undifferentiated burden for us.
true
true
true
Countless volumes have been written about the various benefits of serverless, a task made even easier by it being such a squishy, nebulous term that's come to mean basically whatever the author wants it to mean. This has been a boon for AWS's product teams, who've gone from creating services that are clearly serverless such as DynamoDB, Route 53, IAM, and others to instead slapping the "serverless" moniker on things that are clearly not very serverless at all, like OpenSearch and Aurora.
2024-10-12 00:00:00
2023-03-01 00:00:00
https://www.lastweekinaw…870_l-scaled.jpg
article
lastweekinaws.com
Last Week in AWS
null
null
22,970,040
https://www.esquire.com/news-politics/politics/a32268591/smithfield-foods-coronavirus-outbreak-worker-lawsuit/
We're Back in 'The Jungle'
Charles P Pierce
*Tout les ‘Toobz* were abuzz last week over George Packer’s jeremiad in *The Atlantic* that seamlessly roots our current situation in systemic low-level national crises that otherwise had been overlooked and/or denied. Packer correctly calls these “underlying conditions,” using the medical term-of-art currently in vogue when describing how certain populations were uniquely vulnerable to the pandemic due to preexisting health problems. An example of what Packer was writing about can be found in *The New York Times*. But as the coronavirus pandemic has emerged, workers say they have encountered another health complication: reluctance to cover their mouths while coughing or to clean their faces after sneezing, because this can cause them to miss a piece of meat as it goes by, creating a risk of disciplinary action. The claims appear in a complaint filed Thursday in federal court by an anonymous Smithfield worker and the Rural Community Workers Alliance, a local advocacy group whose leadership council includes several other Smithfield workers. It should surprise approximately nobody that meat-processing plants are hellholes for the people who work there. Neither should anyone be shocked that they’ve turned out to be vectors for all manner of disease, including this particular pandemic. Regulations are a joke, where they exist at all. The industry depends on low-wage employees, where it doesn’t depend on undocumented workers, who don’t dare complain. So the pandemic hits, and the industry can respond with nothing more than knuckling people for trying to keep from sneezing on the cutlets. This isn’t late-period capitalism. It’s what the whole industrial economy used to be. This is a return to Upton Sinclair’s *The Jungle*, in which the author wrote: Preventable diseases kill off half our population.
And even if science were allowed to try, it could do little, because the majority of human beings are not yet human beings at all, but simply machines for the creating of wealth for others. They are penned up in filthy houses and left to rot and stew in misery, and the conditions of their life make them ill faster than all the doctors in the world could heal them; and so, of course, they remain as centers of contagion, poisoning the lives of all of us, and making happiness impossible for even the most selfish. The lawsuit in Missouri seeks to put this principle into action in a unique way, dusting off an old law and shining it up to meet new circumstances. The court complaint about the Smithfield pork plant in Missouri, which is not unionized, says workers are typically required to stand almost shoulder to shoulder, must often go hours without being able to clean or sanitize their hands, and have difficulty taking sick leave... Beyond seeking to make workers safer, the complaint about the plant in Milan, Mo., is testing whether public nuisance law dating back hundreds of years can be used to protect workers on the job. The plaintiffs argue that Smithfield, by failing to take adequate safety measures, risks a coronavirus outbreak that could quickly spread to the entire community. Changing presidents is a baby step.
true
true
true
The coronavirus fiasco at Smithfield Foods is not a symptom of late-stage capitalism—it's a regression to the past, and the world of Upton Sinclair.
2024-10-12 00:00:00
2020-04-24 00:00:00
https://hips.hearstapps.…xh&resize=1200:*
article
esquire.com
Esquire
null
null
129,239
http://earththesequel.edf.org/book.sampleChapter
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
11,528,457
https://github.com/aymericdamien/TensorFlow-Examples/tree/master
GitHub - aymericdamien/TensorFlow-Examples: TensorFlow Tutorial and Examples for Beginners (support TF v1 & v2)
Aymericdamien
This tutorial was designed for easily diving into TensorFlow, through examples. For readability, it includes both notebooks and source codes with explanation, for both TF v1 & v2. It is suitable for beginners who want to find clear and concise examples about TensorFlow. Besides the traditional 'raw' TensorFlow implementations, you can also find the latest TensorFlow API practices (such as `layers`, `estimator`, `dataset`, ...).

**Update (05/16/2020):** Moving all default examples to TF2. For TF v1 examples: check here.

- **Hello World** (notebook). Very simple example to learn how to print "hello world" using TensorFlow 2.0+.
- **Basic Operations** (notebook). A simple example that covers TensorFlow 2.0+ basic operations.
- **Linear Regression** (notebook). Implement a Linear Regression with TensorFlow 2.0+.
- **Logistic Regression** (notebook). Implement a Logistic Regression with TensorFlow 2.0+.
- **Word2Vec (Word Embedding)** (notebook). Build a Word Embedding Model (Word2Vec) from Wikipedia data, with TensorFlow 2.0+.
- **GBDT (Gradient Boosted Decision Trees)** (notebooks). Implement a Gradient Boosted Decision Trees with TensorFlow 2.0+ to predict house value using Boston Housing dataset.
- **Simple Neural Network** (notebook). Use TensorFlow 2.0 'layers' and 'model' API to build a simple neural network to classify MNIST digits dataset.
- **Simple Neural Network (low-level)** (notebook). Raw implementation of a simple neural network to classify MNIST digits dataset.
- **Convolutional Neural Network** (notebook). Use TensorFlow 2.0+ 'layers' and 'model' API to build a convolutional neural network to classify MNIST digits dataset.
- **Convolutional Neural Network (low-level)** (notebook). Raw implementation of a convolutional neural network to classify MNIST digits dataset.
- **Recurrent Neural Network (LSTM)** (notebook). Build a recurrent neural network (LSTM) to classify MNIST digits dataset, using TensorFlow 2.0 'layers' and 'model' API.
- **Bi-directional Recurrent Neural Network (LSTM)** (notebook). Build a bi-directional recurrent neural network (LSTM) to classify MNIST digits dataset, using TensorFlow 2.0+ 'layers' and 'model' API.
- **Dynamic Recurrent Neural Network (LSTM)** (notebook). Build a recurrent neural network (LSTM) that performs dynamic calculation to classify sequences of variable length, using TensorFlow 2.0+ 'layers' and 'model' API.
- **Auto-Encoder** (notebook). Build an auto-encoder to encode an image to a lower dimension and re-construct it.
- **DCGAN (Deep Convolutional Generative Adversarial Networks)** (notebook). Build a Deep Convolutional Generative Adversarial Network (DCGAN) to generate images from noise.
- **Save and Restore a model** (notebook). Save and Restore a model with TensorFlow 2.0+.
- **Build Custom Layers & Modules** (notebook). Learn how to build your own layers / modules and integrate them into TensorFlow 2.0+ Models.
- **Tensorboard** (notebook). Track and visualize neural network computation graph, metrics, weights and more using TensorFlow 2.0+ tensorboard.
- **Load and Parse data** (notebook). Build efficient data pipeline with TensorFlow 2.0 (Numpy arrays, Images, CSV files, custom data, ...).
- **Build and Load TFRecords** (notebook). Convert data into TFRecords format, and load them with TensorFlow 2.0+.
- **Image Transformation (i.e. Image Augmentation)** (notebook). Apply various image augmentation techniques with TensorFlow 2.0+, to generate distorted images for training.
- **Multi-GPU Training** (notebook). Train a convolutional neural network with multiple GPUs on CIFAR-10 dataset.

The tutorial index for TF v1 is available here: TensorFlow v1.15 Examples. Or see below for a list of the examples.

Some examples require MNIST dataset for training and testing. Don't worry, this dataset will automatically be downloaded when running examples. MNIST is a database of handwritten digits; for a quick description of that dataset, you can check this notebook. Official website: http://yann.lecun.com/exdb/mnist/.

To download all the examples, simply clone this repository:

```
git clone https://github.com/aymericdamien/TensorFlow-Examples
```

To run them, you also need the latest version of TensorFlow. To install it:

```
pip install tensorflow
```

or (with GPU support):

```
pip install tensorflow_gpu
```

For more details about TensorFlow installation, you can check TensorFlow Installation Guide.

The tutorial index for TF v1 is available here: TensorFlow v1.15 Examples.

- **Hello World** (notebook) (code). Very simple example to learn how to print "hello world" using TensorFlow.
- **Basic Operations** (notebook) (code). A simple example that covers TensorFlow basic operations.
- **TensorFlow Eager API basics** (notebook) (code). Get started with TensorFlow's Eager API.
- **Linear Regression** (notebook) (code). Implement a Linear Regression with TensorFlow.
- **Linear Regression (eager api)** (notebook) (code). Implement a Linear Regression using TensorFlow's Eager API.
- **Logistic Regression** (notebook) (code). Implement a Logistic Regression with TensorFlow.
- **Logistic Regression (eager api)** (notebook) (code). Implement a Logistic Regression using TensorFlow's Eager API.
- **Nearest Neighbor** (notebook) (code). Implement Nearest Neighbor algorithm with TensorFlow.
- **K-Means** (notebook) (code). Build a K-Means classifier with TensorFlow.
- **Random Forest** (notebook) (code). Build a Random Forest classifier with TensorFlow.
- **Gradient Boosted Decision Tree (GBDT)** (notebook) (code). Build a Gradient Boosted Decision Tree (GBDT) with TensorFlow.
- **Word2Vec (Word Embedding)** (notebook) (code). Build a Word Embedding Model (Word2Vec) from Wikipedia data, with TensorFlow.
- **Simple Neural Network** (notebook) (code). Build a simple neural network (a.k.a Multi-layer Perceptron) to classify MNIST digits dataset. Raw TensorFlow implementation.
- **Simple Neural Network (tf.layers/estimator api)** (notebook) (code). Use TensorFlow 'layers' and 'estimator' API to build a simple neural network (a.k.a Multi-layer Perceptron) to classify MNIST digits dataset.
- **Simple Neural Network (eager api)** (notebook) (code). Use TensorFlow Eager API to build a simple neural network (a.k.a Multi-layer Perceptron) to classify MNIST digits dataset.
- **Convolutional Neural Network** (notebook) (code). Build a convolutional neural network to classify MNIST digits dataset. Raw TensorFlow implementation.
- **Convolutional Neural Network (tf.layers/estimator api)** (notebook) (code). Use TensorFlow 'layers' and 'estimator' API to build a convolutional neural network to classify MNIST digits dataset.
- **Recurrent Neural Network (LSTM)** (notebook) (code). Build a recurrent neural network (LSTM) to classify MNIST digits dataset.
- **Bi-directional Recurrent Neural Network (LSTM)** (notebook) (code). Build a bi-directional recurrent neural network (LSTM) to classify MNIST digits dataset.
- **Dynamic Recurrent Neural Network (LSTM)** (notebook) (code). Build a recurrent neural network (LSTM) that performs dynamic calculation to classify sequences of different length.
- **Auto-Encoder** (notebook) (code). Build an auto-encoder to encode an image to a lower dimension and re-construct it.
- **Variational Auto-Encoder** (notebook) (code). Build a variational auto-encoder (VAE), to encode and generate images from noise.
- **GAN (Generative Adversarial Networks)** (notebook) (code). Build a Generative Adversarial Network (GAN) to generate images from noise.
- **DCGAN (Deep Convolutional Generative Adversarial Networks)** (notebook) (code). Build a Deep Convolutional Generative Adversarial Network (DCGAN) to generate images from noise.
- **Save and Restore a model** (notebook) (code). Save and Restore a model with TensorFlow.
- **Tensorboard - Graph and loss visualization** (notebook) (code). Use Tensorboard to visualize the computation Graph and plot the loss.
- **Tensorboard - Advanced visualization** (notebook) (code). Going deeper into Tensorboard; visualize the variables, gradients, and more...
- **Build an image dataset** (notebook) (code). Build your own images dataset with TensorFlow data queues, from image folders or a dataset file.
- **TensorFlow Dataset API** (notebook) (code). Introducing TensorFlow Dataset API for optimizing the input data pipeline.
- **Load and Parse data** (notebook). Build efficient data pipeline (Numpy arrays, Images, CSV files, custom data, ...).
- **Build and Load TFRecords** (notebook). Convert data into TFRecords format, and load them.
- **Image Transformation (i.e. Image Augmentation)** (notebook). Apply various image augmentation techniques, to generate distorted images for training.
- **Basic Operations on multi-GPU** (notebook) (code). A simple example to introduce multi-GPU in TensorFlow.
- **Train a Neural Network on multi-GPU** (notebook) (code). A clear and simple TensorFlow implementation to train a convolutional neural network on multiple GPUs.

The following examples are coming from TFLearn, a library that provides a simplified interface for TensorFlow. You can have a look, there are many examples and pre-built operations and layers.

- TFLearn Quickstart. Learn the basics of TFLearn through a concrete machine learning task. Build and train a deep neural network classifier.
- TFLearn Examples. A large collection of examples using TFLearn.
true
true
true
TensorFlow Tutorial and Examples for Beginners (support TF v1 & v2) - aymericdamien/TensorFlow-Examples
2024-10-12 00:00:00
2015-11-11 00:00:00
https://opengraph.githubassets.com/ffa53ea1fd99ff26eb79a1cff0172fa4b14a97db51883f6ffb73e05f1fa3b5a3/aymericdamien/TensorFlow-Examples
object
github.com
GitHub
null
null
3,929,731
http://www.worldslongestinvoice.com/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
14,554,552
https://www.alphabot.com/security/blog/2017/net/How-to-configure-Json.NET-to-create-a-vulnerable-web-API.html
How to configure Json.NET to create a vulnerable web API - Alphabot Security
null
13 Jun 2017 | Peter Stöckli # How to configure Json.NET to create a vulnerable web API *tl;dr* No, of course, you don’t want to create a vulnerable JSON API. So when using Json.NET: don’t use any TypeNameHandling setting other than the default, `TypeNameHandling.None`. ## Intro In May 2017 Moritz Bechler published his MarshalSec paper, where he gives an in-depth look at remote code execution (RCE) through various Java serialization/marshaller libraries like Jackson and XStream. In the conclusion of the detailed paper, he mentions that this kind of exploitation is not limited to Java but might also be possible in the .NET world through the Json.NET library. Newtonsoft’s Json.NET is one of the most popular .NET libraries and allows deserializing JSON into .NET classes (C#, VB.NET). So we had a look at Newtonsoft.Json and indeed found a way to create a web application that allows remote code execution via a JSON-based REST API. For the rest of this post we will show you how to create such a simple vulnerable application and explain how the exploitation works. It is important to note that these kinds of vulnerabilities in web applications are most of the time not vulnerabilities in the serializer libraries but configuration mistakes. The idea is of course to raise awareness with developers to prevent such flaws in real .NET web applications. ## The sample application The following hypothetical ASP.NET Core sample application was tested with .NET Core 1.1. For other .NET framework versions slightly different JSONs might be necessary. ## TypeNameHandling The key to making our application vulnerable to “deserialization of untrusted data” is to enable type name handling in the SerializerSettings of Json.NET. This tells Json.NET to write type information in the field “$type” of the resulting JSON and to look at that field when deserializing.
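For concreteness, enabling this setting globally in ASP.NET Core might look roughly like the following sketch (hypothetical wiring via MVC's `AddJsonOptions`; the original post's Startup.cs snippet did not survive extraction, so this illustrates the dangerous configuration rather than reproducing the author's exact code):

```
// WARNING: deliberately insecure configuration, shown for illustration only.
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc().AddJsonOptions(options =>
    {
        // Any value other than TypeNameHandling.None lets attacker-supplied
        // "$type" fields control which .NET type gets instantiated.
        options.SerializerSettings.TypeNameHandling = TypeNameHandling.All;
    });
}
```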
In our sample application we set this SerializerSettings globally in the *ConfigureServices* method in Startup.cs: Following TypeNameHandlings are vulnerable against this attack: ``` TypeNameHandling.All TypeNameHandling.Auto TypeNameHandling.Arrays TypeNameHandling.Objects ``` In fact the only kind that is not vulnerable is the default: `TypeNameHandling.None` The official Json.NET TypeNameHandling documentation explicitly warns about this: TypeNameHandling should be used with caution when your application deserializes JSON from an external source. Incoming types should be validated with a custom SerializationBinder when deserializing with a value other than None. But as the MarshalSec paper points out: not all developers read the documentation of the libraries they’re using. ## The REST web service To offer a remote attack possibility in our web application we created a small REST API that allows POSTing a JSON object. ``` [..] [HttpPost] public IActionResult Post([FromBody]Info value) { if (value == null) { return NotFound(); } return Ok(); } [..] ``` As you may have noticed we accept a body value from the type `Info` , which is our own small dummy class: ``` public class Info { public string Name { get; set; } public dynamic obj { get; set; } } ``` ## The exploitation To “use” our newly created vulnerability we simply POST a type-enhanced JSON to our web service: Et voilà: we executed code on the server! Wait… what? But how? ## Here’s how it works When sending a custom JSON to a REST service that is handled by a deserializer that has support for custom type name handling in combination with the `dynamic` keyword the attacker can specify the type he’d like to have deserialized on the server. 
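An attacker could easily script such a request. The sketch below only constructs and prints the type-enhanced payload; actually sending it is left as a comment, and the endpoint URL there is hypothetical:

```python
import json

# Attacker-controlled body: the "$type" field tells a misconfigured Json.NET
# deserializer which .NET class to instantiate on the server.
payload = {
    "obj": {
        "$type": "System.IO.FileInfo, System.IO.FileSystem",
        "fileName": "rce-test.txt",
        "IsReadOnly": True,
    }
}

body = json.dumps(payload, indent=2)
print(body)
# The request itself could then be sent with, for example:
# requests.post("http://localhost:5000/api/info", data=body,
#               headers={"Content-Type": "application/json"})  # hypothetical URL
```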
So let’s have a look at the JSON we sent: ``` { "obj": { "$type": "System.IO.FileInfo, System.IO.FileSystem", "fileName": "rce-test.txt", "IsReadOnly": true } } ``` The line: `"$type": "System.IO.FileInfo, System.IO.FileSystem",` specifies the class `FileInfo` from the namespace *System.IO* in the assembly *System.IO.FileSystem*. The deserializer will instantiate a `FileInfo` object by calling the public constructor `public FileInfo(String fileName)` with the given fileName “rce-test.txt” (a sample file we created at the root of our insecure web app). Json.NET prefers parameterless default constructors over one constructor with parameters, but since the default constructor of `FileInfo` is `private` it uses the one with one parameter. Afterwards it will set “IsReadOnly” to true. However, this does not simply set the “IsReadOnly” flag via reflection to true. What happens instead is that the deserializer calls the setter for IsReadOnly and the code of the setter is executed. What happens when you call the IsReadOnly setter on a `FileInfo` instance is that the file is actually set to read-only. We see that indeed the read-only flag has been set on the rce-test.txt file on the server: A small side effect of this vulnerable service implementation is that we also can check if a file exists on the server. If the file sent in the “fileName” field does not exist an exception is thrown when the setter for IsReadOnly is called and the server returns NotFound(404) to the caller. To perform even more sinister work an attacker could search the .NET framework codebase or third party libraries for classes that execute code in the constructor and/or setters. The `FileInfo` class here is just used as a very simple example. ## Summary When providing Json.NET based REST services always leave the default TypeNameHandling at `TypeNameHandling.None` . 
When other TypeNameHandling settings are used, an attacker might be able to provide a type he wants the serializer to deserialize, and as a result unwanted code could be executed on the server. The described behavior is of course not unique to Json.NET but is also implemented by other libraries that support serialization, e.g. when using `System.Web.Script.Serialization.JavaScriptSerializer` with a type resolver (e.g. `SimpleTypeResolver`). ### Update (28 Jul 2017) At Black Hat USA 2017 Alvaro Muñoz and Oleksandr Mirosh held a talk with the title “Friday the 13th: JSON Attacks”. Muñoz and Mirosh had an in-depth look at different .NET (FastJSON, Json.NET, FSPickler, Sweet.Jayson, JavascriptSerializer, DataContractJsonSerializer) and Java (Jackson, Genson, JSON-IO, FlexSON, GSON) JSON libraries. The conclusions regarding Json.NET are the same as in this blog post: basically, do not use a TypeNameHandling other than TypeNameHandling.None, or use a SerializationBinder to whitelist types (as in the documentation of Json.NET).
They also presented new gadgets, which allow more sinister attacks than the one published in this blog post (the gadgets might not work with all JSON/.NET framework combinations):

- `System.Configuration.Install.AssemblyInstaller`: "Execute payload on local assembly load"
- `System.Activities.Presentation.WorkflowDesigner`: "Arbitrary XAML load"
- `System.Windows.ResourceDictionary`: "Arbitrary XAML load"
- `System.Windows.Data.ObjectDataProvider`: "Arbitrary Method Invocation"

In addition to their findings, they had a look at .NET open source projects which made use of any of those different JSON libraries with type support and found several vulnerabilities:

- Kaliko CMS: RCE in the admin interface (used FastJSON, which has insecure type name handling by default)
- Nancy: RCE via CSRF cookie
- Breeze: RCE (used Json.NET with TypeNameHandling.Objects)
- DNN (aka DotNetNuke): RCE via a user-provided cookie

Both the white paper [pdf] and the slides [pdf] are available on the Black Hat site.
true
true
true
tl;dr No, of course, you don’t want to create a vulnerable JSON API. So when using Json.NET: Don’t use another TypeNameHandling setting than the default: Type...
2024-10-12 00:00:00
2017-07-28 00:00:00
null
website
alphabot.com
How to configure Json.NET to create a vulnerable web API
null
null
3,066,011
http://www.securityweek.com/facebook-adds-malicious-link-protection-powered-websense
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
1,741,393
http://blog.thewillcreator.com/2010/09/5-of-the-top-jaw-dropping-inheritances
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
16,665,939
https://github.com/airbnb/javascript/issues/1271#issuecomment-375810185
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
11,520,574
http://www.wired.com/2016/04/videogames-ai-learning/
Making AI Play Lots of Videogames Could Be Huge (No, Seriously)
Julie Muncy
It's almost a given that you'll ride in an autonomous car at some point in your life, and when you do, the AI controlling it just might have honed its skills playing *Minecraft*. It sounds crazy, but open-world games like *Minecraft* are a fantastic tool for teaching learning algorithms---which power the next generation of advanced artificial intelligence---how to understand and navigate three-dimensional spaces. Achieving that is a major stepping stone toward creating AI that can interact with the real world in complex ways.

It's easy to consider videogames mindless escapism, but because they generate such vast amounts of information---think of the expansive world players create in *Minecraft*---they are exceptionally well suited to teaching an AI how to perceive the world and interact with it. "It's hard for a human to teach AI," says Xerox researcher Adrien Gaidon, because they are "worse than the worst toddlers in the world---you have to explain *everything*." Beyond a certain point, humans just don't have the time and patience to teach an AI how to behave. Videogames don't have that problem. You may grow frustrated with them, but they never grow frustrated with you.

Researchers typically teach the so-called "deep learning" algorithms that underpin modern artificial intelligence by feeding them staggering amounts of data. These systems gorge on information, seeking patterns. If you want to teach an AI like AlphaGo to play Go, you feed it every record of every Go game you can find. For something like a board game, this is the easiest part of the task. The machinations of even the most complex board game can be rendered pretty easily by a computer, allowing AlphaGo to learn from a sample size in the millions. For more complex tasks like, say, driving an automobile, gathering enough data is a huge logistical and financial challenge.
Google has spent untold sums testing its autonomous vehicles, racking up millions of miles in various prototypes to refine the AI controlling the cars. Such an approach isn't feasible for researchers who don't have the limitless resources of a company like Google or Baidu. That makes videogames increasingly appealing. You can gather vast amounts of data relatively quickly and cheaply in a game world.

This idea came to Adrien Gaidon about 18 months ago when he saw a trailer for the latest installment of *Assassin's Creed*. "I was shocked, because I thought it was the trailer for a movie, whereas it was actually CGI. I got fooled for 20 seconds, easily. It's the first time that happened to me." If modern game engines could so easily fool him, he thought, perhaps they could fool an AI, too. So he and his team at Xerox started using the videogame engine Unity to feed images of things like automobiles, roads, and sidewalks to a deep-learning neural network in an effort to teach it to recognize those same objects in the physical world.

Researchers have seen success with this. Before tackling Go, Google's AI mastered Atari games. Other AI projects have conquered *Super Mario World* levels. Using game engines with three-dimensional rendering, and training AI within those spaces, however, represents a level of complexity that's only recently become possible. "The real benefit of a game engine is that, as you generate the pixels, you also know from the start what the pixels correspond to," Gaidon says. "You don't just generate pixels, you also generate the supervision [AI] requires." So far, Gaidon says his work at Xerox has been very successful: "What I'm showing is that the technology is mature enough now to be able to use data from computers to train other computer programs."

Microsoft also sees the value in this.
It recently announced that later this year it will release Project Malmo, an open-source platform that "allows computer scientists to create AI experiments using the world of *Minecraft*." Beyond its complexity and open-ended freedom, *Minecraft* offers new ways of experimenting with AI embodiment, says Katja Hofmann, Project Malmo's lead researcher. "When you play *Minecraft*, you are really directly in this complex 3-D world," Hofmann says. "You perceive it through your sensory inputs, and you interact with it by walking around, by placing blocks, by building things, by interacting with other agents. It's this kind of simulated nature that's similar to how we interact with the real world."

Hofmann and her team hope their tools push research in even more radical directions than Gaidon's team is pursuing. Using skills learned in a program like Malmo, AI could, she believes, learn the general intelligence skills necessary to move beyond navigating *Minecraft*'s blocky landscapes to walking in our own. "We see this very much as a fundamental AI research project, where we want to understand very generically how agents learn to interact with worlds around them and make sense of them," she says. "*Minecraft* is a perfect spot between the real world and more restricted games."

The transition from simulation to reality is complex, though. Avatars in games typically don't move like real people move, and game worlds are designed for ease and legibility, not fidelity to real life. Besides, the basics of how any agent, human or otherwise, builds their understanding of spatial reality remain something of a mystery. "We're really at the very early stages of understanding how we could develop agents that develop meaningful internal representations of their environments," says Hofmann. "For humans, it seems like we make use of integrating the various sensors we have. I think linking various sources of information is one of the interesting research challenges that we have here."
When science finally figures out just how AI develops an internal representation of a given environment, people might be surprised at what form it takes. It may look like nothing ever seen before. "This may look very different from what actually happens in our brains," Hofmann says. This should come as no surprise. Humans wanted to fly, but achieving it looked nothing like how birds fly. "We are inspired by how birds fly or how insects may fly. But what's really important is that we understand the actual mechanisms, how to create the right pressures, for example, or the right speed in order to lift an object off the ground." And so it will be with AI.

Computers already view the world in a fundamentally different way than humans. Take, for instance, recent work by London's ScanLAB Projects, which revealed how the laser-scanner "eyes" of an autonomous car might view a city. The results are utterly foreign, a "parallel landscape" of ghosts and broken images, urban landscapes overlain with "the delusions and hallucinations of sensing machines." Likewise, as Google's recent showcase proved, AlphaGo understands the ancient game of Go in a way no human ever could.

What, then, will the world look like when viewed by the next generation of "sensing machines?" The models, methods, and technologies built out in algorithms by experience in virtual space---what will they see when applied to our cities, our parks, our homes? We're teaching AI to understand the world in more robust ways. Videogames can help these machines reach that understanding. But when that understanding comes, we might not recognize it.

*Correction appended [4:45 P.M. PT 4/18]: A previous version of this story incorrectly spelled Katja Hofmann's name.*
true
true
true
Videogames are becoming a useful means to teach new artificial intelligences. But what is the world going to look like to computers trained through games?
2024-10-12 00:00:00
2016-04-15 00:00:00
https://media.wired.com/…raft-wii-u-2.jpg
article
wired.com
WIRED
null
null
34,156,871
https://en.wikipedia.org/wiki/Cognitive_apprenticeship
Cognitive apprenticeship - Wikipedia
null
# Cognitive apprenticeship

**Cognitive apprenticeship** is a theory that emphasizes the importance of the process in which a master of a skill teaches that skill to an apprentice. Constructivist approaches to human learning have led to the development of the theory of cognitive apprenticeship.[1][2] This theory accounts for the problem that masters of a skill often fail to take into account the implicit processes involved in carrying out complex skills when they are teaching novices. To combat these tendencies, cognitive apprenticeships "…are designed, among other things, to bring these tacit processes into the open, where students can observe, enact, and practice them with help from the teacher…".[1] This model is supported by Bandura's (1997) theory of modeling, which posits that in order for modeling to be successful, the learner must be attentive, access and retain the information presented, be motivated to learn, and be able to accurately reproduce the desired skill.

## Overview

Part of the effectiveness of the cognitive apprenticeship model comes from learning in context and is based on theories of situated cognition. Cognitive scientists maintain that the context in which learning takes place is critical (e.g., Godden & Baddeley, 1975). Based on findings such as these, Collins, Duguid, and Brown (1989) argue that cognitive apprenticeships are less effective when skills and concepts are taught independently of their real-world context and situation. As they state, "Situations might be said to co-produce knowledge through activity. Learning and cognition, it is now possible to argue, are fundamentally situated".[2] In cognitive apprenticeships, teachers model their skills in real-world situations.
By modelling and coaching, masters in cognitive apprenticeships also support the three stages of skill acquisition described in the expertise literature: the cognitive stage, the associative stage, and the autonomous stage.[3][4] In the cognitive stage, learners develop a declarative understanding of the skill. In the associative stage, mistakes and misinterpretations learned in the cognitive stage are detected and eliminated, while associations between the critical elements involved in the skill are strengthened. Finally, in the autonomous stage, the learner's skill becomes honed and perfected until it is executed at an expert level.[5] Like traditional apprenticeships, in which the apprentice learns a trade such as tailoring or woodworking by working under a master teacher, cognitive apprenticeships allow masters to model behaviors in a real-world context with cognitive modeling.[6] After listening to the master explain exactly what they are doing and thinking as they model the skill, the apprentice identifies relevant behaviors and develops a conceptual model of the processes involved. The apprentice then attempts to imitate those behaviors as the master observes and coaches. Coaching provides assistance at the most critical level– the skill level just beyond what the learner/apprentice could accomplish by themself. Vygotsky (1978) referred to this as the Zone of Proximal Development and believed that fostering development within this zone would lead to the most rapid development. The coaching process includes providing additional modeling as necessary, giving corrective feedback, and giving reminders, which all intend to bring the apprentice's performance closer to that of the master's. 
As the apprentice becomes more skilled through the repetition of this process, the feedback and instruction provided by the master "fades" until the apprentice is, ideally, performing the skill at a close approximation of the master level.[7]

## Teaching methods

Collins, Brown, and Newman developed six teaching methods rooted in cognitive apprenticeship theory and claim these methods help students attain cognitive and metacognitive strategies for "using, managing, and discovering knowledge".[2] The first three, modeling, coaching, and scaffolding, are at the core of cognitive apprenticeship and help with cognitive and metacognitive development. The next two, articulation and reflection, are designed to help novices with awareness of problem-solving strategies and execution similar to that of an expert. The final step, exploration, intends to guide the novice towards independence and the ability to solve and identify problems within the domain on their own. The authors note, however, that this is not an exhaustive list of methods and that the successful execution of these methods is highly dependent on the domain.[1]

### Modeling

Modeling is when an expert, usually a teacher, within the cognitive domain or subject area demonstrates a task explicitly so that novices, usually students, can experience and build a conceptual model of the task at hand. For example, a math teacher might write out explicit steps and work through a problem aloud, demonstrating their heuristics and procedural knowledge. Modeling includes demonstrating expert performances or processes in the world.

### Coaching

Coaching involves observing a novice's task performance and offering feedback and hints to sculpt the novice's performance to that of an expert's. The expert oversees the novice's tasks and may structure the task accordingly to assist the novice's development.
### Scaffolding

Instructional scaffolding is the act of applying strategies and methods to support the student's learning. These supports could be teaching manipulatives, activities, or group work. The teacher may have to execute parts of the task that the student is not yet able to do. This requires the teacher to have the skill to analyze and assess students' abilities in the moment.

### Articulation

Articulation includes "any method of getting students to articulate their knowledge, reasoning, or problem-solving process in a domain" (p. 482).[1] Three types of articulation are inquiry teaching, thinking aloud, and the critical student role. Through inquiry teaching (Collins & Stevens, 1982), teachers ask students a series of questions that allow them to refine and restate their learned knowledge and form explicit conceptual models. Thinking aloud requires students to articulate their thoughts while solving problems. Students assuming a critical role monitor others in cooperative activities and draw conclusions based on the problem-solving activities. Articulation is described by McLellan[8] as consisting of two aspects: separating component knowledge from skills to learn more effectively, and more commonly verbalizing or demonstrating knowledge and thinking processes in order to expose and clarify ideas.

### Reflection

Reflection allows students to "compare their own problem-solving processes with those of an expert, another student, and ultimately, an internal cognitive model of expertise" (p. 483).[1] A technique for reflection would be examining the past performances of both an expert and a novice, and highlighting similarities and differences. The goal of reflection is for students to look back and analyze their performances with a desire to understand and improve toward the behavior of an expert.

### Exploration

Exploration involves giving students room to problem solve on their own and teaching students exploration strategies.
The former requires the teacher to slowly withdraw the use of supports and scaffolds, not only in problem-solving methods but in problem-setting methods as well. The latter requires the teacher to show students how to explore, research, and develop hypotheses. Exploration allows the student to frame interesting problems within the domain for themselves and then take the initiative to solve these problems.

## Success

## See also

- Constructivism (philosophy of education)
- Educational psychology
- Legitimate peripheral participation
- Situated learning

## Citations

1. Collins, A., Brown, J. S., & Newman, S. E. (1987). Cognitive apprenticeship: Teaching the craft of reading, writing and mathematics (Technical Report No. 403). BBN Laboratories, Cambridge, MA: Centre for the Study of Reading, University of Illinois.
2. Brown, J. S., Collins, A., & Duguid, P. (1989). Situated cognition and the culture of learning. Educational Researcher, 18, 32-42.
3. Anderson, J.R. (1983). The architecture of cognition. Cambridge, MA: Harvard University Press.
4. Fitts, P.M., & Posner, M.I. (1967). Human performance. Belmont, CA: Brooks Cole.
5. Anderson, J.R. (2000). Cognitive psychology and its implications. New York, NY: Worth Publishers.
6. Bandura, A. (1997). Social Learning Theory. Englewood Cliffs, NJ: Prentice-Hall.
7. Johnson, S.D. (1992). A framework for technology education curricula which emphasizes intellectual processes. Journal of Technology Education, 3, 1-11.
8. McLellan, H. (1994). Situated learning: Continuing the conversation. Educational Technology, 34, 7-8.
9. Järvelä, Sanna (January 1995). "The cognitive apprenticeship model in a technologically rich learning environment: Interpreting the learning interaction". Learning and Instruction, 5(3): 237–259. doi:10.1016/0959-4752(95)00007-P.
10. Saadati, Farzaneh; Ahmad Tarmizi, Rohani; Mohd Ayub, Ahmad Fauzi; Abu Bakar, Kamariah; Dalby, Andrew R. (1 July 2015). "Effect of Internet-Based Cognitive Apprenticeship Model (i-CAM) on Statistics Learning among Postgraduate Students". PLOS ONE, 10(7): e0129938. doi:10.1371/journal.pone.0129938. PMC 4488879. PMID 26132553.
11. Dickey, Michele D. (September 2008). "Integrating cognitive apprenticeship methods in a Web-based educational technology course for P-12 teacher education". Computers & Education, 51(2): 506–518. doi:10.1016/j.compedu.2007.05.017.
12. Woolley, Norman N.; Jarvis, Yvonne (January 2007). "Situated cognition and cognitive apprenticeship: A model for teaching and learning clinical skills in a technologically rich and authentic learning environment". Nurse Education Today, 27(1): 73–79. doi:10.1016/j.nedt.2006.02.010. PMID 16624452.

## References

- Aziz Ghefaili. (2003). Cognitive Apprenticeship, Technology, and the Contextualization of Learning Environments. Journal of Educational Computing, Design & Online Learning, Vol. 4, Fall, 2003.
- Anderson, J.R. (1983). The architecture of cognition. Cambridge, MA: Harvard University Press.
- Anderson, J.R. (2000). Cognitive psychology and its implications. New York, NY: Worth Publishers.
- Bandura, A. (1997). Social Learning Theory. Englewood Cliffs, NJ: Prentice-Hall.
- Brown, J. S., Collins, A., & Duguid, P. (1989). Situated cognition and the culture of learning. Educational Researcher, 18, 32-42.
- Collins, A., Brown, J. S., & Newman, S. E. (1987). Cognitive apprenticeship: Teaching the craft of reading, writing and mathematics (Technical Report No. 403). BBN Laboratories, Cambridge, MA: Centre for the Study of Reading, University of Illinois.
- Fitts, P.M., & Posner, M.I. (1967). Human performance. Belmont, CA: Brooks Cole.
- Johnson, S.D. (1992). A framework for technology education curricula which emphasizes intellectual processes. Journal of Technology Education, 3, 1-11.
- Vygotsky, L.S. (1978). Mind and society: The development of higher mental processes. Cambridge, MA: Harvard University Press.

## Further reading

- Edmondson, R. Shawn (2006). *Evaluating the Effectiveness of a Telepresence-Enabled Cognitive Apprenticeship Model of Teacher Professional Development* (PhD). Utah State University. doi:10.26076/9551-7d16.
true
true
true
null
2024-10-12 00:00:00
2005-06-09 00:00:00
null
website
wikipedia.org
Wikimedia Foundation, Inc.
null
null
18,429,163
https://ofdollarsanddata.com/the-mcrib-effect/
The McRib Effect
Nick Maggiulli
It’s Sunday morning. You walk to your mailbox and see a letter from a mysterious stock research firm. The firm claims that their market insights team knows with 100% certainty that a particular stock is going up over the next week. Skeptical, you put the envelope aside and go about your day. One week later you receive another letter from the same firm, but this time they claim a *different* stock is about to drop in price. You go back to the first letter, and, lo and behold, they were right. The stock went up. Your interest is piqued. Over the course of the next week you watch the stock from the second letter drop as predicted. Now you are hooked. Week after week, letter after letter, the firm continues to reveal the future of a single stock as if reading from a crystal ball. After 10 weeks of correct predictions, you get a final letter asking you to invest money with them for a sizable commission. You calculate the probability that they could get 10 positive/negative calls correct in a row is 1/1,024 (or 2^10). This can’t be chance, right? You decide to pull the trigger and invest with them. Months later you are broke after the firm fails to repeat their prior success. What went wrong? Unbeknownst to you, you were not the only individual to receive letters from this mysterious research firm. In fact, in the first week, letters were sent to 10,240 people (including you). Half of these letters (5,120) predicted that stock A would rise, while the other half (5,120) predicted that stock A would fall. The 5,120 individuals that received the letter saying stock A would rise (the correct group), received a second mailing in the following week. Half of these “week 2” letters (2,560) stated that stock B would rise, while half (2,560) stated that stock B would drop. This process continued each week with those individuals who received correct calls getting additional mailings. 
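The halving arithmetic of the mailing scheme above can be checked in a few lines (a sketch using the numbers from the thought experiment):

```python
def surviving_recipients(n_letters=10_240, n_weeks=10):
    # Each week, half the letters say "rise" and half say "fall", so whatever
    # the stock actually does, exactly half the recipients see a correct call
    # and get another mailing the following week.
    survivors = n_letters
    for _ in range(n_weeks):
        survivors //= 2
    return survivors

print(surviving_recipients())   # 10 people see ten correct calls in a row
print(1 / 2**10)                # 0.0009765625 -- the 1/1,024 chance per recipient
```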
After 10 weeks of this, there are exactly 10 individuals who received 10 correct calls in a row (1/1,024 * 10,240) completely by chance. You happen to be one of the lucky (or shall I say unlucky?) individuals to get 10 correct calls. This thought experiment, from How Not to Be Wrong: The Power of Mathematical Thinking by Jordan Ellenberg, illustrates how statistical happenstance can masquerade as skill. The problem we will face throughout life is how to differentiate between causation and correlation—between signal and noise.

To illustrate this, let’s examine something I am calling **The McRib Effect**. Every year since 2010, McDonald’s re-releases its pork-based sandwich, the McRib, for a limited time across the U.S. After hearing that the McRib was being re-released on October 29, 2018, I immediately wondered whether the McRib’s availability had any effect on the stock market. I found some historical re-release dates online, ran the numbers, and discovered I was right: When the McRib is available, the S&P 500 has an average daily return about 7 basis points (0.07%) higher than on days when it is not available. To put that into perspective, when annualized, that difference would be 19% every year.

The question remains though: is this difference legitimate? Do investors in American equities change their behavior (maybe at a subconscious level) when the McRib is freely available to be consumed? Does its presence provide some sort of nostalgia that makes us collectively bid up equity prices? Or is it merely a statistical anomaly? After having a slight chuckle and sharing this on Twitter, I decided to find out. The first thing I did was run a t-test on the difference in returns between when the McRib is available and when it isn’t. Unsurprisingly, the difference was not statistically significant (p-value = 0.19).
Afterwards, I ran 10,000 simulations in which McDonald’s re-released the McRib at different times throughout 2010-2017 for the same number of total days and then compared to see how many simulations showed results as extreme as the real world. After running the simulations, only 4.6% of them had the McRib days outperforming the non-McRib days by over 0.07%, on average. This might make the McRib Effect seem more plausible, but I also found that 4.2% of the simulations had the *non-McRib days outperforming the McRib days* by over 0.07%, on average. This is the exact opposite of what I was looking for.

While you might laugh at the McRib Effect as an obvious example of “correlation does not equal causation,” I don’t see how it is that different from a lot of the arguments I see being made about financial markets every single day. Some pundit will claim that event X caused the market to drop or that President A was better than President B for stocks. All of these arguments boil down to inferring *simple* causality for a complex, chaotic system involving **millions** of decision makers. To think that one individual has an effect that is both large *and measurable* on aggregate equity performance is absurd. Joe Weisenthal’s tweet knocks it out of the park.

It reminds me of the famous reply given to the Nobel laureate Ken Arrow after he discovered that his long term weather forecasts were no better than chance:

> The Commanding General is well aware that the forecasts are no good. However, he needs them for planning purposes.

This quote perfectly exemplifies the struggle our society faces with understanding causality. We all want a simple answer though the truth is usually far messier.
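The simulation approach described above is a standard permutation test, which can be sketched as follows (illustrative Python with synthetic data; the actual study used S&P 500 daily returns for 2010-2017 and the real McRib availability windows):

```python
import random

def permutation_test(returns, n_mcrib_days, observed_gap, n_sims=10_000, seed=1):
    # Randomly relabel which trading days were "McRib days" and count how often
    # the McRib-minus-non-McRib mean gap is at least as large as observed.
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        shuffled = returns[:]
        rng.shuffle(shuffled)
        mcrib = shuffled[:n_mcrib_days]
        other = shuffled[n_mcrib_days:]
        gap = sum(mcrib) / len(mcrib) - sum(other) / len(other)
        if gap >= observed_gap:
            hits += 1
    return hits / n_sims

# Synthetic daily returns with no built-in McRib effect:
rng = random.Random(42)
fake_returns = [rng.gauss(0.0003, 0.01) for _ in range(2000)]
p = permutation_test(fake_returns, n_mcrib_days=400, observed_gap=0.0007)
print(round(p, 3))   # fraction of shuffles at least as extreme as +7bp
```

With random labels and no real effect, the returned fraction behaves like a one-sided p-value, which is exactly how the 4.6% figure above should be read.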
All of these, and more, involve making causal arguments around systems that are highly complex. And we make policy decisions based on these arguments that go on to affect millions of Americans. While I wish these issues had a one-size-fits-all solution, they probably don’t. Additionally, in trying to understand causality in these fields, we are plagued by small or incomplete sample sizes. We can’t re-run the DotCom bubble with a different Fed Chair or a different President. We can’t easily test the effect that one particular food has on overall health. There are too many other variables that are changing at the same time and are highly correlated. Just try and find me a sufficient sample size of ultra-marathoners that *also* smoke a pack of cigarettes a day and you will understand this plight. Does this mean we are doomed? Not necessarily. While we will likely never have the ability to predict the future of chaotic systems (i.e. the stock market), if we stay cognizant of the inherent difficulties in assigning causality, we stand a better chance of understanding what is true. Thank you for reading! **If you liked this post, consider signing up for my newsletter.** This is post 97. Any code I have related to this post can be found here with the same numbering: https://github.com/nmaggiulli/of-dollars-and-data
true
true
true
Why understanding causality can be so difficult.
2024-10-12 00:00:00
2018-11-06 00:00:00
https://ofdollarsanddata…/mcrib_days.jpeg
article
ofdollarsanddata.com
Of Dollars And Data
null
null
19,843,099
http://time.com/5582767/game-thrones-personality-character-quiz/
The Ultimate Game of Thrones Quiz: Who Do You Really Think Should Win?
Chris Wilson
For all its narrative complexity and unforeseeable twists, *Game of Thrones* has always played out against the background of who — or what — will control Westeros at the conclusion of the epic story. As the end of the hit HBO series quickly approaches, that dubious honor could still fall to a number of the game’s contestants. In the same vein as TIME’s “Ultimate Harry Potter Fan Quiz,” the following interactive examines George R. R. Martin’s fantasy world through the lens of empirical social science. In partnership with research psychologists from the University of Cambridge, the University of Illinois at Urbana-Champaign, and the University of Mainz, Germany, we created the following survey that matches your views on leadership with one of the five surviving characters in the show who could ultimately take the reins. Which of the Game of Thrones characters do you most closely align with? Take our quiz to find out: (Can’t see quiz? You can find the direct link here.) As with similar projects that TIME has run with social psychologists, this one serves a dual purpose. First, it shines a light on the nuances of a popular fantasy world. Second — with each user’s consent — it anonymously contributes data back to the scientists who helped design it. We’re also grateful to the hundreds of people who seeded the survey with psychological profiles of the characters themselves by answering the questions on behalf of Cersei, Daenerys, Tyrion, Arya and Jon Snow. Stay tuned for the results of the study. Find out more about our research, methodology and privacy practices here. 
true
true
true
Deep down, are you rooting for Cersei? Or Daenerys? Maybe John Snow? Find out
2024-10-12 00:00:00
2019-05-03 00:00:00
https://api.time.com/wp-…200&h=628&crop=1
article
time.com
Time
null
null
37,347,506
https://arstechnica.com/gadgets/2023/08/the-torrid-saga-of-reiserfs-nears-its-end-with-obsolete-label-in-linux-kernel/
ReiserFS is now “obsolete” in the Linux kernel and should be gone by 2025 [Updated]
Kevin Purdy
When Apple was about to introduce Time Machine in Mac OS X Leopard, John Siracusa wrote in the summer of 2006 about how a new file system should be coming to Macs (which it did, 11 years later). The Mac, Siracusa wrote, needed something that could efficiently handle lots of tiny files, volume management with pooled storage, checksum-based data integrity, and snapshots. It needed something like ZFS or, perhaps, ReiserFS, file systems “notable for their willingness to reconsider past assumptions about file system design.” Two months later, the name Reiser would lose most of its prestige and pick up a tragic association it would never shake. Police arrested the file system’s namesake, Hans Reiser, and charged him with murder in connection with the disappearance of his estranged wife. Reiser’s work on Linux file systems was essentially sentenced to obscurity from that point on. Now that designation has been made official, as the file system that was once the default on systems like SUSE Linux has been changed from “Supported” to “Obsolete” in the latest Linux 6.6 kernel merge process (as reported by Phoronix). While a former employee of Reiser’s company, Namesys, continues out-of-source work on later versions of ReiserFS, it is likely to disappear from the kernel entirely in a matter of years, likely 2025. It's an ignoble end for a filesystem that, at one time, could have been the next big thing for Linux file systems. Hans and Nina Reiser were in the midst of divorce proceedings when Nina disappeared in September 2006, having been last seen dropping her kids off at Hans' home. The two had clashed repeatedly over child support, and Nina had a protective order against Hans by then. During their investigation, police found Hans' Honda CRX miles from his home. The inside was waterlogged, the passenger seat removed, and police discovered a sleeping bag cover with a 6-inch stain of Nina's blood, along with two books on police murder investigations.
true
true
true
A little-used file system named for a convicted murderer is slated for removal.
2024-10-12 00:00:00
2023-08-31 00:00:00
https://cdn.arstechnica.…05291-scaled.jpg
article
arstechnica.com
Ars Technica
null
null
33,764,853
https://www.nytimes.com/2022/11/26/business/video-game-e-sports-profit.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
27,177,824
https://twitter.com/PeterMcCormack/status/1393971202738302986
x.com
null
null
true
true
false
null
2024-10-12 00:00:00
null
null
null
null
X (formerly Twitter)
null
null
39,035,243
https://www.wsj.com/finance/banking/wealth-giant-pursues-goldman-sachs-kpmg-and-others-over-silicon-valley-banks-collapse-64a16039
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
6,407,002
http://www.bing.com/blogs/site_blogs/b/search/archive/2013/09/17/refresh.aspx?
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
22,803,123
https://blog.codinghorror.com/maybe-normalizing-isnt-normal/
Maybe Normalizing Isn't Normal
Jeff Atwood
One of the items we're struggling with now on Stack Overflow is how to maintain near-instantaneous performance levels in a relational database as the amount of data increases. More specifically, how to scale our tagging system. Traditional database design principles tell you that well-designed databases are always normalized, but I'm not so sure. Dare Obasanjo had an excellent post When Not to Normalize your SQL Database wherein he helpfully provides a **sample database schema for a generic social networking site**. Here's what it would look like if we designed it in the accepted normalized fashion: Normalization certainly delivers in terms of limiting duplication. Every entity is represented once, and only once -- so there's almost no risk of inconsistencies in the data. But this design also requires a whopping *six joins* to retrieve a single user's information.

```sql
select * from Users u
    inner join UserPhoneNumbers upn on u.user_id = upn.user_id
    inner join UserScreenNames usn on u.user_id = usn.user_id
    inner join UserAffiliations ua on u.user_id = ua.user_id
    inner join Affiliations a on a.affiliation_id = ua.affiliation_id
    inner join UserWorkHistory uwh on u.user_id = uwh.user_id
    inner join Affiliations wa on uwh.affiliation_id = wa.affiliation_id
```

(Update: this isn't intended as a real query; it's only here to visually illustrate the fact that you need six joins -- or six individual queries, if that's your cup of tea -- to get all the information back about the user.) Those six joins aren't doing anything to help your system's performance, either. Full-blown normalization isn't merely difficult to understand and hard to work with -- it can also be quite slow. As Dare points out, the obvious solution is to **denormalize** -- to collapse a lot of the data into a single Users table. This works -- queries are now blindingly simple (`select * from users`), and probably blindingly fast, as well.
But you'll have a bunch of gaping blank holes in your data, along with a slew of awkwardly named field arrays. And all those pesky data integrity problems the database used to enforce for you? Those are all your job now. Congratulations on your demotion! Both solutions have their pros and cons. So let me put the question to you: **which is better -- a normalized database, or a denormalized database?** Trick question! The answer is that *it doesn't matter!* Until you have millions and millions of rows of data, that is. Everything is fast for small n. Even a modest PC by today's standards -- let's say a dual-core box with 4 gigabytes of memory -- will give you near-identical performance in either case for anything but the very largest of databases. Assuming your team can write reasonably well-tuned queries, of course. There's no shortage of fascinating database war stories from companies that made it big. I do worry that these war stories carry an implied tone of "I lost 200 pounds and so could you!"; please assume the tiny-asterisk disclaimer **results may not be typical** is in full effect while reading them. Here's a series that Tim O'Reilly compiled:

- Second Life
- Blogline and Memeorandum
- Flickr
- NASA World Wind
- Craigslist
- O'Reilly Research
- Google File System and BigTable
- Findory and Amazon
- MySQL

There's also the High Scalability blog, which has its own set of database war stories. First, a reality check. It's partially an act of hubris to imagine your app as the next Flickr, YouTube, or Twitter. As Ted Dziuba so aptly said, *scalability is not your problem, getting people to give a shit is.* So when it comes to database design, do measure performance, but try to err heavily on the side of **sane, simple design**. Pick whatever database schema you feel is easiest to understand and work with on a daily basis.
It doesn't have to be all or nothing as I've pictured above; you can partially denormalize where it makes sense to do so, and stay fully normalized in other areas where it doesn't. Despite copious evidence that normalization rarely scales, I find that many **software engineers will zealously hold on to total database normalization on principle alone**, long after it has ceased to make sense.

> When growing Cofax at Knight Ridder, we hit a nasty bump in the road after adding our 17th newspaper to the system. Performance wasn't what it used to be and there were times when services were unresponsive. A project was started to resolve the issue, to look for 'the smoking gun'. The thought being that the database, being as well designed as it was, could not be of issue, even with our classic symptom being rapidly growing numbers of db connections right before a crash. So we concentrated on optimizing the application stack. I disagreed and waged a number of arguments that it was our database that needed attention. We first needed to tune queries and indexes, and be willing to, if required, pre-calculate data upon writes and avoid joins by developing a set of denormalized tables. It was a hard pill for me to swallow since I was the original database designer. Turned out it was harder for everyone else! Consultants were called in. They declared the db design to be just right - that the problem must have been the application. After two months of the team pushing numerous releases thought to resolve the issue, to no avail, we came back to my original arguments.

Pat Helland notes that people normalize because their professors told them to. I'm a bit more pragmatic; I think you should normalize when the *data* tells you to:

- Normalization makes sense to your team.
- Normalization provides better performance. (You're automatically measuring all the queries that flow through your software, right?)
- Normalization prevents an onerous amount of duplication or avoids risk of synchronization problems that your problem domain or users are particularly sensitive to. - Normalization allows you to write simpler queries and code. Never, never should you normalize a database out of some vague sense of duty to the ghosts of Boyce-Codd. Normalization is not magical fairy dust you sprinkle over your database to cure all ills; it often creates as many problems as it solves. Fear not the specter of denormalization. Duplicated data and synchronization problems are often overstated and relatively easy to work around with cron jobs. Disks and memory are cheap and getting cheaper every nanosecond. Measure performance on your system and decide for yourself what works, free of predispositions and bias. As the old adage goes, **normalize until it hurts, denormalize until it works**.
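The tradeoff is easy to see end to end in miniature. Below is a rough SQLite sketch — the table and column names are illustrative, not the actual Stack Overflow or social-network schema from the post — contrasting a normalized lookup (joins at read time) with a denormalized one (a single wide row, complete with the "gaping blank holes"):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Normalized: user data split across tables, joined back together at read time.
cur.executescript("""
CREATE TABLE Users (user_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE UserPhoneNumbers (user_id INTEGER, phone TEXT);
CREATE TABLE UserScreenNames (user_id INTEGER, screen_name TEXT);
INSERT INTO Users VALUES (1, 'alice');
INSERT INTO UserPhoneNumbers VALUES (1, '555-0100');
INSERT INTO UserScreenNames VALUES (1, 'alice42');
""")
row = cur.execute("""
    SELECT u.name, upn.phone, usn.screen_name
    FROM Users u
    INNER JOIN UserPhoneNumbers upn ON u.user_id = upn.user_id
    INNER JOIN UserScreenNames usn ON u.user_id = usn.user_id
    WHERE u.user_id = 1
""").fetchone()
print(row)  # ('alice', '555-0100', 'alice42')

# Denormalized: one wide row and a trivially simple query -- at the cost of
# NULL holes and integrity rules the database no longer enforces for you.
cur.executescript("""
CREATE TABLE UsersWide (user_id INTEGER PRIMARY KEY, name TEXT,
                        phone1 TEXT, phone2 TEXT,
                        screen_name1 TEXT, screen_name2 TEXT);
INSERT INTO UsersWide VALUES (1, 'alice', '555-0100', NULL, 'alice42', NULL);
""")
wide = cur.execute("SELECT * FROM UsersWide WHERE user_id = 1").fetchone()
print(wide)  # (1, 'alice', '555-0100', None, 'alice42', None)
```

Either shape returns the same user; the difference only starts to matter once the row counts — and the join depth — grow.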
true
true
true
One of the items we're struggling with now on Stack Overflow is how to maintain near-instantaneous performance levels in a relational database as the amount of data increases. More specifically, how to scale our tagging system. Traditional database design principles tell you that well-designed databases are always normalized, but I'm
2024-10-12 00:00:00
2008-07-14 00:00:00
null
article
codinghorror.com
Coding Horror
null
null
19,162,261
https://heartbeat.fritz.ai/distributing-on-device-machine-learning-models-with-tags-and-metadata-ccae40f5c059
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
34,893,212
https://github.com/gactjs/research/tree/main/svelte-instance-identity-model
research/svelte-instance-identity-model at main · gactjs/research
Gactjs
Svelte is a declarative, reactive framework for building user interfaces. The central abstraction is a **component**, which maps state to a user interface declaration. A user interface declaration is a tree composed of components and intrinsic elements (components built into the rendering environment). A component is a blueprint that Svelte instantiates creating an **instance**. As state changes, an instance's user interface is updated. Svelte must transform the current user interface to the updated user interface. Svelte compiles components into code that can handle this transformation. This patch process raises an important question of identity. When should an instance in the current user interface correspond to an instance in the updated user interface? Svelte answers this question by following a set of rules that collectively comprise its **instance identity model**. The instance identity model determines when instances are created and destroyed. This creation and destruction influences nearly every part of a reactive user interface. In this document, we will explore Svelte's instance identity model in depth. This document has an associated MIT licensed playground available here. This document generally excludes styles for clarity.

In Svelte, the declarations in our components elide many details. Let's take a look at a simple `Input` component:

```
<script lang="ts">
  let value = "";
</script>

<input bind:value />
```

The declaration leaves out many crucial details:

- Is our input focused?
- If focused, where's the cursor?
- Is any of the text in our input selected?

The primary reason these details are left out is **encapsulation**. The built-in `input` is a complex element that provides many features. But this complexity is encapsulated. We get focus, cursor position, and selection support without having to do anything, and more importantly without having to know anything.
If our declaration needed to provide these details, we would also have to manage these details. In other words, we would completely break encapsulation.

**Fungibility** is the property that something is exactly replaceable. The common example of something fungible is a dollar. If the dollar in your pocket is magically swapped with the dollar in my pocket, then we should be indifferent. Fungibility is rare. Most things in the world are not fungible. Even things that are fungible in theory lose fungibility in practice (e.g. each dollar has a serial number). As discussed, our declarations are necessarily vague. And consequently, our instances are non-fungible because Svelte doesn't have the data to create an exact replacement. This non-fungibility is the reason instance identity matters.

Svelte's instance identity model is comprised of three rules:

- An instance declared outside of control flow has persistent identity.
- Each change in the active branch of a conditional declaration creates new instances.
- Instances within an `each` block have identity tied to their corresponding index by default and to a key when specified.

We will now explore this model in depth in a series of examples. Let's walk through the rendering of our `Input` component:

```
<script lang="ts">
  let value = "";
</script>

<input bind:value />
```

As we type, Svelte updates the `value` of our `input`. But Svelte does nothing to manage its focus or cursor state. Nevertheless, the updated user interface shows the correct value, focus, and cursor position. The `input` instance is declared outside of control flow, and the first identity rule applies. Because the identity of our `input` persists across rerenders, its internal state is maintained. The importance of identity preservation becomes conspicuous when it's disturbed. We can use either the second or third identity rule to dynamically change the identity of `input`.
The following `BranchInput` component alters `input` identity through control flow:

```
<script lang="ts">
  let value = "";
</script>

{#if value === "rerender"}
  <input bind:value />
{:else}
  <input bind:value />
{/if}
```

`BranchInput` will switch between branches when the `value` becomes and changes from `"rerender"`. By the second identity rule, our `input` identity changes. As a result, focus and cursor position are lost.

`KeyInput` provides an alternative expression of the same identity semantics:

```
<script lang="ts">
  let value = "";
</script>

{#each [value] as currentValue (currentValue === "rerender" ? 1 : 0)}
  <input bind:value />
{/each}
```

When the value becomes and changes from `"rerender"`, the specified key toggles. Svelte looks for an `input` in the previous render with the key provided in the current render. When it cannot find one, it creates a new `input` for us. Consequently, input focus and cursor position are lost.

Instance identity becomes prominent once our user interface is structurally dynamic. Let's consider a user interface that lets us dynamically add and remove `Input`s. The `useDynamicCollection` hook lets us manage a dynamic set of keys:

```
import { writable, type Readable } from "svelte/store";

// createKey will produce an alphabetically sequential series of keys
import { createKeyFactory } from "../utils/createKeyFactory";

export type DynamicCollection = {
  keys: Readable<string[]>;
  add: () => void;
  remove: (removedKey: string) => void;
};

export const useDynamicCollection = (): DynamicCollection => {
  const { subscribe, update } = writable<string[]>([]);
  const createKey = createKeyFactory();

  const add = () => {
    update((keys) => [...keys, createKey()]);
  };

  const remove = (removedKey: string) => {
    update((keys) => keys.filter((key) => key !== removedKey));
  };

  return {
    keys: { subscribe },
    add,
    remove,
  };
};
```

The `Removable` component allows us to wrap other components in a removal user interface.
```
<script lang="ts">
  import { createEventDispatcher } from "svelte";
  import Button from "./Button.svelte";

  const dispatch = createEventDispatcher();
</script>

<div>
  <slot />
  <Button on:click={() => dispatch("remove")}>Remove</Button>
</div>
```

When declaring instances based on a dynamic collection, index position is an unreliable indicator of identity. The corresponding index of an instance may be completely different on each update. By specifying a key we can help Svelte correctly correlate identity between renders. To more clearly demonstrate the importance of declaring a key with a dynamic collection, we will first leave them out.

```
<script lang="ts">
  import { useDynamicCollection } from "../hooks/useDynamicCollection";
  import Button from "./Button.svelte";
  import Input from "./Input.svelte";
  import Removable from "./Removable.svelte";

  const { keys, add, remove } = useDynamicCollection();
</script>

<div>
  <Button on:click={add}>Add</Button>
  {#each $keys as key}
    <Removable on:remove={() => remove(key)}>
      <Input />
    </Removable>
  {/each}
</div>
```

After we click the "Add" button for the first time, the value of `$keys` is `["a"]`, and our `DynamicCollection` instance renders a removable `Input`. Let's say we type `"first"` into this input and then click the "Add" button again. The value of `$keys` is now `["a", "b"]` and `DynamicCollection` renders two removable `Input`s. At this point, our user interface seems to be working correctly. The state from our first `Input` is preserved (it reads `"first"`), and we also have a second removable `Input`. As long as we're just adding `Input`s, index is sufficient to identify our instances. Let's now try to remove the first `Input` by hitting its associated "Remove" button. Unfortunately, when the user interface updates, the second `Input` is removed instead of the first. There was an identity crisis. The value of `$keys` after removal is `["b"]`, which is correct.
But because the index of `"b"` is `0`, Svelte preserved the first `Input`. This bug is easy to fix: we just need to specify a key:

```
<script lang="ts">
  import { useDynamicCollection } from "../hooks/useDynamicCollection";
  import Button from "./Button.svelte";
  import Input from "./Input.svelte";
  import Removable from "./Removable.svelte";

  const { keys, add, remove } = useDynamicCollection();
</script>

<div>
  <Button on:click={add}>Add</Button>
  {#each $keys as key (key)}
    <Removable on:remove={() => remove(key)}>
      <Input />
    </Removable>
  {/each}
</div>
```

Svelte now associates identity by the specified key. When we go through the same actions as before, the value of `$keys` is again `["b"]`, but this time Svelte keeps the second `Input` because it's associated with `"b"`.

The primary limitation of Svelte's instance identity model is that identity can only be distinguished within a single level of the user interface. Whenever a structural change spans levels, we have no direct means to preserve identity. Let's say we have a `Frame` component, and want to provide a user interface to toggle framing an `Input`. `DynamicWrapper` is a component that creates a dynamically framable `Input`:

```
<script lang="ts">
  import Button from "./Button.svelte";
  import Frame from "./Frame.svelte";
  import Input from "./Input.svelte";

  let shouldFrame = false;

  const toggleShouldFrame = () => {
    shouldFrame = !shouldFrame;
  };
</script>

<div>
  {#if shouldFrame}
    <Frame>
      <Input />
    </Frame>
  {:else}
    <Input />
  {/if}
  <Button on:click={toggleShouldFrame}>Frame</Button>
</div>
```

When we render `DynamicWrapper`, we will see that our `Input` is cleared whenever we hit the toggle. On each toggle, Svelte destroys our current `Input` instance, and creates a new instance. Svelte thinks the identity of our `Input` has changed because the active branch changed. Unfortunately, because this move spans more than a single level, `key` will not help us.
The following code is equally broken:

```
<script lang="ts">
  import Button from "./Button.svelte";
  import Frame from "./Frame.svelte";
  import Input from "./Input.svelte";

  let shouldFrame = false;

  const toggleShouldFrame = () => {
    shouldFrame = !shouldFrame;
  };
</script>

<div>
  {#if shouldFrame}
    <Frame>
      {#each ["a"] as key (key)}
        <Input />
      {/each}
    </Frame>
  {:else}
    {#each ["a"] as key (key)}
      <Input />
    {/each}
  {/if}
  <Button on:click={toggleShouldFrame}>Frame</Button>
</div>
```

There is no direct way to preserve the identity of our `Input`. Since `Frame` is an internal component, we could try to modify its implementation to produce the same effect while maintaining tree structure. The original `Frame` component was implemented like this:

```
<div class="flex p-3 border-4 rounded-md border-yellow-500">
  <slot />
</div>
```

We could change `Frame`'s implementation to the following:

```
<script lang="ts">
  export let shouldFrame: boolean;
</script>

<div class={shouldFrame ? "flex p-3 border-4 rounded-md border-yellow-500" : ""}>
  <slot />
</div>
```

And then we could rewrite `DynamicWrapper` as follows:

```
<script lang="ts">
  import Button from "./Button.svelte";
  import Frame from "./Frame.svelte";
  import Input from "./Input.svelte";

  let shouldFrame = false;

  const toggleShouldFrame = () => {
    shouldFrame = !shouldFrame;
  };
</script>

<div>
  <Frame {shouldFrame}>
    <Input />
  </Frame>
  <Button on:click={toggleShouldFrame}>Frame</Button>
</div>
```

This would work correctly because declarations outside control flow have persistent identity. However, this little hack will only work sometimes. In the common case, we cannot easily achieve the desired effect without using conditional declarations. Further, when we're working with third-party components we're out of luck. The commonly employed workaround for such identity issues is **state-lifting**. With state-lifting, you move your state higher in your user interface.
You move your state up until you find somewhere in your tree that has a stable identity for at least the intended lifetime of your state. Let's look at how we can make use of state-lifting in the frame example. First we create a `FungibleInput` component:

```
<script lang="ts">
  export let value: string;
</script>

<input {value} on:input />
```

Then we make use of it in `LiftDynamicWrapper`:

```
<script lang="ts">
  import Button from "./Button.svelte";
  import Frame from "./Frame.svelte";
  import FungibleInput from "./FungibleInput.svelte";

  let value = "";
  let shouldFrame = false;

  const toggleShouldFrame = () => {
    shouldFrame = !shouldFrame;
  };

  const handleInput = (event: InputEvent) => {
    value = (event.target as HTMLInputElement).value;
  };
</script>

<div>
  {#if shouldFrame}
    <Frame>
      <FungibleInput on:input={handleInput} {value} />
    </Frame>
  {:else}
    <FungibleInput on:input={handleInput} {value} />
  {/if}
  <Button on:click={toggleShouldFrame}>Frame</Button>
</div>
```

Our new `FungibleInput` component is stateless; it takes `value` as a prop and forwards input events to enable its parent to manage state. The `LiftDynamicWrapper` component now manages `input` state, and passes the latest `value` to `FungibleInput`. This version maintains `value` as desired. However, like our `Input` instance in the previous example, our `FungibleInput` instance is replaced on each toggle. The difference here is that `FungibleInput`'s identity is meaningless. The replacement instance is identical in every perceptible way. In this example, state was the only source of non-fungibility. But generally, any aspect of an instance that creates a perceptible difference between itself and its replacement must be managed for this technique to work. The consequences of lost state are often dramatic. However, other fungibility failures can be subtle. For example, failure to make lifecycle functions idempotent. There's a way to understand state-lifting in terms of fungibility and encapsulation.
In general, there's a tension between fungibility and encapsulation. If you know every possible thing about something, then you can create an exact copy. On the other hand, encapsulation hides information. State-lifting is a way to tradeoff encapsulation for fungibility. By moving your state higher in the tree, you break encapsulation by exposing state management details. At the same time, you make the instance stateless. A stateless instance is more easily fungible. In addition to breaking encapsulation, state-lifting has many other negative consequences: hinders composition, increases complexity, complicates debugging, and hurts performance. The instance identity model is often improperly distinguished from patching. The instance identity model specifies when an instance's identity is preserved, and is part of Svelte's public API. On the other hand, the code Svelte uses to patch the current user interface to the updated user interface is an implementation detail. Crucially, the patch code must respect the instance identity model. Svelte's instance identity model impacts nearly every aspect of its programming model. The three rules that Svelte uses to determine instance identity are easy to understand and have proved workable in practice. Nevertheless, they are inadequate for expressing common identity requirements. The workarounds for this inadequacy such as state-lifting have serious problems. There is a clear need for a more general instance identity model.
true
true
true
Gact Research Publications. Contribute to gactjs/research development by creating an account on GitHub.
2024-10-12 00:00:00
2023-02-20 00:00:00
https://opengraph.githubassets.com/a2517b17f3497d8db334844d065be76d60ee44924df904af25dfaf71d9142d83/gactjs/research
object
github.com
GitHub
null
null
31,720,703
https://twitter.com/watcherguru/status/1536172065732104195
x.com
null
null
true
true
false
null
2024-10-12 00:00:00
null
null
null
null
X (formerly Twitter)
null
null
1,854,837
http://www.appmarket.tv/opinion/785-google-tv-is-aiming-for-the-living-room-with-or-without-hollywood-.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
10,594,284
http://www.theguardian.com/technology/2015/nov/18/flip-phone-samsung-clamshell
Hello, it's me. On a flip-phone. Samsung unveils clamshell model
Hannah Jane Parkinson
We all owe Adele an apology. After the internet mercilessly took the piss out of the flip-phone she used in her video for Hello (a decision the director said was thought through – “it’s so distracting to see an iPhone in a movie”), news has emerged that Samsung is releasing a flip model. In photographs that were (probably) greeted with much suspicion – because, flip-phone, 2015 – the SM-W2016 looks a little bit like a clamshell version of the Galaxy S6. According to Sammobile, the new phone will include 64GB storage, 3GB RAM, 16 megapixel and 5 megapixel cameras and will run Android Lollipop. Samsung did actually bring a flip-phone to the market back in July – but only in Korea. And it’s unclear whether Samsung is planning to bring the SM-W2016 to the European or US markets but let’s hope so, because I’m excited at the prospect of playing a businessman circa 2002. Back in December 2014, I wrote about my love affair with clamshell models, which allow a (literally) snappy end to a conversation; an “onomatopoeic full-stop” as I put it then. There was something a little sassy in a flip-phone, and it was perfectly sized for all pockets. Now, phones are so large it’s often difficult to sit down if they are being carried in jeans. Adele and I aren’t the only fans. Anna Wintour and Rihanna have also been spotted with humble clamshells. Who wants to tell Drake that the only reason we stopped calling him on his cell phone was because he switched to a smartphone?
true
true
true
Possibly inspired by singer Adele (possibly not), Samsung is bringing a new flip-phone model to the market
2024-10-12 00:00:00
2015-11-18 00:00:00
https://i.guim.co.uk/img…ef082c2a9edf4755
article
theguardian.com
The Guardian
null
null
6,366,462
https://www.edx.org/course/harvard-university/hks211-1x/central-challenges-american/1087
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
5,632,909
http://thomaslarock.com/2013/04/doing-it-wrong-virtualizing-sql-server/
Doing It Wrong: Virtualizing SQL Server
Thomas LaRock
I’ve been involved in virtualization projects for almost ten years now. In that time I’ve had the opportunity to track my own list of “best practice” items. It’s a list I share freely with clients and customers that seek me out for virtualization advice. I can tell that virtualization (and Cloud) efforts are on the rise simply by the number of requests I get for help, specifically for virtualizing SQL Server. I like to call this list my “facepalm” special, as any one of these essentially triggers a facepalm reaction. They have helped my customers and clients in the past and I am certain they will help you.

### 1. Build Your Own

Don’t build a host – especially a PRODUCTION host – out of spare parts leftover from servers that are near the end of their life. If you want to go on the cheap and use spare parts, do it for a development host and get ready to spend extra time keeping that bucket of bolts together. If you are going virtual, you will want to buy new hardware to use for your hosts, and hardware that is more powerful than the servers you already have deployed. There are also licensing considerations here. It could be the case that it is cheaper to buy new hardware and have less to license overall.

### 2. No Performance Expectations

You cannot go virtual without having any idea as to what is an acceptable performance level. VMWare has a whitepaper that essentially says they can offer you 98% of the same performance as a current physical implementation of SQL Server. Note that doesn’t mean you will get *better* performance by moving to VMWare itself. Often times you get better performance because you have moved to better hardware (see the first item in this list). But if you don’t know what your current performance SLAs are then you won’t have any idea if you have still met the SLAs once you have converted to virtual. Get your expectations set now so you can track them going forward.

### 3. Select Wrong Disk Choice

You have two main options here: raw device mappings (RDM) and virtual machine disk format (VMDK). Which one do you want to use, and when? VMWare says that in terms of performance the difference is minimal. The real difference is functional, or architectural (I know I just scared away some DBAs because I used the word ‘architecture’, but yeah I went there). VMWare has published a list of scenarios where RDMs would be a better solution for your shop. You need to know these differences before you start deploying a solution that fails to meet some critical business requirement.

### 4. Thin Provisioning

Thin provisioning is one of those bad ideas that sounds good and often produces the same results as do-it-yourself dentistry. It starts out innocently enough: someone wants to save space and only allocate storage as needed. The end result is that no one keeps efficient track of what VMs have been thin provisioned and eventually as the files grow in size they fill up all the available storage until all activity stops because the disk is full. VMWare has a recommendation for you listed here: use vMotion to migrate the guests to new hosts where they will fit. Great advice, but I’m guessing you didn’t have enough room to start with, otherwise you wouldn’t be using thin provisioning.

### 5. Over-Overallocation Of Memory/CPU

It’s okay to want to over allocate your memory and CPU resources. What you don’t want to have happen is to have them over committed, as that’s where performance issues manifest themselves. When I am reviewing a customer’s configuration I tell them the line in the sand I draw is a 1.5:1 ratio as an upper bound default for CPU resources (so, 16 logical cores means you can allocate 24 vCPUs as a baseline and adjust up or down as workload and load balancing allow). You can find a good description on allocating vCPU at this blog post.
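As a back-of-the-envelope check on that guideline, the ratio is simple arithmetic; a minimal sketch (the function name is illustrative, not a VMware tool):

```python
# Sketch of the 1.5:1 vCPU over-allocation rule of thumb described above.
# logical_cores is the host's logical core count; the result is an upper
# bound on total vCPUs allocated across guests before adjusting for workload.

def max_vcpu_allocation(logical_cores: int, ratio: float = 1.5) -> int:
    """Baseline upper bound on vCPUs to allocate on one host."""
    return int(logical_cores * ratio)

# A host with 16 logical cores -> allocate at most 24 vCPUs as a baseline.
print(max_vcpu_allocation(16))  # 24
```

Anything beyond that baseline should be justified by measured workload, not hope.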
For memory settings I follow what is outlined in the VMWare Performance Best Practices guide which states “…avoid over-allocating memory”. In other words, I’m much more conservative with memory over allocation than with CPU over allocation. ### 6. Trusting O/S Counters When you go virtual that means you have an additional layer of abstraction (i.e., the hypervisor) between you and your data. I usually just say “there are many layers of delicious cake between you and your data”. The trouble is that you need to know which layer is causing you performance issues. Is it the host? Is it the guest? As such, you need to rely on the VM performance counters in order to get a complete picture of what is happening. You can see the counters explained in more detail from this page. If you are still relying on standard O/S metrics for a virtualized server then you are doing it wrong. (And if you need a tool to show you all those layers of cake, I can help.) ### 7. Running It All At Once Remember how I said that you want to avoid over committing all your resources at once? That’s where load balancing and knowing your workloads are key. You cannot carve out a dozen guests to be used as production database servers to be run during regular business hours and expect that performance will remain at that 98% mark that VMWare suggests is attainable. You have to balance your workload otherwise you are going to find that your over allocation of resources is now an over commit of resources. Yet I still see customers stretching their hosts way too thin. These are the seven items that I see hurting a majority of virtualization efforts. They result on bad performance that leaves users and administrators frustrated. They are also easily avoidable with just a bit of up front knowledge and requirements gathering. Nice post! Number 5 is a struggle at times for us. All 7 of these are spot on. Thanks Chris, glad you enjoyed the post. 
Which O/S counters do you see trusted the most for virtualized SQL Servers but shouldn’t be? And what would the relevant counters be to look at (in VMWare in my case :)? Great post btw
true
true
true
Looking at virtualizing SQL Server? Here's a list of 7 things you want to avoid.
2024-10-12 00:00:00
2013-04-30 00:00:00
https://thomaslarock.com…diy-dentist1.jpg
article
thomaslarock.com
Thomas LaRock
null
null
9,518,601
http://typedrummer.com
typedrummer
null
load new samples share this beat
true
true
true
typedrummer is an instrument for making ascii beats.
2024-10-12 00:00:00
null
http://typedrummer.com/images/typedrummer.png
null
typedrummer.com
Typedrummer
null
null
10,496,245
http://techcrunch.com/2015/11/02/plex-lands-on-the-new-apple-tv/?ncid=rss
Plex Lands On The New Apple TV | TechCrunch
Greg Kumparak
Plex, the much adored app for streaming totally-legit-and-not-at-all-bootlegged-cough-cough content from your PC to your other gadgets, has just hit the new Apple TV. Wondering what the heck Plex is? The short answer: you run a Plex server on your PC, and load it up with video and music files in just about any format — from your standard DiVX/XVid file to a big ol’ high res MKV. Bring up a Plex client on your phone, Chromecast, or now your AppleTV, tap the video you want to play, and bam — it streams it from your PC to your device, transcoding it into a device-friendly format on the fly. “But wait!” you say. “My friend had an old Apple TV, and it was running Plex!” That’s possible! Plex *was* available on Apple TV before today — you just had to hack the heck out of the Apple TV to make it work. And it wasn’t official Plex, but a third party rebuild (and, it’s worth noting, both of the main developers on that build now work for Plex). Today, however, Plex on AppleTV goes legit, and it’s free to download. You can find the universal iOS/Apple TV Plex app right over here.
true
true
true
Plex, the much adored app for streaming totally-legit-and-not-at-all-bootlegged-cough-cough content from your PC to your other gadgets, has just hit the new Apple TV.
2024-10-12 00:00:00
2015-11-02 00:00:00
https://techcrunch.com/w…t-3-27-23-pm.png
article
techcrunch.com
TechCrunch
null
null
7,001,688
http://www.digitaltrends.com/computing/pirate-bay-uploads-leap-50-thwarting-anti-piracy-groups/
Pirate Bay Uploads Leap 50 Percent, Thwarting Anti-Piracy Groups | Digital Trends
Matthew S Smith
Piracy never changes. For years it has made copyright owners furious, and for years the efforts to stop it have fallen short. 2013 was no different. The Pirate Bay has thwarted nation-wide ISP blockades, domain changes and the continuing imprisonment of co-founder Gottfrid Svartholm with a 50 percent increase in uploads over the last year. This raises the number of torrent files available on the site to 74,195 as of this writing, an all-time high. The number of torrents indexed has reached a staggering 2.8 million, which are shared by over 18 million people if both seeds and leechers are included. About half of the sharing volume is devoted entirely to video, followed by audio (at 17 percent) and porn (at 13 percent). Surprisingly, games and software make up only 5 percent of share volume each. While The Pirate Bay remains healthy in spite of efforts by copyright holders, corporations and even nations to stop it, the war against piracy continues. The site spent much of the last month fighting a running battle against copyright holders, which resulted in several domain name changes as previous domains were seized, forcing the site to move. The Pirate Bay hopes to thwart future domain seizures with a peer-to-peer browser (predictably named PirateBrowser) which will act as the site’s main hub and circumvent domain name seizures. Of course, the site’s enemies will no doubt try to find a way to limit the browser’s distribution. While uploads have grown, the drama surrounding the world’s most well-known source of torrents is certainly far from over.
true
true
true
The Pirate Bay has thwarted domain name seizures in 2013 and enjoyed a 50 percent increase in file uploads over the last year.
2024-10-12 00:00:00
2013-12-31 00:00:00
https://www.digitaltrend…e=1200%2C630&p=1
article
digitaltrends.com
Digital Trends
null
null
1,204,023
http://www.chatroulette-clone.net/2010/03/chatroulette-script-red5/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
11,125,216
https://www.technologyreview.com/s/545626/venture-capitalists-chase-rising-cybersecurity-spending/#/set/id/600832/
Venture Capitalists Chase Rising Cybersecurity Spending
Mike Orcutt
# Venture Capitalists Chase Rising Cybersecurity Spending The rash of headline-grabbing cyberattacks on major companies over the past few years has made one thing abundantly clear: it’s not enough to rely only on traditional security tools. To venture capitalists, that means there’s money to be made by betting on startups developing new ones. VCs are hoping to get a piece of companies’ increased spending on cybersecurity. In 2014 Gregg Steinhafel, the CEO of Target, became the first head of a major company to lose his job over a data breach. Now, worried company leaders are giving their security units a “blank check,” says Scott Weiss, a general partner who specializes in security at the venture capital firm Andreessen Horowitz: “The CEO has said, ‘Look, whatever you need, you’ve got.’” Today’s advanced threats are much too sophisticated for traditional tools like antivirus software and firewalls. Not wanting to buy obsolete products, security executives are increasingly venturing into agreements with cybersecurity startups. To Weiss and other venture investors, that kind of customer demand is an investment opportunity. According to CB Insights, the global VC community poured a record $2.5 billion into cybersecurity companies in 2014, a strong year for IT startups in general and software in particular. Security companies raised another $3.3 billion in 2015. The problems these startups are trying to solve are complex. The bad guys do have better weapons, but business systems are also becoming vulnerable in new ways. Businesses are relying more on cloud services and connecting more “things” to the Internet, and their employees are using more connected devices. Until a few years ago, the conventional approach to security entailed basically building a wall around valuable data and using software to detect known signatures of malicious code. 
Then security researchers began finding extremely complex malware, derived from government-designed exploits and sophisticated enough to circumvent traditional antivirus tools. This new generation of malware can be custom-built for a specific network and more precisely controlled by its human operators. Dealing with such specialized, fast-evolving adversaries requires changing the security paradigm from prevention to “active cyberdefense,” says Nicole Eagan, CEO of Darktrace, a two-year-old company based in Cambridge, United Kingdom. Hackers are going to get in, so the trick now is to find them “in near real time as they are moving subtly and silently around your network” and catch them before they do any real damage, she says. A number of companies, taking a range of different approaches, promise that their detection technologies can do this. Darktrace, which has raised $110 million in VC funding, relies on advanced machine-learning technology to analyze raw network traffic and, as Eagan explains, “determine a baseline for what’s normal” for every person using the network so that it can detect abnormal behavior. Not only are the threats more numerous and advanced, but companies must also secure networks that are growing more complex and massive. Every device on a network is a potential target for hackers, and new security technologies focused on them are getting lots of attention from investors. A company called Tanium, which is now valued at $3.5 billion after its most recent round of VC investment, has technology that allows network operators to ask questions about what’s happening in any one of millions of devices on a network. They get an answer within 15 seconds and can quickly take action—for example, by quarantining an infected computer. Security investors are also focused on the fact that businesses and organizations are putting more and more data in the cloud. 
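Darktrace’s actual models are proprietary, so take this as nothing more than a toy sketch of the “learn a baseline, flag deviations” idea Eagan describes; the traffic numbers, units, and three-sigma threshold here are invented for illustration:

```python
import statistics

def flag_anomalies(baseline, observed, threshold=3.0):
    """Flag observations further than `threshold` standard deviations
    from the mean of the baseline measurements."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return [x for x in observed if abs(x - mu) > threshold * sigma]

# Pretend per-user traffic baseline (hypothetical units, e.g. MB/hour):
baseline = [100, 102, 98, 101, 99, 100, 103, 97]
print(flag_anomalies(baseline, [101, 99, 480]))  # [480]
```

Real deployments model far more than a single scalar per user, but the shape of the problem is the same: no signatures, just a learned notion of normal and alerts on departures from it.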
In response to this trend, a new breed of cloud security companies is offering services such as novel encryption schemes and technologies for continuously monitoring what goes on in a company’s cloud servers. With so much funding available, the burgeoning cybersecurity startup scene is chaotic. Greg Dracon, a partner at .406 Ventures who has invested in several security companies, thinks a consolidation cycle may already be starting. Bigger companies are buying up individual technologies and could eventually offer suites of products, he says. Dracon thinks all this investor attention is driving prices too high overall, and that the market has gotten ahead of itself, at least for the near term. However, the security market itself has another decade of growth at least, he believes. “The problem set is outpacing the solution set,” he says, “and I don’t think there’s any end in sight to that.”
true
true
true
Investors have been pouring money into companies selling “next-generation” security products.
2024-10-12 00:00:00
2016-01-25 00:00:00
null
article
technologyreview.com
MIT Technology Review
null
null
9,067,818
https://raspberrypi.stackexchange.com/questions/27454/odd-ethernet-problem-slow-write-speeds
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
21,398,331
https://jamesclift.ca/how-to-not-get-scammed-hiring-remote-freelancers-on-upwork-bfc6d215d22e
Pitch deck consulting
null
null
true
true
false
null
2024-10-12 00:00:00
2024-05-16 00:00:00
https://images.unsplash.com/photo-1434030216411-0b793f4b4173?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=MnwyNjI5NjF8MHwxfHNlYXJjaHwzfHxCdXNpbmVzc3xlbnwwfHx8fDE2NDc1MzI2Mjc&ixlib=rb-1.2.1&q=80&w=1080
website
jamesclift.ca
jamesclift.ca
null
null
3,376,186
http://procrastineering.blogspot.com/2011/02/low-cost-video-chat-robot.html
Low-Cost Video Chat Robot
Johnny Chung Lee
Since I relocated down to Mountain View, I wanted a good way to keep in touch with my fiance who is still back in Seattle. So, I decided to mount an old netbook I had on top of an iRobot Create to create a video chat robot that I could use to drive around the house remotely. Since it was a good procrastineering project, I decided to document it here. There are two major components to the project: the iRobot Create which costs around $250 (incl. battery, charger, and USB serial cable) and the netbook which I got for around $250 as well. At $500, this is a pretty good deal considering many commercial ones go for several thousand dollars. The software was written in C# with Visual Studio Express 2010 and only tested on Windows 7 with the "Works on my machine" certification. =o) I'm sure there are TONs of problems with it, but the source is provided. So, feel free to try to improve it. **Download Software:** VideoChatRobot v0.1 (posted 2/9/2011) VideoChatRobot v0.2 (posted 2/11/2011) Included are the executable, C# source, and two PDFs: one describing installation and usage of the control software, the other more information about modifying the charging station. The software does a few nice things like try to set up UPnP router port forwarding automatically, queries the external IP needed to make a connection over the open internet, maintains a network heartbeat which stops the robot if the connection is lost, a control password, auto-connect on launch options, and even mediates the maximum acceleration/deceleration of the motors so it doesn't jerk so much or fall over. The UPnP port forwarding is far from perfect and not well tested at all. If it works for you, consider yourself lucky. Otherwise, ask a friend how to set up port forwarding to enable remote control over the internet. Once you have all the parts (the netbook, the robot, the serial cable, the software), you can probably be up and running within 5 minutes. Assembly is merely plugging cables together. 
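The network heartbeat described above (stop the robot if the connection is lost) can be sketched in a few lines. Lee's implementation is C#; this is a hypothetical Python sketch. The packet layout follows the published iRobot Create Open Interface (opcode 137 is Drive, radius 0x8000 means drive straight), but the timeout value and the callback wiring are invented:

```python
import struct
import time

def drive_command(velocity_mm_s: int, radius_mm: int = 0x8000) -> bytes:
    """Create Open Interface 'Drive' packet: opcode 137, then velocity
    (mm/s) and radius (mm) as 16-bit big-endian values."""
    return struct.pack(">BhH", 137, velocity_mm_s, radius_mm & 0xFFFF)

STOP = drive_command(0)  # velocity 0 stops the wheels

def watchdog(last_seen: float, send, timeout: float = 1.0) -> bool:
    """If no control packet has arrived within `timeout` seconds,
    send the stop command. Returns True if we stopped the robot."""
    if time.time() - last_seen > timeout:
        send(STOP)
        return True
    return False

# Simulate a dropped connection: the last packet arrived 5 seconds ago.
sent = []
print(watchdog(time.time() - 5.0, sent.append), sent[0].hex())  # True 8900008000
```

In the real app the `send` side would be a serial port wired to the Create's cargo bay connector, and the loop would run alongside the Skype session so a Wi-Fi dropout can't leave the robot driving blind.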
Mounting the netbook can be done with velcro or tape. Building the riser stand is more challenging, but entirely optional. I happen to have access to a laser cutter to make my clear plastic stand, but you can probably make something adequate out of wood. **Optional: Modifying the Charging Station** Probably one of the more interesting parts of this project from a procrastineering standpoint is the modification to the docking station so that it would charge something else in addition to the robot base. What I did is admittedly pretty crude and arguably rather unsafe. So, this is HIGHLY NOT RECOMMENDED unless you are very comfortable working with high voltage electricity and accept all the personal risks of doing so and potential risks to others. This is provided for informational purposes only and I am not responsible for any damages or harm resulting from the use of this material. Working with household power lines can be very dangerous, posing both potential **electrocution** and **fire** risk. This is also unquestionably a warranty voiding activity. DO NOT ATTEMPT THIS without appropriate supervision or expertise. Now that I've hopefully scared you away from doing this... what exactly did I do? A high level picture is shown here: The PDF document in the download describes the changes in more detail. But, I had a lot of trouble trying to tap the existing iRobot Create charging voltage to charge something else, primarily because the charging voltage dips down to 0V periodically and holds for several milliseconds. That would require making some kind of DC uninterruptible power supply and made the project much more complex. The easiest way to support a wide range of devices that could ride on the robot was to somehow get 120V AC to the cargo bay... for those of you with some familiarity with electronics, you probably can see the variety of safety hazards this poses. 
So, again this is **HIGHLY NOT RECOMMENDED** and is meant to just be a reference for trying to come up with something better. I actually do wish iRobot would modify the charging station for the Create to officially provide a similar type of charging capability. It is such a nice robot base and it is an obvious desire to have other devices piggyback on the robot that might not be able to run off the Create's battery supply. I personally believe it would make it a dramatically more useful and appealing robot platform. **Usage Notes** At the time of this post, I've been using it remotely for about a month on a regular basis between Mountain View and Seattle. My nephews in Washington DC were also able to use it to chase my cat around my house in Seattle quite effectively. Thus far, it has worked without any real major problems. The only real interventions on the remote side have been when I ran it too long (>4 hours) and the netbook battery dies, or having the optional 4th wheel on the iRobot Create pop off, which can be solved with some super glue. Otherwise, the control software and the charging station have been surprisingly reliable. Using remote desktop software like TeamViewer, I can push software changes to the netbook remotely, restart the computer, put Skype into full screen (which it frustratingly doesn't have as a default option for auto-answered video calls), and otherwise check in on the health of the netbook. ## Wednesday, February 9, 2011 Posted by Johnny Chung Lee at 1:43 AM ## 27 comments: This is great! I wonder if, to save power, you could use a jailbroken iPad (or perhaps an iPad running a Netbook OS, there are a few hacks out there) with a camera kit webcam? Thanks for the great post. ~ Aaron I'm amazed how much fantasy you have! I'll keep following your works. What about a recognition software so the robot can follow the subjet around the house? Looking forward your next idea!!! 
Antonio Awesome project. I like how you changed the charging dock. I'd love to see the photos of the modifications. Your project is a great inspiration. Thanks for sharing. nice project dude,.. very indpiering,.. i might build something like this one day,.. (c: Perhaps using bluetooth,.. Nice! Are you going to keep updating the code or this is it? Hey Johnny, You always give me great inspiration to try these projects. Keep them coming! I wonder if a good alternative wouldn't be to rig some sort of induction charger? Instead of a notebook , what about an android phone? It has everything needed: cameras(new phones have front and back,so you could even see from both sides), gps (could make an out doors unit!). Would be more lighter , flexible and easier to charge. Amazing, man!!! It could be a great idea using a Galaxy Tab (or similar) better than tne notebook: smaller, less battery drainer and videoconference cam attached from factory, plus integrated 3G module. Although the first thing I´ve think off whaun watching the video was "how about mounting this inside a R2-D2 real size replica?". ;-) Best of all, the integration with iRiver, movement and intelligent recharging. As always, very good job. Another great Blog entry and something to take up for inspiration and take it to the next level. Thanks Johnny For the charging base, how about replacing the original AC/DC adapter with one that supplies much more current, then tapping into the DC side on the Create, and use a DC-DC converter (like PicoPSU) to get the proper voltages needed for the various charging inputs? This will be MUCH safer than having AC voltage sitting around on potentially exposed contacts. If liquid is spilled on that base, it could become deadly easily. Awesome stuff dude ! You rock !! Would be awesome if you could make this for iPod Touch! 1 word: beamed power So what was the point of the UPnP port forwarding that doesn't work? 
Honestly never heard of UPnP actually working, except for being included in products that need to pass some Cablelabs certification. Why would anyone use it in a hobby project? Oh man.....Johnny....write an app for this. Irobot, ipad 2 running skype or FaceTime. I Want One! This is awesome! Does your software support irobot roomba or irobot scooba. Thnx for your reply. Awesome. It would have been cool at the end of the video if you'd turned right down the stairs instead of going straight! ;-) This is really interesting! I live overseas too and once had that idea when I started playing around with Skype. Its such a nice feeling to see someone had actually built it! Great Job! You should mount a laser pointer on it that you can remotely control so that you'll be able to point at things for your loved ones... Sometimes its useful to be able to do that rather than telling them over video: "Oh its in the 2nd shelf of that cabinet... No not that one.. The other shelf over there!" http://sparkyjr.ning.com/ mac version using skype At the rate that you make great projects what we really need is a chat robot that just watches you all day :) Suggestion for avoiding the 110VAC connection via the iRobot dock: Use a split transformer (such as used in cordless toothbrushes) to allow magnetic coupling between the netbook's charger and the AC power line. I haven't tried to find one ready-made, but winding a transformer yourself is tedious, not difficult. i'd like to join others in requesting a version where i can stick my samsung galaxy tab over the irobot. Galaxy has cameras both ways and i assume will take up less power. please please please!!! even an android phone version would do. Cool project. I am sure we will use such devices in the future. I test your program in XP. it runs but crash after pushing the button "Enable network control". Other buttons work correctly. In Win7 all is ok. Another fantastic Johnny Lee Project! 
I think the mechanical safety switch is a good idea in the docking station, but especially with a remote application I can see where this could go tragically wrong. I would suggest something like the Energizer Qi http://www.energizer.com/inductive/default.aspx which could possibly be made to work. Also, if you could ditch the netbook battery, everything would be lighter up top and you wouldn't stress the castors. The plastic is also surprisingly heavy - I wonder if a simple 1/4" steel post with three tension guy wires wouldn't be lighter and easier. Johnny (Dr. Lee now that your CMU days are over), I have to say "thank you" for your willingness to post these things in the past and present. Certainly for me these have pushed the envelope and heightened my awareness of what is possible. I set this up with a spare netbook and a Create. The local conection works great, but as soon as I connect to it remotely from another machine, one of the wheels begins spinning, even though I am not sending any input. The remote machine shows a velocity of "0 0", but the local machine shows "-18 -98", which seems corroborated by flipping the Create over and observing the wheels. As soon as I disconnect, the Create stops. Did you encounter this issue? I really like this idea. We provide video remote interpreting for none english patients using a flex application. We have figured out a way to replicate this with the controls in our application. I will keep you posted...great idea. great idea..we provide video remote interpreting and we have figured out a way to control the robot within our application (flex). I would of never thought of this...keep up the good work and I will keep you posted with our version and share the results.
true
true
true
Since I relocated down to Mountain View, I wanted a good way to keep in touch with my fiance who is still back in Seattle. So, I decided to...
2024-10-12 00:00:00
2011-02-09 00:00:00
https://lh3.googleusercontent.com/blogger_img_proxy/AEn0k_sPaFqfWNMPoX-3WiY5KikTNETMGsQu0KeMDMKxIIrjlwYat790bVqGe18mJqDi5Us66S2l_gvKR26sDzEht1avp4cnsXokiLTo1Cjr71yipTGz0A=w1200-h630-n-k-no-nu
null
blogspot.com
procrastineering.blogspot.com
null
null
1,928,267
http://kinecthacks.net/real-time-lightsaber-tracking-and-rendering-on-the-kinect/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
28,958,919
https://www.wsj.com/articles/dwac-the-trump-social-media-spac-quadruples-on-first-day-11634843253
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null