| column | dtype | lengths / values |
| --- | --- | --- |
| id | int64 | 3 to 41.8M |
| url | stringlengths | 1 to 1.84k |
| title | stringlengths | 1 to 9.99k |
| author | stringlengths | 1 to 10k |
| markdown | stringlengths | 1 to 4.36M |
| downloaded | bool | 2 classes |
| meta_extracted | bool | 2 classes |
| parsed | bool | 2 classes |
| description | stringlengths | 1 to 10k |
| filedate | stringclasses | 2 values |
| date | stringlengths | 9 to 19 |
| image | stringlengths | 1 to 10k |
| pagetype | stringclasses | 365 values |
| hostname | stringlengths | 4 to 84 |
| sitename | stringlengths | 1 to 1.6k |
| tags | stringclasses | 0 values |
| categories | stringclasses | 0 values |
id: 3,195,560 · url: http://matteodallombra.net/2011/11/04/oink-rate-the-adventure/ · downloaded: false · meta_extracted: false · parsed: false · all other fields: null
id: 27,364,144 · url: https://www.genomeweb.com/sequencing/new-oxford-nanopore-sequencing-chemistry-reaches-99-percent-accuracy-many-reads · downloaded: false · meta_extracted: false · parsed: false · all other fields: null
id: 16,570,680
url: https://arstechnica.com/information-technology/2018/03/theres-a-currency-miner-in-the-mac-app-store-and-apple-seems-ok-with-it/
title: There’s a currency miner in the Mac App Store, and Apple seems OK with it
author: Dan Goodin
markdown:
Resource-draining currency miners are a regular part of the Google Play market, as scammers pump out apps that covertly harness millions of devices, in some cases with malware so aggressive it can physically damage phones. A popular title in the Mac App Store recently embraced coin mining openly, and so far Apple gatekeepers haven't blocked it.

The app is Calendar 2, a scheduling app that aims to include more features than the Calendar app that Apple bundles with macOS. In recent days, Calendar 2 developer Qbix endowed it with code that mines the digital coin known as Monero. The xmr-stak miner isn't supposed to run unless users specifically approve it in a dialog that says the mining will be in exchange for turning on a set of premium features. If users approve the arrangement, the miner will then run. Users can bypass this default action by selecting an option to keep the premium features turned off or to pay a fee to turn on the premium features.

## Feels like the first time

If Calendar 2 isn't the first known app offered in Apple's official and highly exclusive App Store to do currency mining, it's one of the very few. The discovery comes as sky-high valuations have pushed the limits of currency mining and led to a surge of websites and malware that surreptitiously mine digital coins on mobile devices, personal computers, and business servers. Calendar 2 is slightly different in the sense that it clearly discloses the miner it runs by default. That puts it in a grayer zone than most of the miners seen to date.

"On the one hand, using the user's CPU for cryptomining has become extremely unpopular," Thomas Reed, director of Mac offerings at antimalware provider Malwarebytes, told Ars. "The fact that this is the default is something I don't like. I would want to see a legit app informing the user in advance or making it an option that can be turned on but is off by default. On the other hand, they [the developers] do disclose that they are doing it and give other options for people who don't like it. My personal feeling on this is that, given the disclosure, I think the user should be allowed to make their own choice. Some people might be perfectly willing to let an app like this mine cryptocurrency so that they can use it for free."
downloaded: true · meta_extracted: true · parsed: true
description: Popular Calendar 2 app mines Monero by default, but at least it discloses it.
filedate: 2024-10-12 00:00:00 · date: 2018-03-12 00:00:00
image: https://cdn.arstechnica.…mrstak-usage.png
pagetype: article · hostname: arstechnica.com · sitename: Ars Technica
tags: null · categories: null
id: 27,629,096
url: https://www.phoronix.com/scan.php?page=news_item&px=OpenPOWER-Microwatt-5.14
title: Linux 5.14 To Support The OpenPOWER Microwatt Soft CPU Core
author: Michael Larabel
markdown:
# Linux 5.14 To Support The OpenPOWER Microwatt Soft CPU Core

Announced two years ago was the OpenPOWER Microwatt as an FPGA-based soft CPU core. This open-source soft processor core complies with the Power ISA 3.0 instruction set and can be run on various FPGA hardware. Microwatt marks the first processor written from scratch using the open Power ISA 3.0 specification and serves as one of the organization's reference designs. While a basic design catering to FPGA usage, it's going to see 130nm chip fabrication this year if all goes well.

In any case, the news today is that the forthcoming Linux 5.14 is set to add support for the Microwatt. Queued within powerpc-next is the Microwatt platform support, the Microwatt DeviceTree, and various other patches for enabling the OpenPOWER Microwatt support. This initial support is good enough for handling FPGA implementations of Microwatt, while the patches do reiterate the early state of the platform, which does not yet have SMP, VMX, VSX, transactional memory, or other features found with mature POWER hardware. The Microwatt sources, written in VHDL, can be found via GitHub.
downloaded: true · meta_extracted: true · parsed: true
description: Announced two years ago was the OpenPOWER Microwatt as an FPGA-based soft CPU core
filedate: 2024-10-12 00:00:00 · date: 2021-06-23 00:00:00
image: null · pagetype: null · hostname: null · sitename: Phoronix
tags: null · categories: null
id: 21,824,213
url: https://stratechery.com/2019/the-2019-stratechery-year-in-review/
title: The 2019 Stratechery Year in Review
author: Ben Thompson
markdown:
2019 was a transition year for me personally, and by extension, Stratechery. The nadir was an Article I regret — The WeWork IPO — where, despite not believing in the company, I wrote the contrarian take because it seemed more interesting (my full *mea culpa* is here). It was not a healthy approach, but, six years after starting Stratechery, and five years of it being my full-time job, it was perhaps an understandable one.

The turning point was The China Cultural Clash; writing about the crisis surrounding the NBA in China and the implications for the technology industry reminded me of something I had started to forget in my attempt to be even-handed and dispassionate in my analysis: values matter, and there is a freedom that comes from recognizing and articulating those values. Indeed, honesty about values makes analysis better, because underlying assumptions are pushed to the forefront, instead of fading to the background — and, I’d add, it is invigorating! On to 2020!

This year I wrote 139 Daily Updates (including tomorrow) and 36 Weekly Articles, and, as per tradition, today I summarize the most popular and most important posts of the year. You can find previous Stratechery Years in Review here: 2018 | 2017 | 2016 | 2015 | 2014 | 2013

Here is the 2019 list.

#### The Five Most-Viewed Articles

- The WeWork IPO — This, to be perfectly frank, is brutal: I not only regret this article for being insufficiently negative (although, for the record, I was clear I would not invest in WeWork), but now also have to face the fact it was my most popular post of the year. *Related: Uber Questions, which took the boring skeptical approach to an S-1 I should have repeated.*
- The Google Squeeze — Google, the real Aggregator, is squeezing OTAs, which acted like Aggregators while depending on Google for demand. It’s easy to say Google is being unfair, but this may be better for consumers.
- Shopify and the Power of Platforms — It is all but impossible to beat an Aggregator head-on, as Walmart is trying to do with Amazon. The solution instead is to build a platform like Shopify.
- Disney and the Future of TV — TV is moving from a world where distribution dictates business models to one where business models need to fit the jobs consumers want done. That is the best way to understand Disney’s latest announcement.
- AWS, MongoDB, and the Economic Realities of Open Source — Amazon’s latest offering highlights the economic challenges facing open source companies — and Amazon should pay attention.

#### Lessons Learned

The good thing about making mistakes is that it is an opportunity to learn; two of these articles are directly connected to WeWork.

- Neither, and New: Lessons from Uber and Vision Fund — Uber represents something new: a company that is different than incumbents because of technology, yet not itself a tech company — just like the Vision Fund is not a VC.
- What is a Tech Company? — The question of “What is a tech company” comes down to how much software and its unique characteristics affect the company’s core business.
- The Value Chain Constraint — Companies succeed or fail not based on technology but rather according to their ability to integrate within their value chains.

#### Values and Society

Almost all decisions involve trade-offs; the most difficult are those that require understanding and prioritizing our values.

- The Internet and the Third Estate — Mark Zuckerberg suggested that social media is a “Fifth Estate”; in fact, social media is a means by which the Third Estate — commoners — can seize political power. Here history matters. *Related: Tech and Liberty and the policing of political speech.*
- The China Cultural Clash — The NBA controversy in China highlights a culture clash that both tech companies and the U.S. government need to take to heart. Plus, why Tiktok being Chinese is increasingly a problem. *Related: China, Leverage, and Values, about the trade war.*
- Privacy Fundamentalism — The current privacy debate is making things worse by not considering trade-offs, the inherent nature of digital, or the far bigger problems that come with digitizing the offline world.

#### Regulation and Antitrust

While regulation was also a theme in 2018, this year I tried to get much more specific about how to think about the challenges presented by the Internet.

- Where Warren’s Wrong — Senator Warren’s proposal about how to regulate tech is wrong about history, the source of tech giants’ power, and the fundamental nature of technology itself. That doesn’t mean there aren’t real problems — and potential solutions — though.
- Tech and Antitrust — A review of the potential antitrust cases against Google, Apple, Facebook, and Amazon suggests that only Google is vulnerable.
- Three Frameworks:
  - A Framework for Regulating Content on the Internet — Regulators need to stop blindly regulating “the Internet” and instead understand that every part of the Internet stack is different, and only one part is suffering from market failure.
  - A Framework for Moderation — The question of what should be moderated, and when, is an increasingly frequent one in tech. There is no bright line, but there are ways to get closer to an answer.
  - A Framework for Regulating Competition on the Internet — Understanding the differences between platforms and Aggregators is critical when it comes to considering regulation.

#### The Big Tech Companies

Tech’s continued centralization means that the biggest companies — Microsoft, Apple, Google, Amazon, and Facebook — receive the largest scrutiny.

- Microsoft, Slack, Zoom, and the SaaS Opportunity — The Zoom and Slack IPOs show what Microsoft is missing in its growth story: a way to acquire new customers.
- The First Post-iPhone Keynote — This year’s WWDC presented an Apple that was finally ready to move past the iPhone. *Related: Apple’s Errors and The iPhone and Apple’s Services Strategy.*
- Google and Ambient Computing — Google presented a vision of ambient computing that goes beyond the smartphone. The company is well-placed, but faces challenges both in the marketplace and in the mirror. *Related: Beachheads and Obstacles and Google Fights Back.*
- Day Two to One Day — Amazon.com was showing signs of being a Day Two company, including the alleged manipulation of search. There is reason, though, to be optimistic that the company has gotten back to Day One. Plus, where are the other big tech companies?
- Facebook’s Privacy Cake — Mark Zuckerberg’s announcement of A Privacy-Focused Vision for Social Networking is not some dramatic pivot: it is a growth opportunity for Facebook and a challenge for regulators. *Related: Facebook, Libra, and the Long Game.*

#### Media and Technology

The most important development of the year in media was the launch of Disney+; I already linked to Disney and the Future of TV. Also:

- Netflix Flexes — Netflix is an Aggregator, with a value chain that lets it drive demand, raise prices, and dismiss competition.
- Spotify’s Podcast Aggregation Play — Spotify is making a major move into podcasts, where it appears to have clear designs to be the sort of Aggregator it cannot be when it comes to music.
- The BuzzFeed Lesson — The lesson of BuzzFeed is that dominant Aggregators like Facebook have no incentive to act against their self-interest and support suppliers. *Related: The Cost of Apple News.*

#### The Year in Daily Updates

This year the Daily Update not only continued the trend towards single topics, but often became the place where new ideas and future Weekly Articles were first presented and fleshed out. I’m really proud of this evolution — this was a hard list to cull. Some of my favorites:

- The Alexandria Ocasio-Cortez Phenomenon, Ninja’s 2018, Facebook and the Human Condition — The connection between Alexandria Ocasio-Cortez, Ninja, and Facebook’s scandals.
- Pinterest S-1, Zoom S-1, The Enterprise-Consumer Flip-Flop — Pinterest’s S-1 shows why too much funding can be bad for startups, while Zoom’s S-1 shows the benefits that come from being great. That, by extension, is a result of the enterprise and consumer markets flip-flopping.
- Disney Takes Full Control of Hulu, Disney’s Organization, The Streaming Struggle — Disney has acquired control of Hulu, and has structured itself to take full advantage. Other streaming services, though, are not nearly as well-positioned.
- The Problem with “Aggregation Theory”, Demand at Scale, Supplier Power and Value — A response to *The Problem with Ben Thompson’s ‘Aggregation Theory’*, and why the Internet really is different.
- AMD Launches 7nm Chips, Sony Partners with Microsoft, Apple and AWS — AMD leapfrogs Intel thanks to modularity, Sony partners with Microsoft thanks to scale, and Apple balances both.
- Libra’s Questionable Benefits, Facebook’s Hidden Costs, Philosophical Objections — While Facebook, Libra, and the Long Game was about analysis, this Daily Update is about opinion: I don’t think Libra is a good idea. *Related: Libra in Congress; Global Community, Revisited; Messianic Versus Money-Making.*
- Jony Ive Leaves Apple, Ive’s Legacy, The Post-Ive Apple — Jony Ive is leaving Apple: how it happened, Ive’s legacy, and what it means for Apple going forward.
- Facebook’s FTC Fine, Apple and Microsoft’s Mistake, IBM’s Unbundling — Facebook’s FTC fine is being pilloried, but it really is large and unprecedented. Plus, why Facebook critics were asleep at the wheel. Then, Microsoft saving Apple has an analogy to IBM, and is a potential argument in favor of antitrust action. *Related: Facebook’s FTC Settlement, Assigning Blame, Facebook’s Earnings.*
- Microsoft’s Earnings, Teams Passes Slack, The Partner Advantage — Microsoft continues to crush earnings with its integrated approach. Then, Teams passed Slack, and its lead will likely widen, because it is a sustaining technology, not a disruptive one. Plus, the importance of Microsoft partners.
- Why Cloudflare Matters, The Absence of Gatekeepers, Promotion Versus Moderation — More on moderation, including why Cloudflare is systemically important, a reminder that there are no more gatekeepers, which means moderation is always reactive, and why Facebook and YouTube still deserve the most scrutiny.
- Apple Versus Project Zero, Apple’s Poor Response, The China Paradox — Apple took on Google’s Project Zero over the weekend, and didn’t come out looking particularly good, especially since China is a huge paradox for Apple. *Related: Apple Pulls Hong Kong App, Tim Cook’s Memo, Honesty and Control.*
- Child Sexual Abuse Material Online, The Problem With Community, Towards More Friction — A horrifying article on Child Sexual Abuse Material online is actually a sign that Facebook is doing the right thing, at least for now. Encrypting private communications, though, may make things worse.
- Facebook’s All-Hands Leak, Facebook Versus Warren, Zuckerberg’s Culture — The most newsworthy aspect of the Facebook All-Hands leak is what its existence says about Facebook itself. What is most interesting, though, are not the comments about Elizabeth Warren but what Mark Zuckerberg showed about himself.
- Microsoft’s Surface Event, Victors and History, Microsoft’s Hardware Prospects — Microsoft (eventually) selling a phone that runs Android is not particularly meaningful in terms of its financial impact, but is a totem of a major cultural shift.
- Trump Visits Mac Pro Factory, Apple’s Tariffs, Apple Versus GitHub — Trump visited the Mac Pro factory, and people are disappointed in Tim Cook. First off, tariffs are certainly the driving fact, but I am disappointed too, for different reasons than most.
- Facebook Buys Beat Saber, Capital and Copyright, The Oculus Conundrum — Facebook has bought Beat Games, a company of the future, and not just because they made a game for VR. Then, why it is the old world that needs capital, and why Oculus is still confusing strategically.
- AWS re:Invent; Transformations, Transitions, and Databases; Amazon Outposts — The AWS re:Invent keynote was quite compelling, as Amazon made the case for enterprises to not simply transition to the cloud but to transform their approach to IT — which, of course, favors Amazon.
- TV Advertising Falls; The Sports Linchpin, Revisited; NBA Ratings — TV advertising is down, as price increases finally overwhelm the decline in viewers. It’s important to note, though, that sports still matter. This is something the NBA may not completely understand.
- Cisco Sells Chips, The Conservation of Attractive Networking — Cisco is selling chips, which is a fascinating example of the Conservation of Attractive Profits.
- SBNation and AB5, Understanding SB Nation, AB 5 and the Internet — SB Nation is a publishing company that was only ever possible because of the Internet. That it has to change its model because of AB 5 shows why AB 5 is fundamentally flawed.

I also conducted six interviews for The Daily Update:

- Zillow CEO Rich Barton
- Microsoft CEO Satya Nadella
- Facebook’s Kevin Weil and Dante Disparte on Libra
- Substack Co-Founders Christopher Best and Hamish McKenzie
- Cloudflare CEO Matthew Prince
- Ghost CEO John O’Nolan

I can’t say it enough: I am so grateful to Stratechery’s readers and especially subscribers for making all of these posts possible. I wish all of you a Merry Christmas and Happy New Year, and I’m looking forward to a great 2020!
downloaded: true · meta_extracted: true · parsed: true
description: The most popular and most important posts on Stratechery in 2019.
filedate: 2024-10-12 00:00:00 · date: 2019-12-18 00:00:00
image: https://i0.wp.com/strate…1200%2C797&ssl=1
pagetype: article · hostname: stratechery.com · sitename: Stratechery by Ben Thompson
tags: null · categories: null
id: 36,978,102
url: https://enki.org/2023/08/01/steel-prices/
title: Steel Prices and Dumb Policies
author: null
markdown:
On March 8, 2018, McCulley Marine Services bought 59,973 pounds of steel in various sizes and shapes to repair and maintain one of our barges. The total paid for this steel was $40,710.

On March 23, 2018, the Trump Administration's tariffs on imported steel took effect. These tariffs imposed a 25% duty on steel imports into the United States. Certain countries were initially exempted from these tariffs, but some exemptions were later removed or modified. The tariffs were part of a broader trade policy aimed at protecting domestic industry. At the time, I thought this was a myopic attempt to buy votes with protectionist policies which would do harm in the long term. The goal was ostensibly to preserve and grow the steel manufacturing capacity of the United States.

This gamble on building capacity did not pay off. The United States is producing less steel in 2023 than it did in 2018, while domestic steel manufacturers are benefiting from the protectionist policy. The United States exported less steel in 2022 than in 2018. China is exporting more steel in 2023 than it was in 2018. We have all been paying 25% extra for no good reason. The Biden Administration has continued this policy. To do otherwise would presumably anger domestic steel workers, costing votes.

Five years later, it is time to dry-dock the same barge and replace steel. A quote for the identical configuration of 59,973 pounds of steel is $63,680, a 56.4% increase. This is a much greater increase than for other materials in the same time frame.

The increased cost of steel has contributed to scarcity of barges (and other depreciable assets made of steel). As a barge or other steel vessel gets older, one has to decide every time a repair is made whether the costs of the repairs will yield a positive return on investment. Adding a tax to steel means barges and tugs get retired earlier. Barge owners are scrapping barges rather than putting steel into them at prices with a shortsighted tax built in. This ill-conceived policy has resulted in fewer barges in service, increasing costs of transportation.

We had to make a choice for our 36-year-old barge: put money into the barge and just accept the 25% tax, or sell the barge. We are selling the barge. It turns out that this is not a bad time for us to sell the barge, due to the aforementioned decrease in supply. We are making similar decisions for all of our assets that need steel. Some will be sold into foreign markets with less regulation of steel condition and provenance. Some will be scrapped. These assets could have remained in service in the United States for much longer.
downloaded: true · meta_extracted: true · parsed: true
description: null
filedate: 2024-10-12 00:00:00 · date: 2023-08-01 00:00:00
image: null · pagetype: null · hostname: null · sitename: null
tags: null · categories: null
id: 21,753,207
url: https://blog.cloudflare.com/introducing-load-balancing-analytics/
title: Introducing Load Balancing Analytics
author: Brian Batraski
markdown:
Cloudflare aspires to make Internet properties everywhere faster, more secure, and more reliable. Load Balancing helps with speed and reliability and has been evolving over the past three years. Let's go through a scenario that highlights a bit more of what a Load Balancer is and the value it can provide. A standard load balancer comprises a set of pools, each of which has origin servers that are hostnames and/or IP addresses. A routing policy is assigned to each load balancer, which determines the origin pool selection process.

Let's say you build an API that is using cloud provider ACME Web Services. Unfortunately, ACME had a rough week, and their service had a regional outage in their Eastern US region. Consequently, your website was unable to serve traffic during this period, which resulted in reduced brand trust from users and missed revenue. To prevent this from happening again, you decide to take two steps: use a secondary cloud provider (in order to avoid having ACME as a single point of failure) and use Cloudflare's Load Balancing to take advantage of the multi-cloud architecture.

Cloudflare's Load Balancing can help you maximize your API's availability for your new architecture. For example, you can assign health checks to each of your origin pools. These health checks can monitor your origin servers' health by checking HTTP status codes, response bodies, and more. If an origin pool's response doesn't match what is expected, then traffic will stop being steered there. This will reduce downtime for your API when ACME has a regional outage, because traffic in that region will seamlessly be rerouted to your fallback origin pool(s). In this scenario, you can set the fallback pool to be origin servers in your secondary cloud provider. In addition to health checks, you can use the 'random' routing policy in order to distribute your customers' API requests evenly across your backend. If you want to optimize your response time instead, you can use 'dynamic steering', which will send traffic to the origin determined to be closest to your customer.

Our customers love Cloudflare Load Balancing, and we're always looking to improve and make our customers' lives easier. Since Cloudflare's Load Balancing was first released, the most popular customer request was for an analytics service that would provide insights on traffic steering decisions. Today, we are rolling out Load Balancing Analytics in the Traffic tab of the Cloudflare dashboard. The three major components in the analytics service are:

- An overview of traffic flow that can be filtered by load balancer, pool, origin, and region.
- A latency map that indicates origin health status and latency metrics from Cloudflare's global network, spanning 194 cities and growing!
- Event logs denoting changes in origin health. This feature was released in 2018 and tracks pool and origin transitions between healthy and unhealthy states. We've moved these logs under the new Load Balancing Analytics subtab. See the documentation to learn more.

In this blog post, we'll discuss the traffic flow distribution and the latency map.

## Traffic Flow Overview

Our users want a detailed view into where their traffic is going, why it is going there, and insights into what changes may optimize their infrastructure. With Load Balancing Analytics, users can graphically view traffic demands on load balancers, pools, and origins over variable time ranges. Understanding how traffic flow is distributed informs the process of creating new origin pools, adapting to peak traffic demands, and observing failover response during origin pool failures.

Figure 1

In Figure 1, we can see an overview of traffic for a given domain. On Tuesday, the 24th, the red pool was created and added to the load balancer. In the following 36 hours, as the red pool handled more traffic, the blue and green pools both saw a reduced workload. In this scenario, the traffic distribution graph did provide the customer with new insights. First, it demonstrated that traffic was being steered to the new red pool. It also allowed the customer to understand the new level of traffic distribution across their network. Finally, it allowed the customer to confirm whether traffic decreased in the expected pools. Over time, these graphs can be used to better manage capacity and plan for upcoming infrastructure needs.

## Latency Map

The traffic distribution overview is only one part of the puzzle. Another essential component is understanding request performance around the world. This is useful because customers can ensure user requests are handled as fast as possible, regardless of where in the world the request originates.

The standard Load Balancing configuration contains monitors that probe the health of customer origins. These monitors can be configured to run from particular regions or, for Enterprise customers, from all Cloudflare locations. They collect useful information, such as round-trip time, that can be aggregated to create the latency map. The map provides a summary of how responsive origins are from around the world, so customers can see regions where requests are underperforming and may need further investigation.

A common metric used to identify performance is request latency. We found that the p90 latency for all Load Balancing origins being monitored is 300 milliseconds, which means that 90% of all monitors' health checks had a round-trip time faster than 300 milliseconds. We used this value to identify locations where latency was slower than the p90 latency seen by other Load Balancing customers.

Figure 2

In Figure 2, we can see the responsiveness of the Northeast Asia pool. The Northeast Asia pool is slow specifically for monitors in South America, the Middle East, and Southern Africa, but fast for monitors that are probing closer to the origin pool. Unfortunately, this means users of the pool in countries like Paraguay are seeing high request latency. High page load times have many unfortunate consequences: a higher visitor bounce rate, decreased visitor satisfaction, and a lower search engine ranking. In order to avoid these repercussions, a site administrator could consider adding a new origin pool in a region closer to underserved regions. In Figure 3, we can see the result of adding a new origin pool in Eastern North America. We see the number of locations where the domain was found to be unhealthy drop to zero and the number of slow locations cut by more than 50%.

Figure 3

Tied with the traffic flow metrics from the Overview page, the latency map arms users with insights to optimize their internal systems, reduce their costs, and increase their application availability.

## GraphQL Analytics API

Behind the scenes, Load Balancing Analytics is powered by the GraphQL Analytics API. As you'll learn later this week, GraphQL provides many benefits to us at Cloudflare. Customers now only need to learn a single API format that will allow them to extract only the data they require. For internal development, GraphQL eliminates the need for customized analytics APIs for each service, reduces query cost by increasing cache hits, and reduces developer fatigue by using a straightforward query language with standardized input and output formats. Very soon, all Load Balancing customers on paid plans will be given the opportunity to extract insights from the GraphQL API. Let's walk through some examples of how you can utilize the GraphQL API to understand your Load Balancing logs.

Suppose you want to understand the number of requests the pools for a load balancer are seeing from the different locations in Cloudflare's global network. The query in Figure 4 counts the number of unique (location, pool ID) combinations every fifteen minutes over the course of a week.

Figure 4

For context, our example load balancer, lb.example.com, utilizes dynamic steering. Dynamic steering directs requests to the most responsive, available origin pool, which is often the closest. It does so using a weighted round-trip-time measurement. Let's try to understand why all traffic from Singapore (SIN) is being steered to our pool in Northeast Asia (asia-ne). We can run the query in Figure 5. This query shows us that the asia-ne pool has an avgRttMs value of 67ms, whereas the other two pools have avgRttMs values that exceed 150ms. The lower avgRttMs value explains why traffic in Singapore is being routed to the asia-ne pool.

Figure 5

Notice how the query in Figure 4 uses the loadBalancingRequestsGroups schema, whereas the query in Figure 5 uses the loadBalancingRequests schema. loadBalancingRequestsGroups queries aggregate data over the requested query interval, whereas loadBalancingRequests provides granular information on individual requests.

For those ready to get started, Cloudflare has written a helpful guide. The GraphQL website is also a great resource. We recommend you use an IDE like GraphiQL to make your queries. GraphiQL embeds the schema documentation into the IDE, autocompletes, saves your queries, and manages your custom headers, all of which help make the developer experience smoother.

## Conclusion

Now that the Load Balancing Analytics solution is live and available to all Pro, Business, and Enterprise customers, we're excited for you to start using it! We've attached a survey to the Traffic overview page, and we'd love to hear your feedback.
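As a concrete companion to the queries described above, here is a minimal sketch of what calling the GraphQL Analytics API could look like from Python. It is illustrative only: the endpoint path matches Cloudflare's documented GraphQL API, and `loadBalancingRequestsGroups` is the dataset named in the post, but the specific dimension and filter names (`selectedPoolName`, `coloCode`, `datetime_geq`) and the environment variables are assumptions, not verbatim schema.

```python
# Sketch: count Load Balancing requests per (pool, location) via the GraphQL
# Analytics API. Assumes CF_API_TOKEN and CF_ZONE_ID are set in the environment;
# the dimension and filter names in the query are illustrative and should be
# verified against the live schema (e.g. in GraphiQL) before use.
import os

import requests

ENDPOINT = "https://api.cloudflare.com/client/v4/graphql"

QUERY = """
{
  viewer {
    zones(filter: {zoneTag: "%s"}) {
      loadBalancingRequestsGroups(
        limit: 100,
        filter: {datetime_geq: "2019-12-01T00:00:00Z",
                 datetime_lt: "2019-12-08T00:00:00Z"}
      ) {
        count
        dimensions {
          selectedPoolName  # illustrative dimension name
          coloCode          # Cloudflare data-center location
        }
      }
    }
  }
}
""" % os.environ["CF_ZONE_ID"]

resp = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {os.environ['CF_API_TOKEN']}"},
    json={"query": QUERY},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```

As the post suggests, a real query is best built interactively in GraphiQL, where the embedded schema documentation supplies the exact field names.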
downloaded: true · meta_extracted: true · parsed: true
description: Our customers love Cloudflare Load Balancing, and today, we are rolling out Load Balancing Analytics.
filedate: 2024-10-12 00:00:00 · date: 2019-12-10 00:00:00
image: https://cf-assets.www.cl…ytics-I1ofWz.png
pagetype: article · hostname: cloudflare.com · sitename: The Cloudflare Blog
tags: null · categories: null
id: 24,222,308
url: https://www.inverse.com/innovation/time-crystals-quantum-computing-study
title: Time crystal discovery could change the future of quantum computing
author: Sarah Wells
markdown:
# Time crystal discovery could change the future of quantum computing

Researchers observed the interaction of a strange state of matter for the first time.

Physicists are used to dealing with some of the very weirdest forms of matter and ideas in our known world, from levitating superconducting materials to the mind-bending theory of time dilation. But even for physicists, time crystals are strange. They might sound like some retro science fiction TV villain's hidden treasure, or perhaps fuel for a Time Lord's TARDIS, but this unusual state of matter is very much a fixture of our reality. Critically, scientists have observed the interaction of these crystals for the first time. This observation takes scientists a step closer to understanding the strangeness of our world, and also has the potential to "warm up" quantum computing, making it much cheaper and more accessible. The interaction is detailed in a study published Monday in the journal *Nature Materials*.

### What is a time crystal?

We're all familiar with the most common forms of natural matter — liquid, gas, solid, and even plasma. Time crystals, on the other hand, are a newly discovered type of matter. This strange matter was first theorized by Nobel Laureate and MIT professor Frank Wilczek in 2012 and confirmed just four years ago. Samuli Autti, a research associate at Lancaster University and first author of the new time crystal study, tells *Inverse* that time crystals are basically **a collection of particles in constant motion** without an external force.

"Conceptually a time crystal is a very simple thing: It is a substance where the constituent particles are in constant, systematically repeating motion even in the absence of any external encouragement," Autti explains. "This is very unusual in nature." He also admits the phrase "time crystal" "sounds like someone adopted the name from a 1980's TV science fiction show instead."

**How to make a time crystal —** In order to create these time crystals, the team first cooled down (to just above absolute zero, at nearly -460 degrees Fahrenheit) a mostly vacuum-filled test tube with a rare helium isotope. Two copper coils were then placed around the tube and given a "kick" (aka, a radio-frequency pulse was passed through them) to generate two clouds of constantly rotating magnetic particles. These aren't something you can see with the naked eye per se, but Autti explains that these clouds create a signal that can be measured to confirm their presence and the number of particles they consist of. These mysterious clouds are the time crystals.

What the team observed, via these invisible signals from the time crystals, was the exchange of particles back and forth between these two clouds, signaling that the time crystals were in contact with each other. If this result sounds confusing, Autti says it took the research team a few years to fully understand it themselves. "It took us all this time to really understand what was going on in the experiment and what would be the correct, clear language to present it so that the community would understand it," Autti says. "In the end, the outcome may be simple and clear, but it is only so because of a number of failed attempts and a pile of rejected ideas."

**What time crystals mean for quantum computing —** Another exciting part of this discovery for researchers, aside from simply observing this interaction, is that it is also an experimental confirmation of something called the AC Josephson effect, a macroscopic quantum phenomenon that has applications in the field of quantum computing. Autti says that it's hard to know exactly where a discovery like this will take the field of physics or what its future applications might be, but some potential applications for this discovery include improved atomic clocks (which would, in turn, improve technology like gyroscopes and GPS) as well as quantum computing.

"Basically, contemporary superconducting candidates for the components of a quantum computer are based on a Josephson junction between two superconducting metals," Autti explains. "Essentially this is the same thing as the interaction we observed between [the] two time crystals." In addition to the time crystals' demonstration of this quantum effect, Autti says that time crystals are good at intrinsically protecting their own coherence. This means they're not easily thrown off by outside stimuli — a necessary trait for sensitive quantum computers.

Perhaps most exciting is the potential for these time crystals to usher in a new era of "warm" (or non-absolute-zero) quantum computing. Due to the similarity of these crystals to a form of solid-state matter that condenses at room temperature, Autti and his colleagues believe that time crystals may be able to do the same. The ability to do away with super complicated and expensive cooling chambers for quantum computers could be a big step towards making this technology more accessible and scalable. Still, Autti says there's plenty of work to do before that reality comes to fruition: "As for the timeline of applications, many steps towards a more-approachable practical realization are needed before one should expect applications."

Abstract: Quantum time crystals are systems characterized by spontaneously emerging periodic order in the time domain. While originally a phase of broken time translation symmetry was a mere speculation, a wide range of time crystals has been reported. However, the dynamics and interactions between such systems have not been investigated experimentally. Here we study two adjacent quantum time crystals realized by two magnon condensates in superfluid 3He-B. We observe an exchange of magnons between the time crystals leading to opposite-phase oscillations in their populations—a signature of the AC Josephson effect—while the defining periodic motion remains phase coherent throughout the experiment. Our results demonstrate that time crystals obey the general dynamics of quantum mechanics and offer a basis to further investigate the fundamental properties of these phases, opening pathways for possible applications in developing fields, such as quantum information processing.
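For readers who want the formula behind the name: the AC Josephson effect invoked above has a compact textbook statement for a superconducting junction, where the supercurrent oscillates at a frequency set by the voltage across the junction (a standard relation, not taken from the paper itself):

```latex
I(t) = I_c \sin\!\left(\varphi_0 + \frac{2eV}{\hbar}\, t\right),
\qquad
\omega_J = \frac{2eV}{\hbar}
```

In the magnon-condensate experiment, the role of this voltage-driven charge oscillation is played by the opposite-phase oscillation of the two condensates' populations described in the abstract.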
downloaded: true · meta_extracted: true · parsed: true
description: An international team of physicists observed the interaction of time crystals for the first time, a phenomenon essential to quantum computing.
filedate: 2024-10-12 00:00:00 · date: 2020-08-18 00:00:00
image: https://imgix.bustle.com…rop=faces&fm=jpg
pagetype: article · hostname: inverse.com · sitename: Inverse
tags: null · categories: null
id: 11,015,969
url: http://gizmodo.com/we-finally-know-how-much-google-is-losing-on-its-crazy-1756461016
title: We Finally Know How Much Google Is Losing on Its Crazy Ideas
author: Mario Aguilar
markdown:
Google, aka Alphabet, is famous for its “moonshots”—enormous, ambitious technology projects that cost a ton to develop and might not ever work or turn a profit. Today, we’re getting a first look at just how much money the company is flushing on its big ideas.

Alphabet encompasses the many different little businesses we used to know collectively as Google. After last summer’s reorganization, Google refers only to the company’s hugely lucrative internet business. Everything else gets its own little corner in the company. The bottom line is that moonshots and other ambitious projects like Google Fiber, delivery drones, autonomous cars, Nest, etc., don’t make any money.

In today’s Alphabet earnings report, the company for the first time broke down its finances into both Google revenues and revenues from a new category called “Other bets.” Other bets includes basically everything else. No surprises: Google makes a load of money, and the other bets bleed cash. So “other bets” is $3.5 billion in the red. Ouch.

Needless to say, the company is still ridiculously profitable even with that huge loss. The only other interesting point to note is that Google nearly doubled its operating loss on other bets in just 12 months. We don’t know just what that money’s going to, but it’s a big jump. Hopefully that means some of those crazy ideas are inching closer to reality.
downloaded: true · meta_extracted: true · parsed: true
description: Google, aka Alphabet, is famous for its “moonshots”—enormous, ambitious technology projects that cost a ton to develop and might not ever work or turn a
filedate: 2024-10-12 00:00:00 · date: 2016-02-01 00:00:00
image: https://gizmodo.com/app/…oybla5bqe2dy.jpg
pagetype: article · hostname: gizmodo.com · sitename: Gizmodo
tags: null · categories: null
id: 6,725,380
url: http://blog.evercontact.com/we-think-the-linkedin-intro-team-deserves-a-thumbs-up-do-you/
title: We think the LinkedIn Intro team deserves a thumbs up, do you? - Blog | evercontact
author: Gabriela
markdown:
## When startups hack for major players …

Released in late October, **LinkedIn Intro, an “app” that allows you to visualize anyone’s LinkedIn info right above an email in iOS’s Mail**, has received a fair amount of attention, and rightly so. Put quite simply, Rapportive – the developers of LinkedIn Intro and the startup acquired by LinkedIn in 2012 – “hacked” a way to *do the impossible*, as they said on the LinkedIn blog, by adding *hover CSS* directly to your emails, thus bypassing the previously closed dev space in iOS Mail.

**And that’s where the “controversy” starts.**

To *do the impossible*, the developers created a second profile in iOS and pushed all of LinkedIn Intro’s users’ email through LinkedIn servers to add that layer of CSS. So, LinkedIn now has access to all of its LinkedIn Intro users’ email…

**Is it bad that LinkedIn has access to our email?**

It’s probably not any worse than Microsoft, Google, Yahoo or other major email providers. Granted, the latter two clearly make use of your email content to provide targeted advertising, and, yes, Outlook.com has been positioning itself very strongly as the “non-invasive” email client, but how does any major company having your email impact you? For most individuals, it doesn’t.

- Absolutely, if your job or your company’s activity involves **confidential messaging**, i.e. law or medicine, then you might be violating *legal privilege* and have to reconsider your email provider and any 3rd-party solutions that you add to it.
- But otherwise, as long as you’re not a criminal 😉 if you have the possibility and willingness to **open your data**, then isn’t it the same if it’s LinkedIn, Google, Microsoft, Apple or any other major player?
- Most **productivity plugins** have to see your email to extract value. A short trip to your Gmail permissions https://accounts.google.com/b/0/IssuedAuthSubTokens will probably show more than 10 third parties (for me it’s Baydin, Brewster, Gmail Meter, Evercontact, Rapportive (more on that below), Yesware, Nimble, Mailbox, Unroll.me, MxHero…), and if you were already using Rapportive, like millions of users, you were already sending your metadata to LinkedIn.

**Is their method dangerous?**

Using this “man in the middle” approach is not new, and it’s actually the way most anti-spam services like Postini or MessageLabs have been providing their service for years. It is one of the few ways to improve a platform that is not yet open, as clearly iOS Mail is not. It’s true that it adds another layer, another vulnerability point. However, as mentioned above, we are already trusting our email to mega players, so does anyone really believe that LinkedIn is less trustworthy?

One of the strongest critiques of Intro’s security came from Bishop Fox, and it’s interesting to see how their position evolved in a second post after many more back-and-forths with the LinkedIn team:

*With LinkedIn being a prime target for attack, it’s important to recognize the value of taking the right steps to secure a service like Intro. With the threat of hackers, one always wonders will the battle end or will it continue, but we’ve found it’s best to be proactive. [With LinkedIn Intro], Cory and his team have done this.*

And let’s be honest, in the past few years, what large company has been spared from a security breach? Not many. Facebook, Twitter, Apple, Google, Microsoft, LinkedIn and many more have had their issues, and unfortunately, it’s often not directly related to internal security, but to vulnerabilities from browsers or more global platforms (Java, Flash, Oracle bugs come to mind).

## Why did they do it?

Email is still among the best ways to communicate with the outside world. Far from being dead, it’s very much alive, but for many professionals one of the things it continues to lack is “context”. For example, often we don’t know who a new person contacting us is, or whether there’s value or an immediate priority in replying to their specific email. Rapportive has been this solution for web-based email, and LinkedIn Intro is now a part of that solution for mobile on iOS. Email needs new tools like LinkedIn Intro – or Evercontact, which can always show you who’s calling on the phone, as all the contact data we update can be synced to your phone, CRM or elsewhere. We can benefit so much from context, more automation and easy access to external data, but unfortunately certain platforms, iOS being the prime example, remain fairly closed, and LinkedIn Intro’s clever hack is providing an opening for all of us, users and service providers alike.

**What we think…**

We’d like to applaud Rapportive and LinkedIn’s innovation in pushing the envelope with what some may consider a “borderline hack”, whereas it’s really another creative way to use the man-in-the-middle approach securely and transparently, making it only technically different from a more standard API/permissions protocol. While many of us are huge fans of Apple’s innovation and their products, we’re still a bit disappointed in an overly closed relationship to third-party services in many of their applications, such as the iOS Mail app. We hope that work-arounds like this will help to slowly but surely bring a bit more “openness” to external developments, which could so clearly benefit end-users and the overall innovative drive of technology.
downloaded: true · meta_extracted: true · parsed: true
description: When startups hack for major players … Released in late October, LinkedIn Intro, an “app” that allows you to visualize anyone’s Linkedin info right above an email in iOS’s Mail, has received a fair amount of attention, and rightly so. Put quite simply, Rapportive – the developers of LinkedIn Intro and the startup […]
filedate: 2024-10-12 00:00:00 · date: 2013-11-13 00:00:00
image: null · pagetype: article · hostname: evercontact.com · sitename: Blog | evercontact
tags: null · categories: null
id: 33,224,752 · url: https://medium.com/@aplaceofmind/9-ways-i-code-at-the-speed-of-thought-part-1-2431743d1013 · downloaded: false · meta_extracted: false · parsed: false · all other fields: null
id: 16,380,836 · url: https://blogs.oracle.com/opal/node-oracledb-21-is-now-available-from-npm · downloaded: false · meta_extracted: false · parsed: false · all other fields: null
id: 7,602,698 · url: http://www.youtube.com/watch?v=NNahBJqLZ1o · downloaded: false · meta_extracted: false · parsed: false · all other fields: null
id: 13,607,891
url: http://blog.websecurify.com/2017/02/your-next-encoder.html
title: Your Next ENcoder
author: null
markdown:
The ENcoder is a very useful tool when it comes to breaking complex chains of encapsulated data, such as if you need to decode a Base64 string, parse it somehow, and create a SHA256 hash of the output - all dynamically. Traditionally the ENcoder works with a sequence of steps, where each step is generated via one of the available transforms. This functionality typically provides enough flexibility for most cases, but there are situations when it is simply not enough. This is why now you can not only sequence transforms, as you typically would, but also use our powerful text-building system, which allows you to apply transforms inside the original input and thus easily generate very powerful transformations that go beyond the traditional single dimension achieved with this tool.

The following example, although rather trivial, illustrates how powerful this feature is. In the screenshot above we are decoding a Base64 string, representing a Basic Auth username and password, which was built with a dynamic value generated by an inner text transform. Useful, but not as cool as what is coming next.

This example is slightly better. We are building a JSON payload which contains a message value that is generated with a generator that is properly escaped for JSON encapsulation. The JSON payload is minified and output as a Python or Ruby string (C/C++ is also supported). We can see how the message attribute is built by opening up the dynamic value. If you click on the dynamic field, you will be presented with the levels of encapsulation seen in the screenshot below.

Due to the very powerful nature of the text builder, we can employ many of the available features to do some amazing hackery without touching any programming language. Here is an example where an attacker is generating a JWT payload that is signed with the None algorithm. The output value is printed, then decoded and prettified for all to see. Now this is really cool! It makes JWT hacking a lot easier, but we are going to talk about this in the next post.

*As you can see, this subtle change will take your hacking skills to the next level. We are sure of it. It certainly makes my life easier when hunting for interesting bugs. For more updates, simply follow us on twitter. More awesome stuff is coming very soon!*
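To make the None-algorithm trick concrete, here is a minimal sketch, using only the Python standard library, of how an unsigned JWT of the kind described above can be assembled. It illustrates the general technique, not the ENcoder tool itself, and the claim values are made up for the example.

```python
# Sketch: build a JWT that claims the "none" algorithm, so it carries no
# signature. A verifier that accepts alg "none" will treat the payload as
# trusted - the weakness the post alludes to. For illustration only.
import base64
import json

def b64url(data: bytes) -> str:
    # JWTs use unpadded base64url encoding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

header = {"alg": "none", "typ": "JWT"}
payload = {"user": "admin", "iat": 1486000000}  # hypothetical claims

token = ".".join([
    b64url(json.dumps(header, separators=(",", ":")).encode()),
    b64url(json.dumps(payload, separators=(",", ":")).encode()),
    "",  # empty signature segment for alg "none"
])
print(token)
```

Running this prints a three-segment token whose final segment is empty; any JWT library configured to reject the "none" algorithm (as modern ones are by default) will refuse it.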
downloaded: true · meta_extracted: true · parsed: true
description: null
filedate: 2024-10-12 00:00:00 · date: 2017-02-01 00:00:00
image: null · pagetype: null · hostname: null · sitename: null
tags: null · categories: null
id: 4,742,118
url: http://ventureminded.me/post/33654673444/foursquare-vs-yelp-is-the-new-facebook-vs-google
title: foursquare vs. Yelp is the New Facebook vs. Google
author: Adam Besvinick
markdown:
### foursquare vs. Yelp is the New Facebook vs. Google

Previously I wrote about The Amazing Race between Facebook and Google, particularly how Facebook was leveraging social and its massive network effects to drive the rest of its business (i.e. potentially search) and Google was using its existing user base from search to drive the rest of its products (i.e. Google+). With the launch of a web search engine for non-foursquare users and “logged out Explore,” it’s now clearer than ever that Yelp is in Dens and the team’s sights.

foursquare very clearly focused on building up a user base through social connections and the network effects that come along with them, including merchant deals, before expanding into a more traditional “city guide” business. On the other hand, Yelp consciously focused on reviews and becoming a guide and is now backing into adding social features.

In my post on Facebook vs. Google from August 2011, I thought Google would win in a sprint and Facebook would win in a marathon. In this instance, regardless of the race’s distance, I give the edge to foursquare. First, the strategy of social first makes even more sense to me in this business, and second, I don’t believe there’s a better CEO-company match in tech than Dens and foursquare.
downloaded: true · meta_extracted: true · parsed: true
description: foursquare vs. Yelp is the New Facebook vs. Google Previously I wrote about The Amazing Race between Facebook and Google, particularly how Facebook was leveraging social and its massive network...
filedate: 2024-10-12 00:00:00 · date: 2012-10-15 00:00:00
image: null · pagetype: socialmediaposting · hostname: null · sitename: null
tags: null · categories: null
id: 8,154,443 · url: http://martinweigert.com/my-personal-strategies-and-advises-to-score-the-best-travel-deals/ · downloaded: false · meta_extracted: false · parsed: false · all other fields: null
id: 15,758,610 · url: https://teambit.io/blog/employee-engagement-the-most-critical-problem-to-solve/ · downloaded: false · meta_extracted: false · parsed: false · all other fields: null
id: 12,609,121 · url: https://medium.com/gridspace/the-mechanical-pollster-71a4aab7a187#.1bcuq5a2t · downloaded: false · meta_extracted: false · parsed: false · all other fields: null
id: 6,361,449
url: http://www.icollector.com/Nice-Old-Kellogg-Wall-Phone-complete-with-nickled-brass-bells-and-bake-a-lite-ear-and-mouth-pieces_i9897268
title: Nice Old Kellogg Wall Phone complete with nickled brass bells and bake-a-lite ear and mouth pieces.
author: null
markdown:
**SOLD** Winning Bid Undisclosed. This item SOLD at 2010 Nov 01 @ 15:13 UTC-7 (PDT/MST).

**Did you win this lot?** A full invoice should be emailed to the winner by the auctioneer within a day or two.

Nice Old Kellogg Wall Phone complete with nickled brass bells and bake-a-lite ear and mouth pieces. In unknown working condition.

**Auction Location:** 3525 S. Hwy 69 / mailing P.O. Box 99, Humboldt, Arizona, 86329, United States

Tax Details: All Arizona residents pay sales tax of 7.35%; otherwise, if you come to the auction you must pay the sales tax as well.

**Taxes:**

| Tax | Rate | Desc. |
| --- | --- | --- |
| AST | 7.35% | Arizona Tax |

**Buyer's Premiums:**

| From (Incl.) | To (Excl.) | Premium |
| --- | --- | --- |
| 0.00 | Infinite | 16% |

**Additional Fees:**

**Shipping Details:** Cost of shipping will be at the expense of the buyer. We use UPS exclusively and you pay only exact charges, which will be charged to the credit card on file. Packing is free. We don't ship or pack large items such as furniture, juke boxes, trunks, etc.: anything over 50 lbs or that requires us to build a box. You must use an outside shipping source. All handguns will be shipped overnight air to a valid licensed FFL dealer. Rifles can go ground. All post-1898 firearms are registerable and must be registered in compliance with both Arizona and Federal Laws. Buyers of post-1898 firearms must complete all necessary registration forms at Auction Productions, 3525 S. Hwy 69, Humboldt, AZ 86329. Arizona Dealers must have in their possession on the day of the sale signed copies of their Federal Firearms License.

All items are offered for sale "AS IS" and the AUCTIONEER OR AUCTION HOUSE ASSUMES NO LIABILITY FOR BREAKAGE OR DAMAGE AFTER THE FALL OF THE GAVEL. THE ACCEPTANCE OF A BID BY THE AUCTIONEER CONSTITUTES A LEGAL AND BINDING CONTRACT WITH THE BUYER AND THE BUYER'S ACCEPTANCE OF THE TERMS AND CONDITIONS OF SALE UNDER ARIZONA STATE LAW. **FURTHER, THE AUCTIONEER OR AUCTION HOUSE IS NOT RESPONSIBLE FOR ACCIDENTS!**

**Payment Details:** Customers will be charged automatically after the auction for all purchases. Then shipping will be charged to the card when it ships. Any other arrangements must be made prior to auction.

**Accepted Payment Methods:**

All items are offered for sale as is, where is, and no special warranty or guarantee is made or implied as to condition, authenticity or era of any described item. All items are offered for sale to the highest bidder and the bidders must establish to their own satisfaction the authenticity and condition before bidding. All sales are final, no refunds, no exceptions.

ALL OUT-OF-STATE BUYERS MUST KNOW THEIR STATE'S GUN LAWS BEFORE BIDDING. PLEASE CONTACT YOUR LOCAL FFL DEALER IN YOUR STATE FOR UP-TO-DATE INFORMATION. IT IS NOT OUR RESPONSIBILITY TO KNOW THE LAWS IN YOUR STATE.

All items offered for sale in this auction will be offered for sale by auction only and there are no presales. Arizona Sales Tax is payable by all buyers liable to tax. All Dealers must provide a valid resale number upon registration and fill out an Arizona Resale Card prior to bidding for Tax Exempt Status. All bids are per number in the catalog and all items will be sold in numerical order only. The right is reserved to withdraw any item or lot prior to sale. Absolutely no lots will be broken. The highest bidder will prevail. The Auctioneer will regulate the bidding and reserves the right to refuse any bid not believed to be in good faith.

Should any disputes arise between bidders, the decision of the Auctioneer in the exercise of judgement as to the successful bidder will be final. Additionally, the Auctioneer may re-offer or resell any lot in dispute. A 16% Buyers Premium will be charged on all sales. The Auctioneer or Auction House assumes no responsibility for errors in the catalog, advertising or on the part of the bidder. All items purchased by the successful bidder must be paid for on the day of sale. Purchases not paid within 14 days of the final sale date will be charged a 5% Late Charge; all accounts past due will accrue interest at the rate of 1.5% per month on the unpaid balance, compounded monthly. Storage will be provided at no charge for 14 days following the sale date and then at a rate of $5.00 per item per day unless prior arrangements have been made.

Shipping is the responsibility of the Buyer and the Buyer must make all Shipping or Delivery arrangements on large items. Smalls will be packed and shipped by us at the buyer's expense. We do not charge other than exact cost for smalls crating and shipping. Shipping insurance is the responsibility of the Buyer.

All items for sale at auction are offered "AS IS" and the AUCTIONEER OR AUCTION HOUSE ASSUMES NO LIABILITY FOR BREAKAGE OR DAMAGE AFTER THE FALL OF THE GAVEL. THE ACCEPTANCE OF A BID BY THE AUCTIONEER CONSTITUTES A LEGAL AND BINDING CONTRACT WITH THE BUYER AND ACCEPTANCE OF THE TERMS AND CONDITIONS OF SALE UNDER ARIZONA STATE LAW.

All post-1898 firearms are registrable and must be registered in compliance with both Arizona and Federal Laws. Buyers of post-1898 firearms must complete all necessary registration forms at Auction Productions Ltd., 3525 South Hwy 69, Humboldt, Arizona. Phone 928-632-8000. Arizona Dealers must have in their possession on day of sale signed copies of their Federal Firearms License in order to accept same-day delivery of modern weapons. Shipping for Out-of-State Buyers must be arranged through the Auction Company and must be shipped to a valid holder of a Federal Firearms License on all modern weapons. Cost of shipping and packing will be at the expense of the buyer. No guarantee as to the operation or firing condition of any weapon offered for sale is made by the Auctioneer or The Auction Company. It is recommended that all weapons bought at auction be checked out by a competent gunsmith prior to attempting to fire them.

Consignors are not allowed to bid on their own merchandise nor have any agent bid on their behalf. If the Auctioneer recognizes such bidding or is advised of the same, he may withdraw any or all lots consigned by the offender and will publicly recognize that consignor. Auctioneer/Auction House is "NOT RESPONSIBLE FOR ACCIDENTS".
true
true
true
Nice Old Kellogg Wall Phone complete with nickeled brass bells and Bakelite ear and mouth pieces. - Reata Pass Auctions
2024-10-12 00:00:00
2010-11-01 00:00:00
https://dygtyjqp7pi0m.cl…=8CD453BDAB8A930
null
icollector.com
iCollector.com Online Auctions
null
null
26,941,026
https://www.dbltap.com/posts/twitch-bans-7-5-million-bots-inflating-viewership-01f3b27amxh2
Twitch Bans 7.5 Million Bots Inflating Viewership
Noam Radcliffe
# Twitch Bans 7.5 Million Bots Inflating Viewership

Twitch has banned more than 7.5 million accounts for violating its terms of service agreement, saying they were contributing to "the rise of fake engagement" on the platform. These accounts were believed to be used to manipulate viewership and follower counts and did not belong to individual users. Twitch says most were discovered using machine learning technology that will improve as time goes on.

Twitch defines fake engagement as "artificial inflation of channel statistics, such as views or follows, through coordination or 3rd party tools."

"This behavior is characterized by the creation of incidental or duplicitous views or follows," reads Twitch's help page on the subject.

The most commonly decried form of engagement inflation is view-botting, defined as "using illegitimate scripts or tools to make a channel appear to have more concurrent viewers than it actually does." Follow-botting performs a similar function, driving up follower counts using scripting. Fake engagement also includes agreements such as "Follow 4 Follow" or "Host 4 Host."

"Artificial engagement and botting limit growth opportunities for legitimate broadcasters and are damaging to the community as a whole. False viewer growth is not conducive to establishing a career in broadcasting because the 'viewers' do not contribute to a healthy, highly engaged community," says Twitch.

Twitch's crackdown comes without an acknowledgement of the company's failure to protect streamers who are unwittingly view- or follow-botted. Women streamers in particular are often the victims of targeted attacks, leading to their accounts becoming dangerously inflated or outright disabled. Twitch streamer Brittany "MTGNerdGirl" Hamilton says her account received more than 70,000 new followers almost overnight, and the followers all spammed her chat with sexist messages exhorting her to "Go back to the kitchen."

"Every single one of those 73k bots spammed this in my channel and I had to turn on subscriber mode, turn off alerts and ultimately end the stream," she tweeted Wednesday.

Twitch has yet to address complaints such as MTGNerdGirl's.
true
true
true
Twitch has banned more than 7.5 million accounts it says were contributing to 'the rise of fake engagement' on the platform for violating its terms of service a
2024-10-12 00:00:00
2021-04-15 00:00:00
https://images2.minuteme…c1211e9c6af7.jpg
article
dbltap.com
dbltap.com
null
null
35,123,194
https://en.wikipedia.org/wiki/Security_printing
Security printing - Wikipedia
null
# Security printing

**Security printing** is the field of the printing industry that deals with the printing of items such as banknotes, cheques, passports, tamper-evident labels, security tapes, product authentication, stock certificates, postage stamps, and identity cards. The main goal of security printing is to prevent forgery, tampering, or counterfeiting. More recently, many of the techniques used to protect these high-value documents have become more available to commercial printers, whether they are using the more traditional offset and flexographic presses or the newer digital platforms. Businesses are protecting their lesser-value documents such as transcripts, coupons and prescription pads by incorporating some of the features listed below to ensure that they cannot be forged or that alteration of the data cannot occur undetected. A number of technical methods are used in the security printing industry.[1] Security printing is most often done on security paper, but it can also occur on plastic materials.

## Features detectable by humans

Secured documents, such as banknotes, use visible, tactile, and acoustic features to allow humans to verify their authenticity without tools. The European Central Bank (ECB) recommends feel, look, and tilt:[2] first check the tactility of the banknote (including the substrate), then look at the optical design, and finally check the characteristics of certain optical features when tilting the banknote in relation to the incident light.

In general, the introduction of a new banknote series is accompanied by information campaigns describing the design and the security features. Several central banks also provide mobile apps explaining the characteristics by interactive methods and enrich them with animated effects. In general, they use the camera of a mobile device to explain the features of a presented banknote. As they do not support the direct verification of authenticity, they also work with simple printouts or screen displays.

- *SwissBanknotes* from the Swiss National Bank for the Swiss franc, with animated effects[3]
- *MalawiKwacha* from the Reserve Bank of Malawi for the Malawian kwacha, with simulations of tilting and tactility as well as interactive effects by augmented reality[4]
- *SARBCurrency* from the South African Reserve Bank for the South African rand, an offline application explaining the security features by augmented reality[5]
- *Lilangeni* from the Central Bank of Eswatini for the Swazi lilangeni, with simulations of tilting and tactility as well as interactive effects by augmented reality[6]

### Substrate

#### Paper

The substrate of most banknotes is made of paper, almost always from cotton fibres for strength and durability; in some cases linen or specially coloured or forensic fibres are added to give the paper added individuality and protect against counterfeiting. Paper substrate may also include windows based on laser-cut holes covered by a security foil with holographic elements. All of this makes it difficult to reproduce using common counterfeiting techniques.

#### Polymer

Some countries, including Canada, Nigeria, Romania, Mexico, Hong Kong, New Zealand, Israel, Singapore, Malaysia, United Kingdom, and Australia, produce polymer (plastic) banknotes, to improve longevity and to make counterfeiting more difficult.
Polymer can include transparent windows, diffraction grating, and raised printing.[7]

- Recto of 1 Romanian Leu banknote (series 2005) with partially overprinted window on the left (polymer substrate)
- Recto of 20 euro banknote (series ES2) with holographic foil over the window (upper right side) (paper substrate)
- Verso of 20 euro banknote (series ES2) with holographic foil over the window (upper left side)

#### Format

Most currencies use different dimensions of length, width, or both for the different denominations, with smaller formats for the lower denominations and larger formats for the higher denominations, to hinder reuse of the substrate with embedded security features for counterfeiting higher denominations. Blind and visually impaired people may also rely on the format for distinguishing between the denominations.

### Visible security features

#### Watermark

**True watermark**

A true watermark is a recognizable image or pattern in paper that appears lighter or darker than surrounding paper when viewed with a light from behind the paper, due to paper density variations. A watermark is made by impressing a water-coated metal stamp or dandy roll onto the paper during manufacturing. Watermarks were first introduced in Bologna, Italy in 1282; as well as their use in security printing, they have also been used by paper makers to identify their product. To prove authenticity, the thinner part of the watermark will shine brighter with a light source in the background and darker with a dark background. The watermark is a proven anti-counterfeit technology because most counterfeits only simulate its appearance by using a printing pattern.

- Watermark in a postage stamp from Zululand (around 1900)
- Watermark in a 100 euro (series ES1) from the European Central Bank
- Watermark in a 5 euro (series ES2) from the European Central Bank

**Simulated watermark**

Printed with white ink, simulated watermarks have a different reflectance than the base paper and can be seen at an angle. Because the ink is white, it cannot be photocopied or scanned.[8] A similar effect can be achieved with iriodin varnish, which creates reflections under certain viewing angles only and is transparent otherwise. Watermarks are sometimes simulated on polymer currency by printing a corresponding pattern, but with little anti-counterfeiting effect. For example, the Australian dollar has its coat of arms watermarked on all its plastic bills. A Diffractive Optical Element (DOE) within the transparent window can create a comparable effect but requires a laser beam for its verification.

#### See-through register

See-through registers are based on complementary patterns on the obverse and reverse of the banknote which constitute a complete pattern under backlight conditions. Examples are the *D* of the Deutsche Mark (1989 series, BBk III) and the value number of the first series of euro banknotes (ES1). Counterfeiting is difficult because the printing registration requires extremely high printing accuracy on both sides, and minor deviations are easily detectable.

- See-through register of EUR 100 (ES1) (obverse)
- See-through register of EUR 100 (ES1) (reverse)
- See-through register of EUR 100 (ES1) (transmission)

#### See-through window

Polymer banknotes, which are printed on a basically transparent substrate, easily provide clear areas by sparing the white coating. This window may be overprinted with patterns.
Initially this was the main human security feature for polymer banknotes, which cannot use watermarks or security threads. It attracted counterfeiting in large volumes when printing technology for polymer substrate became commonly available. Therefore newer designs additionally laminate this window with an ultra-thin security foil, e.g., on the Frontier series of the Canadian dollar, issued from 2011, and the Australian dollar (2nd series), issued from 2016.

A very similar security feature is achieved with banknotes on paper substrate. For this, an area of up to 300 mm² is punched out and sealed with a partially transparent security foil. The ES2 series of euro banknotes uses this feature for the higher denominations (EUR 20 and above) and calls it the *portrait window*. The European Central Bank (ECB) recommends to *look at the banknote against the light – the window in the hologram becomes transparent and reveals a portrait of Europa on both sides of the note*.[9]

- Obverse of Romanian RON 1 (series 2005) with overprinted window (polymer substrate)
- Obverse of EUR 20 (ES2) with holographic foil over the see-through window (top right)
- Reverse of EUR 20 (ES2) with transparent foil over the see-through window (top left)

#### Micro-perforation

Micro-perforation is used as *Microperf* in the Swiss franc and the Romanian leu. Very small holes are punched or laser-engraved into the substrate or a foil application without generating a *crater*. In backlight illumination, the holes form a pattern, e.g., the value numeral as in the SFR 20 (eighth series).

#### Geometric lathe work

A guilloché is an ornamental pattern formed of two or more curved bands that interlace to repeat a circular design. They are made with a geometric lathe.
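Guilloché-style rosettes can be approximated in software with parametric curves. The sketch below is a minimal illustration rather than a model of any real geometric lathe (all parameter values are arbitrary choices); it traces a hypotrochoid, the spirograph-style curve underlying many rosette patterns, and writes it out as an SVG file.

```python
import math

def guilloche_points(R=120.0, r=47.0, d=90.0, turns=47, steps=8000):
    """Trace a hypotrochoid: a point at distance d from the centre of a
    circle of radius r rolling inside a circle of radius R."""
    pts = []
    for i in range(steps + 1):
        t = 2 * math.pi * turns * i / steps
        k = (R - r) / r
        x = (R - r) * math.cos(t) + d * math.cos(k * t)
        y = (R - r) * math.sin(t) - d * math.sin(k * t)
        pts.append((x, y))
    return pts

def to_svg(pts, size=500):
    """Emit the curve as a single SVG polyline centred on the canvas."""
    half = size / 2
    coords = " ".join(f"{half + x:.1f},{half + y:.1f}" for x, y in pts)
    return (f'<svg xmlns="http://www.w3.org/2000/svg" width="{size}" height="{size}">'
            f'<polyline points="{coords}" fill="none" stroke="black" stroke-width="0.3"/>'
            f'</svg>')

if __name__ == "__main__":
    with open("guilloche.svg", "w") as fh:
        fh.write(to_svg(guilloche_points()))
```

Varying R, r and d yields families of interlacing bands; real security printers overlay several such curves at very fine line weights.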
#### Microprinting

This involves the use of extremely small text, and is most often used on currency and bank checks. The text is generally small enough to be indiscernible to the naked eye without either close inspection or the use of a magnifying glass. Cheques, for example, use microprint as the signature line.

- Recto of 100 euro (series ES1) (lower left)
- Recto of 20 Swiss franc (8th series)
- Recto of 1 US dollar with microprinting and guilloché in the pyramid

#### Optically variable ink

Optically Variable Ink (OVI) displays different colours depending on the angle at which it is viewed. It uses mica-based glitter.[10] As an example, the euro banknotes use this feature as the *emerald number* on the ES2 series. The ECB recommends tilting the banknote: the shiny number in the bottom left corner displays an effect of the light that moves up and down. The number also changes colour from emerald green to deep blue. The EUR 100 and EUR 200 banknotes also show € symbols inside the number.[11]

Coloured magnetizable inks are prepared by including chromatic pigments of high colour strength. The magnetic pigments' strong inherent colour generally reduces the spectrum of achievable shades. Generally, pigments should be used at high concentrations to ensure that sufficient magnetizable material is applied even in thin offset coats. Some magnetic pigments are better suited for coloured magnetizable inks due to their lower blackness. Homogeneous magnetization (no preferred orientation) is easily obtained on pigment made of spherical particles. Best results are achieved when remanence and coercive field strength are very low and the saturating magnetization is high. When pearlescent pigments are viewed at different angles, the angle of the light as it is perceived makes the colour appear to change as the magnetic fields within the particles shift direction.

- OVI of 50 euro (series ES1)
- Emerald number of 5 euro (series ES2)

#### Holograms

A hologram may be embedded either via hot-stamping foil, wherein an extremely thin layer of only a few micrometers of depth is bonded into the paper or a plastic substrate by means of a hot-melt adhesive (called a size coat) and heat from a metal die, or it may be directly embossed as holographic paper, or onto the laminate of a card itself. When incorporated with a custom design pattern or logo, hologram hot-stamping foils become security foils that protect credit cards, passports, bank notes and value documents from counterfeiting. Holograms help curtail forgery and duplication of products and are hence essential for security purposes. Once stamped on a product, they cannot be removed or forged, enhancing the product at the same time. Also from a security perspective, if stamped, a hologram is a superior security device as it is virtually impossible to remove from its substrate.[citation needed]

- Hologram on a 50 euro (series ES1)
- Hologram on a 100 euro (series ES1)

#### Security threads

Metal threads and foils, from simple iridescent features to foils with additional optically variable effects, are often used. There are two kinds of security threads. One is a thin aluminum-coated and partly de-metallized polyester film thread with microprinting which is embedded in security paper such as banknote or passport paper. The other kind of security thread is the single or multicolour sewing thread made from cotton or synthetic fibers, mostly UV fluorescent, for the bookbinding of passport booklets. In recent designs the security thread has been enhanced with other security features such as holograms or three-dimensional effects when tilted.

On occasion, the banknote designers succumb to the Titanic effect (excess belief in the latest technology), and place too much faith in some particular trick. An example is the forgery of British banknotes in the 1990s. British banknotes in the 1990s featured a "windowed" metal strip through the paper about 1 mm wide that comes to the paper surface every 8 mm. When examined in reflected light, it appears to have a dotted metallic line running across it, but when viewed through transmitted light, the metal strip is dark and solid. Duplicating this was thought to be difficult, but a criminal gang was able to reproduce it quickly. They used a cheap hot-stamping process to lay down a metal strip on the surface of the paper, then printed a pattern of solid bars over it using white ink to leave the expected metal pattern visible. At their trial, they were found to have forged tens of millions of pounds' worth of notes over a period of years.[12]

- Security thread of 100 euro (series ES1) (only visible in transmitted light)
- Security thread of 100 US dollar (series 2009) with the 3D security ribbon
- Details of 3D security ribbon on 100 US dollar
- Security thread of 500 Russian ruble (series 2010) with hologram

#### Prismatic colouration

The use of colour can greatly assist the prevention of forgeries.
By including a colour on a document, a colour photocopier must be used in any attempt to make a copy; the use of these machines also tends to enhance the effectiveness of other technologies such as void pantographs and verification grids (see Copy-evidence below). By using two or more colours in the background and blending them together, a prismatic effect can be created. This can be done on either a traditional or a digital press. When an attempt is made to photocopy a document using this technique, the scanning and re-creation by a colour copier is inexact, usually resulting in banding or blotching and thereby immediate recognition of the document as being a copy. A frequent example of prismatic colouring is on checks, where it is combined with other techniques such as the void pantograph to increase the difficulty of successful counterfeiting.[13]

#### Copy-evidence

Sometimes only the original document has value. An original, signed cheque for example has value but a photocopy of it does not. An original prescription script can be filled but a photocopy of it should not be. Copy-evident technologies provide security to hard copy documents by helping distinguish between the original document and the copy.

The most common technology to help differentiate originals from copies is the void pantograph. Void pantographs are essentially invisible to the untrained, naked eye on an original, but when scanned or copied the layout of lines, dots and dashes will reveal a word (frequently VOID, hence the name) or symbol that clearly allows the copy to be identified. This technology is available on both traditional presses (offset and flexographic) and on the newer digital platforms. The advantage of a digital press is that in a single pass through the printer a void pantograph with all the variable data can be printed on plain paper.

Copy-evident paper, sometimes marketed as "security paper", is pre-printed void pantograph paper that was usually produced on an offset or flexographic press. The quality of the void pantograph is usually quite good because it was produced on a press with a very high resolution, and, when only a small number of originals are to be printed, it can be a cost-effective solution; however, the advent of the digital printer has rapidly eroded this benefit.

A second technology which complements and enhances the effectiveness of the void pantograph is the verification grid. This technology is visible on the original, usually as fine lines or symbols, but when photocopied these lines and images disappear; the inverse reaction of the void pantograph. The most common examples of this technology are the fine lines at the edge of a cheque which disappear when copied, or a symbol on a coupon, such as a shopping cart, which disappears when an unauthorized copy is made. The verification grid is available for either traditional or digital presses. Together the void pantograph and the verification grid complement each other because their reactions to copying are inverse, resulting in a higher degree of assurance that a hard copy document is an original.

#### Registration of features on both sides

Banknotes are typically printed with fine alignment (the so-called *see-through registration window*) between the offset printing on each side of the note. This allows the note to be examined for this feature, and provides opportunities to unambiguously align other features of the note with the printing. Again, this is difficult to imitate accurately enough in most print shops.
- Registration pattern of 100 euro (series ES1) (recto)
- Registration pattern of 100 euro (series ES1) (verso)
- Registration pattern of 100 euro (series ES1) (transmission)
- Registration pattern of 50 Swiss franc (8th series) (transmission)

#### Thermochromatic ink

Several types of ink are available which change colour with temperature. A typical security ink has a "trigger" temperature of 88 °F (31 °C) and will either disappear or change colour when the ink is rubbed, usually by the fingertips. This is based on a thermochromatic effect.

#### Serial numbers

Serial numbers help make legitimate documents easier to track and audit. However, they are barely useful as a security feature because duplicates of an existing serial number are not easily detectable, except for a series of identical counterfeits. To support correct identification, serial numbers normally have a check digit to verify the correct reading of the serial number. In banknote printing the unique serial number provides effective means for the monitoring and verification of the production volume. In some cases the recording of serial numbers may help to track and identify banknotes from blackmail or robbery. In most currencies the serial number is printed on two edges of the banknote to complicate the making of so-called *composed banknotes* by combining parts of different banknotes. Even if made from genuine banknotes, most central banks consider such items manipulated banknotes without value if the serial numbers do not match.

- 1 German thaler issued on 6 September 1855
- US dollar (series 2003) with green serial number
- Russian ruble (series 2006) with variable font size (right)
- 200 Guatemalan quetzal with laser-engraved serial number (in the white area)
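To illustrate how a check digit catches misread serial numbers, the sketch below uses the generic Luhn (mod-10) scheme. This is purely for demonstration; actual central-bank check-digit schemes differ and are generally not published.

```python
def luhn_check_digit(body: str) -> int:
    """Compute a Luhn (mod-10) check digit: starting from the right,
    double every second digit and sum the digit sums."""
    total = 0
    for i, ch in enumerate(reversed(body)):
        d = int(ch)
        if i % 2 == 0:  # these positions end up doubled once the check digit is appended
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return (10 - total % 10) % 10

def is_valid_serial(serial: str) -> bool:
    """A serial here is a numeric body plus one trailing check digit."""
    return luhn_check_digit(serial[:-1]) == int(serial[-1])

serial = "08154711" + str(luhn_check_digit("08154711"))  # -> "081547119"
assert is_valid_serial(serial)
assert not is_valid_serial(serial.replace("7", "9", 1))  # a single misread digit is caught
```

The Luhn scheme detects every single-digit substitution, which is exactly the kind of error that occurs when a serial number is read optically.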
### Tactile security features

#### Paper feeling

Security paper for banknotes is different from standard paper due to special ingredients like fibers from cotton, linen or abaca. Together with intaglio printing, this provides an excellent tactile perception (crisp feeling) that helps reject counterfeits based on standard paper with cellulose fibers. Polymer substrates and limp banknotes on paper substrate do not offer this tactile characteristic.

#### Intaglio printing

Intaglio printing is a technique in which the image is incised into a surface. Normally, copper or zinc plates are used, and the incisions are created by etching or engraving the image, but one may also use mezzotint. In printing, the surface is covered in ink, and then rubbed vigorously with tarlatan cloth or newspaper to remove the ink from the surface, leaving it in the incisions. A damp piece of paper is placed on top, and the plate and paper are run through a printing press that, through pressure, transfers the ink to the paper. The very sharp printing obtained from the intaglio process is hard to imitate by other means. Intaglio also allows for the creation of latent images which are only visible when the document is viewed at a very shallow angle.

The mobile app *ValiCash* from Koenig & Bauer evaluates specific characteristics of the intaglio printing of euro banknotes printed on paper substrate.[14] It is available for iOS devices and takes a picture of the banknote. Within a few seconds it flags abnormalities with a "not successful" message, but it cannot conclusively identify counterfeits.

#### Embossing

The substrate may be embossed to create raised designs as a tactile security feature. It may be combined with intaglio printing. As an example, the euro series ES2 has different patterns of lines at the short edges of the banknote to support blind people in distinguishing the denominations.

## Security features detectable with simple tools

### Test pen

A counterfeit banknote detection pen can be used to quickly detect starch in a wood-based paper substrate. While genuine banknotes hardly change color at all, counterfeits turn black or blue immediately. This method, which is not very reliable – counterfeits on starch-free paper show no color change – is often used in the retail trade for reasons of cost and time.

### Halo

Carefully created images can be hidden in the background or in a picture on a document. These images cannot be seen without the help of an inexpensive lens with a specific line screening. When the lens is placed over the location of the image and rotated, the image becomes visible. If the document is photocopied, the Halo image is lost. A known implementation is *Scrambled Indicia*.[15] Halo can be printed on traditional or digital presses. The advantage of traditional presses is that multiple images can be overlaid in the same location and become visible in turn as the lens is rotated. Halo is used as a technique to authenticate the originality of the document and may be used to verify critical information within the document. For example, the value of a coupon might be encoded as a Halo image that could be verified at the time of redemption, or similarly the seat number on a sporting event ticket.

### Latent images

Pressure-sensitive or hot-stamped labels are characterized by a normal (gray or colored) appearance. When viewed via a special filter (such as a polarizer), an additional, normally latent, image appears. With intaglio printing, a similar effect may be achieved by viewing the banknote from a slanted angle.

### False-positive testing

False-positive testing derives its name because the testing requires both a false and a positive reaction to authenticate a document. The most common instance is the widely available counterfeit detector marker seen in many banks and stores. Counterfeit detector markers use a chemical interaction with the substrate, usually paper, of a document, turning it a particular color. Usually a marker turns newsprint black and leaves currency or specially treated areas on a document clear or gold. The reaction and coloring vary depending upon the formulation. Banknotes, being a specially manufactured substrate, usually behave differently than standard newsprint or other paper, and this difference is how counterfeits are detected by the markers.

False-positive testing can also be done on documents other than currencies as a means to test their authenticity. With the stroke of a marker, a symbol, word or value can be revealed that will allow the user to quickly verify the document, such as a coupon. In more advanced applications the marker creates a barcode which can be scanned for verification or reference to other data within the document, resulting in a higher degree of assurance of authenticity. Photocopied documents will lack the special characteristics of the substrate, so they are easily detectable. False-positive testing is generally a one-time test because once done the results remain visible; so while useful as part of a coupon, this technique is not suitable for ID badges, for example.

### Fluorescent dyes

Fluorescent dyes fluoresce under ultraviolet light or other unusual lighting.
These show up as words, patterns or pictures and may be visible or invisible under normal lighting. This feature is also incorporated into many banknotes and other documents; e.g., Northern Ireland NHS prescriptions show a picture of the local "8th wonder", the Giant's Causeway, in UV light. Some producers include multi-frequency fluorescence, such that different elements fluoresce under specific frequencies of light. Phosphorescence may accompany fluorescence and shows an after-glow when the UV light is switched off.

- Recto at 350 nm: the foil of the kinegram (bottom right) and colored fibres show up
- Verso at 350 nm: the colored fibres are clearly visible

### Infrared characteristics

Inks may have identical color characteristics in the visible spectrum but differ in the infrared spectrum.

- Recto illuminated at 700 nm: partially disappearing colors which appear identical in the CMYK color model
- Verso illuminated at 700 nm: the serial number (left bottom) nearly disappears
- Recto illuminated at 1000 nm: most color absorption has disappeared (the Europe flag top left); the watermark is easily detectable
- Verso illuminated at 1000 nm: all color absorptions have disappeared except the 50 (bottom right) and the serial number (top right)

## Machine-readable security features

Machine-readable features are used in passports for border control and in banknote processing.

- The commercial market uses *Level 2 features* (L2), which are partly disclosed by the central banks. This applies to cash handling machines, such as automated teller machines and ticket machines.
- The central banks additionally use *Level 3 features* (L3), which are kept completely secret. They are necessary to maintain the integrity of cash in circulation and to isolate professional counterfeiting.

**The following machine-readable features are used (extract):**

### Magnetic ink

Because of the speed with which they can be read by computer systems, magnetic ink character recognition is used extensively in banking, primarily for personal checks. The ink used in magnetic ink character recognition (MICR) technology is also used to greatly reduce errors in automated (or computerized) reading. The pigment is dispersed in a binder system (resin, solvent) or a wax compound and applied either by pressing or by hot melt to a carrier film (usually polyethylene).[16] Some people believe that the magnetic ink was intended as a fraud prevention concept, yet the original intent was to have a non-optical technology so that writing on the cheque, like signatures, would not interfere with reading. The main magnetic fonts (E13-B and CMC7) are downloadable for a small fee, and in addition magnetic toner is available for many printers. Some higher-resolution toners have sufficient magnetic properties for magnetic reading to be successful without special toner.

### Phosphorescent dyes

Phosphorescence may accompany fluorescence and shows an after-glow when the UV light is switched off.

### Anti-copying marks

In the late twentieth century, advances in computer and photocopy technology made it possible for people without sophisticated training to easily copy currency. In an attempt to prevent this, banks have sought to add filtering features to the software and hardware available to the public that sense features of currency and then lock out the reproduction of any material with these marks. One known example of such a system is the EURion constellation.
- Recto (cutout) of 5 euro (series ES2)
- Recto (cutout) of 20 US dollar (as part of the value numeral *20*)

### Electronic devices

With the advent of Radio Frequency Identification (RFID), which is based on smart card technology, it is possible to insert extremely small RF-active devices into the printed product to enhance document security. This is most apparent in modern biometric passports, where an RFID chip mirrors the printed information. Biometric passports additionally include data for the verification of an individual's fingerprint or face recognition at automated border control gates.

### Copy detection pattern and digital watermark

A copy detection pattern or a digital watermark can be inserted into a digital image before printing the security document. These security features are designed to be copy-sensitive[17] and authenticated with an imaging device.[18]

### Level 3 features

Most central banks also implement so-called *Level 3* (L3) security features, whose ingredients as well as their sophisticated measurement are kept totally secret. Such covert features may be embedded within the substrate and/or the printing ink and are not commercially available. They are the ultimate safeguard in banknote security and are restricted to the use of central banks. The machine-readable *M-Feature* from Giesecke+Devrient is the worldwide leading L3 feature, currently used by more than 70 central banks, with more than 100 billion banknotes in circulation.[19] Other products are *ENIGMA* from De La Rue[20] and *Level III Authentication* from Spectra Systems.[21]

## See also

- Authentication, particularly the subject *product authentication*
- Tamper-evident technology, particularly for money and stamps
- Tamper resistance, particularly the subject *packaging*
- Brand protection
- Security label
- Banknote processing, particularly how security features are detected

## References

1. "EUIPO Anti-Counterfeiting Technology Guide". *European Observatory on Infringements of Intellectual Property Rights*. 2021-02-26.
2. "Security features: Europa series €100 banknote". 2022-01-01. Retrieved 2022-04-25.
3. "Swiss National Bank releases banknote app" (PDF). *Swiss National Bank*. 2016-04-12. Retrieved 2022-05-23.
4. Orama Chiphwanya (2019-02-01). "Malawi kwacha app to curb counterfeit currency". *The Nation*. Retrieved 2019-05-06.
5. "SARB Currency Mobile App". South African Reserve Bank. Archived from the original on 2019-05-01. Retrieved 2019-05-06.
6. "Currency App: Introducing banknotes in a new and interactive way". *Giesecke+Devrient*. Retrieved 2022-05-23.
7. Singh, Netra (2008). "Polymer Banknotes – A Viable Alternative to Paper Banknotes". *Asia Pacific Business Review*. **4** (2): 42–50. doi:10.1177/097324700800400206. S2CID 154615011 – via ResearchGate.
8. "Security Features" (PDF). Atlanta, GA: Advantage Laser Products. p. 1. Retrieved 26 May 2014.
9. "Security features: Europa series €100 banknote". *European Central Bank*. 11 September 2018. Retrieved 2022-05-14.
10. "Weather Resistance Series, Pearlescent Pigment, Pearl EX Pigments". Dynasty Chemicals (NingBo) Co., Ltd. Retrieved 26 May 2014. "Pearlescent pigments are made from mica and they are widely used in paints, coating, printing ink, plastic, cosmetic, leather, wallpaper etc."
11. "Security features: Europa series €100 banknote". *European Central Bank*. 11 September 2018. Retrieved 2022-05-14.
12. *Security Engineering: A Guide to Building Dependable Distributed Systems* (PDF). p. 245. Retrieved 26 May 2014. "Banknote designers succumb to the Titanic effect."
13. "Security Features" (PDF). Advantage Laser Products. Retrieved 26 May 2014. "Prismatic Two-Colour Pantograph: a multi-colour background in which two colours change density and blend into each other, making it very difficult to reproduce."
14. "Fast and reliable authentication of banknotes". *Koenig & Bauer*. 2022-04-08. Retrieved 2022-05-26.
15. "Digital Document Security" (PDF). H.W. Sands Corp. and Graphic Security Systems Corporation. pp. 7–11. Retrieved 2019-06-15.
16. "Magnetic pigments" (PDF). BASF, The Chemical Company. July 2004. p. 6. Retrieved 26 May 2014.
17. Haas, B.; Dirik, A.E. (2012-11-01). "Copy detection pattern-based document protection for variable media". *IET Image Processing*. **6** (8): 1102–1113. doi:10.1049/iet-ipr.2012.0297. ISSN 1751-9659.
18. Abele, Eberhard (2011). *Schutz vor Produktpiraterie: ein Handbuch für den Maschinen- und Anlagenbau* [Protection against product piracy: a handbook for mechanical and plant engineering]. Ksuke, Philipp; Lang, Horst. Berlin: Springer. ISBN 978-3-642-19280-7. OCLC 726826809.
19. "Unparalleled security: The M-Feature". *Giesecke+Devrient*. 2022. Retrieved 2023-10-26.
20. "ENIGMA: Our invisible high security taggant feature". *De La Rue*. 2023-10-26. Retrieved 2023-10-26.
21. "Banknote Security Features". *Spectra Systems Corporation*. 2023-10-26. Retrieved 2023-10-26.

## External links

- "Banknote security features" (PDF). Billetaria issue 16. Madrid: Banco de España. October 2014. pp. 46–47. Retrieved 2019-06-13. It presents a catalogue of the main banknote security features recognisable by the public and currently in use worldwide.
- The Council of the EU: Glossary of Security Documents, Security Features and other related technical terms
- EUIPO Anti-Counterfeiting Technology Guide
true
true
true
null
2024-10-12 00:00:00
2004-05-12 00:00:00
https://upload.wikimedia…0px-Hologram.jpg
website
wikipedia.org
Wikimedia Foundation, Inc.
null
null
31,680,181
https://watchinnz.co.nz/netflix-rumored-to-acquire-roku/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
24,272,184
https://www.nytimes.com/2020/08/25/science/bacteria-bdellovibrio-predator-prey.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
693,011
http://www.techcrunch.com/2009/07/07/so-much-for-that-idea-facebook-has-killed-off-its-great-apps-initiative/
So Much For That Idea. Facebook Has Killed Off Its Great Apps Initiative | TechCrunch
Jason Kincaid
Last summer Facebook announced two new programs designed to help surface some of the best applications on Facebook Platform. The first, called Verified Apps, was to help users find applications they could trust — in other words, apps that always stayed true to Facebook’s guidelines, and wouldn’t spam users. Verified Apps finally launched in May after lengthy delays, with around 120 apps in the inaugural class. But the program was only meant to serve as the first stepping stone on the path towards Platform greatness, serving as something of a minor league before the very best apps made it to the Majors. The second step was a program dubbed “Great Apps”, which was meant to reward the very best applications on Facebook Platform, enticing developers with promises of “greater visibility on Facebook, earlier access to new features, and more feedback from Facebook”. It was going to highlight the true cream of the crop, launching with iLike and Causes as inaugural members with plans to add a dozen or so more applications within the next year. Now, we’ve learned, that isn’t going to happen, as Facebook has killed off the program. Or, rather, it’s combined Great Apps with Verified Apps — the two are now one and the same. The few applications that were members have been notified of their demotion to plain Verified Apps, and nearly all literature relating to the program has been removed from Facebook. So what happened? Facebook decided to simply give the benefits it was going to reserve for Great Apps and give them to the Verified Apps instead. Verified Apps are currently being more prominently displayed than their unverified brethren, and Facebook has recently been testing out some new features, like its payment platform, with a handful of them. Here’s Facebook’s full explanation: We decided to merge Great Apps with the App Verification program, as they achieve similar goals of helping users identify trusted applications and rewarding the developers who create them. Given the high quality of the applications that have come through the Verification Program and the positive response by users, we believe focusing on one program will provide the best outcome for both users and developers. This all makes sense, but it’s hard to argue that being grouped into a field of hundreds of good apps is comparable to ranking among a dozen or so truly *excellent* applications. With 15 or so Great Apps, every top app could have been shown on a single screen, perhaps as the first thing users saw when they clicked over to the “Browse Applications” section. Facebook gives Verified Apps better positioning in the App Directory, but this promotion is diluted to some extent by the many other applications that are given the same treatment. That said, we’ve heard that Verified Applications have been reaping the benefits of better placement and less restrictive invitation limits and seeing boosts in traffic. Still, I’m sure many of the truly great apps would have appreciated the chance to really stand above the rest.
true
true
true
Last summer Facebook announced two new programs designed to help surface some of the best applications on Facebook Platform. The first, called Verified Apps, was to help users find applications they could trust — in other words, apps that always stayed true to Facebook's guidelines, and wouldn't spam users. Verified Apps finally launched in May after lengthy delays, with around 120 apps in the inaugural class. But the program was only meant to serve as the first stepping stone on the path towards Platform greatness, serving as something of a minor league before the very best apps made it to the Majors. The second step was a program dubbed "Great Apps", which was meant to reward the very best applications on Facebook Platform, enticing developers with promises of "greater visibility on Facebook, earlier access to new features, and more feedback from Facebook". It was going to highlight the true cream of the crop, launching with iLike and Causes as inaugural members with plans to add a dozen or so more applications within the next year. Now, we've learned, that isn't going to happen, as Facebook has killed off the program. Or, rather, it's combined Great Apps with Verified Apps — the two are now one and the same. The few applications that were members have been notified of their demotion to plain Verified Apps, and nearly all literature relating to the program has been removed from Facebook.
2024-10-12 00:00:00
2009-07-07 00:00:00
https://techcrunch.com/w…reatappsfeat.png
article
techcrunch.com
TechCrunch
null
null
35,786,223
https://www.wsj.com/articles/why-is-inflation-so-sticky-it-could-be-corporate-profits-b78d90b7
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
15,392,051
https://thenewstack.io/whats-next-serverless-platform/?utm_content=buffer5ee8f&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer
What's Next for the Serverless Platform
Austen Collins
# What’s Next for the Serverless Platform

In 2015, we released the Serverless Framework, wanting to help developers build applications on top of new serverless computing services, such as AWS Lambda (also known as Functions-as-a-Service). The Framework has since become one of the fastest-growing open-source projects on GitHub and a major focal point of the serverless movement in general. Our project tends to serve as a bellwether: a place where anyone can go to track use cases and emerging trends. This gives us some insight into why the serverless movement is so strong and leaves us clues as to where it's heading.

## The Past

When the Framework was first introduced in 2015, we were surprised by how many progressive enterprises lined up to try it out. These early adopters were true converts: defectors from the world of microservices, containers and traditional cloud architectures. They sought refuge from the complexity of modern software development — the daunting amount of tools and best practices, operational overhead, and general propaganda pertaining to the religious war of "how to build x".

While the "serverless" term was a bit semantically controversial — and remains so — early adopters haven't minded. Engineers were desperate to get products to market and reduce overhead. The word "serverless" promised them a technology that just got out of their way. It was a message that resonated. The serverless community was united under one maxim: less was more. They wanted more results, not more technology.

## Convenience Attracts Uses

By the end of 2015, engineers began conducting serverless proofs-of-concept within their organizations. Success in those early trials empowered them to look at every piece of their existing business logic and ask, *"Will it Lambda?"*

Initial serverless use cases were dominated by data processing. But suddenly, engineers were cramming all types of workloads into Lambda: using serverless compute to build back-ends and automate workflows. They churned out auto-scaling, pay-per-execution microservices with minimal effort or administration. The gain in efficiency was too enticing to ignore. Even as some early limitations of serverless compute became potential roadblocks, engineers chose to compromise or find workarounds. Cloud providers rushed to address limitations with relentless fixes and new feature releases. This provided even more momentum for serverless-first engineers.

When the answer to that initial question, *"Will it Lambda?"*, was no, companies kept pushing forward with a positive outlook. They said, *"Okay, how long until it can?"*

Serverless computing took the majority of administrative tasks that occupied an engineer's day and eliminated them. This additional free time began to manifest in new projects and creative tinkering. All sorts of projects surfaced: chatbots, DevOps automation, file manipulators, policy enforcers, webhook listeners, HTML renderers, scheduled tasks, and so much more. At traditional architecture conferences, talks are often about administration; at serverless conferences, talks are often celebrations of things that were built.
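As a minimal sketch of one such workload, here is a webhook listener written as an AWS Lambda handler behind an API Gateway proxy integration. The payload fields and the "deployment.finished" event type are illustrative inventions, not part of any real service.

```python
import json

def handler(event, context):
    """Webhook listener as a serverless function: API Gateway (proxy
    integration) invokes this per request; there is no server to run."""
    payload = json.loads(event.get("body") or "{}")

    # Illustrative business logic: acknowledge a hypothetical
    # "deployment.finished" notification and ignore everything else.
    if payload.get("type") == "deployment.finished":
        message = f"Deployed {payload.get('service', 'unknown')} successfully"
    else:
        message = "Event ignored"

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": message}),
    }
```

Scaling, patching and process supervision are the provider's problem, and the function bills only for execution time, which is why teams felt comfortable churning these out by the dozen.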
## Rise of the Serverless Team

An interesting situation arises after a serverless project is completed: its engineers are (mostly) free to take on new work. Within companies, rogue serverless pioneers gained a following and formed the first unofficial serverless teams — teams which were remarkably productive, despite their size. They often operated as independent units, practicing random acts of automation and venturing on missions to tackle new jobs in a serverless fashion. By the end of 2016, only a year after the Serverless Framework was released, several gung-ho teams in large enterprises had already put hundreds of serverless functions into production.

## Serverless First

Managers took notice. Top-level business leadership was forever chasing aggressive digital product goals. They wanted to innovate faster than their competitors and immediately ship to market, rinse, repeat. Business leaders started looking at serverless architectures for a competitive edge — some CTOs even opting to build their own serverless prototypes for faster buy-in. Large enterprises who had not even embraced public cloud yet wanted to jump straight into serverless. They viewed it as a far more accessible version of the cloud, one which made it relatively easy to convert their large workforce into a heavy-hitting team of cloud engineers.

## Present

The momentum of the serverless movement has inspired other vendors to offer their own serverless compute services. It's not just AWS Lambda anymore. There is now Azure Functions, Google Cloud Functions and IBM Cloud Functions (based on OpenWhisk), along with smaller vendors serving specific use cases, like Twilio Functions, PubNub Functions and Auth0 Webtask. There are also several serverless computing implementations being built with Docker and Kubernetes, like Kubeless and Fission.

Larger providers are introducing dozens of new managed services and APIs, like Google's Speech and Vision APIs, and AWS's Lex and Polly services, which pair nicely with serverless computing and generally follow the serverless ethos. The rate of feature competition by major cloud providers is accelerating. Meanwhile, developers and organizations are delighted by the growing number of tools they have at their disposal to solve more problems and deliver more innovation.

## Time to Operationalize

Serverless advocates want to standardize serverless development so others can adopt it throughout their organization. There is clearly a need for better tooling to support serverless development and lifecycle management. Organizations have adopted many platform-specific solutions for this (such as AWS CodeBuild), but with slight hesitation. Vendor lock-in takes away any flexibility they have in terms of runtime and choosing their own lifecycle management tools. It also makes it hard to shift these provider-specific architectures to another provider. So for serverless-forward teams, the Serverless Framework has become an integral part of their adoption story.

## Incentivizing Multicloud

Somewhere along the way, an unintended consequence of serverless computing emerged. It made leveraging a cloud's services as easy as uploading a function. *Need to adopt a cloud provider? Stick a function there.* Further, serverless functions were auto-scaling and pay-per-execution, which meant having functions provisioned across multiple regions, and even multiple providers, was trivial. AWS Lambda was originally thought of as a way to glue AWS's services together. It quickly became clear that serverless computing, in general, was a way to glue *all* services and platforms together. This binding power offers an interesting take on multi-cloud and hybrid-cloud architectures.
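A sketch of that glue in practice, under the assumption of an S3 object-created notification being forwarded to a hypothetical HTTP endpoint on another platform (the URL is a placeholder):

```python
import json
import urllib.request

# Hypothetical endpoint on another platform; any HTTP-reachable
# service, or another cloud's function, would do.
WEBHOOK_URL = "https://example.com/hooks/new-objects"

def handler(event, context):
    """Glue function: fan an S3 object-created notification out to an
    external service, stitching two platforms together in a few lines."""
    records = event.get("Records", [])
    for record in records:
        note = {
            "bucket": record["s3"]["bucket"]["name"],
            "key": record["s3"]["object"]["key"],
        }
        req = urllib.request.Request(
            WEBHOOK_URL,
            data=json.dumps(note).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)  # fire-and-forget for brevity
    return {"forwarded": len(records)}
```

The same pattern extends across regions and providers with no capacity planning at all.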
While exciting, it does prompt a discussion of what the serverless architecture should look like. Should we continue to think of serverless architectures in a platform-specific manner, given that they live everywhere all at once? Further, if serverless architectures are universal, what does that mean for data?

## Future

There is currently no standardization across vendors. To realize the full potential of serverless architectures, the community needs a single, uniform experience for deploying functions and managing their lifecycles across all serverless compute providers. The Serverless Framework does this. It was the first tool to offer an application model for serverless architectures; now it's the first tool to offer a uniform experience for deploying and managing serverless functions on every serverless compute vendor.

But we still need serverless standards. Each vendor has a different design for things like function signatures, which makes adoption and usability more challenging than it needs to be. The team behind the Serverless Framework is working with the Cloud Native Computing Foundation and major cloud providers to converge on standards for serverless concepts. Discussions are still in their early phases, but all stakeholders have been extraordinarily collaborative, including larger vendors.

## A Story of Functions …and Events

The serverless story has largely been focused on functions thus far. Yet, this is only the first half. The other half of the story is about data — and since serverless architectures are essentially based on event-driven computing, their data is expressed in the form of events.

Now that serverless functions have enabled us to easily react to anything, *everything* is starting to look like an event: business events, application state changes, synchronous requests, asynchronous requests, notifications, messages, audit trails, system logs, errors, metrics, alerts, webhooks, clickstreams, health checks… to the serverless function, the world is filled with events, just waiting for action and analysis.

Events make data portable and liquid. They are the simple contracts serverless functions use to coordinate, regardless of where functions are located. They protect against vendor lock-in while enabling vendor choice. If there exists a single pattern that will bring order to the vast amounts of logic organizations will create over the coming years, it will be the event-driven pattern. Unfortunately, event-driven tools and services currently exist in a haphazard state. In the future, these tools and best practices must evolve to sustain the levels of productivity we are now able to achieve.

Recently, the team behind the Serverless Framework announced a new type of open-source infrastructure titled the Event Gateway. The technology blends API Gateways and the Pub/Sub pattern to create a highly productive event router for serverless computing. The Event Gateway is the missing piece of middleware for the serverless era, and can potentially become the backbone of the modern digital business. Its first goal is to express all data in the form of events (even raw HTTP requests). Its second goal is to route those events to any serverless compute provider, anywhere. Event Gateway will enable teams to easily write serverless functions across cloud providers (or on-premise) and act upon them. This project is soon to be followed by the Serverless Platform, which enables team collaboration across serverless applications and provides solutions for event data management.
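The routing model described here can be sketched conceptually in a few lines. Note that this is an in-process toy illustrating the pub/sub-plus-functions idea, not the Event Gateway's actual API.

```python
from collections import defaultdict

class EventRouter:
    """Toy event router: functions subscribe to event types; emitting an
    event fans it out to every subscriber, wherever that function runs."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, event_type, fn):
        self.subscribers[event_type].append(fn)

    def emit(self, event_type, data):
        # In a real gateway the "call" would be an HTTP/RPC invocation of
        # a function on any provider; here it is a local call.
        return [fn(data) for fn in self.subscribers[event_type]]

router = EventRouter()
router.subscribe("user.created", lambda e: f"email welcome to {e['email']}")
router.subscribe("user.created", lambda e: f"audit log entry for {e['id']}")

print(router.emit("user.created", {"id": 42, "email": "ada@example.com"}))
```

The event type is the contract; neither subscriber knows where the other runs, which is what makes the pattern portable across providers.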
All of these tools will follow the serverless standards described above.

## Total Automation and Intelligence

Automation is predicted to increase substantially — and necessarily so, given the world's growing complexity and cost. To thrive in hyper-competitive and expensive times, every organization, team and individual needs the power of automation. Along with the rise in automation will be the rise in systems of intelligence. An architecture that allows real-time, contextual data processing will allow businesses to make better decisions in the exact moment a problem arises. Serverless computing and the serverless architecture can enable these scenarios, and their potential to do so cannot be overstated — especially when you combine the ease of serverless functions with the scalable, organizational capacity of event-driven design. Functions and events are the two simple concepts we've been waiting for to build vast systems of automation and intelligence, while keeping operational overhead minimized.

This is only the beginning of the serverless architecture and the story of serverless overall. While the potential is endless, to realize that potential we'll need new types of tools and infrastructure to build and operate these next-generation serverless architectures. This is what we are building at Serverless Inc.

The Cloud Native Computing Foundation, Google and Microsoft are sponsors of The New Stack.

Feature image via Pixabay.
true
true
true
In 2015, we released the Serverless Framework, wanting to help developers build applications on top of new serverless computing services,
2024-10-12 00:00:00
2017-09-26 00:00:00
https://cdn.thenewstack.…-1130492_640.jpg
article
thenewstack.io
The New Stack
null
null
19,003,038
https://www.walmartlabs.com/case-studies/ntransit
Walmart Global Tech | LinkedIn
null
Each quarter, we honor a team with the Tech>FWD award for their outstanding efforts in creating solutions that enhance the experiences of our customers, members, sellers, and associates. We often receive dozens of nominations, and this round was no different. After narrowing it down to seven finalists, our leaders selected the Global Seller Experience Team as the winner for their solution ‘Global Item Set-up’!

As we continue to expand into new markets, suppliers and Walmart Marketplace sellers have opportunities to reach new customers. ‘Global Item Set-up’ has streamlined the process of listing items within those markets by enabling Walmart sellers to select existing items in their U.S. catalog, choose the markets they want to list in and add tax codes and prices to items in one place. The listings are then translated and added to the catalogs and corresponding websites with the click of a button. Congrats to the teams that were nominated and to the winning team!

# Walmart Global Tech

## Software Development

### Bentonville, Arkansas

341,411 followers

#### We're powering the next great retail disruption.

## About us

Walmart has a long history of transforming retail and using technology to deliver innovations that improve how the world shops and empower our 2.1 million associates. It began with Sam Walton and continues today with Global Tech associates working together to power Walmart and lead the next retail disruption. Our world-class software engineers, data scientists and engineers, cybersecurity professionals, product managers and business service professionals work with top talent on cutting-edge technologies that create unique and innovative experiences for our associates, customers and members across Walmart, Sam’s Club and Walmart International. At Walmart Global Tech, one line of code or bold idea can make life easier for hundreds of millions of people – talk about epic impact at a global scale.

- Website - http://tech.walmart.com
- Industry - Software Development
- Company size - 10,001+ employees
- Headquarters - Bentonville, Arkansas
- Type - Public Company
- Specialties - Data Science, Data Analytics, Software Engineering, Technology, UX Design, Cybersecurity, Emerging Tech, Retail Tech, Supply Chain Tech, and Cloud

## Locations

- Primary: Bentonville, Arkansas, US

## Employees at Walmart Global Tech

- Jonathan Sidhu: Transforming business challenges into business value using data science & advanced analytics across multiple industries & domains
- Sanjay Shahri: Software Engineering Leader | eCommerce, Advertising Tech, Front-End, Mobile, Enterprise software
- Vimal Nelson: Principal Engineer | Architecting cloud solutions
- Samuel Druker: Engineer, scientist, coach.

## Updates

- Today we’re announcing how we’re advancing AI and GenAI capabilities through a platform approach, accelerating Adaptive Retail and hyper-personalized shopping experiences:
• Introducing Wallaby: Our proprietary retail-specific large language model, trained with decades of Walmart data to deliver highly contextual and natural customer interactions.
• Enhanced Customer Care: Our AI-powered Customer Support Assistant is now more personalized, providing a smoother experience for handling orders and returns.
• Personalized Online Shopping: Our Content Decision Platform tailors Walmart.com homepages to each individual shopper for a highly personalized experience.
• Immersive Commerce: Leveraging AR and AI to bring shopping into new virtual environments, including exciting partnerships with Unity and ZEPETO. • Reality Reimagined: Our AR and XR solutions go beyond placing items in a room, and instead leverage the technology for inspiration and full home design. Learn more about how we're innovating and building experiences that adapt to customers' individual preferences and needs: https://lnkd.in/dVuUWTwv - Missed Converge @ Walmart 2024? Here’s your chance to catch up! This year’s event was packed with engaging discussions and insightful perspectives from more than 35 industry experts and thought leaders on the future of retail. Topics included supply chain intelligence, GenAI-powered personalization and emerging shopping trends. Watch the highlights here: https://lnkd.in/gefXm6Wj - We’re honored to be named one of Fast Company's “Best Workplaces for Innovators”! This recognition underscores our purpose of being people-led and tech-powered. We strive to provide an environment where associates can learn new skills and grow their careers, while leveraging cutting-edge technologies and engineering excellence to deliver the future of retail to our customers, members and associates globally. Whether you’re looking to start your career or take it to the next level, we offer a wide range of roles and opportunities to tackle complex challenges and transform customer experiences. Check them out here: https://lnkd.in/gqBBj6yR - As summer comes to a close, we'd like to give a shout-out to all the interns who joined us for 10+ impactful weeks! From tackling complex challenges at scale to immersing themselves in our Walmart culture, we couldn’t have asked for a better group. We hope that the meaningful connections you built with your teams, mentors, fellow interns, and early career associates had as much impact on you as they did on us. - “Retailers need to predict shoppers’ needs, reduce decision-making, enable highly personal experiences, and meet them where they are. But how we get there and what that could look like is not up to the retailer, it’s up to customers to define the experiences we deliver.” Check out Suresh Kumar’s thoughts on understanding and meeting the customer “why”: https://lnkd.in/gCCDzZcX ## Meeting the Customer “Why” ### Suresh Kumar on LinkedIn - Great seeing everyone come together in Sunnyvale for an inspiring town hall! Doug McMillon took the stage with Sundar Pichai for a fireside chat, discussing how tech is transforming customer experiences—enabling more personalized and seamless shopping journeys—and the importance of continued investment in deep tech to stay ahead of the competition and drive innovation. - Our first Cyber Intelligence Summary is out now, and it's packed with noteworthy trends shaping the cybersecurity landscape: 🤖 AI adoption is shaking up the world of cybersecurity and spurring conversations around convenience and security. 🎣 Threat actors continue to innovate on their phishing methods and the barrier to entry is lowering. 📦 Supply chain vulnerabilities are a top concern for backdoor access to organizations. Read on to discover how threats are evolving and unlock valuable insights to enhance your cybersecurity posture. - Customers are becoming increasingly savvy about how new technology can fulfill their needs quickly, and our associates are benefitting from the optimization of tasks and processes when they leverage tech in their workflows.
Solutions like our AI/ML platform Element are enabling us to deliver experiences with speed and at scale, rapidly expanding our lineup of GenAI-powered experiences. And tools like our Shopping Assistant, and internal tools such as My Assistant, DX AI Assistant and Coding Assistant, are helping answer challenging questions and reduce decision-making so users can achieve results faster. Saving time and money is at the heart of what we do, and our latest earnings results confirm we're on the right track with impressive growth across the company: https://lnkd.in/gkSR4Ypg #TeamWalmart #SamsTech #WalmartInternational #WalmartGlobalTech #RetailInnovation
true
true
true
Walmart Global Tech | 341,411 followers on LinkedIn. We're powering the next great retail disruption. | Walmart has a long history of transforming retail and using technology to deliver innovations that improve how the world shops and empower our 2.1 million associates. It began with Sam Walton and continues today with Global Tech associates working together to power Walmart and lead the next retail disruption. Our world-class software engineers, data scientists and engineers, cybersecurity professionals, product managers and business service professionals work with top talent on cutting-edge technologies that create unique and innovative experiences for our associates, customers and members across Walmart, Sam’s Club and Walmart International.
2024-10-12 00:00:00
2024-01-01 00:00:00
https://media.licdn.com/dms/image/v2/C560BAQEjq-bcaW-GSg/company-logo_200_200/company-logo_200_200/0/1630657358931/walmartglobaltech_logo?e=2147483647&v=beta&t=dTQgTVkyhv8y3OuiH0TXT1iGvTe8KBt6EK_W6n9y_vo
article
linkedin.com
Walmart Global Tech
null
null
20,798,942
https://breakingdefense.com/2019/08/wearing-the-network-to-war/
Wearing The Network To War
Sydney J Freedberg Jr
TECHNET AUGUSTA: Wifi gunsights that tell your smart goggles where to aim. Artificial intelligence that tells distant artillery batteries whenever you spot a high-priority target. Backpack transmitters, remotely controlled by technicians miles away, that jam enemy communications while you focus on the fight. A jamming-resistant GPS that double-checks your location against a wearable inertial navigation system and pedometers in your boots. These are all technologies the Army is now developing or, in some cases, fielding in a few months. The American grunt has gotten ever more high-tech since 2001. Handheld GPS, tactical radios, night vision goggles, electronic gunsights, and more have accumulated to the point where the weight of batteries has become a major burden. But now the Army aims to *connect* all these devices in a wearable, ultra-short-range wifi network around the soldier’s body and, in many cases, over long-range battlefield networks as well. This trend is distinctly double-edged. On the upside, the kind of near-instantaneous sharing of data — on friendly locations, reported threats, potential targets — is exactly what the Army and its sister services need to coordinate the kind of high-speed, high-complexity multi-domain operations they see as crucial to victory on future battlefields. On the downside, the likely enemies in such future fights, like Russia and China, are scarily good at jamming, spoofing, hacking, and disrupting wireless networks. That requires a new kind of muddy-boots cybersecurity. **Connect & Secure** The modern soldier is increasingly “an integrated weapons platform,” like a tank or helicopter, said Brig. Gen. Tony Potts, the Army’s acquisition chief for soldier gear (aka PEO-Soldier). “I want everything within a squad to be able to talk to each other.” To move all that data, “we are building our own processors, we’re going to spin our own chips … because we needed processors more powerful and faster than we had commercially available,” Potts said. “They’re going to be on-body processors… part of a hybrid cloud system” that pulls data from far-off servers. But when everything is interconnected, you have to take new kinds of precautions, Potts warned the TechNet Augusta conference this week. So, since he took over as Program Executive Officer – Soldier early last year, Potts has pushed hard to get the systems in his portfolio to comply with strict standards for cybersecurity. “When I got there 18-19 months ago, most of the things that we did [got] waivers for all that cybersecurity stuff: ‘Hey, sir, we don’t need that, they’re just goggles,’” Potts recounted. “Now in [PEO] Soldier, we’re now having to build a very robust capability for RF [radio frequency, i.e. wireless] cybersecurity, because everything we are doing … we are moving and sharing all that data across the soldier as a platform and across the squad as a platform, and we’ve got to protect that data.” Many of the devices foot troops already carry, in fact, send and receive data over the wider tactical network with other units, from command posts to artillery batteries. That allows unprecedented cooperation, but also unprecedented vulnerability for an enemy to hack one system and then spread malware or bad data around the entire network. Today, for example, PEO-Soldier produces all the precision-targeting devices that troops use to call in fire support, Potts said.
For the near future, the Army’s augmented-reality goggles now in development — a militarized Microsoft HoloLens known as the Integrated Visual Augmentation System (IVAS), aka HUD 3.0 — will display both information from the soldier’s personal electronics, such as a targeting cross-hair showing where his gun is pointed, and tactical information from the battlefield network, like the distance and direction to the objective. IVAS is also exploring an artificially intelligent object-classification system that can automatically recognize, say, an anti-aircraft missile launcher in the soldier’s field of view and transmit its coordinates to the network, warning friendly aircraft to steer clear and prompting friendly artillery to destroy it. Then there are the invisible weapons of electronic warfare. The Army acquisition PEO for Intelligence, Electronic Warfare, & Sensors has urgently fielded portable radio-detection and radio-jamming systems, VROD and VMAX, in both Europe and the Middle East. Standard procedure is to issue these rucksack-sized systems to specially trained electronic warfare troops, who use a tablet to control them. Typically three or four EW troops go out together — so they get multiple bearings on a signal and triangulate — with a squad of regular infantry as escorts. But mere months from now, the Army will issue new electronic warfare command-and-control software, EWPMT, which will allow a single EW technician in a distant command post to control multiple VROD or VMAX systems all over the battlefield. “It can be put on any soldier’s back and remoted through EWPMT,” Army C5ISR Center expert Ken Gilliard told reporters at Aberdeen Proving Ground last week. That means the infantryman carrying it can focus on things like running, shooting, and not getting killed, while the Army’s relatively small number of EW technicians can hunker down behind the frontline and control the digital battle over a wide area. But that also means *enemy* electronic warfare technicians can potentially detect the transmissions back and forth, then use them to fix the US troops’ location or, arguably worse, hack into the American battlefield network. **Standards & Collaboration** PEO-Soldier is working with the network modernization team to ensure all the soldier-borne tech connects reliably and securely with the wide-area battlefield wifi, Potts said. He’s also working with the teams developing new armored vehicles and aircraft so that, when soldiers get in and out of their transports, they remain connected to the network. And he’s joined at the hip with the Army’s director for training systems modernization, Maj. Gen. Maria Gervais, so her simulators can run on his IVAS goggles, which will let troops train against VR enemies who pop up in their field of vision like a hardcore version of Pokémon Go. But it’s tricky enough just getting all the different systems the infantry squad carries to connect to each other. To make that simpler, PEO-Soldier is developing what they call the Adaptive Squad Architecture (ASA) that specifies how contractors must configure whatever component they build to interconnect with the others, with IVAS as the central pillar of the entire structure. “Adaptive Squad Architecture, we’re building the tools today,” Potts said. “We’re actually
going to release an early version in January.” “We had an industry day earlier this week,” he went on, where he explained to industry he has no designs on their intellectual property; he just wants them to build tech compatible with the new architecture so everything works together — an approach called open architecture. “I don’t want to own a lot of proprietary things,” Potts said. “I want to own the *interfaces*, and I want you to know what those interfaces are.” The ultimate goal is a layered system that degrades gracefully. If jamming, hacking, reinforced-concrete buildings, or an intervening hill cuts you off from the wide-area battlefield network, you can at least share data within your squad. If you’re cut off from your squad, you can at least share data among the devices you wear and carry. And if everything is cut off — well, your IVAS still has night-vision built in. “Worst case is I lose it all and it’s a set of goggles,” Potts said. The Army needs to build its kit — and train its soldiers — to keep functioning even if every network connection is shut down, but make the most of all that data when and if they get it. “We need to be networked-enabled and not network-dependent,” Potts said. “We have to build our systems where, if I’m completely denied connectivity, I can still fight.” *Corrected: The original version of this story mistakenly said Ken Gilliard worked for PEO-IEWS (an Army *procurement* organization); he actually works for the C5ISR Center (an *R&D* organization formerly known as CERDEC). The story has been corrected.*
true
true
true
Army foot soldiers are going into battle with more and more electronics, wirelessly networked both to each other and to distant command posts. So can GI Joe be hacked?
2024-10-12 00:00:00
2019-08-26 00:00:00
https://breakingdefense.…ion-1024x576.jpg
article
breakingdefense.com
Breaking Defense
null
null
4,615,796
http://onewebsql.com/homepage
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
8,031,027
http://www.delanceyplace.com/view_archives.php?2564&p=2564#.U8Pnco1dW6M
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
34,076,215
https://tech.facebook.com/reality-labs/2022/12/boz-look-back-2023-look-ahead/
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
1,469,762
http://users.rowan.edu/~polikar/WAVELETS/WTtutorial.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
36,126,775
https://www.nvidia.com/en-us/data-center/grace-hopper-superchip/
NVIDIA Grace Hopper Superchip
null
The NVIDIA GH200 Grace Hopper Superchip combines the NVIDIA Grace™ and Hopper™ architectures using NVIDIA® NVLink®-C2C to deliver a CPU+GPU coherent memory model for accelerated AI and HPC applications. With 900 gigabytes per second (GB/s) of coherent interface bandwidth, the superchip is 7X faster than PCIe Gen5. And with HBM3 and HBM3e GPU memory, it supercharges accelerated computing and generative AI. GH200 runs all NVIDIA software stacks and platforms, including NVIDIA AI Enterprise, the HPC SDK, and Omniverse™. The Dual GH200 Grace Hopper Superchip fully connects two GH200 Superchips with NVLink and delivers up to 3.5x more GPU memory capacity and 3x more bandwidth than H100 in a single server.
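A quick check of the quoted numbers: assuming the comparison baseline is a PCIe Gen5 x16 link at roughly 128 GB/s of bidirectional bandwidth (our assumption; the page does not state the baseline it measures against), the 7X claim falls out as a simple ratio:

$$\frac{900\ \text{GB/s (NVLink-C2C)}}{\approx 128\ \text{GB/s (PCIe Gen5 x16)}} \approx 7$$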
true
true
true
For giant-scale AI and HPC applications.
2024-10-12 00:00:00
2024-10-02 00:00:00
https://www.nvidia.com/c…superchip-og.jpg
Website
nvidia.com
NVIDIA
null
null
10,135,223
https://medium.com/@mahringer_a/designing-for-zero-content-bbfdc1402d16
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
16,205,056
https://www.qubes-os.org/news/2018/01/22/qubes-air/
Qubes Air: Generalizing the Qubes Architecture
Joanna Rutkowska
# Qubes Air: Generalizing the Qubes Architecture The Qubes OS project has been around for nearly 8 years now, since its original announcement back in April 2010 (and the actual origin date can be traced back to November 11th, 2009, when an initial email introducing this project was sent within ITL internally). Over these years Qubes has achieved reasonable success: according to our estimates, it has nearly 30k regular users. This could even be considered a great success given that 1) it is a new *operating system*, rather than an *application* that can be installed in the user’s favorite OS; 2) it has introduced a (radically?) new approach to managing one’s digital life (i.e. an explicit partitioning model into security domains); and last but not least, 3) it has very *specific* hardware requirements, which is the result of using Xen as the hypervisor and Linux-based Virtual Machines (VMs) for networking and USB qubes. (The term “qube” refers to a compartment – not necessarily a VM – inside a Qubes OS system. We’ll explain this in more detail below.) For the past several years, we’ve been working hard to bring you Qubes 4.0, which features state-of-the-art technology not seen in previous Qubes versions, notably the next generation Qubes Core Stack and our unique Admin API. We believe this new platform (Qubes 4 represents a major rewrite of the previous Qubes codebase!) paves the way to solving many of the obstacles mentioned above. The new, flexible architecture of Qubes 4 will also open up new possibilities, and we’ve recently been thinking about how Qubes OS should evolve in the long term. In this article, I discuss this vision, which we call Qubes Air. It should be noted that what I describe in this article has not been implemented yet. ## Why? Before we take a look at the long-term vision, it might be helpful to understand why we would like the Qubes architecture to further evolve. Let us quickly recap some of the most important current weaknesses of Qubes OS (including Qubes 4.0). ### Deployment cost (aka “How do I find a Qubes-compatible laptop?”) Probably the biggest current problem with Qubes OS – a problem that prevents its wider adoption – is the difficulty of finding a compatible laptop on which to install it. Then, the whole process of needing to install a new *operating system*, rather than just adding a new *application*, scares many people away. It’s hard to be surprised by that. This problem of deployment is not limited to Qubes OS, by the way. It’s just that, in the case of Qubes OS, these problems are significantly more pronounced due to the aggressive use of virtualization technology to isolate not just apps, but also devices, as well as incompatibilities between Linux drivers and modern hardware. (While these driver issues are not inherent to the architecture of Qubes OS, they affected us nonetheless, since we use Linux-based VMs to handle devices.) ### The hypervisor as a single point of failure Since the beginning, we’ve relied on virtualization technology to isolate individual qubes from one another. However, this has led to the problem of over-dependence on the hypervisor. In recent years, as more and more top notch researchers have begun scrutinizing Xen, a number of security bugs have been discovered. While many of them did not affect the security of Qubes OS, there were still too many that did. :( Potential Xen bugs present just one, though arguably the most serious, security problem. 
Other problems arise from the underlying architecture of the x86 platform, where various inter-VM side- and covert-channels are made possible thanks to the aggressively optimized multi-core CPU architecture, most spectacularly demonstrated by the recently published Meltdown and Spectre attacks. Fundamental problems in other areas of the underlying hardware have also been discovered, such as the Row Hammer Attack. This leads us to a conclusion that, at least for some applications, we would like to be able to achieve better isolation than currently available hypervisors *and* commodity hardware can provide. ## How? One possible solution to these problems is actually to “move Qubes to the cloud.” Readers who are allergic to the notion of having their private computations running in the (untrusted) cloud should not give up reading just yet. Rest assured that we will also discuss other solutions not involving the cloud. The beauty of Qubes Air, we believe, lies in the fact that all these solutions are largely isomorphic, from both an architecture and code point of view. ## Example: Qubes in the cloud Let’s start with one critical need that many of our customers have expressed: Can we have “Qubes in the Cloud”? As I’ve emphasized over the years, the essence of Qubes does not rest in the Xen hypervisor, or even in the simple notion of “isolation,” but rather in the careful decomposition of various workflows, devices, apps across securely compartmentalized containers. Right now, these are mostly desktop workflows, and the compartments just happen to be implemented as Xen VMs, but neither of these aspects is essential to the nature of Qubes. Consequently, we can easily imagine Qubes running on top of VMs that are hosted in some cloud, such as Amazon EC2, Microsoft Azure, Google Compute Engine, or even a decentralized computing network, such as Golem. This is illustrated (in a very simplified way) in the diagram below: It should be clear that such a setup automatically eliminates the deployment problem discussed above, as the user is no longer expected to perform any installation steps herself. Instead, she can access Qubes-as-a-Service with just a Web browser or a mobile app. This approach may trade security for convenience (if the endpoint device used to access Qubes-as-a-Service is insufficiently protected) or privacy for convenience (if the cloud operator is not trusted). For many use cases, however, the ability to access Qubes from any device and any location makes the trade-off well worth it. We said above that we can imagine “Qubes running on top of VMs” in some cloud, but what exactly does that mean? First and foremost, we’d want the Qubes Core Stack connected to that cloud’s management API, so that whenever the user executes, say, `qvm-create` (or, more generally, issues any Admin API call, in this case `admin.vm.Create.*` ) a new VM gets created and properly connected in the Qubes infrastructure. This means that most (all?) Qubes Apps (e.g. Split GPG, PDF and image converters, and many more), which are built around qrexec, should Just Work (TM) when run inside a Qubes-as-a-Service setup. Now, what about the Admin and GUI domains? Where would they go in a Qubes-as-a-Service scenario? This is an important question, and the answer is much less obvious. We’ll return to it below. First, let’s look at a couple more examples that demonstrate how Qubes Air could be implemented. 
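A brief aside before those examples: below is a deliberately hypothetical sketch of what an Admin API call such as `admin.vm.Create.*` might reduce to at the qrexec level. `qrexec_call` is an invented stand-in for whatever transport a given Zone uses; this is not real Qubes code, and the payload format is simplified for illustration.

```python
# Hypothetical sketch: what a qvm-create against a cloud Zone might boil
# down to at the qrexec level. qrexec_call is an invented placeholder, not
# a real Qubes function; in a Xen Zone it would ride on vchan/shared memory,
# in a cloud Zone on the provider's networking.

def qrexec_call(dest: str, service: str, arg: str = "", payload: bytes = b"") -> bytes:
    """Stand-in transport: log the request and pretend the Admin qube accepted it."""
    print(f"-> {dest}: {service}+{arg} payload={payload!r}")
    return b"0"  # assume a leading "0" byte signals success in this sketch

def create_qube(name: str, template: str, label: str) -> None:
    # Addressed to the Admin qube; the service name follows the
    # admin.vm.Create.* pattern mentioned in the text above.
    response = qrexec_call(
        dest="dom0",
        service="admin.vm.Create.AppVM",
        arg=template,
        payload=f"name={name} label={label}".encode(),
    )
    if not response.startswith(b"0"):
        raise RuntimeError(f"qube creation failed: {response!r}")

create_qube("work-cloud", template="fedora-26", label="blue")
```

The point is the shape of the exchange, not its details: because every qube speaks qrexec, the same request can be routed to a local hypervisor, a cloud management API, or a physically separate device without the caller having to care.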
## Example: Hybrid Mode Some users might decide to run a subset of their qubes (perhaps some personal ones) on their local laptops, while using the cloud only for other, less privacy-sensitive VMs. In addition to privacy, another bonus of running some of the VMs locally would be much lower GUI latency (as we discuss below). The ability to run some VMs locally and some in the cloud is what I refer to as *Hybrid Mode*. The beauty of Hybrid Mode is that the user doesn't even have to be aware (unless specifically interested!) of whether a particular VM is running locally or in the cloud. The Admin API, qrexec services, and even the GUI, should all automatically handle both cases. Here's an example of a Hybrid Mode configuration: Another benefit of Hybrid Mode is that it can be used to host VMs across several different cloud providers, not just one. This allows us to solve the problem of over-dependence on a single isolation technology, e.g. on one specific hypervisor. Now, if a fatal security bug is discovered that affects one of the cloud services hosting a group of our VMs, the vulnerability will not automatically affect the security of our other groups of VMs, since the other groups may be hosted on different cloud services, or not in the cloud at all. Crucially, different groups of VMs may be run on different underlying containerization technologies and different hardware, allowing us to diversify our risk exposure against any single class of attack. ## Example: Qubes on "air-gapped" devices This approach even allows us to host each qube (or groups of them) on a physically distinct computer, such as a Raspberry PI or USB Armory. Despite the fact that these are physically separate devices, the Admin API calls, qrexec services, and even GUI virtualization should all work seamlessly across these qubes! For some users, it may be particularly appealing to host one's Split GPG backend or password manager on a physically separate qube. Of course, it should also be possible to run normal GUI-based apps, such as office suites, if one wants to dedicate a physically separate qube to work on a sensitive project. The ability to host qubes on distinct physical devices of radically different kinds opens up numerous possibilities for working around the security problems with hypervisors and processors we face today. ## Under the hood: Qubes Zones We've been thinking about what changes to the current Qubes architecture, especially to the Qubes Core Stack, would be necessary to make the scenarios outlined above easy (and elegant) to implement. There is one important new concept that should make it possible to support all these scenarios with a unified architecture. We've named it **Qubes Zones**. A **Zone** is a concept that combines several things together: - An underlying "isolation technology" used to implement qubes, which may or may not be VMs. For example, they could be Raspberry PIs, USB Armory devices, Amazon EC2 VMs, or Docker containers. - The inter-qube communication technology. In the case of qubes implemented as Xen-based VMs (as in existing Qubes OS releases), the Xen-specific shared memory mechanism (so called Grant Tables) is used to implement the communication between qubes. In the case of Raspberry PIs, Ethernet technology would likely be used. In the case of Qubes running in the cloud, some form of cloud-provided networking would provide inter-qube communication.
Technically speaking, this is about how Qubes' vchan would be implemented, as the qrexec layer should remain the same across all possible platforms. - A "local copy" of an *Admin qube* (previously referred to as the "AdminVM"), used mainly to orchestrate VMs and make policing decisions for all the qubes within the Zone. This Admin qube can be in either "Master" or "Slave" mode, and there can only be one Admin qube running as Master across all the Zones in *one Qubes system*. - Optionally, a "local copy" of a *GUI qube* (previously referred to as the "GUI domain" or "GUIVM"). As with the Admin qube, the GUI qube runs in either Master or Slave mode. The user is expected to connect (e.g. with the RDP protocol) or log into the GUI qube that runs in Master mode (and only that one), which has the job of combining all the GUI elements exposed via the other GUI qubes (all of which must run in Slave mode). - Some technology to implement storage for the qubes running within the Zone. In the case of Qubes OS running Xen, the local disk is used to store VM images (more specifically, in Qubes 4.0 we use Storage Pools by default). In the case of a Zone composed of a cluster of Raspberry PIs or similar devices, the storage could be a bunch of micro-SD cards (each plugged into one Raspberry PI) or some kind of network storage. So far, this is nothing radically new compared to what we already have in Qubes OS, especially since we have nearly completed our effort to abstract the Qubes architecture away from Xen-specific details – an effort we code-named *Qubes Odyssey*. What *is* radically different is that we now want to allow more than one Zone to exist in a single Qubes system! In order to support multiple Zones, we have to provide transparent proxying of qrexec services across Zones, so that a qube need not be aware that another qube from which it requests a service resides in a different zone. This is the main reason we've introduced multiple "local" Admin qubes – one for each Zone. Slave Admin qubes are also bridges that allow the Master Admin qube to manage the whole system (e.g. request the creation of new qubes, connect and set up storage for qubes, and set up networking between qubes).
- In order to be compatible with Qubes networking, a qube should expect *one* uplink network interface (to be exposed by the management technology specific to that particular Zone), and (optionally) multiple downlink network interfaces (if it is to work as a proxy qube, e.g. VPN or firewalling qube). - Finally, a qube should expect two kinds of volumes to be exposed by the Zone-specific management stack: - one read-only, which is intended to be used as a root filesystem by the qube (the management stack might also expose an auxiliary volume for implementing copy-on-write illusion for the VM, like the `volatile.img` we currently expose on Qubes), - and one read-writable, which is specific to this qube, and which is intended to be used as home directory-like storage. This is, naturally, to allow the implementation of Qubes templates, a mechanism that we believe brings not only a lot of convenience but also some security benefits. ## GUI virtualization considerations Since the very beginning, Qubes was envisioned as a system for desktop computing (as opposed to servers). This implied that GUI virtualization was part of the core Qubes infrastructure. However, with some of the *security-optimized* management infrastructure we have recently added to Qubes OS, i.e. Salt stack integration (which significantly shrinks the attack surface on the system TCB compared to more traditional "management" solutions), the Qubes Admin API (which allows for the fine-grained decomposition of management roles), and deeply integrated features such as templates, we think Qubes Air may also be useful in some non-desktop applications, such as the embedded appliance space, and possibly even on the server/services side. In this case, it makes perfect sense to have qubes not implement GUI protocol endpoints. However, I still think that the primary area where Qubes excels is in securing desktop workflows. For these, we need GUI ~~virtualization~~ multiplexing, and the qubes need to implement GUI protocol endpoints. Below, we discuss some of the trade-offs involved here. The Qubes GUI protocol is optimized for security. This means that the protocol is designed to be extremely simple, allowing only for very simple processing on incoming packets, thus significantly limiting the attack surface on the GUI daemon (which is usually considered trusted). The price we pay for this security is the lack of various optimizations, such as on-the-fly compression, which other protocols, such as VNC and RDP, naturally offer. So far, we've been able to get away with these trade-offs, because in current Qubes releases the GUI protocol runs over Xen shared memory. DRAM is very fast (i.e. has low latency and super-high bandwidth), and the implementation on Xen smartly makes use of page *sharing* rather than memory *copying*, so that it achieves near native speed (of course with the limitation that we don't expose GPU functionalities to VMs, which might limit the experience in some graphical applications anyway). However, when qubes run on remote computers (e.g. in the cloud) or on physically separate computers (e.g. on a cluster of Raspberry PIs), we face the potential problem of graphics performance. The solution we see is to introduce a local copy of the GUI qube into each zone.
Here, we make the assumption that there should be a significantly faster communication channel available between qubes within a Zone than between Zones. For example, inter-VM communication within one data center should be significantly faster than between the user's laptop and the cloud. The Qubes GUI protocol is then used between qubes and the local GUI qube within a single zone, but a more efficient (and more complex) protocol is used to aggregate the GUI into the Master GUI qube from all the Slave GUI qubes. Thanks to this combined setup, we still get the benefit of a reasonably secure GUI. Untrusted qubes still use the Qubes secure GUI protocol to communicate with the local GUI qube. However, we also benefit from the greater efficiency of remote access-optimized protocols such as RDP and VNC to get the GUI onto the user's device over the network. (Here, we make the assumption that the Slave GUI qubes are significantly more trustworthy than other non-privileged qubes in the Zone. If that's not the case, *and* if we're also worried about an attacker who has compromised a Slave GUI qube to exploit a potential bug in the VNC or RDP protocol in order to attack the Master GUI qube, we could still resort to the fine-grained Qubes Admin API to limit the potential damage the attacker might inflict.) ## Digression on the "cloudification" of apps It's hard not to notice how the model of desktop applications has changed over the past decade or so, where many standalone applications that previously ran on desktop computers now run in the cloud and have only their frontends executed in a browser running on the client system. How does the Qubes compartmentalization model, and more importantly Qubes as a *desktop* OS, deal with this change? Above, we discussed how it's possible to move Qubes VMs from the user's local machine to the cloud (or to physically separate computers) without the user having to notice. I think it will be a great milestone when we finally get there, as it will open up many new applications, as well as remove many obstacles that today prevent the easy deployment of Qubes OS (such as the need to find and maintain dedicated hardware). However, it's important to ask ourselves how relevant this model will be in the coming years. Even with our new approach, we're still talking about classic standalone desktop applications running in qubes, while the rest of the world seems to be moving toward an app-as-a-service model in which everything is hosted in the cloud (e.g. Google Docs and Microsoft Office 365). How relevant is the whole Qubes architecture, even the cloud-based version, in the app-as-a-service model? I'd like to argue that the Qubes architecture still makes perfect sense in this new model. First, it's probably easy to accept that there will always be applications that users, both individual and corporate, will prefer (or be forced) to run locally, or at least on trusted servers. At the same time, it's very likely that these same users will want to embrace the general, public cloud with its multitude of app-as-a-service options. Not surprisingly, there will be a need to keep these workloads from interfering with each other. Some examples of payloads that are better suited as traditional, local applications (and consequently within qubes) are MS Office for sensitive documents, large data-processing applications, and… networking and USB drivers and stacks. The latter things may not be very visible to the user, but we can't really offload them to the cloud.
We have to host them on the local machine, and they present a huge attack surface that jeopardizes the user's other data and applications. What about isolating web apps from each other, as well as protecting the host from them? Of course, that's the primary task of the Web browser. Yet, despite vendors' best efforts, browser security measures are still being circumvented. Continued expansion of the APIs that modern browsers expose to Web applications, such as WebGL, suggests that this state of affairs may not significantly improve in the foreseeable future. What makes the Qubes model especially useful, I think, is that it allows us to put the whole browser in a container that is isolated by stronger mechanisms (simply because Qubes does not have to maintain all the interfaces that the browser must) and is managed by Qubes-defined policies. It's rather natural to imagine, e.g. a Chrome OS-based template for Qubes (perhaps even a unikernel-based one), from which lightweight browser VMs could be created, running either on the user's local machine, or in the cloud, as described above. Again, there will be pros and cons to both approaches, but Qubes should support both – and mostly seamlessly from the user's and admin's points of view (as well as the Qubes service developer's point of view!). ## Summary Qubes Air is the next step on our roadmap to making the concept of "Security through Compartmentalization" applicable to more scenarios. It is also an attempt to address some of the biggest problems and weaknesses plaguing the current implementation of Qubes, specifically the difficulty of deployment and virtualization as a single point of failure. While Qubes-as-a-Service is one natural application that could be built on top of Qubes Air, it is certainly not the only one. We have also discussed running Qubes over clusters of physically isolated devices, as well as various hybrid scenarios. I believe the approach to security that Qubes has been implementing for years will continue to be valid for years to come, even in a world of apps-as-a-service.
true
true
true
The Qubes OS project has been around for nearly 8 years now, since its original announcement back in April 2010 (and the actual origin date can be traced back to November 11th, 2009, when an initial email introducing this project was sent within ITL internally). Over these years Qubes h...
2024-10-12 00:00:00
2018-01-22 00:00:00
https://www.qubes-os.org…/qubes-cloud.png
article
qubes-os.org
Qubes OS
null
null
20,508,126
https://www.sciencedaily.com/releases/2019/07/190722182126.htm
Widespread aspirin use despite few benefits, high risks
null
# Widespread aspirin use despite few benefits, high risks - Date: July 22, 2019 - Source: Beth Israel Deaconess Medical Center - Summary: Nearly 30 million Americans older than 40 take aspirin daily to prevent cardiovascular disease. More than 6 million Americans take aspirin daily without a physician's recommendation. Nearly half of Americans more than 70 years of age without cardiovascular disease, an estimate of nearly 10 million people, take aspirin daily -- despite current guidelines against this practice. Medical consensus once supported daily use of low dose aspirin to prevent heart attack and stroke in people at increased risk for cardiovascular disease (CVD). But in 2018, three major clinical trials cast doubt on that conventional wisdom, finding few benefits and consistent bleeding risks associated with daily aspirin use. Taken together, the findings led the American Heart Association and American College of Cardiology to change clinical practice guidelines earlier this year, recommending against the routine use of aspirin in people older than 70 years or people with increased bleeding risk who do not have existing cardiovascular disease. Aspirin use is widespread among groups at risk for harm, including older adults and adults with peptic ulcers -- painful, bleeding-prone sores in the lining of the stomach that affect about one in ten people. In a research report published today in *Annals of Internal Medicine*, researchers from Beth Israel Deaconess Medical Center (BIDMC) report on the extent to which Americans 40 years old and above use aspirin for primary prevention of cardiovascular disease. "Although prior American Heart Association and American College of Cardiology guidelines recommended aspirin only in persons without elevated bleeding risk, the 2019 guidelines now explicitly recommend against aspirin use among those over the age of 70 who do not have existing heart disease or stroke," said senior author Christina C. Wee, MD, MPH, a general internist and researcher at BIDMC and Associate Professor of Medicine at Harvard Medical School. "Our findings suggest that a substantial portion of adults may be taking aspirin without their physician's advice and potentially without their knowledge." Using data from the 2017 National Health Interview Survey (NHIS), a nationally representative survey of U.S. households conducted before the release of the new guidelines, Wee and colleagues characterized aspirin use for primary prevention of CVD. The team found that about a quarter of adults aged 40 years or older without cardiovascular disease -- approximately 29 million people -- reported taking daily aspirin for prevention of heart disease. Of these, some 6.6 million people did so without a physician's recommendation. Concerningly, nearly half of adults 70 years and older without a history of heart disease or stroke reported taking aspirin daily. The authors noted that a history of peptic ulcer disease -- another contraindication for the routine use of aspirin -- was not significantly associated with lower aspirin use as one would have expected. "Our findings show a tremendous need for health care practitioners to ask their patients about ongoing aspirin use and to advise them about the importance of balancing the benefits and harms, especially among older adults and those with prior peptic ulcer disease," said lead author Colin O'Brien, MD, a senior internal medicine resident at BIDMC and fellow at Harvard Medical School.
Coauthor Stephen Juraschek, MD, PhD, a primary care physician at BIDMC, cautions that "these findings are applicable to adults who do not have a history of cardiovascular disease or stroke. If you are currently taking aspirin, discuss it with your doctor to see if it is still needed for you." Juraschek, who is also an Assistant Professor at Harvard Medical School, is supported by grant K23HL135273 from the National Heart, Lung and Blood Institute of the National Institutes of Health. **Story Source:** Materials provided by **Beth Israel Deaconess Medical Center**. *Note: Content may be edited for style and length.* **Journal Reference**: - Colin W. O'Brien, Stephen P. Juraschek, Christina C. Wee. **Prevalence of Aspirin Use for Primary Prevention of Cardiovascular Disease in the United States: Results From the 2017 National Health Interview Survey**. *Annals of Internal Medicine*, 2019; DOI: 10.7326/M19-0953
true
true
true
Nearly 30 million Americans older than 40 take aspirin daily to prevent cardiovascular disease. More than 6 million Americans take aspirin daily without physician's recommendation. Nearly half of Americans more than 70 years of age without cardiovascular disease, an estimate of nearly 10 million people, take aspirin daily -- despite current guidelines against this practice.
2024-10-12 00:00:00
2024-10-12 00:00:00
https://www.sciencedaily…cidaily-icon.png
article
sciencedaily.com
ScienceDaily
null
null
12,091,173
http://a.singlediv.com/
A Single Div: a CSS drawing project by Lynn Fisher
null
A Single Div: a CSS drawing project by Lynn Fisher, 2014-2019
true
true
true
A CSS drawing experiment to see what’s possible with a single div.
2024-10-12 00:00:00
2014-01-01 00:00:00
null
null
null
null
null
null
4,235,656
http://lostinjit.blogspot.com/2012/07/call-for-new-open-source-economy-model.html
Call for a new open source economy model
Maciej Fijalkowski
**DISCLAIMER:** This post is incredibly self-serving. It only makes sense if you believe that open source is a cost-effective way of building software and if you believe my contributions to the PyPy project are beneficial to the ecosystem as a whole. If you would prefer me to go and "get a real job", you may as well stop reading here. There is a lot of evidence that startup creation costs are plummeting. The most commonly mentioned factors are the cloud, which brings down hosting costs, Moore's law, which does the same, ubiquitous internet, platforms and open source. Putting all the other things aside, I would like to concentrate on open source today. Not because it's the most important factor -- I don't have enough data to support that -- but because I'm an open source platform provider working on PyPy. Open source is cost-efficient. As Alex points out, PyPy is operating on a fraction of the funds and manpower that Google is putting into V8 or Mozilla into {Trace,Jaeger,Spider}Monkey, yet you can list all three projects in the same sentence without hesitation. You would call them "optimizing dynamic language VMs". The same can be said about projects like GCC. Open source is also people - there is typically one or a handful of individuals who "maintain" the project. Those people are employed in a variety of professions. In my experience they either work on their own or for corporations (and corporate interests often take precedence over open source software), have shitty jobs (which don't actually require you to do a lot of work) or scramble along like me or Armin Rigo. Let me step back a bit and explain what I do for a living. I work on NumPy, which has managed to generate slightly above $40,000 in donations so far. I do consulting about optimization under PyPy. I look for other jobs and do random stuff. I think I've been relatively lucky. Considering that I live in a relatively cheap place, I can dedicate roughly half of my time to other pieces of PyPy without too much trouble. That includes stuff that no one else cares about, like performance tools, buildbot maintenance, release management, making json faster, etc., etc. Now, the main problem for me with regards to this lifestyle is that you can only gather donations for "large" and "sellable" projects. How many people would actually donate to "reviews, documentation and random small performance improvements"? The other part is that predicting what will happen in the near future is always very hard for me. Will I be able to continue contributing to PyPy or will I need to find a "real job" at some point? I believe we can come up with a solution that both creates a reasonable economy that makes working on open source a viable job and comes with relatively low overheads. Gittip and Kickstarter are recent additions to the table and I think both fit very well into some niches, although not particularly the one I'm talking about. I might not have the solution, but I do have a few postulates about such an economic model: - It cannot be project-based (like Kickstarter); in my opinion, it's much more efficient just to tell individuals "do what you want". In other words -- no strings attached. It would be quite a lot of admin to deliver each simple feature as a Kickstarter project. This can be more in the shades of gray -- "do stuff on PyPy" is, for example, a possible project that's vague enough to make sense. - It must be democratic -- I don't think a government agency or any sort of intermediate panel should decide.
- It should be possible for both corporations and individuals to donate. This is probably the major shortcoming of Gittip. - There should be a cap, so we don't end up with a Hollywood-ish system where the privileged few make lots of money while everyone is struggling. Personally, I would like to have a cap even before we achieve this sort of amount, at (say) 2/3 of what you could earn at a large company. - It might sound silly, but there can't be a requirement that a recipient must reside in the U.S. It might sound selfish, but this completely rules out Kickstarter for me. The problem is that I don't really have any good solution -- can we make startups donate 5% of their future exit to fund individuals who work on open source with no strings attached? I heavily doubt it. Can we make VCs fund such work? The potential benefits are far beyond their event horizon, I fear. Can we make individuals donate enough money? I doubt it, but I would love to be proven wrong. Yours, leaving more questions than answers, fijal
this comic strip from today sounds very appropriate to the first part of the post: http://www.dilbert.com/strips/comic/2012-07-12/
I know this doesn't address your core concerns but could the Software Freedom Conservancy handle the money from Kickstarter to make it a viable option for you?
I think there is a need for developers to set specific goals and intentions. Giving money to "developing pypy" is nebulous and hard for me to support. Managing releases; supporting the user community through bug triaging, IRC, mailing lists, etc; increasing performance of the parser, JIT, GC, GIL, etc; getting 3rd party libraries (Numpy, Django, database clients, etc) working on pypy; these are all goals I'd happily support. They all fall under "developing pypy" but give me a much better idea of what I'm actually funding. You could kind of convert those goals into Kickstarter projects, but that's a ton of overhead per "project" and maps poorly. Hopefully there's a better way like you outline that reduces the overhead of funding while still allowing funders visibility and confidence in what/who they're funding.
With regards to open source and money, you might be interested in reading about fairware, an experiment (still ongoing) I started nearly two years ago: http://open.hardcoded.net/about/ http://www.hardcoded.net/articles/fairware-it-kinda-works.htm http://www.thepowerbase.com/2012/07/monetizing-open-source-with-fairware-interview-with-virgil-dupras/ It is project-based (well, not as in task based, but rather "product-based"), so it doesn't work with your postulates, but I'm thinking you might rule out project-based systems too fast. Of course, it might be very hard to get people to directly contribute to an eventual fairware PyPy, but what if end user projects using PyPy would pledge, let's say, 3% of their contributions towards paying off timelogs in the PyPy project? Then it could work.
You raise good questions, and I have not yet seen any easy answers.
It seems to me that you are asking for a peer-based "sponsorship" site. This sounds reasonable in principle, but the devils are all in the details. I think Gittip and Kickstarter have chosen the models they did because of the nature of the problem. If people contributing money have no expectation of receiving concrete, useful work product, then it's just a tipping site like Gittip.
If people contributing money receive some assurances of useful deliverables, then someone has to do the evaluation of whether or not payment should be released. This then makes it very "project-based", like Kickstarter. "in my opinion, it's much more efficient just to tell individuals "do what you want". In other words -- no strings attached." While I don't know that I would say it's more "efficient", I do think that one is more likely to obtain superior results by giving people the freedom to explore and solve a problem in their own way. However, there are very, very few people in the world that will give up their money to others just to let those other people "explore" and "play". Historically, artistic patronage was really reserved for the very wealthy and the very artistically talented. Because I think software is a craft, I think it *can* in fact be funded via a patronage model. However, the bottom line is that the patrons have to be able to enjoy the fruits of the artist's labor. In your case, since you are building low-level, developer-facing tools, it's unlikely that you will find an individual sponsor who will support open exploration. (Contrast this with, say, someone who does digital art or is making a cool indie game.) Which means you have to look at businesses and organizations for sponsorship of your type of technical projects. In these cases, the people giving the money almost always have to be accountable for how that money is spent. You are unlikely to find sponsorship of open-ended exploration, unless the organization is both very small and very well-capitalized. The only effective approach I've seen is to essentially be your own sponsor. Do the consulting, product dev, or whatnot to raise funds for concrete work for paying customers, then use that money to fund your own explorations. On the basis of those explorations, build the next set of products or tools with which you can then bootstrap the next cycle. "Now, the main problem for me with regards to this lifestyle is that you can only gather donations for "large" and "sellable" projects." The core issue is that you have to balance your own technical interests vs. how others value your time and expertise. While some open-source hackers have managed to avoid or "opt out" of the need to answer this question, most end up in one of a limited set of stable equilibrium positions, especially as they get older and have more financial obligations. To be completely honest, there is nothing in your list of requirements that fundamentally precludes a product-based approach. You just have to be more creative with the pricing and licensing. For instance, you can create ransom-ware, which is proprietary or dual-licensed until some fixed amount of monies have been raised, at which point it gets re-released as LGPL/MIT. You can do temporally licensed things which charge a premium for or dual-license the latest version, and release the previous version for free. Etc. (Then there are also more traditional OSS approaches of "support-ware" which charges for support and bugfixes, or "consultant-ware" which is stuff that looks pretty useful, but ultimately funnels users into needing to engage your consulting business in order to employ your project at larger scale. These two, I think, are less optimal than the first two I mentioned, because they mis-align the incentives of the users and developers.)
Good points Peter.
My point is that typically I ask people who *already benefit* from my software, so they have a vested interest in ongoing maintenance or more new stuff. Open Source is very much reputation based, so being able to attract donations (also from companies) is mostly based on the stuff I've already done, not what I promise.
I think open source has a lot to learn from commercial endeavors. There are no rules and there is nothing stopping a marketing and sales-savvy development team from cashing in. Throwing up yet-another-post on HN is a good start, but you need a sales funnel. You also prob need $10k to get there.
I keep thinking that we should have an App Store for open source software.
There are many startups and other organizations willing to sponsor open source work, but how is the money going to get from them to you? How are they going to find you or the projects you work on among the thousands of packages they use? They could go for easy targets like Python/PSF or Debian/SPI, but those organizations usually don't sponsor development, only infrastructure, meetings, sprints, conferences, etc. There are social and legal reasons for that. Once you start paying developers, there is a serious risk of causing disputes within the project and alienating fringe contributors. And then if you pay developers, you might lose your non-profit status, you have to pay taxes and insurance and all that.
You say you don't want government involved, but I think things could move that way. Many functions that grew out of volunteer-driven efforts, such as social services and welfare, are being taken over by the government to some degree. I also know people who get sponsorship for non-mainstream open source development from some fairly broad government programs, so maybe it's something to look into more.
true
true
true
DISCLAIMER: This post is incredibly self-serving. It only makes sense if you believe that open source is a cost-effective way of building ...
2024-10-12 00:00:00
2012-07-12 00:00:00
null
null
blogspot.com
lostinjit.blogspot.com
null
null
13,657,277
https://en.wikipedia.org/wiki/PXL-2000
PXL2000 - Wikipedia
null
# PXL2000

| | |
|---|---|
| Variant models | 3300 and 3305; PixelVision, Sanwa Sanpix1000, KiddieCorder, and Georgia |
| Manufacturer | Fisher-Price |
| Introduced | 1987[citation needed] |
| Batteries | 6 x AA battery |

The **PXL2000**, or **Pixelvision**, was a toy black and white video camera, introduced by Fisher-Price in 1987 at the International Toy Fair in Manhattan, which could record sound and images onto an inexpensive Walkman-style compact audio cassette.[1] It was on the market for one year, with about 400,000 units produced.[2]: 20 After that one year it was pulled from the market, but rediscovered in the 1990s by low-budget filmmakers who appreciated the grainy, shimmering, monochrome footage produced by the unit, and the way in which its lens allowed the user to photograph a subject an eighth of an inch away from the camera and pull back to a long shot without manipulating a dial, while keeping both the background and the foreground in focus.[1] It is also appreciated by collectors, artists, and media historians, and has been used in major films and spawned dedicated film festivals.[2]

## Development

The PXL2000 was created by a team of inventors led by James Wickstead. He sold the invention rights to Fisher-Price in 1987 at the American International Toy Fair in Manhattan.[3]

## Design

The PXL2000 consists of a simple aspherical lens, an infrared filter, a CCD image sensor, a custom ASIC (the Sanyo LA 7306M), and an audio cassette mechanism. This is mounted in a plastic housing with a battery compartment and an RF video modulator selectable to either North American television channel 3 or 4. It has a plastic viewfinder and some control buttons.

The system stores 11 minutes of video and sound on a standard audio cassette tape by moving the tape at nearly nine times normal cassette playback speed. It records at roughly 16.875 inches (428.6 mm) per second, compared to a standard cassette's speed of 1.875 inches (47.6 mm) per second, on a C90 CrO2 (chromium dioxide) cassette. In magnetic tape recording, the faster the tape speed, the more data can be stored per second. The higher speed is necessary because video requires a wider bandwidth than standard audio recording. The PXL2000 records the video information on the left audio channel of the cassette, and the audio on the right.[4]

In order to reduce the amount of information recorded to fit within the narrow bandwidth of the sped-up audio cassette, the ASIC generates slower video timings than conventional TVs use. It scans the 120 × 90 pixel CCD 15 times per second, feeding the results through a filtering circuit, and then to a frequency modulation circuit driving the left channel of the cassette head, as well as to an ADC, which creates the final image for viewing.[citation needed]

For playback and view-through purposes, circuits read image data from either a recorded cassette or the CCD and fill half a digital frame store at the PXL reduced rate, while scanning the other half of the frame store at normal NTSC rates. Since each half of the frame store includes only 10800 pixels in its 120 × 90 array, the same as the CCD, the display resolution was deemed to be marginal, and black borders were added around the picture, squashing the frame store image content into the middle of the frame, preserving pixels that would otherwise be lost in overscan.
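As a quick sanity check on the figures above, the arithmetic works out as follows (an illustrative sketch using only the numbers quoted in this section, not specifications from any service manual):

```python
# Tape speed: the PXL2000 runs the cassette ~9x faster than normal playback.
standard_speed_ips = 1.875   # standard cassette speed, inches per second
pxl_speed_ips = 16.875       # PXL2000 recording speed, inches per second
print(pxl_speed_ips / standard_speed_ips)    # 9.0 ("nearly nine times")

# Video data rate implied by the reduced scan timings.
pixels_per_frame = 120 * 90                  # CCD resolution quoted above
frames_per_second = 15                       # scan rate quoted above
print(pixels_per_frame)                      # 10800 pixels per frame
print(pixels_per_frame * frames_per_second)  # 162000 pixels per second on the left channel
```

This is why the sped-up tape is needed: even at 15 frames per second and 10,800 pixels per frame, the video signal demands far more bandwidth than a cassette moving at normal audio speed can carry.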
An anti-aliasing low-pass filter is included in the final video output circuit.[citation needed]

## Marketing

The market success of the PXL2000 was ultimately quite low with its targeted child demographic, in part due to its high pricing. Introduced at US$179 (equivalent to about $480 in 2023) and later reduced to $100 (equivalent to about $270 in 2023), it was expensive for a child's toy but affordable by amateur video artists. The PXL2000 was produced in two versions: model #3300 at $100[5][6] with just the camera and necessary accessories; and #3305 at $150[7] adding a portable black and white television monitor with a 4.5-inch (110 mm) diagonal screen. Extra accessories were sold separately, such as a carrying case. It was also produced as Fisher-Price PixelVision, Sanwa Sanpix1000, KiddieCorder, and Georgia.[8]

## Revival

The PXL2000 has received a minor revival in popularity since the 1990s among filmmakers, due to its point-and-shoot simplicity and low-grade aesthetic. Because the unit is degradable and obsolete, its use is aligned with a certain romanticized mortality, unfit for serious mainstream appropriation. Erik Saks wrote this: "Each time an artist uses a PXL2000, the whole form edges closer to extinction."[2]: 93

In 1990, Pixelvision enthusiast Gerry Fialka founded PXL THIS, a film festival dedicated to projects shot exclusively on the PXL2000.[9][10][11][12][13] The festival continues to occur annually in Los Angeles, California, usually at the Beyond Baroque Literary Arts Center[14] and the Echo Park Film Center,[15] with Fialka continuing as organizer and curator. Although the festival operates without a budget,[16] it still manages to tour many locations,[2]: 7 including the San Francisco Cinematheque[14] and Boston's MIT campus.[2]: 7 Festival entries, oral history interviews, and other relevant materials donated by Fialka are being processed into the Performing Arts and Moving Image Archives at the University of California, Santa Barbara Library.[17] Recalling the PXL2000's initial promise of accessibility, Fialka's vision includes accepting submissions indiscriminately, juxtaposing the works of established artists with those of amateurs and children.[2]: 71

PXL2000 cameras have been used occasionally in professional filmmaking, with camera modifications to output composite video, enabling it to interface with an external camcorder or VCR.[18]

## Productions

The PXL2000 was used by Richard Linklater in his 1990 debut film, *Slacker*. A roughly two-minute performance art sequence within the film is shot entirely in PixelVision.[citation needed]

Peggy Ahwesh's *Strange Weather* (1993), which follows several crack cocaine addicts in Florida, was shot entirely on a PXL2000. This video relies heavily on the camera's portability to maintain an intimate presence.[citation needed]

Video artist Sadie Benning is among the most critically acclaimed pioneers of the PXL2000; one was given to them by their father James Benning around the age of 15. Benning's early video diary works gained popularity in the artist market, earning them a lasting reputation as an innovator, with an important presence in video art.[19]

Michael Almereyda used the camera for several of his films.
*Another Girl Another Planet* (1992) and his short *Aliens* (1993) were shot with it entirely; it was used for point-of-view shots of the title character in *Nadja* (1994), and it was used by the title character to make video diaries in *Hamlet* (2000).[citation needed]

The camera has been used for several music videos, including "Mote" by Sonic Youth and "Black Grease" by the Black Angels.[citation needed]

Artist John Humphrey's 2003 video, *Pee Wee Goes to Prison*, was shot on a PXL2000, employing a cast of dolls and other toys to stage the imaginary trial, incarceration, and eventual pardoning (by newly-elected president Jesse Ventura) of Pee-wee Herman for the sale of Yohimbe.[citation needed]

The PXL2000 was used by the characters Maggie (Anne Hathaway) and Jamie (Jake Gyllenhaal) in the 2010 film, *Love & Other Drugs*, although the black and white footage from the camera is shown at full film resolution.[20]

In 2018, Toronto filmmaker Karma Todd Wiseman used a PXL2000 to shoot key scenes, processing the footage with enhanced monochrome. The custom PXL2000 camera was fitted with windshield mount suction cups and painted with the red and white paint scheme of the Canadian flag.[citation needed]

The PXL2000 was used by the characters Melody and Jess in the show *Archive 81*.[citation needed]

## References

- Revkin, Andrew C. (January 22, 2000). "As Simple as Black and White; Children's Toy Is Reborn as an Avant-Garde Filmmaking Tool". *The New York Times*. Archived from the original on July 29, 2023. Retrieved July 29, 2023.
- McCarty, Andrea Nina (2005). *Toying with Obsolescence: Pixelvision Filmmakers and the Fisher Price PXL 2000 Camera*. Thesis (S.M.), Massachusetts Institute of Technology, Dept. of Comparative Media Studies.
- Revkin, Andrew C. (January 22, 2000). "As Simple as Black and White; Children's Toy Is Reborn as an Avant-Garde Filmmaking Tool". *The New York Times*. Archived from the original on July 29, 2023. Retrieved July 29, 2023.
- US Patent 4875107.
- "FS: Fisher-Price PXL-2000 Pixelvision Camcorder". *groups.google.com*.
- "Technical Info on Fisher-Price Camcorder??". *groups.google.com*.
- "PixelVision Camera". *groups.google.com*.
- "Pixelvision Mystery - More PXL-2000's Than We Thought? - Sanpix 1000". *Retro Thing*.
- Cardy, Tom (November 19, 2008). "Passion for the pixel" (PDF). *The Dominion Post (Wellington)*. p. D1. Retrieved May 29, 2021.
- Smith, Lynn (November 9, 2002). "Coming to you in glorious pixelvision". *Los Angeles Times*.
- Willis, Holly (Nov 14, 2007). "'PXL THIS' 17: The Bad and the Beautiful". *LA Weekly*.
- Willis, Holly (Aug 27, 1999). "Strange days: Showcases burgeon for every taste". *Variety*.
- Revkin, Andrew C. (January 22, 2000). "As Simple as Black and White; Children's Toy Is Reborn as an Avant-Garde Filmmaking Tool". *The New York Times*.
- "An Invention Without a Future: Greatest Hits of PXL THIS" (PDF). *San Francisco Cinematheque*. February 10, 2008.
- "PXL THIS 28". *Echo Park Film Center*. Retrieved July 18, 2021.
- "Annual Events: PXL THIS". *Virtual Venice*. Archived from the original on May 7, 2006. Retrieved July 7, 2021.
- "Gerry Fialka 'PXL THIS' Archive (PA Mss 231)". *University of California, Santa Barbara Library*. 15 June 2021. Retrieved July 7, 2021.
- Fox, Claire; Martin, Nicole (January 1, 2021). "Preserving Pixelvision: Image Vulnerability and the Early Video Works of Sadie Benning". *Feminist Media Histories*. **7** (1). University of California Press: 40–60. doi:10.1525/fmh.2021.7.1.40. S2CID 234170231.
- Chris O'Falt (August 9, 2018). "Pixelvision: How a Failed '80s Fisher-Price Toy Camera Became One of Auteurs' Favorite '90s Tools". *IndieWire*.
- Movie reviews: Love & Other Drugs

## External links

- Fisher-Price PXL2000 in the Total Rewind museum of Vintage Video
- The original Manual to the PXL2000 - Manuals
- PXL2000 Forum with Camera Modification Guides
- US Patent No. 5010419, Apparatus for storing video signals on audio cassette
true
true
true
null
2024-10-12 00:00:00
2004-10-10 00:00:00
https://upload.wikimedia…IMG_20200524.jpg
website
wikipedia.org
Wikimedia Foundation, Inc.
null
null
8,332,801
http://www.science20.com/eye_brainstorm/tronlegacy_and_isomorphisms
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
25,657,618
https://www.youtube.com/watch?v=y7bP0u0jQRQ
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
27,490,230
https://hexdocs.pm/remix_icon_ex
Documentation
null
null
true
true
false
null
2024-10-12 00:00:00
null
null
null
null
RemixIconEx v0.5.0
null
null
29,281,449
https://httpwg.org/http-extensions/draft-ietf-httpbis-message-signatures.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
8,075,232
http://bins.ribbon.co
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
9,524,180
http://pixbuf.com
null
null
null
true
true
false
null
2024-10-12 00:00:00
null
null
null
null
null
null
null
30,792,534
https://re-search.xyz/writing/mapping-the-new-world-towards-a-new-information-engine
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
10,321,440
http://blog.statuspage.io/5-customer-service-blogs-you-should-be-reading
Statuspage - Atlassian Blog
null
# Statuspage

Dear IRS, We imagine that Tax Day during the best of times is extremely hectic, so we can't even imagine...

A simplified management portal is coming soon.
true
true
true
Downtime communication tips and product updates from Statuspage.
2024-10-12 00:00:00
2020-03-23 00:00:00
null
article
atlassian.com
Work Life by Atlassian
null
null
26,545,973
https://lite.cnn.com/en/article/h_bdd0505f6794e2e4da352ac8abe6eebe
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
6,127,101
https://www.rethinkdbcloud.com/
Reliable realtime database for Heroku
null
## Realtime Database on Heroku

Our Heroku Add-on is built on RethinkDB, the open source database for the realtime web.

## One Click Setup

We set up your RethinkDB instance in seconds, so you can focus on your app.

## Scale with Ease

Conveniently scale your database up and down with a single command. Scale instantly and seamlessly, without losing any data.

## Add-on is in Beta!

RethinkDB Cloud is a Heroku Marketplace Add-on that provides RethinkDB realtime databases as a service. The Add-on is currently in beta and can be provisioned for free. Check out our Add-on page for more information.

## News & Updates

Sign up to stay informed about news and updates.
true
true
true
RethinkDB Cloud provides a fast and reliable realtime database for Heroku. Conveniently scale your database up and down with a single command. Scale instantly and seamlessly, without losing any data.
2024-10-12 00:00:00
2023-01-01 00:00:00
https://www.rethinkdb.cloud/social.png
website
rethinkdb.cloud
RethinkDB Cloud
null
null
62,783
http://www.iht.com/articles/2007/10/03/news/edsputnik.php
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
4,376,465
http://www.kalzumeus.com/2012/08/13/doubling-saas-revenue/
Doubling SaaS Revenue By Changing The Pricing Model
null
Most technical founders abominably misprice their SaaS offerings to start out. I’m as guilty of this as anyone, so I wrote up my observations about un-borking this as The Black Arts of SaaS pricing a few months ago. (It went out to my mailing list — sign up and you’ll get it tomorrow.) A few companies implemented advice in there to positive effect, and one actually let me write about it, so here we go:

## Aligning Price With Customer Value

Server Density does server monitoring to a) give you peace of mind when all is well and b) alert you really darn quickly when all isn’t. (Sidenote: If you run a software business, you absolutely need some form of server monitoring, because the application being down costs you money and trust. I personally use Scout because of great Ruby integration options. They woke me up today, as a matter of fact — apparently I had misconfigured a cronjob last night.)

Anyhow, Server Density previously used a pricing system much beloved by technical founders: highly configurable pricing. Why do geeks love this sort of pricing? Well, on the surface it appears to align price with customer success (bigger customers pay more money), it gives you the excuse to have really fun widgets on your pricing page, and it seems to offer low-cost entry options which then scale to the moon. I hate, hate, **hate** this pricing scheme. Let me try to explain the pricing in words so that you can understand why:

- It costs $11 per server plus $2 per website.
- Except if you have more than 10 servers it costs $8 per server plus $2 per website.
- Except if you have more than 50 servers it costs $7 per server plus $2 per website.

**This is very complicated and does not align pricing with customer success.** Why not?

## Pricing Scaling Linearly When Customer Value Scales Exponentially Is A Poor Decision

Dave at Server Density explained to me that their core, sweet-spot customer has approximately 7 servers, but that the per-server pricing was chosen to be cheap to brand-new single-server customers. They were very concerned with competing with free. Regardless of whether this wins additional $13 accounts, it clearly under-values the service for 7 server accounts, because their mission-critical server monitoring software in charge of paging the $10,000 a month on-call sysadmin to stop thousands of dollars of losses per minute *only costs $79*. You don’t get 7x the value from server monitoring if you increase your server fleet by 7x, you get (at least) 50x the value. After you get past hobby project you quickly get into the realms of a) serious revenue being directly dependent on the website, b) serious hard costs like fully-loaded developer salaries for doing suboptimal “cobble it together ourselves from monit scripts” solutions, and c) serious career/business reputational risks if things break.

Let’s talk about those $13 accounts for a moment. Are $13 accounts for server monitoring likely to be experienced sysadmins doing meaningful work for businesses who will solve their own problems and pay without complaint every month? No, they’re going to be the worst possible pathological customers. They’ll be hobbyists. Their servers are going to break all the time. They’re going to misconfigure Server Density and then blame it for their server breaking all the time. **They’ll complain that Server Density costs infinity percent more than OSS**, because they value their own time at zero, not having to e.g. pay salaries or account for a budget or anything.
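To see how convoluted that is in practice, here is a minimal sketch of the tiered logic as I read it from the bullets above (my own illustration, not Server Density's actual billing code):

```python
def monthly_price(servers: int, websites: int) -> int:
    """Variable pricing as described above: $11/server, dropping to $8
    above 10 servers and $7 above 50, plus $2 per website."""
    if servers > 50:
        per_server = 7
    elif servers > 10:
        per_server = 8
    else:
        per_server = 11
    return servers * per_server + websites * 2

print(monthly_price(1, 1))  # 13 -- the pathological hobbyist account
print(monthly_price(7, 1))  # 79 -- the 7-server sweet-spot customer from above
```

Three branches and two inputs just to quote a price, and the customer has to know both numbers before they can even read the pricing page.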
My advice to Dave was that Server Density switch to a SaaS pricing model with 3~4 tiers segmented loosely by usage, and break with the linear charging. The advantages:

**Trivial to buy** for non-technical stakeholders: name the plans correctly and they won’t even need to count servers to do things correctly. (“We’re an enterprise! Of course we need the Enterprise plan!”)

**Predictable pricing**. You know that no matter what the sysadmins do this month, you’re likely to end up paying the same amount.

**Less decisions.** Rather than needing to do capacity planning, gather data internally, and then use a custom-built web application to determine your pricing, you can just read the grid and make a decision in 30 seconds.

**More alignment with business goals.** Unless you own a hosting company, “number of servers owned” is not a metric your CEO cares about. It only tends to weakly proxy revenue. Yes, in general, a company with 10 servers tends to have more commercial success than a company with 1 server, but there are plenty of single-server companies with 8 figures of revenue.

(Speaking of custom-built web applications to determine pricing, the best product with the worst pricing strategy is Heroku. Enormously successful, but I’m pretty sure they could do better, and have been saying so for years. All Heroku would have to do is come up with four tiers of service, attach reasonable dynos/workers/databases to them, and make that the core offering for 90% of new accounts. You could even keep the actual billing model entirely intact: make the plans an abstraction over sensible defaults picked for the old billing model, and have the Spreadsheet Samurai page somewhere where power users and the sales team can find it.)

## Ditching Linear Scaling In Favor Of A Plan Model

After thinking on my advice, Server Density came up with this redesign:

**I love this.**

- The minimum buy-in for the service is now $99 a month, which will segment away customers who are less serious about their server uptime.
- You now only need to make one decision, rather than needing to know two numbers (which you might not have access to at many of their customers).
- The segmentation on users immediately triples the price for serious businesses using the service, irrespective of the size of their server fleet. This is good because **serious businesses generate a lot of money no matter how many servers they have**.
- Phone support will be an absolute requirement at many companies, and immediately pushes them into the $500 a month bucket.

My minor quibbles:

- I still think it is underpriced at the top end. Then again I say that about everything.
- Did you notice the *real* Enterprise pricing? (Bottom right corner, titled “More than 100?”) Like many SaaS services, Server Density will quote you a custom plan if you have higher needs. Given that these customers are extraordinarily valuable to the business both for direct sales and for social proof, I might make this one a little more prominent.

## Results From Testing: 100% Increase In Revenue

Server Density implemented an A/B test of the two pricing strategies using VWO. At this point, there’s someone in the audience saying “That’s illegal!” That person is just plain wrong. There is no carbon in a water molecule, and price testing is not illegal. What if the fact of the price testing were discovered? **Not really that problematic**: you can always offer to switch someone to the most advantageous pricing model for them.
Since most existing customers would pay less under variable pricing than they would under the above pricing grid, simply grandfathering them in on it removes any problem from people who have an actual stake in the business. For new customers who get the new pricing grid but really, really feel that they should be a $13 a month account, you can always say “Oh, yep, we were testing. I’ll give you the $13 pricing if you want it.” (David from Server Density says that this is in fact what they did, three times, and had no lasting complaints.) Most customers will not react like this because **most customers do not care about price**. (Those that do are disproportionately terrible customers. To quote David from Server Density, “We had the occasional complaint that pricing was too high but this was from users with either just a single server or very low cost VPSs where the cost of monitoring (even at $10/m) was more than the cost of the server.”)

Anyhow, where were we? Oh yeah, making Server Density piles of money. They requested that I not disclose the interval the test was conducted over, to avoid anyone reasoning back to their e.g. top-line revenues, but were OK with publishing exact stats otherwise.

**Variable pricing**: 150 free trial signups / 2161 visitors
**Pricing plans**: 113 free trial signups / 2153 visitors

At this point, variable pricing is **clobbering** the pricing plans (they get 25% fewer signups and pricing plans being inferior at maximizing trials has a confidence over 99%)… but let’s wait until this cohort reaches the end of the trial period, shall we? Server Density does not make credit card capture mandatory. (I might suggest revising that decision as another test.)

**Variable pricing**: 23 credit cards added / 2161 visitors
**Pricing plans**: 18 credit cards added / 2153 visitors

That’s a fairly similar attachment rate for credit cards. But collecting credit cards doesn’t actually keep the lights on — the important thing is how much you successfully charge them, and that is highly sensitive to the prices.

**Variable pricing**: $420 monthly revenue added / 2161 visitors (~$0.19 a visitor)
**Pricing plans**: $876 monthly revenue added / 2153 visitors (~$0.41 a visitor)

**+100% revenue** (and revenue-per-visitor) for that cohort. Pretty cool.

(P.S. Mathematically inclined readers might get puzzled at the exact revenue numbers — how do you get $876 from summing $99, $299, and $499? Long story short: Server Density is a UK company and there are conversion issues from GBP to USD and back again. They distort the exact revenue numbers a wee bit, but it comes out in the wash statistically.)

## We Doubled Revenue?! Can We Trust That Result?

Visual Website Optimizer displays on the dashboard that it is 93% confident that there was indeed a difference between the two. (The reported confidence intervals are $0.19 +/- 0.08 and $0.41 +/- $0.16. How to read that? Well, draw your bell curves and do some shading, but for a qualitative description, “Our best guess is that we doubled performance, but there’s some room for error in these approximations. What would those errors look like? Well, calculus happens, here we go: it is more likely that the true performance improvement is more than ~3x than it is that there was, in fact, no increase in performance.”)

Truth be told, I don’t know if I trust that confidence in improvement or not, because I don’t understand the stats behind it.
I understand the reported confidence intervals and what they purport to measure, I just don’t know of a defensible way to get the data to that point. The ways I’m aware of for generating confidence intervals for averages/aggregates of a particular statistic (like, say, “Average monthly revenue per visitor of all visitors who would ever sign up under the pricing plan”) all have to assume something about the population distribution. One popular assumption is “Assume normality”, but that’s known to be clearly wrong — no plausible arrangement of numbers makes X% $99, Y% $299, Z% $499 into a normal distribution. **Even in the absence of a rigorous test for statistical confidence**, though, there’s additional information that can’t be put in this public writeup which causes me to put this experiment in the “highly probable win” column. (If my Stats 102 is failing me and there’s a simple test I am neglecting, feel free to send me an email or drop a mention in the comments. One distribution-free option, the bootstrap, is sketched at the end of this piece.)

Note that since this is a SaaS business that is **monthly revenue added**. Increasing your monthly revenue from a particular cohort by $450 increases your predicted revenue over the next year by in excess of $4,000. (The calculation is dependent on your churn rate. I’m just making a wild guess for Server Density’s, biased to be conservative and against their interests.)

Now, in the real world, SaaS customers’ value can change over time via plan upgrades and downgrades, and one would ideally collect many months of cohort analyses to see how things shook out. Unfortunately, in the equally real world which we actually live in, sometimes we have to reason from incomplete data. If you saw a win this dramatic in your business and were wondering whether you could “take your winnings” now by adopting the new pricing across all new accounts, I would suggest informing that decision with what you previously know about customer behavior vis-a-vis number of servers over time. My naive guess is that once a server goes into service it gets taken out of service *quite rarely indeed* and, as a consequence, most Server Density accounts probably have roughly static value and the few that change overwhelmingly go up.

And what about the support load? Well, true to expectations, it has largely been from paid experts at larger companies, rather than from hobbyists complaining that they don’t get the moon and stars for their $13 a month. Dave was particularly impressed how many were happy to hop on a phone call to talk about requirements (which helps learn about the customer segments and develop the future development and marketing roadmaps) — meanwhile, the variable pricing customers largely a) don’t want to talk about things and b) need a password reset *right now* WTF is taking so long. Server Density expects that their plan customers will be much less aggravating to deal with in the future, but it is still early days yet and they don’t have firm numbers to back up that impression.

## Testing Pricing Can Really Move The Needle For Your Business

Virtually no one gets pricing right on the first try. (When I wrote the pricing grid for Appointment Reminder I snuck a $9 plan in there, against my own better judgment, and paid for that decision for a year. I recently nixed it and added a $199 plan instead. Both of those decisions have been nothing but win.)

Since you probably don’t have optimum pricing, strongly consider some sort of price testing. If I can make one concrete recommendation, consider more radical “packaging” restructurings rather than e.g.
keeping the same plan structure and playing around with the plan prices +/- $10. (This means that, in addition to tweaking numbers, you find some sort of differentiation in features or the consolidated offering that you can use to segment a particular group of customers into a higher plan than they would otherwise be at numerically.) For more recommendations, again, you probably want to be on my mailing list. You’ll get an email today with a link to a 45 minute video about improving your app’s first run experience, the email about SaaS pricing tomorrow, and then an email weekly or biweekly about topics you’ll find interesting. Server Density is not the only company telling me that those emails have really been worth people’s time, but if they don’t appeal to your taste, feel free to unsubscribe (or drop me an email to tell me what you’d rather read) at any time. *Disclosure*: Server Density is not a client, which is very convenient for me, because I’m not ordinarily at liberty to talk about doubling a client’s revenue.
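On the statistics question raised above: one distribution-free way to put a confidence interval on revenue-per-visitor is the bootstrap, which resamples the observed cohort instead of assuming normality. A minimal sketch follows (the plan mix here is invented for illustration; the real per-visitor data isn't published):

```python
import random

# Hypothetical cohort: most visitors convert to $0/month, a handful to one of
# the plan price points. This mix is made up; it is NOT Server Density's data.
cohort = [0] * 2148 + [99] * 4 + [499] * 1   # 2153 visitors, ~$0.42/visitor

def bootstrap_ci(data, n_resamples=2_000, alpha=0.05):
    """Percentile bootstrap confidence interval for the mean of `data`."""
    means = sorted(
        sum(random.choices(data, k=len(data))) / len(data)
        for _ in range(n_resamples)
    )
    return (means[int(n_resamples * alpha / 2)],
            means[int(n_resamples * (1 - alpha / 2))])

random.seed(0)
print(bootstrap_ci(cohort))  # a (low, high) interval around ~$0.42/visitor
```

Because the resampling makes no normality assumption, it sidesteps the objection that a few discrete price points cannot form a bell curve; with only a handful of conversions, though, the interval will be wide, which is consistent with the caution expressed above.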
true
true
true
null
2024-10-12 00:00:00
2012-08-13 00:00:00
null
null
kalzumeus.com
kalzumeus.com
null
null
5,642,189
http://strangerthanwecanimagine.blogspot.com/2013/05/the-sands-of-time_1.html
The Sands of Time
null
null
true
true
false
Crystals. Ice, diamond, salt, precious stones, etc. What do they have in common? If you said they are all made of molecules or at...
2024-10-12 00:00:00
2013-05-01 00:00:00
https://blogger.googleus…-no-nu/bonds.gif
null
blogspot.com
strangerthanwecanimagine.blogspot.com
null
null
9,180,179
http://www.njneer.com/binary-watch/
null
null
null
true
true
false
null
2024-10-12 00:00:00
null
null
null
null
null
null
null
22,306,996
https://autocode.com/
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
5,673,507
http://mashable.com/2013/05/07/nyu-health-tech-student-inventors/
3 Brilliant Health Technologies From Student Inventors
null
Having a good idea isn't enough to make a difference in the world. You also have to be able to sell that idea to everyone else. That was the task at New York University's yearly Entrepreneurs' Challenge last week, where students pitch ideas before a panel of judges from NYU's Stern School of Business. On Friday, the four finalists for the Challenge's Technology Venture Competition presented their pitches: a bio-inspired catheter, a diabetes management app, a device that can diagnose head trauma by tracking users' eye movements and an app to help researchers organize their data. They had to prove that their projects were more than useful: They had to be profitable, too.

A Safer Catheter

"Hospitals are no place for sick people," said biomedical student Manuskha Vaidya, explaining to the panel of judges that catheters in particular are a frequent festering ground for infectious bacteria that can seriously weaken or even kill a patient. Most current catheters combat infection with antibacterial chemicals, but the downside of this method is that each antibacterial agent has to be targeted to a specific type of bacteria, and even then, the bacteria will eventually develop a resistance, forcing scientists to develop a new preventative. The catheter that Vaidya and her colleagues at Bioinspired Devices, LLC, developed doesn't use antibacterials. Rather, the catheter is made of a polymer that "sheds" as it's used, eliminating the collected bacteria inside the tube before it can infect the patient. This shedding method was inspired by the human skin, which eliminates potential infectants by casting off its surface layer.

FitBit for Diabetes

Databetes helps patients with diabetes manage their blood sugar. The app works sort of like FitBit, a popular health management app that prompts users to enter data about the food they eat and the exercise they undertake. Databetes' management system is specifically tailored to the needs of diabetic users, however, and can sync with other Bluetooth-equipped devices, such as blood sugar monitors. Co-developer Doug Kanter is also a lifelong Type-1 diabetic. To sell the judges on his and his teammates' pitch, he showed the audience a graph of his personal health data gathered over the past 10 years. Since he began using Databetes one year ago, Kanter said, he's been the healthiest he's been in his life.

Faster Concussion Diagnosis

Oculogica is able to diagnose people with concussions and other head trauma quickly and without subjecting them to costly CT scans or MRIs. Patients simply sit in front of a screen and watch a small moving image around its edges. A camera atop the screen tracks patients' eye movement as they follow the image, which indicates whether they have suffered a concussion or several other types of brain trauma, including nonstructural brain trauma, which is more difficult to identify because it doesn't show up on radiographic scans. Oculogica co-founder and engineering student Robert Ritlop said that an Oculogica scan costs only $500, compared with the $1,000 or more necessary per CT scan, adding that the device is highly applicable in the world of professional sports.
true
true
true
3 Brilliant Health Technologies From Student Inventors
2024-10-12 00:00:00
2013-05-08 00:00:00
https://helios-i.mashabl….v1647022331.jpg
article
mashable.com
Mashable
null
null
3,818,762
http://www.wired.com/epicenter/2012/04/facebook-buys-instagram/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
24,734,950
https://www.bbc.com/news/business-54463211
Coronavirus: 'My bank account request disappeared down a hole'
Kevin Peachey
# Coronavirus: 'My bank account request disappeared down a hole'

**Start-up firms are being "stifled" at birth as they find it almost impossible to open a business bank account.**

Major banks are closed to new applications for business current accounts, or are warning of long delays. Entrepreneurs have told BBC News of how requests for a new account "disappear down a black hole". One lobby group said such firms were fundamental to any economic recovery, but banks say they are busy processing coronavirus loan support schemes.

## 'Literally impossible'

Sophia Murday has set up a greengrocer in a previously-empty shop in Glasgow just yards from her flat, marking a career shift from working on sustainability projects. The 39-year-old is selling fruit, vegetables, milk and eggs to locals. "It serves a purpose and is useful to the community, particularly now, as people want to shop local," she told the BBC. She had been a customer of one High Street bank for 20 years, but her application for a business current account was delayed, then rejected. Ms Murday then realised nearly all of the biggest banks were closed to new business customers. "It is literally impossible to start a business if you can't process cash flow through a bank account, take card payments and run books through connected software," she said. "There will be a lot of people, like me, who are looking to put savings and redundancy funds into new ventures in an environment where it is more or less impossible to find a job."

A scan of the UK's main High Street banks makes the issue clear. Many state on their websites that they are not taking any new business account requests, as they are concentrating on existing customers. Some are warning of weeks of delays for the same reason. UK Finance, which represents the UK's banking sector, said: "Bank staff are working incredibly hard to meet these significant levels of demand [from existing customers], meaning some new customers may currently face some delays when applying to open a business account." The spokesman added that businesses were encouraged to shop around a "wide range of providers" to find the best product for them. Some smaller, challenger banks are open for business, but the lack of branches to deposit cash, as well as some reputational and practical questions, put Ms Murday off.

Bruce Jacobs is also reflecting on months of frustration as he tries to revive a business. He runs an adventure sailing holiday business - Rubicon 3 - in the travel sector, which has been hit harder than most during the pandemic. "Small business owners have taken a kicking," he said. "To help to rebuild the economy, we have to be provided with the most basic of tools. A bank account is as basic as it gets." He, too, found few options, describing some applications as "disappearing down a black hole". He said he understood the pressures banks faced, and did not blame them for the difficulties, but said there was little appreciation of the problems that were created when people are unable to access an account.

## Heart of the recovery

Mike Cherry, national chairman of the Federation of Small Businesses (FSB), said that start-ups and sole traders were fundamental to the UK's recovery from the last recession in 2009. For a repeat, they would need access to commercial bank accounts, which are vital in maintaining a clear distinction between personal and professional finances.
"We appreciate banks were swamped with bounce-back applications, but refusing to open business accounts for new customers will stifle start-ups just at the moment we need them most," he said. "We need to see those who are starting-up and seeking to open a commercial account, and those who are established and seeking a bounce back, given guaranteed routes through which they can make applications that will be assessed swiftly." Earlier this week, BBC News revealed how some businesses risk closing as they find it hard to access coronavirus support schemes. In September, the chancellor extended the deadline for the government's coronavirus loan schemes to the end of November. Bounce-back loans allow small firms to borrow up to £50,000 over nine years at preferential rates, with the loans 100% guaranteed by the government.
true
true
true
Start-up firms are being "stifled" as they find it almost impossible to open a business bank account.
2024-10-12 00:00:00
2020-10-08 00:00:00
https://ichef.bbci.co.uk…_whatsubject.jpg
reportagenewsarticle
bbc.com
BBC News
null
null
29,315,345
https://www.google.com/search?q=orthogonal+coil&tbm=isch
orthogonal coil
null
[Google Images results for "orthogonal coil"; page chrome removed, only result captions and sources retained]

- a) Orthogonal coils with... (www.researchgate.net)
- Three orthogonal circular... (www.researchgate.net)
- a) Transmitter with three... (www.researchgate.net)
- The simplified three... (www.researchgate.net)
- The setup of three pairs of... (www.researchgate.net)
- Attitude definition of... (www.researchgate.net)
- The largest orthogonal set... (www.ki.si)
- The basic diagram of the... (www.researchgate.net)
- Integration of orthogonal... (www.researchgate.net)
- Orthogonal Coils Interact... (www.shutterstock.com)
- Schematic of the orthogonal... (www.researchgate.net)
- Orthogonal coiled coils... (pubs.rsc.org)
- a) Orthogonal coils with... (www.researchgate.net)
- 41598_2017_Article_BFsrep42... (www.nature.com)
- 41598_2017_Article_BFsrep42... (www.nature.com)
- Three orthogonal coils... (www.researchgate.net)
- Spherical magnetic energy... (www.semanticscholar.org)
- A side view of the driving... (www.researchgate.net)
- The section view of the... (www.researchgate.net)
- Attitude definition of... (www.researchgate.net)
true
true
true
null
2024-10-12 00:00:00
2017-01-01 00:00:00
null
null
null
null
null
null
30,520,057
https://start.paloaltonetworks.com/code-to-cloud-summit.html
Code to Cloud Cybersecurity Summit: On Demand Sessions
null
null
true
true
false
Learn actionable DevOps and SecOps insights from the 2023 Code to Cloud Cybersecurity Summit with these on-demand sessions.
2024-10-12 00:00:00
2023-01-01 00:00:00
https://cdn.pathfactory.…435c44124576.png
website
paloaltonetworks.com
Palo Alto Networks
null
null
11,729,455
http://www.scmp.com/news/china/policies-politics/article/1947376/revealed-digital-army-making-hundreds-millions-social
Revealed: the digital army making hundreds of millions of social media posts singing praises of the Communist Party
Li Jing
# Revealed: the digital army making hundreds of millions of social media posts singing praises of the Communist Party

US researchers carry out first deep analysis of China’s government-backed internet warriors known as the ‘50-cent gang’

It’s an open secret that China employs a veritable army of internet commentators to sing the government’s praises and attack its critics, but researchers at Harvard University in the United States say they not only have evidence this is the case, but also what Beijing’s motive is. The team headed by Dr Gary King, one of America’s most distinguished political scientists, carried out what they describe as “the first large-scale empirical analysis” of online comments by the notorious “50-cent gang” (*wumao dang*) – so called in the popular but mistaken belief that this is the amount they are paid for each online post made in defence of the government. The team examined a trove of more than 2,000 leaked emails from a district government internet propaganda office in Ganzhou, Jiangxi province, dating from February 2013 to November 2014, to begin “reverse engineering online censorship in China”. Most messages were communications between authorities and the 50-centers on their assignments and work reports. Over a year, the researchers identified nearly 43,800 online messages posted accordingly, finding virtually all of them – more than 99 per cent – were generated by employees at more than 200 government agencies.
true
true
true
US researchers carry out first deep analysis of China’s government-backed internet warriors known as the ‘50-cent gang’
2024-10-12 00:00:00
2016-05-19 00:00:00
https://cdn.i-scmp.com/s…B2u&v=1463711985
article
scmp.com
South China Morning Post
null
null
28,762,610
https://wtamu.edu/~cbaird/sq/2020/02/11/do-blind-people-dream-in-visual-images/
Do blind people dream in visual images?
null
# Do blind people dream in visual images? Category: Biology Published: February 11, 2020 By: Christopher S. Baird, author of The Top 50 Science Questions with Surprising Answers and Associate Professor of Physics at West Texas A&M University Yes, blind people do indeed dream in visual images. For people who were born with eyesight and then later went blind, it is not surprising that they experience visual sensations while dreaming. Dreams are drawn from memories that are stored in the brain as well as from brain circuitry that is developed while experiencing the outside world. Therefore, even though a person who lost his vision may be currently blind, his brain is still able to draw on the visual memories and on the related brain circuits that were formed before he went blind. For this reason, he can dream in visual images. What is more surprising is the discovery that people who were born blind also dream in visual images. The human experience of vision involves three steps: (1) the transformation of a pattern of light to electrical impulses in the eyes, (2) the transmission of these electrical impulses from the eyes to the brain along the optic nerves, and (3) the decoding and assembly of these electrical impulses into visual sensations experienced in the brain. If any one of these three steps is significantly impaired, blindness results. In the vast majority of cases, blindness results from problems in the eyes and in the optic nerves, and not in the brain. In the few cases where blindness results from problems in the brain, the person usually regains some amount of vision due to brain plasticity (i.e. the ability of the brain to rewire itself). Therefore, people who have been blind since birth still technically have the ability to experience visual sensations in the brain. They just have nothing sending electrical impulses with visual information to the brain. In other words, they are still capable of having visual experiences. It's just that these experiences cannot originate from the outside world. Dreams are an interesting area because dreams do not directly originate from the outside world. Therefore, from a plausibility standpoint, it is possible for people who have been blind since birth to dream in visual images. However, just because blind people have the neural capacity to experience visual sensations does not automatically mean that they actually do. Scientists had to carry out research studies in order to determine if people who have been blind since birth actually do dream in visual images. At this point, you may be wondering, "Why don't we just ask the people who have been blind since birth if they dream in visual images?" The problem is that when you ask such people this question, they will always answer no. They are not necessarily answering no because they actually do not have visual dreams. They are saying no because they do not know what visual images are. A girl with eyesight visually recognizes an apple because at some point in the past she saw the apple and ate it, and therefore is able to connect the image of an apple with the taste, smell, shape, and touch of an apple. She is also able to connect the image with the word "apple." In other words, the visual image of an apple becomes a trigger for all the memories and experiences she has previously had with apples. 
If a girl has never personally experienced the visual image of an actual apple, then the experience of seeing an image of an apple in a dream for the first time has no connection to anything in the real world. She would not realize that she is seeing an apple. As an analogy, suppose you have never tasted salt. No matter how much people describe salt to you, you do not know what the experience is really like until you experience it personally. Suppose you were all alone your whole life, cut off from all people and all of society, and you came across a bag of very salty potato chips for the first time. When you eat the chips, you would experience the taste of salt for the first time, but you would have no way to describe it, because you would have no other previous experiences or connections with it. Similarly, people who have been blind since birth have no experience of connecting visual sensations with external objects in the real world, or relating them to what sighted people describe as vision. Therefore, asking them about it is not useful. Instead, scientists have performed brain scans of people who have been blind since birth while they are sleeping. What scientists have found is that these people have the same type of vision-related electrical activity in the brain during sleep as people with normal eyesight. Furthermore, people who have been blind since birth move their eyes while asleep in a way that is coordinated with the vision-related electrical activity in the brain, just like people with normal eyesight. Therefore, it is highly likely that people who have been blind since birth do indeed experience visual sensations while sleeping. They just don't know how to describe the sensations or even conceptually connect in any way these sensations with what sighted people describe as vision. With that said, the brain scans during sleep of people who have been blind since birth are not identical to those of sighted people. While people who have been blind since birth do indeed dream in visual images, they do it less often and less intensely than sighted people. Instead, they dream more often and more intensely in sounds, smells, and touch sensations. We should keep in mind that a person who has been blind since birth has never had the experience of seeing images originating from the external world and therefore has never formed visual memories connected to the external world. The visual components of their dreams therefore cannot be formed from visual memories or the associated circuitry. Rather, the visual sensations must arise from the electrical fluctuations that originate within the brain. What this means is that people who have been blind since birth probably do not experience detailed visual images of actual objects such as apples or chairs while dreaming. Rather, they probably see spots or blobs of color floating around or flashing. The spots may even correlate meaningfully to the other senses. For instance, a dream of a police car siren sound traveling from the left to the right may be accompanied by the visual sensation of a spot of color traveling from the left to the right at the same speed. In summary, the current evidence suggests that people who have been blind since birth do indeed dream in images, but we do not know exactly what they see. On a related note, brain scans have found that all humans dream in visual images before they are born. 
Because the womb is in total darkness, and therefore none of us experienced actual vision before we were born, this means that we all experienced visual dreams before birth despite having no visual memories to draw from. Therefore, the visual dream experiences of a fetus are similar to those of an adult who has been blind since birth.
true
true
true
...
2024-10-12 00:00:00
2020-02-11 00:00:00
https://wtamu.edu/~cbair…ges/brain_f2.jpg
article
wtamu.edu
Science Questions with Surprising Answers
null
null
6,369,296
http://linuxgizmos.com/open-sbc-runs-android-and-linux-on-quad-core-rockchip/
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
20,658,077
https://dthayerblog.wordpress.com/2019/08/08/performance-factor/
Performance Wiggle Room
Dougathayer
*“Premature optimization is the root of all evil”*: the famous Knuth-ism that we all know and… well, that we all know. It’s hard to go a day of reading programming blogs without *someone* referencing this and leaving their particular footnote on it. I suppose today I am that someone. So what’s my footnote? Well, everyone loves to pick apart either “premature” or “optimization,” and I’m no different. So today, I’m focusing on the term “*premature*.” What counts as premature, anyway?

For many people in the performance world, this quote is frustrating, because a lax interpretation of it can justify any amount of laziness about optimization: “It’s slower than it could be, but I’ll optimize it when it becomes a real problem.” A prudent engineer would never say this about correctness. With correctness we are expected to obsess over corner cases that *shouldn’t normally be hit*, because as we all know they will *eventually* be hit, and the program better work correctly or *bad things™* will happen.

*Bad things* can mean a lot. If you’re writing code for a self-driving car, it means people could die. If you’re writing software that handles people’s money, it means they could lose a lot of money. Correctness can cause these kinds of bad things; it’s certainly harder for poor performance to do this. But the more banal, everyday kind of *bad thing* is just that the software doesn’t work the way that the user expects, and it frustrates them. It makes the user say “f*** this”, leave your program, and never come back. (Unless your software is named Photoshop and has a practical monopoly.)

In this regard, correctness and performance are the same. Your program might be correct or performant enough today, but small problems propagate themselves out as new code consumes yours. And eventually something will, directly or indirectly, use your code in a way you didn’t intend, in a way for which your implementation is no longer “good enough.”

An imaginary reader says: “But correctness bugs are hard to find down the road. If the code just becomes too slow, I’ll just profile it and find the problem.”

Sure, if you write all of the software by yourself, and profile every new piece of it by yourself, maybe that’s valid reasoning. But that’s rarely the case. We write software that other people consume. Should they test the performance of their code? Yes. Do they? Sometimes. Should you have performance tests to catch regressions? Yes. Do they cover everything? Certainly not, and they certainly don’t cover new end uses, which don’t have a baseline to measure against.

But let’s say they do notice that performance isn’t quite up to snuff. And they do profile it. All they will see is that the component you wrote takes up… some arbitrary chunk of time? When you see that allocating memory shows up in your profile, do you think “I’m going to go make the allocator faster.” No, you assume that it’s already been optimized. That’s probably what this other engineer is going to think about your code. They don’t know that you have a whole list of optimizations that you’re planning to make “when it becomes a problem.” So performance suffers just a little bit, or they change their design to something more complex to accommodate for your slow code. And then the next system comes along, and uses their code.

An imaginary reader says: “But programs can be fully correct – they can’t be expected to be fully performant!
If that were the case then we would have to write every program in assembly!” Well, for one, when’s the last time you wrote a program that did anything other than crash if it couldn’t allocate memory? Have you ever written a program that tries to check for bits being flipped by cosmic rays? We don’t write fully correct programs. Some things are unreasonable, and the same goes for performance. But it does bring up a valid point – when is enough performance enough? The common answer to this, and the one I am objecting to, is that you’ve optimized it enough when it stops being a problem. If it’s no longer the heaviest code in a profile, maybe, or if the latency of a request is below some target value, or some other line in the sand. If you are not building a system that anything else will ever depend on, then this is fine. But as we mentioned above, other code will use what you built. Other code which does not exist at the time that you wrote yours. Other code which wants to use yours in ways that you didn’t exactly plan for. In engineering more broadly, there is a concept called the “safety factor” of a system. This is a measure of how much stronger something is than it needs to be to support its intended load. *Stronger than its intended load.* A rope bridge has to be able to support more weight than it’s ever intended to hold in practice, because people will misuse it, or it will degrade. We don’t want the ropes to snap just because it’s a bit old or because teenagers were swinging it back and forth. The same ought to be true of performance – we should design systems to perform *better* than they need to for their intended use case, because other engineers are going to misuse it, or it’s going to get old, and little additions here and there will creep in and make it a little less performant than it was at the start, or *something.* So when is premature? If you’re building a new system, and you’re trying to determine if you’ve optimized it enough, try going for a “performance factor” of two. I.e., if it is “fast enough” (as fast as an existing system, or something) at 6ms, try to shoot for 3ms. This is a rule of thumb, though. The important thing is trying to shoot for some kind of buffer between what is *acceptable* and what is *actual.* Because the *actual* performance is only going to suffer over time. Don’t get me wrong, premature optimization is still possible and even likely with this measure. *Often times we prematurely micro-optimize,* wasting our time on the performance of cold, relatively inconsequential bits of code when we could be spending that time optimizing hotter, more significant pieces. But if you’re working on a whole system, which is bordering on being slow enough to impact a user, and you’re trying to determine when to stop optimizing, just remember to give the performance some wiggle room.
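As a concrete illustration of that rule of thumb, a performance regression test might assert against half the acceptable latency rather than the latency itself (the names and numbers below are mine, not the author's; a sketch, not a prescription):

```python
import time

PERFORMANCE_FACTOR = 2           # the safety-factor analogue suggested above
ACCEPTABLE_LATENCY_S = 0.006     # "fast enough" for users, e.g. 6 ms
BUDGET_S = ACCEPTABLE_LATENCY_S / PERFORMANCE_FACTOR   # so shoot for 3 ms

def handle_request():
    # Stand-in for the real system under test.
    return sum(range(10_000))

def test_latency_has_wiggle_room():
    start = time.perf_counter()
    handle_request()
    elapsed = time.perf_counter() - start
    # Fail while there is still headroom, not once users already feel it.
    assert elapsed < BUDGET_S, f"{elapsed * 1e3:.2f} ms exceeds the {BUDGET_S * 1e3:.1f} ms budget"

test_latency_has_wiggle_room()
```

The point isn't the exact factor of two; it's that the threshold encodes a buffer, so gradual regressions trip the test before they reach what users would call slow.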
true
true
true
“Premature optimization is the root of all evil”: the famous Knuth-ism that we all know and… well, that we all know. It’s hard to go a day of reading programming blogs witho…
2024-10-12 00:00:00
2019-08-08 00:00:00
https://imgs.xkcd.com/co…optimization.png
article
wordpress.com
Dthayer Blog
null
null
37,065,484
https://www.wired.com/story/a-crucial-early-warning-system-for-disease-outbreaks-is-in-jeopardy/
A Crucial Early Warning System for Disease Outbreaks Is in Jeopardy
Maryn McKenna
Internal dissent within the mostly volunteer disease-news network known as ProMED—which alerted the world to the earliest cases of Covid, Middle East Respiratory Syndrome (MERS), and SARS—has broken out into the open and threatens to take down the internationally treasured network unless an external sponsor can be found. The struggle for the future of the low-tech site, which also sends out each piece of content on a no-reply email list with 20,000 subscribers, has been captured in dueling posts to its front page. On July 14, a post by ProMED’s chief content officer, a veterinarian and infectious-disease expert named Jarod Hanson, announced that ProMED is running out of money. Because it is being undermined by data-scraping and reselling of its content, Hanson wrote, ProMED would turn off its RSS and Twitter feeds, limit access to its decades of archives to the previous 30 days, and introduce paid subscriptions. Hanson is at the top of ProMED’s masthead, and the post was signed “the ProMED team,” which gave the announced changes the feeling of a united action. That turned out not to be the case. Very early on August 3, a post addressing “Dear friends and readers of ProMED” appeared on the site’s front page. The open letter was signed by 21 of its volunteer and minimally paid moderators and editors, all prominent physicians and researchers, and it makes clear that no unity existed. “Although the [July post] was signed by ‘The ProMED Team,’ we the undersigned want to assure you that we had no prior knowledge,” the open letter stated. “With great sadness and regret … we, the undersigned, are hereby suspending our work for ProMED.” The letter was taken off the site within a few hours, but the text had already been pushed to email subscribers. (WIRED’s copy is here.) On Friday, signers of the open letter said they had been locked out of the site’s internal dashboard. The site’s regular rate of posting slowed Friday and Saturday, but appeared to pick up again on Sunday. Maybe this sounds like a small squabble in a legacy corner of the internet—but to public health and medical people, ProMED falling silent is deeply unnerving. For more than 20 years, it has been an unmissable daily read, ever since it received an emailed query in February 2003 about chat-room rumors of illnesses near Hong Kong. As is the site’s practice, that initial piece of intel was examined by several volunteer experts and cross-checked against a separate piece of news they found online. In its post, which is not currently accessible, ProMED reproduced both the email query and the corroborating information, along with a commentary. That post became the first news published outside China of the burgeoning epidemic of SARS viral pneumonia, which would go on to sweep the world that spring and summer—and which was acknowledged by the regional government less than 24 hours afterward. Using the same system of tips and local news sources, combined with careful evaluation, ProMED published the first alerts of a number of other outbreaks, including two more caused by novel coronaviruses: MERS and Covid, which was detected via two online articles published by media in China on December 30, 2019. Such alerts also led the World Health Organization to reconsider what it will accept as a trustworthy notice of the emergence of epidemics.
When the organization rewrote the International Health Regulations in the wake of SARS, committing member nations to a public health code of conduct, it included “epidemic intelligence from open sources” for the first time.

On the surface, the dispute between ProMED’s moderators and its leadership team—backed by the professional organization that hosts the project, the International Society for Infectious Diseases (ISID)—looks like another iteration of a discussion that has played out online for years: how to keep publishing news if no one wants to pay for it. But while that is an enduring problem, the question posed by the pause in ProMED’s operations is bigger than subscriptions. It looks more like this: How do you make a case for the value of human-curated intelligence in a world that prefers to pour billions into AI?

“ProMED might not always be the fastest, but it always provides important context that would not come through a news report,” says John Brownstein, an epidemiologist and chief innovation officer at Boston Children’s Hospital, who cofounded the automated online outbreaks database HealthMap and has collaborated with ProMED. “It’s the anti-social media, in a way. It’s a trusted voice.”

“ProMED possesses trained scientists that are able to discern what is really a problem and what is fake news,” says Scott J. N. McNabb, a research professor at Emory University’s Rollins School of Public Health and a former chief of public health surveillance at the US Centers for Disease Control and Prevention. “That's a tremendous advantage. So the reports are not coming from uninformed individuals, they're coming from professionals that have the medical expertise and public health expertise to really discern: ‘Is this genuine or not?’”

To understand why the fate of ProMED feels momentous, it helps to know a little bit about its history. The deliberately Web 1.0 site—it has no comment section and runs no graphics, so as not to stress the bandwidth in low-income nations—was created in 1994 and began being hosted by ISID in 1999. Its chief founder was the late John Payne Woodall, an entomologist and virologist who had a hand in most of the post-World War II build-up of global public health infrastructure, working for the Rockefeller Foundation, the WHO, and the CDC. (Cofounders were Stephen Morse, a professor of epidemiology at Columbia University’s Mailman School of Public Health, and Barbara Hatch Rosenberg, a biological weapons expert and former professor of microbiology at SUNY Purchase.)

Woodall believed that everyday people’s reports of events could constitute important early warnings of looming problems. The value of open source intelligence might seem obvious today, with every movement of the war in Ukraine, for instance, tracked on Twitter. But at the time, Woodall was challenging the accepted view that official public health surveillance data, gathered by governments and nongovernmental entities, was the key to understanding the havoc diseases can create. And he was leveraging early civilian access to the internet to do it. The site was established and scored its momentous early success exposing SARS all before Twitter launched and Facebook opened to the general public in 2006, and before the first iPhone went on sale in 2007.
The arrival of social media made it possible to instantly share information on all kinds of emergencies, from the Arab Spring to the Fukushima nuclear disaster to the Ebola epidemic in West Africa, and many accounts built audiences and monetization by conveying real-time alerts on arrays of subjects, often with no sourcing attached. The speed of Twitter (now known as X) could make ProMED’s careful verification look slow; though the organization pushed its content to its own Twitter account, those posts would often arrive after a breaking-news account got there first.

That left the site in a battle for funding. Its online host ISID lost revenue during the pandemic, as professionals stayed home from the conferences it had hosted. Internally, money began to dry up. One of the complaints aired in the open letter is that payments to the moderators, who receive a small stipend, are already in arrears and will be delayed for two more months.

“We are really up against a challenge of long-term, sustainable funding for operating ProMED,” says ISID’s CEO, Linda MacKinnon, who described the organization as sipping from project grants to keep core operations going on a budget of about $1 million a year. (MacKinnon spoke after the July 14 announcement but before the open letter was published.) “We have sort of grown organically over the years, from project to project that were all cobbled together, but not stepping back and putting together a business case of how to sustain, how to innovate. We have found ourselves partnering or collaborating with hospitals and academic institutions, but [not finding a] long-term home.”

Hanson, who published the post outlining the changes, told WIRED by email Friday evening: “Not a single entity has stepped forward with any type of funding to allow ProMED to continue to operate as it has for 28 years, which is to say open and free to the public. We have diligently pursued funding and messaged our plan for the past year and made our financial situation widely known.” He added, “Funders are primarily interested in the latest and greatest tools, so from their strategic perspective there's little value in ProMED—an aging but highly effective email surveillance system—to show to their leadership/benefactors.”

A deeper look at recent history shows that the battle over ProMED is not only about money, no matter how critical that might be. After Woodall, ProMED was led for years by Larry Madoff, an infectious disease physician and professor at the University of Massachusetts Medical School who built a wide network of collaborators. Madoff was abruptly dismissed by ISID two years ago, and both he and moderators who worked with him say they do not know why. MacKinnon will only say the action was “a board decision.” Madoff held the title of editor in chief; according to the site’s team listings, that title is not now in use.

Asked for comment, Madoff said this: “ProMED is a public good, and it ought to be freely available and remain so. It would be a shame if it lost large segments of its audience, many of whom are in low-resource settings, because of a failure of the parent organization to adequately provide resources.” He added: “Losing subscribers will hobble ProMED because it depends on its readers to send information.
By not having that big subscriber base, by restricting your base through a paid model, you’re going to lose that.” In 2020, Madoff told WIRED the site’s subscriber base was about 83,000 addresses (which cannot now be confirmed), whereas MacKinnon’s team set the current count at about 20,000.

The open letter, among other demands, asks ISID to “ensure the administrative and editorial independence of ProMED's content and subscription policies. This could largely be accomplished by restoring a true Editor-in-Chief position with independent executive authority.”

WIRED reached out to all 21 signatories of the open letter to verify that it is authentic. As of Monday, six—Marjorie P. Pollack, Leo Liu, Maria Jacobs, Martin Hugh-Jones, Pablo M. Beldomenico, and Thomas Yuill—confirmed that it represents their views. “In the first six months of Covid, I averaged three hours of sleep a night,” says Pollack, an infectious disease physician who first joined the site in 1997, and wrote the December 30, 2019 post that first flagged the emergence of Covid. “We have been a collegial group of people, and we understood that ProMED represented a labor of love.”

Asked for comment on the open letter, MacKinnon emailed a statement Friday afternoon that was also posted to the ProMED site. ”We recognize that members of the ProMED community are concerned about the continuation of the platform,” it says in part. “We also know that we could have communicated changes more clearly to the community and apologize for any confusion and distress caused.” The statement went on to enumerate the site’s financial challenges and ended with a plea for “unrestricted operational funding and investment.”

The irony is that ProMED may have run out of runway just as past obstacles to its growth begin to evaporate. Unmeasured but apparently substantial portions of medical and public health users have departed Twitter following the shifts in its ownership and politics, and none of its replacements—Mastodon, Bluesky, Threads, or others—have accumulated the velocity or density to match. That could allow a revived ProMED to reestablish its utility as a trusted source that deserves broader support. But as even ProMED insiders acknowledge, that will need some kind of partner. Last week, supporters said they were hoping to lobby major public health graduate schools, or even the WHO’s new Hub for Pandemic and Epidemic Intelligence, based in Berlin, to lend ProMED financial support or a new institutional home.

“ProMED needs to be saved,” McNabb says. “We can move it into an academic setting, or into a broader context like a WHO collaborating center—wherever people feel comfortable and it can be funded appropriately. But we need to save it. It’s too important for public health.”

*Updated 8-7-2023 6:15 PM ET: This story was updated to add to the tally of open letter signatories who confirmed their participation to WIRED.*
true
true
true
The ProMED website and listserv, which broke the news of the arrival of Covid, SARS, and MERS, is now caught between financial shortfalls and staff turmoil.
2024-10-12 00:00:00
2023-08-07 00:00:00
https://media.wired.com/…d-1484922323.jpg
article
wired.com
WIRED
null
null
18,311,988
https://blogs.microsoft.com/on-the-issues/2018/10/26/technology-and-the-us-military/
Technology and the US military - Microsoft On the Issues
Brad Smith
Over the last few months there has been a debate in our industry about when and how technology companies should work with the government, and specifically whether companies should supply digital technology to the military, including here in the United States. Yesterday, Satya Nadella and I addressed this issue in a conversation with our employees at the company’s monthly Q&A session. Given public interest in this question, we want to be transparent both internally and externally on where Microsoft stands on these issues.

As we explained at our Q&A session, our work as a company in this space is based on three straightforward convictions. First, we believe in the strong defense of the United States and we want the people who defend it to have access to the nation’s best technology, including from Microsoft. Second, we appreciate the important new ethical and policy issues that artificial intelligence is creating for weapons and warfare. We want to use our knowledge and voice as a corporate citizen to address these in a responsible way through the country’s civic and democratic processes. Third, we understand that some of our employees may have different views. We don’t ask or expect everyone who works at Microsoft to support every position the company takes. We also respect the fact that some employees work in, or may be citizens of, other countries, and they may not want to work on certain projects. As is always the case, if our employees want to work on a different project or team – for whatever reason – we want them to know we support talent mobility. Given our size and product diversity, we often have open jobs across the company and we want people to look for the work they want to do, including with help from Microsoft’s HR team.

Because these are complex issues, we want to provide our employees (and the public) additional context and some of our thinking in more detail. To begin, we’ve worked with the U.S. Department of Defense (DOD) on a longstanding and reliable basis for four decades. You’ll find Microsoft technology throughout the American military, helping power its front office, field operations, bases, ships, aircraft and training facilities. We are proud of this relationship, as we are of the many military veterans we employ.

Recently Microsoft bid on an important defense project. It’s the DOD’s Joint Enterprise Defense Infrastructure cloud project – or “JEDI” – which will re-engineer the Defense Department’s end-to-end IT infrastructure, from the Pentagon to field-level support of the country’s servicemen and women. The contract has not been awarded but it’s an example of the kind of work we are committed to doing. We readily decided this summer to pursue this project, given our longstanding support for the Defense Department.

All of us who live in this country depend on its strong defense. The people who serve in our military work for an institution with a vital role and critical history. Of course, no institution is perfect or has an unblemished track record, and this has been true of the U.S. military. But one thing is clear. Millions of Americans have served and fought in important and just wars, including helping to free African-Americans who were enslaved until the Civil War and liberate nations that had been subjected to tyranny across Western Europe in World War II.
Today the citizens in our military risk their lives not only as the country’s first line of defense, but often as the nation’s first line of assistance around the world in hurricanes, floods, earthquakes and other disasters. We want the people of this country and especially the people who serve this country to know that we at Microsoft have their backs. They will have access to the best technology that we create.

At the same time, we appreciate that technology is creating new ethical and policy issues that the country needs to address in a thoughtful and wise manner. That’s why it’s important that we engage as a company in the public dialogue on these issues. Artificial intelligence, augmented reality and other technologies are raising new and profoundly important issues, including the ability of weapons to act autonomously. As we have discussed these issues with governments, we’ve appreciated that no military in the world wants to wake up to discover that machines have started a war. But we can’t expect these new developments to be addressed wisely if the people in the tech sector who know the most about technology withdraw from the conversation.

We also believe it’s important for people across the tech sector to recognize that ethical issues are not new to the military. Deliberations about just wars literally date back millennia, including to Cicero and ancient Rome. New technologies have created important ethical and policy issues for the United States since the middle of the 1800s. The U.S. military has dealt with issues that have included chemical weapons, biological weapons, nuclear weapons and most recently cyber weapons. Public policies and laws that govern the use of weapons technology repeatedly have proven to be vital not just for this country, but for the world. In the United States, the military is controlled by civilian authorities, including the executive branch, the Congress and the courts.

No tech company has been more active than Microsoft in addressing the public policy and legal issues raised by new technology, especially government surveillance and cyber weapons. In a similar way, we’ll engage not only actively but *proactively* across the U.S. government to advocate for policies and laws that will ensure that AI and other new technologies are used responsibly and ethically. Already, we’re talking with experts to help inform us and address these issues.

As a company, Microsoft was founded and is headquartered in the United States, and we’ve prospered throughout our 43 years from the many benefits that this country offers. We also recognize that we have a global mission, global customers and a global responsibility. We’ll need to work through these issues in other countries, and we’ll work to do so in an appropriate and thoughtful manner. But when it comes to the U.S. military, as a company, Microsoft will be engaged.

We believe that the debate about the role of the tech sector and the military in this country has sometimes missed two fundamental points. First, we believe that the people who defend our country need and deserve our support. And second, to withdraw from this market is to reduce our opportunity to engage in the public debate about how new technologies can best be used in a responsible way. We are not going to withdraw from the future. In the most positive way possible, we are going to work to help shape it.
true
true
true
Over the last few months there has been a debate in our industry about when and how technology companies should work with the government, and specifically whether companies should supply digital technology to the military, including here in the United States. Yesterday, Satya Nadella and I addressed this issue in a conversation with our employees...
2024-10-12 00:00:00
2018-10-26 00:00:00
https://blogs.microsoft.…top-1024x682.jpg
article
microsoft.com
Microsoft On the Issues
null
null
19,765,461
https://github.com/vyapp/vy
GitHub - vyapp/vy: A vim-like in python made from scratch.
Vyapp
A powerful modal editor written in Python.

vy is a modal editor with a very modular architecture. vy is built on top of Tkinter, which is one of the most productive graphical toolkits; it gives vy a great programming interface for plugins. Python is an amazing language, and it makes vy a powerful application, because the plugin API is naturally high level.

In vy it is easy to create modes, as in emacs: modes that support programming languages or provide all kinds of functionality, ranging from accessing IRC to checking email. The set of keys used in vy was carefully chosen to be handy, although it is possible to make vy look like vim or emacs.

The syntax highlighting plugin is very minimalistic and extremely fast. It supports syntax highlighting for all languages that python-pygments supports. The source code of the syntax highlighting plugin is about 120 lines of code, and it is faster than the syntax highlighting plugins of both vim and emacs. :) It is possible to easily implement new syntax highlighting themes that work for all languages, because the plugin uses the pygments style scheme.

There is a simple and consistent terminal-like plugin in vy that makes it possible to talk to external processes. Such a feature is very handy when dealing with interpreters: one can just drop pieces of code to an interpreter and then check the results.

vy implements a Python debugger plugin and auto-completion that let you debug Python code easily and in a very cool way. One can set break points, remove break points, run the code, then see the cursor jumping to the line that is being executed, and much more.

It is possible to open multiple vertical/horizontal panes to edit different files. Such a feature makes it possible to edit multiple files in a given tab. vy supports multiple tabs as well, with a handy scheme of keys to switch focus between tabs and panes.

There is a vyrc file, written in Python, that is very well documented and organized to make it simple to load plugins and set things up at startup. You can get the best out of vy with no need to learn some odd language like vimscript or Emacs Lisp; since vy is written in Python, you use Python to develop for it. All built-in functions are well documented, which simplifies the process of plugin development as well as personalization. The plugins are documented too: the documentation can be accessed from vy by dropping Python code to the interpreter.

- **Python PDB Debugger**
- **Golang Delve Debugger**
- **GDB Debugger**
- **Nodejs inspect Debugger**
- **Rope Refactoring Tools**
- **Fuzzy Search**
- **Incremental Search**
- **Python Pyflakes Integration**
- **Tabs/Panes**
- **Self documenting**
- **HTML Tidy Integration**
- **Powerful plugin API**
- **Syntax highlighting for 300+ languages**
- **Handy Shortcuts**
- **Ycmd/YouCompleteMe Auto Completion**
- **Easily customizable (vyrc in python)**
- **Quick Snippet Search**
- **Smart Search with The Silver Searcher**
- **File Manager**
- **Python Static Type Checker**
- **Terminal-like**
- **Irc Client Plugin**
- **Find Function/Class Definition**
- **Python Vulture Integration**
- **Python Auto Completion**
- **Ruby Auto Completion**
- **Golang Auto Completion**
- **Javascript Auto Completion**

The GitHub organization https://github.com/vyapp is meant to hold vy-related projects.

**Note:** vy requires Python 3 to run; Python 2 support is no longer available.
```
cd /tmp/
pip download vy
tar -zxvf vy-*
cd vy-*/
pip install -r requirements.txt
python setup.py install
```

**Note:** As vy is in development, there may be changes to the vyrc file format; it is important to remove your ~/.vy directory before a new installation in order to upgrade to a new version.

The vy docs may be outdated sometimes; I do my best to keep them current. There are also many features that haven't been documented yet.
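To make the syntax-highlighting claim above concrete, here is a minimal sketch of the general technique the README describes: pygments token types mapped to Tkinter text tags, with colors pulled from a pygments style. This is not vy's actual plugin code; the choice of lexer, style name, and sample snippet are illustrative assumptions.

```python
"""Minimal sketch of pygments-driven highlighting in a Tkinter Text widget.

NOT vy's actual plugin code -- only an illustration of the technique the
README describes: pygments token types mapped to Tkinter text tags.
"""
import tkinter as tk
from pygments import lex
from pygments.lexers import PythonLexer
from pygments.styles import get_style_by_name

root = tk.Tk()
text = tk.Text(root, bg="black", fg="white")
text.pack(expand=True, fill="both")

# One Tkinter tag per pygments token type, colored from a pygments style;
# this is what lets a single theme work for every language pygments knows.
style = get_style_by_name("monokai")
for token, opts in style:
    if opts["color"]:
        text.tag_configure(str(token), foreground="#" + opts["color"])

# Lex some sample code and insert each chunk under its token-type tag.
sample = "def greet(name):\n    return f'hello, {name}'\n"
for token, chunk in lex(sample, PythonLexer()):
    text.insert("end", chunk, str(token))

root.mainloop()
```

Because pygments supplies both the lexers and the styles, swapping `PythonLexer` for `pygments.lexers.get_lexer_by_name(...)` extends these same few lines to any pygments-supported language, and changing the style name re-themes every language at once.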
true
true
true
A vim-like in python made from scratch. Contribute to vyapp/vy development by creating an account on GitHub.
2024-10-12 00:00:00
2014-11-07 00:00:00
https://opengraph.githubassets.com/eab1643484f4b0979bbbe1a71d4fbdbd93a198d651204f72d4a043ff9d2e0cc9/vyapp/vy
object
github.com
GitHub
null
null
9,059,439
http://www.forbes.com/sites/thomasbrewster/2015/02/10/microsoft-windows-flaw-survives-for-a-year/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
3,292,148
http://www.rackspace.com/cloud/blog/2011/11/28/why-devops-is-the-next-big-shift-in-the-it-department/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
5,350,961
http://sg.finance.yahoo.com/news/parents-seek-us-probe-sons-180817197.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
9,747,087
http://dx.doi.org/10.1109%2fCSNT.2012.184
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
7,050,095
http://timesofindia.indiatimes.com/tech/tech-news/hardware/Google-Glass-used-by-Indian-doctors-for-surgery/articleshow/28742511.cms
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
58,961
http://www.readwriteweb.com/archives/demofall_2007_preview_-_companies_to_watch.php
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
34,634,165
https://www.cnbc.com/2023/02/02/alphabet-googl-earnings-q4-2022.html
Alphabet misses on earnings and revenue as YouTube falls short
Jennifer Elias
Alphabet missed on both top and bottom lines when it reported fourth quarter earnings after the bell Thursday. The company's stock dropped nearly 4% after hours, erasing some of the 7.28% it gained in normal trading hours.

Here's how the numbers stacked up:

- **Earnings per share (EPS):** $1.05 vs. $1.18 per share expected, according to Refinitiv
- **Revenue:** $76.05 billion vs. $76.53 billion expected, according to Refinitiv
- **YouTube advertising revenue:** $7.96 billion vs.
- **Google Cloud revenue:** $7.32 billion vs. $7.43 billion expected, according to StreetAccount estimates
- **Traffic acquisition costs (TAC):** $12.93 billion vs.

The company said it would take a charge of between $1.9 billion and $2.3 billion, mostly in the first quarter of 2023, related to the layoffs of 12,000 employees it announced in January. It also expects to incur costs of about $500 million related to reduced office space in Q1, and warned that other real-estate charges are possible going forward. CFO Ruth Porat said during the company's earnings call that Alphabet added 3,455 people during the quarter, the majority of them in technical roles. Porat told CNBC's Deirdre Bosa that the company is meaningfully slowing the pace of hiring in an effort to deliver long-term profitable growth, and blamed the YouTube slowdown on a pullback in both planned and direct-response advertising in a challenging economic climate.

YouTube advertising revenue fell short of analyst expectations at $7.96 billion — down 8% from $8.63 billion the year prior. In December, the National Football League announced YouTube will pay roughly $2 billion a year for the residential rights of the “Sunday Ticket." The deal runs for seven years. In addition to the overall pullback in ad spending, YouTube is also facing heightened competition from TikTok in short-form videos. YouTube Shorts now has 50 billion daily views, CEO Sundar Pichai said in a call with investors Thursday.

Google Cloud brought in $7.32 billion — less than analysts expected, although it was a 32% increase from the year prior. It also cut its losses dramatically, from $890 million a year ago to $480 million in Q4. Google’s Search and Other revenue came in at $42.60 billion, down 2% from the year prior, the report showed. Executives said they saw further pullback in spend by some advertisers in Q4 over Q3. Google's Other Revenues, which includes hardware and non-advertising YouTube revenue, came in at $8.8 billion, up 8% from the year prior.

Operating expenses shot up 10% to $22.50 billion, driven by headcount growth, charges for legal matters and lower ad spend, executives said Thursday. The company also said it lost $1.49 billion on equity securities during the quarter. Revenue in Alphabet’s Other Bets segment, which includes self-driving car unit Waymo as well as some health-tech projects and the company’s venture arms, rose to $226 million — up from $181 million a year earlier. The unit lost $1.63 billion during the quarter, up from $1.45 billion a year prior. Executives said starting in the first quarter, artificial intelligence subsidiary DeepMind will no longer be reported in Other Bets, but will be reported as part of Alphabet's corporate costs.

Executives on the call reiterated the company is focused on AI. CEO Sundar Pichai said "Very soon, people will be able to interact directly with our newest, most powerful language models as a companion to Search, in experimental and innovative ways."
CNBC previously reported that Google is internally experimenting with several potential products that could influence its search business. The company is feeling pressure from the popularity of AI-based chatbot ChatGPT, launched late last year by Microsoft-backed OpenAI. Executives previously teased that the company may introduce a similar product to the public at some point this year.
true
true
true
Google's core business is mired in a period of slow growth as businesses reel in ad spending.
2024-10-12 00:00:00
2023-02-02 00:00:00
https://image.cnbcfm.com…95&w=1920&h=1080
article
cnbc.com
CNBC
null
null
11,694,833
https://github.com/plasma-umass/browsix
GitHub - plasma-umass/browsix: Browsix is a Unix-like operating system for the browser.
Plasma-Umass
While standard operating systems like Unix make it relatively simple to build complex applications, web browsers lack the features that make this possible. This project is Browsix, a JavaScript-only framework that brings the essence of Unix to the browser. Browsix makes core Unix features available to web applications (including pipes, processes, signals, sockets, and a shared file system) and extends JavaScript runtimes for C, C++, Go, and Node.js programs so they can run in a Unix-like environment within the browser. Browsix also provides a POSIX-like shell that makes it easy to compose applications together for parallel data processing via pipes. *For more details, check out our tech report (PDF)*.

Another way to think about this is that modern web applications are multi-process by nature - the client and some of the application logic lives in the browser, and some of it lives in the cloud, often implemented as microservices. Browsix lets you rethink the boundary between code executing in the browser vs. server-side, while taking advantage of the multi-core nature of modern computing devices.

Browsix enables you to compose the in-browser part of your web applications out of processes. Processes behave as you would expect coming from Unix: they run in parallel with the main browser thread, can communicate over pipes, sockets, or the filesystem, and can create subprocesses. This process model is implemented on top of existing browser APIs, like web workers, so it works in all modern browsers. Browsix applications can be served statically or over a CDN.

As a proof of concept, we've implemented a POSIX-like shell on top of Browsix, along with an implementation of a number of standard Unix utilities (`cat`, `tee`, `echo`, `sha1sum`, and friends). The utilities are all standard node programs that will run directly under node, or in the browser under Browsix. Individual commands are executed in their own workers, and piping works as expected. Try it out here: live demo!

Browsix is useful for more than web terminals. With Browsix, you can run Go microservices directly in the browser! As an example, we have implemented a meme creator that lets you create memes (sometimes known as image macros) with (hopefully) humorous text on top of several images. We wrote this as a standard REST service in Go, accepting the text and image type as parameters, and returning a PNG. We used our modified GopherJS compiler to compile the Go service (including all dependencies, such as the TrueType font renderer and image manipulation libraries) to JavaScript, and Browsix to run this JavaScript as a process in a background Web Worker. We then dynamically route requests to either this in-browser server or a remote server depending on user agent and network connectivity.

Browsix currently supports running node.js, Go, and C/C++ programs. It supports Go with a modified GopherJS compiler (requires a host Go 1.6 install for now), and C/C++ with modifications to Emscripten. Browsix supports executing SPEC CPU2006 and SPEC CPU2017 benchmarks using the Browsix-SPEC interface.

There are two parts to Browsix: build tooling (the modified Go + C compilers) and runtime support (the kernel + Browsix APIs).

Get Browsix through npm:

```
$ npm install --save browsix
```

Browsix requires **nodejs 4.3.0** or later, which is more recent than the version packaged in Ubuntu Wily. To get a recent version of node, follow the instructions on the node.js website.
If you don't know whether you should choose node 4.x or 5.x, choose 4.x (it is the long-term support branch). Browsix has three other dependencies: `git`, `npm` (usually installed along with node), and `make`, and builds on OSX and Linux systems. Once you have those dependencies:

```
$ git clone --recursive https://github.com/plasma-umass/browsix
$ cd browsix
$ make test-once serve
```

This will pull the dependencies, build the runtime and all the utilities, run a number of tests in either Firefox or Chrome, and then launch a copy of the shell served locally.

```
$ ./docker/build.sh
....
root@3695ed0cdf45:~/browsix# make test-once serve
TEST
[13:07:00] Using gulpfile ~/browsix/gulpfile.js
[13:07:00] Starting 'copy-node-kernel'...
[13:07:00] Starting 'copy-node'...
[13:07:00] Starting 'lint-kernel'...
[13:07:00] Starting 'lint-browser-node'...
[13:07:00] Starting 'lint-bin'...
[13:07:00] Starting 'lint-syscall-api'...
[13:07:00] Finished 'copy-node-kernel' after 82 ms
[13:07:02] Finished 'lint-syscall-api' after 1.61 s
[13:07:04] Finished 'lint-kernel' after 3.72 s
[13:07:05] Finished 'lint-browser-node' after 4.46 s
[13:07:05] Finished 'lint-bin' after 5.08 s
[13:07:05] Starting 'build-bin'...
[13:07:06] Finished 'copy-node' after 5.16 s
[13:07:06] Starting 'build-kernel'...
[13:07:06] Starting 'build-browser-node'...
...
```

After building Browsix, build Browsix-SPEC through make:

```
make browsix-spec
```

Follow the instructions in browsix-spec.md.

Browsix's `browser-node` implementation has an important limitation to understand: **you must explicitly call process.exit()**. Without this, utilities will work under real node, but appear to hang under `browser-node`. This is not an intrinsic limitation, but it is a hairy implementation detail -- node exits when the event loop is empty, and there are no active timers or network callbacks. For us to do the same thing means we need to hook `setTimeout` and any other functions that take callbacks to ensure we don't exit early.

For a high-level overview of the system design and architecture, please see this document.

You're interested in contributing? That's great! The process is similar to other open-source projects hosted on GitHub:

- Fork the repository
- Make some changes
- Commit your changes with a descriptive commit message
- Open a pull request

If you have questions or problems, please open an issue on this repository (plasma-umass/browsix).

This project is licensed under the MIT license, but also incorporates code from other sources. Browsix uses BrowserFS for its filesystem, which is primarily MIT licensed. browser-node's `nextTick` implementation comes from the acorn project, released under the MIT license. A large portion of browser-node is the node standard library, which is MIT licensed. Functions to convert buffers to utf-8 strings and back are derivative of browserify implementations (ported to TypeScript), MIT licensed as well.
true
true
true
Browsix is a Unix-like operating system for the browser. - GitHub - plasma-umass/browsix: Browsix is a Unix-like operating system for the browser.
2024-10-12 00:00:00
2016-02-17 00:00:00
https://opengraph.githubassets.com/a1f1ca987377a7235e715249ca9c4be10f05b36ee44486c3038e0351c17d4c8b/plasma-umass/browsix
object
github.com
GitHub
null
null
7,060,385
http://joshsymonds.com/blog/2014/01/14/rails-consulting-for-fun-and-profit/
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
34,318,473
https://blog.pragmaticengineer.com/advice-for-junior-software-engineers/
Advice for Less Experienced Software Engineers in the Current Tech Market
Gergely Orosz
*In October 2022 I wrote about the **Big Tech hiring slowdown** for subscribers of **The Pragmatic Engineer**, predicting the slowdown would hit new grads hard. In December 2022, the New York Times reported on new grads struggling to get positions at Big Tech in the article **Computer Science students face a shrinking Big Tech job market**. This article, written in August 2022, when the job market for new grads was already very tough, is advice I have for new grads. Subscribe to **The Pragmatic Engineer** to keep up to date as the tech industry changes.*

I mostly cover insights about the software engineering industry for more experienced software engineers and engineering managers. This article is a break from these topics.

In the second half of 2021, we were in the middle of the most heated tech job market of all time. I wrote about why this market reached all-time highs in a 'perfect storm'. However, while there was huge demand for software engineers, one thing stood out.

**Even in 2021, the market for less experienced software engineers was already chilly.** For experienced engineers, job opportunities were plenty, and large compensation increases were common. However, the opposite was true especially for entry-level software engineers: demand for these folks did not increase, and neither did their compensation. In October 2021, I already shared advice for this group.

A year later, the market has cooled down for experienced software engineers. Where does this leave engineers with less experience? The market for these people is even worse than it was in 2021. There's far more competition - bootcamps and universities have not stopped graduating new entrants to the market - however, companies are still more likely to hire experienced engineers, and they are more likely to be able to afford these people than in 2021, when the market was on fire.

**This article is advice I can offer to entry-level software engineers.** To be clear, I am not selling hope. We are likely to be in the middle of one of the most difficult years to break into software engineering in the past decade. This is especially true if you lack pedigree - e.g. you did not graduate from a well-known school, you have not done an internship at a well-known tech company, or you do not have a strong network where people can refer you to entry-level positions at the company they work at.

All of the below advice is what I tell people who ask me what they could do to maximise chances of getting their first - or second - jobs as software engineers.

*Note that none of the below links are affiliate links or sponsored. I am not affiliated in any way with any of the resources I recommend save for **The Tech Resume Inside Out book** that I wrote and is **free for anyone without a job**. See **my ethics statement** on the lack of advertisements, sponsorships or affiliate links.*

## The reality of the 2022 tech market

**Know that it will be very hard to get that first job this time.** Bootcamps oversell how easy it is to get a job in tech, because they need applicants in order to make money. Success stories from people getting jobs without experience suffer from survivor bias, and those are usually stories from years ago, when the job market was not so hostile to entry-level engineers.

The reality is that the job market is cooling off for experienced engineers as well. VC-funded companies and even some traditional companies are freezing hiring or laying off.
Those hiring with a limited headcount are more likely to prioritize experienced engineers. In what has never happened before, Meta did not issue new grad return offers for its interns. For new grads, the current tech climate can safely be compared to how challenging it likely was to get a first job in tech in 2008 - following the financial crisis - or in 2001 - after the Dotcom Bust.

**Consider joining a support group** if you are not part of one already. Try to find a place where there are other people in your shoes. This is where having an education at a college or being part of a bootcamp is helpful, as you already have such a group. Alternatively, look for new grad Discord channels or communities with low fees, like Scrimba and other similar ones. It's easier to figure out what works, and what doesn't, and get motivation as part of a group. For free resources, look for Discord communities like CS Career Hub. Join r/cscareers on Reddit to learn about how others are starting out in their careers.

## Forget about only applying to 'amazing' companies

**Aim wide when applying.** Don't only apply to the best-known companies, or ones offering full-remote. Those companies will be getting hundreds, if not thousands, of applications to entry-level roles. In 2018, when I was a hiring manager at Uber, in Amsterdam, we opened internships for software engineering students. Within three days of advertising this position, we received 500 qualified applications - meaning people who checked the boxes on what we were asking for. We had four headcount to fill. This was in 2018, when the market was not nearly as challenging. While I'm not saying to not apply to well-known places, know that without references, your chances of even hearing back will likely be slim.

Find smaller, lesser-known companies. These can be startups who are struggling to find any applicants, and businesses who will not spend the budget to advertise on job boards like LinkedIn, but whose job adverts you can find on job aggregators like Indeed.

**Apply to less competitive companies, including 'unsexy' ones.** Look for local, non-tech companies, and ones who are not offering full-remote positions. Not only do these positions get fewer applicants: if they are onsite or hybrid, they'll also hire more juniors. This is because they can onboard junior people better.

**Apply to local companies, not just remote ones.** Full-remote roles will get many more applications than local ones. Those full-remote roles are also far more likely to be filled by someone with past experience, as the hiring manager is more likely to see such a hire as a lower-risk one. Know that you'll have far better chances if applying locally, especially if it is for a position where being in the office - at least a few days a week - is a requirement. Many of the experienced people won't apply, and neither will people outside the area. When getting your first or your second job, you should consider the competition you might have, and try to apply to places that will have less of it.

**Apply to consultancies/developer agencies as well.** The software consultancy business model requires hiring and training junior developers. They also give exposure to different environments and technologies. Agencies are a great stepping stone into the industry, and one from which many people move on to higher-paying opportunities a few years later. Be aware that some agencies have poor working practices: if you land in one of these, try to move on, instead of being stuck for too long.
**Know that almost no company will sponsor relocation with visas for entry-level positions.** For people who need relocation, some companies do sponsor visas: but they do this for key positions that they cannot hire for locally. New grad positions almost never fall into this category. There are a handful of exceptions - like some of Big Tech offering visas for interns to return from strategic locations they want to hire for.

There are, however, several companies that sponsor visas for university new graduates already in the country. For example, in the US, it's common enough for tech companies and startups to be willing to sponsor students on OPT with the STEM extension, and the same is true in the UK. The difference is that these are students who are already in the country, and graduates of a locally known university or college.

However, you can safely assume that for any position which is in a different country and needs a visa for you to work there, you will not hear back if you are an early-career software engineer, or someone looking for your first job. You might as well save an application that will not go anywhere.

## Improve your resume while you keep applying

**Tailor your resume to each position** you apply to. If you don't have a job - yet - you can request a copy of my book, The Tech Resume Inside Out, for free. More than 1,000 people have done so: I approve all non-spam requests.

**Build your experience while you are job hunting.** Which person is more likely to be hired in the next 12 months: one who spends 12 months applying nonstop, or one who spends time applying, but also built a side project that anyone can try out, contributed to an open-source project, and did a contract gig on one of the popular freelancer marketplaces? It will be the latter. Balance your time between applying and making your profile stand out more.

**Contribute to open source in non-trivial ways.** Most people you compete with will have similar, non-production-grade projects on their resume. Those who contribute to popular open source libraries used by thousands of people and companies in production really stand out. Look for projects like Awesome First PR opportunities and explore open-source projects you use. This route *will* be hard: much harder than just applying to jobs all day. This is why you stand out from other applicants if you persevere and start contributing.

**Read and apply** How to Be a Kickass New Software Engineer from bootcamp grad turned senior engineer Raymond Gan. Also read the featured articles in Raymond Gan's LinkedIn profile, which are all first-hand advice pieces on what does, and what does not, work for bootcamp grads, in Raymond's experience.

**Consider taking on short projects for little payment or for free.** If you are unable to land a fulltime job, it might be because you lack experience shipping something in the real world. One way to get this experience is by doing shorter-term projects, where you might be losing money on your time spent, but you ship something to production. You could build a website or a mobile app for a friend or someone you know who needs something like this but cannot afford to pay market rate. You could build your own such app as well. You can also connect with strangers for projects: but this last option is the one I'd suggest the least, as there's a slippery slope between having your skills exploited and getting references for real-world work, even if you are not paid market rates.
When I started out, I did several freelance projects while I was at university, where I charged below market rates. Those projects served as good references later, and helped me stand out from candidates who only had classroom projects and the usual CRUD app to showcase.

**Not all new grads will get job offers. How will you stand out?** The new grad software engineering market is very much an employer's market: meaning there are fewer open positions than people applying to those positions. This means not all new grads will succeed in securing a job. Knowing this: you need to stand out. What are ways that you will do this, knowing your competition? Standing out can be done in several ways:

- **Pedigree**. The most obvious one and the hardest to get. Graduate at a well-known school, intern at a known company, have references who refer you to their workplaces.
- **Depth**. Bring more depth in a field or two than your peers. Are you already an expert in a programming language, having read the 'in depth' books, and do you have a GitHub repository using advanced features of the language? Do you contribute to core projects in the space: something mostly experienced engineers do?
- **Breadth**. Do you have experience shipping a web app, a mobile app and a backend service, even if a small one? Most new grads lack such breadth.
- **Non-trivial projects.** Have you shipped things well beyond the curriculum work that all of your peers also have? As a hiring manager, it catches my eye when I see people who have built more complex solutions - that I can take a look at - outside the CRUD apps that most bootcamp grads and new grads showcase as part of their college work.
- **Papers and in-depth blog posts.** Have you published about your experiences and learnings, either as an academic paper or on a professional blog?
- **Motivation**. Are you motivated to grow in the field, and do you have some way to prove this is not just words? It could be anything of the above, or something else as well.
- **Putting in extra effort.** When applying to a company, do you put in any extra effort that very few - or no other - applicants do? For example, when applying to a startup which has a public API to use, did you build a project that uses this API, and add it to your resume on the first line? You can bet almost no one did.
- **When aiming to break into DevRel** (developer relations): here is advice from Harry Tormey and from Nader Dabit.

The above are some of many ways to stand out. Putting in the effort to stand out might not get results immediately. However, without standing out from a crowd of applicants, you are far less likely to see success with your applications.

## Don't be picky with offers

**If you only have one offer: take it**. You'll read advice about how to negotiate compensation between different offers, and how hot the market is in tech. Ignore this: most of it applies to people with years of experience behind them. I've personally had a pretty good career, eventually making it to places like Skype and Uber: but, when starting out, I just took the first job I was able to get in Hungary. For my second job, when moving to the UK, getting a first offer at a company I was not excited to work at - long commute, uninteresting domain - luckily resulted in other companies calling me back, and I got two more offers. Without this, I would have absolutely taken that one job offer I had.

**It's more important that you get started than that you get a perfect start.** You can course-correct as you go.
It took me about 8 years to work my way up to Uber. I freelanced during university, shipping a variety of projects. My first fulltime job was at a consultancy in Hungary, then a consultancy in the UK, and only then did I get my first "bigger name" of JP Morgan on my resume. From there on, it was much easier for better-known companies to notice me, and about five years into my career I got a call from Skype, which was the first widely known tech company I worked at.

Getting started in the industry, and taking that first opportunity with the local Hungarian company, was far more important for my career than a perfect start. And I'm still grateful for all that I learned during two years at the company called Sense/Net you've probably never heard of.

**If you're a bootcamp grad**: know that some of the "learn to code in X months" bootcamps don't do a good enough job of giving you the skills needed to get a software engineering job. Consider programs like Launch School, which takes much longer than a bootcamp and is not a bootcamp-like approach, but whose graduates get offers even in this challenging market.

## A note to hiring managers

For hiring managers and engineering managers reading this article: be aware of the current market dynamics. While as a new grad software engineer it is very hard to find that first job, as a hiring manager it's never been easier to hire very motivated and talented new grads. If you have headcount to fill, consider opening up at least a few new grad positions, once you have the seniority ratio in place to support these people. You'll save budget by hiring these people, they bring enthusiasm, and you could change the career trajectory of every such hire you make. If you hire new grads, see my advice on growing a junior-heavy team and on onboarding engineers to your team.

## Know it will be challenging

Getting your foot in the industry's door is very hard. Much of the online content along the lines of 'how I got 5 offers in 2 weeks' suffers from survivor bias and won't reflect the reality of most people, or how challenging it is to get started. As a ray of hope: once you make it in, it will only get easier with every passing year.

Good luck - it is an especially challenging time to get started in the industry.

*After you land that first position, you might find **advice to myself when starting out as a software developer** relevant.*

Subscribe to my weekly newsletter to get articles like this in your inbox. It's a pretty good read - and the #1 tech newsletter on Substack.
true
true
true
We could well be seeing one of the most difficult times to break into software engineering. Here is my advice to maximise chances of getting that first software engineering job.
2024-10-12 00:00:00
2022-08-06 00:00:00
null
article
pragmaticengineer.com
The Pragmatic Engineer
null
null
33,599,027
https://uk.finance.yahoo.com/news/paris-london-stock-market-largest-europe-120718587.html
Paris just overtook London to become Europe’s biggest stock market
Pedro Goncalves
# Paris overtakes London to become Europe’s biggest stock market

London has lost its crown as Europe’s biggest stock market to Paris as economic growth concerns weigh on UK assets. Paris has taken the top spot after the combined market capitalisation of its major share exchanges overtook those in the UK capital, according to an index compiled by Bloomberg.

**Read more:** **UK business confidence hits lowest level since 2009**

Domestic-focused UK shares have slumped this year, while French luxury goods-makers like LVMH (MC.PA) and Gucci owner Kering (KER.PA) have recently been boosted by optimism over a potential easing of China’s zero-COVID policy.

London loses its crown as Europe’s biggest stock market to Paris https://t.co/XnDNmL2J1X — Bloomberg (@business) November 14, 2022

Currency movements have also helped Paris, with the pound (GBPUSD=X) down 13% against the dollar this year, while the euro (EURUSD=X) has only lost 9%. Investors rejected an announcement by the Liz Truss government in late September that it would slash taxes while ramping up borrowing in a bid to produce faster growth, citing concerns that the plan would push up inflation just as the Bank of England wants to bring it down. Fears also crept in about the sustainability of government debt at a time of rapidly rising interest rates.

The pound crashed to a record low against the US dollar, while bond prices slumped, sending yields soaring. That pushed mortgage rates higher, and brought some pension funds close to default. The mini-budget which eventually paved the way for Truss' downfall may have cost the country's economy £30bn, according to the Resolution Foundation think tank.

**Read more:** **FTSE 100 and European stocks higher ahead of UK autumn budget**

Bloomberg added that the market cap gap between the UK and French stock markets has been narrowing from about $1.5tn since the Brexit vote in 2016. UK equities are now worth about $2.821tn compared with about $2.823tn for French equities, by Bloomberg’s calculations.
true
true
true
London was overtaken by Paris as economic growth concerns hit UK assets.
2024-10-12 00:00:00
2022-11-14 00:00:00
https://s.yimg.com/ny/api/res/1.2/2G8DXXaqTrnAItR_R_F8bg--/YXBwaWQ9aGlnaGxhbmRlcjt3PTEyMDA7aD02NDY-/https://s.yimg.com/os/creatr-uploaded-images/2022-11/a5d8e4b0-6411-11ed-9d42-fca351031e1e
article
yahoo.com
Yahoo Finance
null
null
36,744,888
https://www.geocities.ws/oldternet/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
14,796,215
https://uploadvr.com/jigspace-arkit-gives-ar-manuals-just-anything/
JigSpace With ARKit Gives You AR Manuals For Just About Anything
Jamie Feltham
We’ve seen how AR can thrive as an instructional platform before, but what if there was a singular AR app that could teach you how to work with practically anything? JigSpace wants to be that app.

You can think of JigSpace as a sort of ‘How To’ manual that aims to cover practically any product and device on the planet with a little help from Apple’s ARKit. Say you just bought a new chair, for example, and wanted a little more help setting it up than the traditional paper instructions. JigSpace would present a 3D model of that chair unpacked and then provide a step-by-step guide to setting it up in real time, allowing you to view virtual construction from any angle and then replicate that in real life. Check out the video above, which uses some simple object recognition to instantly present instructions for removing SIM cards from phones or, in other cases, give you a detailed rundown of how an espresso machine works.

Crucially, JigSpace works off of user-generated content. In fact, a wide range of its 3D models and instructions is already available via web browser and smartphone app; ARKit integration is simply bringing it all into the real world through your iPhone’s screen for much greater ease of use. Imagine what the platform could one day do inside AR headsets too.

JigSpace CEO Zac Duff explained to me over email that the company plans to grow its library through three key factors: partnerships, applying new technologies, and continued focus on a creative community. He explained that JigSpace is already working with others on partnerships that will be announced in the coming months.

“There are some amazing and novel approaches to generating both models and accompanying instructions that we’re researching,” Duff added. “This is exciting, it means a lot of content can be accurately generated. Combine that with existing knowledge bases and a compelling solution is possible.

“I think ultimately though, the key is an engaged community. Look at what Jack Herrick has done with WikiHow, or Wikipedia. They have amazing communities of passionate users that take pride in creating, sharing, and protecting the integrity of knowledge. This is what we’re building.”

If Duff and co. pull it off, then JigSpace could be one of the most important AR apps on the horizon.
true
true
true
We’ve seen how AR can thrive as an instructional platform before, but what if there was a singular AR app that could teach you how to work with practically anything? JigSpace wants to be that app. You can think of JigSpace as a sort of ‘How To’ manual that
2024-10-12 00:00:00
2017-07-17 00:00:00
https://www.uploadvr.com…7/07/Jispace.png
article
uploadvr.com
UploadVR
null
null
13,449,519
https://tresorit.com/
Tresorit – secure file exchange & collaboration made easy
null
# Tresorit – secure file exchange & collaboration made easy

Protect and optimize your digital work and life while taking control of your data—with one zero-knowledge end-to-end encrypted platform.

- Securely store, share, scan, and sign sensitive files in one place
- Engage with teams, clients, and partners, ensuring confidentiality
- Stay productive using our integrations - Google, Outlook and more

4.9 on Capterra · 4.5 on G2.com

## Trusted by 12,000+ organizations worldwide

## How our customers use Tresorit

## Manage sensitive collaboration with one platform

- Tresorit SecureCloud: Store, sync, and share your most precious files in a secure encrypted cloud where you have full control.
- Tresorit FileSharing: When you already have storage but still need to share files securely with end-to-end control.
- Tresorit eSign (add-on): Sign files in just a few clicks with your digital signature and manage your entire document life-cycle efficiently.
- Tresorit EmailEncryption (add-on): Share confidential information and send attachments securely with just one click.

## Secure your digital world with Tresorit

Start storing and exchanging files with ease and control.

## Stay on top of security trends

Learn how to guarantee information security with end-to-end encryption
true
true
true
Share files securely with anyone using encrypted cloud storage. Get the highest standard of data security in the cloud.
2024-10-12 00:00:00
2022-01-01 00:00:00
https://cdn.tresorit.com…663cb6264907.png
null
tresorit.com
tresorit.com
null
null
4,172,779
http://info.zetta.net/mac-backup-guide-small-business/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
20,798,734
https://datree.io/git-commands/#git-diff-between-branches
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
1,960,827
http://en.wikipedia.org/wiki/Paul_Erd%C5%91s
Paul Erdős - Wikipedia
null
# Paul Erdős

| | |
|---|---|
| Born | 26 March 1913, Budapest, Austria-Hungary |
| Died | 20 September 1996 (aged 83), Warsaw, Poland |
| Nationality | Hungarian |
| Alma mater | Royal Hungarian Pázmány Péter University |
| Known for | Namesakes: a very large number of results and conjectures (more than 1,500 articles), and a very large number of coauthors (more than 500) |
| Awards | Wolf Prize (1983/84), AMS Cole Prize (1951) |
| Fields | Pure mathematics |
| Doctoral advisor | Lipót Fejér |

**Paul Erdős** (Hungarian: *Erdős Pál* [ˈɛrdøːʃ ˈpaːl]; 26 March 1913 – 20 September 1996) was a Hungarian mathematician. He was one of the most prolific mathematicians and producers of mathematical conjectures[2] of the 20th century.[3] Erdős pursued and proposed problems in discrete mathematics, graph theory, number theory, mathematical analysis, approximation theory, set theory, and probability theory.[4] Much of his work centered around discrete mathematics, cracking many previously unsolved problems in the field. He championed and contributed to Ramsey theory, which studies the conditions in which order necessarily appears. Overall, his work leaned towards solving previously open problems, rather than developing or exploring new areas of mathematics. Erdős published around 1,500 mathematical papers during his lifetime, a figure that remains unsurpassed.[5] He firmly believed mathematics to be a social activity, living an itinerant lifestyle with the sole purpose of writing mathematical papers with other mathematicians. He was known both for his social practice of mathematics, working with more than 500 collaborators, and for his eccentric lifestyle; *Time* magazine called him "The Oddball's Oddball".[6] He devoted his waking hours to mathematics, even into his later years—indeed, his death came at a mathematics conference in Warsaw.[7] Erdős's prolific output with co-authors prompted the creation of the Erdős number, the number of steps in the shortest path between a mathematician and Erdős in terms of co-authorships.

## Life

Paul Erdős was born on 26 March 1913, in Budapest, Austria-Hungary,[8] the only surviving child of Anna (née Wilhelm) and Lajos Erdős (né Engländer).[9][10] His two sisters, aged three and five, both died of scarlet fever a few days before he was born.[11] His parents, both Jewish, were high school mathematics teachers. His fascination with mathematics developed early. He was raised partly by a German governess[12] because his father was held captive in Siberia as an Austro-Hungarian prisoner of war during 1914–1920,[10] causing his mother to have to work long hours to support their household. His father had taught himself English while in captivity, but mispronounced many words. When Lajos later taught his son to speak English, Paul learned his father's pronunciation, which he continued to use for the rest of his life.[13] He taught himself to read through mathematics texts that his parents left around in their home. By the age of five, given a person's age, he could calculate in his head how many seconds they had lived.[12] Due to his sisters' deaths, he had a close relationship with his mother, with the two of them reportedly sharing the same bed until he left for college.[14][15] When he was 16, his father introduced him to two subjects that would become lifetime favourites—infinite series and set theory.
In high school, Erdős became an ardent solver of the problems that appeared each month in *KöMaL*, the "Mathematical and Physical Journal for Secondary Schools".[16] Erdős began studying at the University of Budapest when he was 17 after winning a national examination. At the time, admission of Jews to Hungarian universities was severely restricted under the *numerus clausus*.[13][17] By the time he was 20, he had found a proof for Chebyshev's theorem.[17] In 1934, at the age of 21, he was awarded a doctorate in mathematics.[17] Erdős's thesis advisor was Lipót Fejér, who was also the thesis advisor for John von Neumann, George Pólya, and Paul (Pál) Turán. He took up a post-doctoral fellowship at Manchester, as Jews in Hungary were suffering oppression under the authoritarian regime. While there he met Godfrey Harold Hardy and Stan Ulam.[13] Because he was Jewish, Erdős decided Hungary was dangerous and left the country, relocating to the United States in 1938.[17] Many members of Erdős's family, including two of his aunts, two of his uncles, and his father, died in Budapest during World War II. His mother was the only one who survived. He was living in America and working at the Institute for Advanced Study in Princeton at the time.[17][18] However, his fellowship at Princeton was extended by only six months rather than the expected year because Erdős did not conform to the standards of the place; they found him "uncouth and unconventional".[13] Described by his biographer, Paul Hoffman, as "probably the most eccentric mathematician in the world," Erdős spent most of his adult life living out of a suitcase.[19] Except for some years in the 1950s, when he was not allowed to enter the United States based on the accusation that he was a Communist sympathizer, his life was a continuous series of going from one meeting or seminar to another.[19] During his visits, Erdős expected his hosts to lodge him, feed him, and do his laundry, along with anything else he needed, as well as arrange for him to get to his next destination.[19] Ulam left his post at the University of Wisconsin–Madison in 1943 to work on the Manhattan Project in Los Alamos, New Mexico with other mathematicians and physicists. He invited Erdős to join the project, but the invitation was withdrawn when Erdős expressed a desire to return to Hungary after the war.[13] On 20 September 1996, at the age of 83, he had a heart attack and died while attending a conference in Warsaw.[20] These circumstances were close to the way he wanted to die. He once said:

> I want to be giving a lecture, finishing up an important proof on the blackboard, when someone in the audience shouts out, 'What about the general case?'. I'll turn to the audience and smile, 'I'll leave that to the next generation,' and then I'll keel over.[20]

Erdős never married and had no children.[9] He is buried next to his mother and father in the Jewish Kozma Street Cemetery in Budapest.[21] For his epitaph, he suggested "I've finally stopped getting dumber." (Hungarian: *"Végre nem butulok tovább"*).[22] Erdős's name contains the Hungarian letter "ő" ("o" with double acute accent), but is often incorrectly written as *Erdos* or *Erdös* either "by mistake or out of typographical necessity".[23]

## Career

In 1934, Erdős moved to Manchester, England, to be a guest lecturer. In 1938, he accepted his first American position as a scholarship holder at the Institute for Advanced Study, Princeton, New Jersey, for the next ten years.
Despite outstanding papers with Mark Kac and Aurel Wintner on probabilistic number theory, Pál Turán in approximation theory, and Witold Hurewicz on dimension theory, his fellowship was not continued, and Erdős was forced to take positions as a wandering scholar at UPenn, Notre Dame, Purdue, Stanford, and Syracuse.[24] He would not stay long in one place, instead traveling among mathematical institutions until his death. As a result of the Red Scare and McCarthyism,[25][26][27] in 1954, the Immigration and Naturalization Service denied Erdős, a Hungarian citizen, a re-entry visa into the United States.[28] Teaching at the University of Notre Dame at the time, Erdős could have chosen to remain in the country. Instead, he packed up and left, albeit requesting reconsideration from the U.S. Immigration Services at periodic intervals. At some point he moved to live in Israel, and was given a position for three months at the Hebrew University in Jerusalem, and then a "permanent visiting professor" position at the Technion. Hungary at the time was under the Warsaw Pact with the Soviet Union. Although Hungary limited the freedom of its own citizens to enter and exit the country, in 1956 it gave Erdős the exclusive privilege of being allowed to enter and exit the country as he pleased. In 1963, the U.S. Immigration Service granted Erdős a visa, and he resumed teaching at and traveling to American institutions. Ten years later, in 1973, the 60-year-old Erdős voluntarily left Hungary.[29] During the last decades of his life, Erdős received at least fifteen honorary doctorates. He became a member of the scientific academies of eight countries, including the U.S. National Academy of Sciences and the UK Royal Society.[30] He became a foreign member of the Royal Netherlands Academy of Arts and Sciences in 1977.[31] Shortly before his death, he renounced his honorary degree from the University of Waterloo over what he considered to be unfair treatment of colleague Adrian Bondy.[32][33]

### Mathematical work

Erdős was one of the most prolific publishers of papers in mathematical history, comparable only with Leonhard Euler; Erdős published more papers, mostly in collaboration with other mathematicians, while Euler published more pages, mostly by himself.[34] Erdős wrote around 1,525 mathematical articles in his lifetime,[35] mostly with co-authors. He strongly believed in and practiced mathematics as a social activity,[36] having 511 different collaborators in his lifetime.[37] In his mathematical style, Erdős was much more of a "problem solver" than a "theory developer" (see "The Two Cultures of Mathematics"[38] by Timothy Gowers for an in-depth discussion of the two styles, and why problem solvers are perhaps less appreciated).
Joel Spencer states that "his place in the 20th-century mathematical pantheon is a matter of some controversy because he resolutely concentrated on particular theorems and conjectures throughout his illustrious career."[39] Erdős never won the Fields Medal (the highest mathematical prize available during his lifetime), nor did he coauthor a paper with anyone who did,[40] a pattern that extends to other prizes.[41] He did win the 1983/84 Wolf Prize, "for his numerous contributions to number theory, combinatorics, probability, set theory and mathematical analysis, and for personally stimulating mathematicians the world over".[42] In contrast, the works of the three winners after were recognized as "outstanding", "classic", and "profound", and the three before as "fundamental" or "seminal". Of his contributions, the development of Ramsey theory and the application of the probabilistic method especially stand out. Extremal combinatorics owes to him a whole approach, derived in part from the tradition of analytic number theory. Erdős found a proof for Bertrand's postulate which proved to be far neater than Chebyshev's original one. He also discovered the first elementary proof for the prime number theorem, along with Atle Selberg. However, the circumstances leading up to the proofs, as well as publication disagreements, led to a bitter dispute between Erdős and Selberg.[43][44] Erdős also contributed to fields in which he had little real interest, such as topology, where he is credited as the first person to give an example of a totally disconnected topological space that is not zero-dimensional, the Erdős space.[45]

### Erdős's problems

Erdős had a reputation for posing new problems as well as solving existing ones – Ernst Strauss called him "the absolute monarch of problem posers".[7] Throughout his career, Erdős would offer payments for solutions to unresolved problems.[46] These ranged from $25 for problems that he felt were just out of the reach of current mathematical thinking (both his and others') up to $10,000[47] for problems that were both difficult to attack and mathematically significant. Some of these problems have since been solved, including the most lucrative – Erdős's conjecture on prime gaps was solved in 2014, and the $10,000 paid.[48] There are thought to be at least a thousand remaining unsolved problems, though there is no official or comprehensive list. The offers remained active despite Erdős's death; Ronald Graham was the (informal) administrator of solutions, and a solver could receive either an original check signed by Erdős before his death (for memento only; it cannot be cashed) or a cashable check from Graham.[49] British mathematician Thomas Bloom started a website dedicated to Erdős's problems in 2024.[50] Perhaps the most mathematically notable of these problems is the Erdős conjecture on arithmetic progressions: if the sum of the reciprocals of a sequence of integers diverges, then the sequence contains arithmetic progressions of arbitrary length. If true, it would solve several other open problems in number theory (although one main implication of the conjecture, that the prime numbers contain arbitrarily long arithmetic progressions, has since been proved independently as the Green–Tao theorem). The payment for the solution of the problem is currently worth US$5,000.[51] The most familiar problem with an Erdős prize is likely the Collatz conjecture, also called the 3*N* + 1 problem. Erdős offered $500 for a solution.
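Stated symbolically, for reference (a standard modern rendering of the two conjectures just mentioned; the set $A$ and the map $f$ are our notation, not Erdős's):

```latex
% Erdős conjecture on arithmetic progressions: for a set A of positive
% integers, divergence of the reciprocal sum forces arbitrarily long
% arithmetic progressions.
\sum_{n \in A} \frac{1}{n} = \infty
  \;\Longrightarrow\;
  A \text{ contains a } k\text{-term arithmetic progression for every } k \geq 1

% Collatz (3N + 1) conjecture: iterating this map from any positive
% integer is conjectured to reach 1 eventually.
f(n) =
  \begin{cases}
    n / 2  & \text{if } n \text{ is even} \\
    3n + 1 & \text{if } n \text{ is odd}
  \end{cases}
```

The Green–Tao theorem settles the special case of the first statement in which $A$ is the set of primes, since the sum of the reciprocals of the primes diverges.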
### Collaborators

Erdős's most frequent collaborators include Hungarian mathematicians András Sárközy (62 papers) and András Hajnal (56 papers), and American mathematician Ralph Faudree (50 papers). Other frequent collaborators were the following:[52]

- Richard Schelp (42 papers)
- C. C. Rousseau (35 papers)
- Vera Sós (35 papers)
- Alfréd Rényi (32 papers)
- Pál Turán (30 papers)
- Endre Szemerédi (29 papers)
- Ron Graham (28 papers)
- Stefan Burr (27 papers)
- Carl Pomerance (23 papers)
- Joel Spencer (23 papers)
- János Pach (21 papers)
- Miklós Simonovits (21 papers)
- Ernst G. Straus (20 papers)
- Melvyn B. Nathanson (19 papers)
- Jean-Louis Nicolas (19 papers)
- Richard Rado (18 papers)
- Béla Bollobás (18 papers)
- Eric Charles Milner (15 papers)
- András Gyárfás (15 papers)
- John Selfridge (14 papers)
- Fan Chung (14 papers)
- Richard R. Hall (14 papers)
- George Piranian (14 papers)
- István Joó (12 papers)
- Zsolt Tuza (12 papers)
- A. R. Reddy (11 papers)
- Vojtěch Rödl (11 papers)
- Pál Révész (10 papers)
- Zoltán Füredi (10 papers)

For other co-authors of Erdős, see the list of people with Erdős number 1 in List of people by Erdős number.

## Erdős number

Because of his prolific output, friends created the Erdős number as a tribute. An Erdős number describes a person's degree of separation from Erdős himself, based on their collaboration with him, or with another who has their own Erdős number. Erdős alone was assigned the Erdős number of 0 (for being himself), while his immediate collaborators could claim an Erdős number of 1, their collaborators have Erdős number at most 2, and so on. Approximately 200,000 mathematicians have an assigned Erdős number,[53] and some have estimated that 90 percent of the world's active mathematicians have an Erdős number smaller than 8 (not surprising in light of the small-world phenomenon). Due to collaborations with mathematicians, many scientists in fields such as physics, engineering, biology, and economics also have Erdős numbers.[54] Several studies have shown that leading mathematicians tend to have particularly low Erdős numbers.[55] For example, the roughly 268,000 mathematicians with a known Erdős number have a median value of 5.[56] In contrast, the median Erdős number of Fields Medalists is 3.[57] As of 2015, approximately 11,000 mathematicians have an Erdős number of 2 or less.[58][59] Collaboration distances will necessarily increase over long time scales, as mathematicians with low Erdős numbers die and become unavailable for collaboration. The American Mathematical Society provides a free online tool to determine the Erdős number of every mathematical author listed in the Mathematical Reviews catalogue.[60] The Erdős number was most likely first defined by Casper Goffman,[61] an analyst whose own Erdős number is 2; Goffman co-authored with mathematician Richard B. Darst, who co-authored with Erdős.[62] Goffman published his observations about Erdős's prolific collaboration in a 1969 article titled "And what is your Erdős number?"[63] Jerry Grossman has written that it could be argued that Baseball Hall of Famer Hank Aaron can be considered to have an Erdős number of 1 because they both autographed the same baseball (for Carl Pomerance) when Emory University awarded them honorary degrees on the same day.[64] Erdős numbers have also been proposed for an infant, a horse, and several actors.[65]
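The computation behind these numbers is just a breadth-first search over the co-authorship graph. A minimal sketch in JavaScript (the toy graph is invented purely for illustration; real data would come from the Mathematical Reviews catalogue mentioned above):

```js
// Breadth-first search from "Erdős" over an undirected co-authorship graph:
// each person's Erdős number is their distance from Erdős in the graph.
function erdosNumbers(coauthors) {
  const dist = new Map([["Erdős", 0]])
  const queue = ["Erdős"]
  while (queue.length > 0) {
    const person = queue.shift()
    for (const partner of coauthors.get(person) ?? []) {
      if (!dist.has(partner)) {
        dist.set(partner, dist.get(person) + 1)
        queue.push(partner)
      }
    }
  }
  return dist // anyone absent from the map has an infinite Erdős number
}

// Invented toy graph: Rényi co-authored with Erdős; "X" only with Rényi.
const graph = new Map([
  ["Erdős", ["Rényi"]],
  ["Rényi", ["Erdős", "X"]],
  ["X", ["Rényi"]]
])
console.log(erdosNumbers(graph)) // Map { "Erdős" → 0, "Rényi" → 1, "X" → 2 }
```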
## Personality

> Another roof, another proof.
> — Paul Erdős[66]

Possessions meant little to Erdős; most of his belongings would fit in a suitcase, as dictated by his itinerant lifestyle. Awards and other earnings were generally donated to people in need and various worthy causes. He spent most of his life traveling between scientific conferences, universities and the homes of colleagues all over the world. He earned enough in stipends from universities as a guest lecturer, and from various mathematical awards, to fund his travels and basic needs; money left over he used to fund cash prizes for proofs of "Erdős's problems" (see above). He would typically show up at a colleague's doorstep and announce "my brain is open", staying long enough to collaborate on a few papers before moving on a few days later. In many cases, he would ask the current collaborator about whom to visit next. His colleague Alfréd Rényi said, "a mathematician is a machine for turning coffee into theorems",[67] and Erdős drank copious quantities; this quotation is often attributed incorrectly to Erdős,[68] but Erdős himself ascribed it to Rényi.[69] After his mother's death in 1971 he started taking antidepressants and amphetamines, despite the concern of his friends, one of whom (Ron Graham) bet him $500 that he could not stop taking them for a month. Erdős won the bet, but complained that it impacted his performance: "You've showed me I'm not an addict. But I didn't get any work done. I'd get up in the morning and stare at a blank piece of paper. I'd have no ideas, just like an ordinary person. You've set mathematics back a month."[70] After he won the bet, he promptly resumed his use of Ritalin and Benzedrine.[71] He had his own idiosyncratic vocabulary; although an agnostic atheist,[72][73] he spoke of "The Book", a visualization of a book in which God had written down the best and most elegant proofs for mathematical theorems.[74] Lecturing in 1985 he said, "You don't have to believe in God, but you should believe in *The Book*." He himself doubted the existence of God.[75][76] He playfully nicknamed God "the SF" (for "Supreme Fascist"), accusing him of hiding his socks and Hungarian passports, and of keeping the most elegant mathematical proofs to himself. When he saw a particularly beautiful mathematical proof he would exclaim, "This one's from *The Book*!" This later inspired a book titled *Proofs from the Book*. Other idiosyncratic elements of Erdős's vocabulary include:[71]

- Children were referred to as "epsilons" (because in mathematics, particularly calculus, an arbitrarily small positive quantity is commonly denoted by the Greek letter (ε)).
- Women were "bosses" who "captured" men as "slaves" by marrying them. Divorced men were "liberated".
- People who stopped doing mathematics had "died", while people who died had "left".
- Alcoholic drinks were "poison".
- Music (except classical music) was "noise".
- To be considered a hack was to be a "Newton".
- To give a mathematical lecture was "to preach".
- Mathematical lectures themselves were "sermons".[77]
- To give an oral exam to students was "to torture" them.

He gave nicknames to many countries, examples being: the U.S. was "samland" (after Uncle Sam)[71] and the Soviet Union was "joedom" (after Joseph Stalin).[71] He claimed that Hindi was the best language because words for old age (*bud̩d̩hā*) and stupidity (*buddhū*) sounded almost the same.[78]

### Signature

Erdős signed his name "Paul Erdos P.G.O.M." When he became 60, he added "L.D.", at 65 "A.D.", at 70 "L.D."
(again), and at 75 "C.D."

- P.G.O.M. represented "Poor Great Old Man"
- The first L.D. represented "Living Dead"
- A.D. represented "Archaeological Discovery"
- The second L.D. represented "Legally Dead"
- C.D. represented "Counts Dead"[79][80]

## Legacy

### Books and films

Erdős is the subject of at least three books: two biographies (Hoffman's *The Man Who Loved Only Numbers* and Schechter's *My Brain is Open*, both published in 1998) and a 2013 children's picture book by Deborah Heiligman (*The Boy Who Loved Math: The Improbable Life of Paul Erdős*).[81] He is also the subject of George Csicsery's biographical documentary film *N is a Number: A Portrait of Paul Erdős,*[82] made while he was still alive.

### Astronomy

In 2021 the minor planet (asteroid) 405571 (temporarily designated 2005 QE87) was formally named "Erdőspál" to commemorate Erdős, with the citation describing him as "a Hungarian mathematician, much of whose work centered around discrete mathematics. His work leaned towards solving previously open problems, rather than developing or exploring new areas of mathematics."[83] The naming was proposed by "K. Sárneczky, Z. Kuli" (Kuli being the asteroid's discoverer).

## See also

- List of topics named after Paul Erdős – including conjectures, numbers, prizes, and theorems
- Box-making game
- Covering system – collection of finitely many residue classes whose union contains every integer
- Dimension (graph theory) – property of undirected graphs related to their representations in spaces
- Even circuit theorem – theorem that an *n*-vertex graph with no simple cycle of length 2*k* can only have O(n^(1+1/k)) edges
- Friendship graph – Graph of triangles with a shared vertex
- Minimum overlap problem
- Probabilistic method – Nonconstructive method for mathematical proofs
- Probabilistic number theory – Subfield of number theory
- The Martians (scientists) – Group of prominent Hungarian scientists

## References

**^**"Mathematics Genealogy Project". Retrieved 13 August 2012. **^**"The Sum-Product Problem Shows How Addition and Multiplication Constrain Each Other". *Quanta Magazine*. 6 February 2019. Retrieved 6 October 2019. **^**Hoffman, Paul (8 July 2013). "Paul Erdős". *Encyclopædia Britannica*. **^**"Paul Erdos - Hungarian mathematician". *Britannica.com*. Retrieved 2 December 2017. **^**According to "Facts about Erdös Numbers and the Collaboration Graph", using the Mathematical Reviews database, the next highest article count is roughly 823. **^**Lemonick, Michael D. (29 March 1999). "Paul Erdos: The Oddball's Oddball". *Time*. Archived from the original on 6 January 2012. - ^ **a** **b** Kolata, Gina (24 September 1996). "Paul Erdos, 83, a Wayfarer In Math's Vanguard, Is Dead". *The New York Times*. pp. A1 and B8. Retrieved 29 September 2008. **^**"Erdos biography". Gap-system.org. Archived from the original on 7 June 2011. Retrieved 29 May 2010. - ^ **a** **b** Baker, A.; Bollobás, B. (1999). "Paul Erdős 26 March 1913 – 20 September 1996: Elected For.Mem.R.S. 1989". *Biographical Memoirs of Fellows of the Royal Society*. **45**: 147–164. doi:10.1098/rsbm.1999.0011. - ^ **a** **b** Chern, Shiing-Shen; Hirzebruch, Friedrich (2000). *Wolf Prize in Mathematics*. World Scientific. p. 294. ISBN 978-981-02-3945-9. **^**"Paul Erdős". Retrieved 11 June 2015. - ^ **a** **b** Hoffman 1998, p. 66. - ^ **a** **b** **c** **d** **e** "Paul Erdős - Biography". *Maths History*. Retrieved 6 July 2022. **^**Hoffman, Paul (1 July 2016).
""Paul Erdős: The Man Who Loved Only Numbers" video lecture".*YouTube*. The University of Manchester. Retrieved 17 March 2017.**^**Alexander, James (27 September 1998). "Planning an Infinite Stay".*The New York Times*. Retrieved 6 May 2022.**^**Babai, László. "Paul Erdős just left town". Archived from the original on 9 June 2011.- ^ **a****b****c****d**Bruno 2003, p. 120**e** **^**Csicsery, George Paul (2005).*N Is a Number: A Portrait of Paul Erdős*. Berlin; Heidelberg: Springer Verlag. ISBN 3-540-22469-6.- ^ **a****b**Bruno 2003, p. 121**c** - ^ **a**Bruno 2003, p. 122**b** **^**"Erdős Pál sírja - grave 17A-6-29".*agt.bme.hu*. Archived from the original on 4 April 2016. Retrieved 2 December 2017.**^**Hoffman 1998, p. 3.**^**The full quote is "Note the pair of long accents on the "ő," often (even in Erdos's own papers) by mistake or out of typographical necessity replaced by "ö," the more familiar German umlaut which also exists in Hungarian.", from Erdős, Paul; Miklós, D.; Sós, Vera T. (1996).*Combinatorics, Paul Erdős is eighty*.**^**Bollobás 1996, pp. 4.**^**"The wandering mathematician: Paul Erdos".*TheArticle*. 28 July 2023. Retrieved 9 September 2023.**^**"Paul Erdős - Biography".*Maths History*. Retrieved 9 September 2023.**^**Sack, Harald (20 September 2018). "What's your Erdös Number? – The bustling Life of Mathematician Paul Erdös | SciHi Blog". Retrieved 10 September 2023.**^**"Erdos biography". School of Mathematics and Statistics, University of St Andrews, Scotland. January 2000. Retrieved 11 November 2008.**^**Babai, László; Spencer, Joel. "Paul Erdős (1913–1996)" (PDF).*Notices of the American Mathematical Society*.**45**(1). American Mathematical Society.**^**Baker, A.; Bollobás, B. (1999). "Paul Erdõs. 26 March 1913 — 20 September 1996".*Biographical Memoirs of Fellows of the Royal Society*.**45**. The Royal Society: 147–164. doi:10.1098/rsbm.1999.0011. ISSN 0080-4606. S2CID 123517792.**^**"P. Erdös (1913 - 1996)". Royal Netherlands Academy of Arts and Sciences. Archived from the original on 28 July 2020.**^**Erdős, Paul (4 June 1996). "Dear President Downey" (PDF). Archived from the original (PDF) on 15 October 2005. Retrieved 8 July 2014.With a heavy heart I feel that I have to sever my connections with the University of Waterloo, including resigning my honorary degree which I received from the University in 1981 (which caused me great pleasure). I was very upset by the treatment of Professor Adrian Bondy. I do not maintain that Professor Bondy was innocent, but in view of his accomplishments and distinguished services to the University I feel that 'justice should be tempered with mercy.' **^**Transcription of October 2, 1996, article from University of Waterloo Gazette (archive) Archived November 23, 2010, at the Wayback Machine**^**Hoffman 1998, p. 42.**^**Grossman, Jerry. "Publications of Paul Erdös". Retrieved 1 February 2011.**^**Krauthammer, Charles (27 September 1996). "Paul Erdos, Sweet Genius".*The Washington Post*. p. A25. "?".**^**"The Erdős Number Project Data Files". Oakland.edu. 29 May 2009. Retrieved 29 May 2010.**^**Gowers, Timothy (2000). "The Two Cultures of Mathematics" (PDF). In Arnold, V. I.; Atiyah, Michael; Lax, Peter D.; Mazur, Barry (eds.).*Mathematics: Frontiers and Perspectives*. American Mathematical Society. ISBN 978-0821826973.**^**Spencer, Joel (November–December 2000). "Prove and Conjecture!".*American Scientist*.**88**(6). 
This article is a review of*Mathematics: Frontiers and Perspectives***^**"Paths to Erdős - The Erdős Number Project- Oakland University".*oakland.edu*. Retrieved 2 December 2017.**^**From "trails to Erdos" Archived 2015-09-24 at the Wayback Machine, by DeCastro and Grossman, in*The Mathematical Intelligencer*, vol. 21, no. 3 (Summer 1999), 51–63: A careful reading of Table 3 shows that although Erdos never wrote jointly with any of the 42 [Fields] medalists (a fact perhaps worthy of further contemplation)... there are many other important international awards for mathematicians. Perhaps the three most renowned...are the Rolf Nevanlinna Prize, the Wolf Prize in Mathematics, and the Leroy P. Steele Prizes. ... Again, one may wonder why KAPLANSKY is the only recipient of any of these prizes who collaborated with Paul Erdös. (After this paper was written, collaborator Lovász received the Wolf prize, making 2 in all).**^**"Wolf Foundation Mathematics Prize Page". Wolffund.org.il. Archived from the original on 10 April 2008. Retrieved 29 May 2010.**^**Goldfeld, Dorian (2003). "The Elementary Proof of the Prime Number Theorem: an Historical Perspective".*Number Theory: New York Seminar*: 179–192.**^**Baas, Nils A.; Skau, Christian F. (2008). "The lord of the numbers, Atle Selberg. On his life and mathematics" (PDF).*Bull. Amer. Math. Soc*.**45**(4): 617–649. doi:10.1090/S0273-0979-08-01223-8.**^**Henriksen, Melvin. "Reminiscences of Paul Erdös (1913–1996)". Mathematical Association of America. Retrieved 1 September 2008.**^**"Math genius left unclaimed sum".*Edmonton Journal*. Archived from the original on 18 January 2011. Retrieved 16 July 2020.**^**"Prime Gap Grows After Decades-Long Lull". 10 December 2014./**^**KEVIN HARTNETT (5 June 2017). "Cash for Math: The Erdős Prizes Live On".**^**Seife, Charles (5 April 2002). "Erdös's Hard-to-Win Prizes Still Draw Bounty Hunters".*Science*.**296**(5565): 39–40. doi:10.1126/science.296.5565.39. PMID 11935003. S2CID 34952867.**^**"Erdős Problems".*Erdős Problems*. Retrieved 23 April 2024.**^**p. 354, Soifer, Alexander (2008);*The Mathematical Coloring Book: Mathematics of Coloring and the Colorful Life of its Creators*; New York: Springer. ISBN 978-0-387-74640-1**^**List of collaborators of Erdős by number of joint papers Archived 2008-08-04 at the Wayback Machine, from the Erdős number project website.**^**"From Benford to Erdös".*Radio Lab*. Episode 2009-10-09. 30 September 2009. Archived from the original on 18 August 2010. Retrieved 6 February 2016.**^**Grossman, Jerry. "Some Famous People with Finite Erdös Numbers". Retrieved 1 February 2011.**^**De Castro, Rodrigo; Grossman, Jerrold W. (1999). "Famous trails to Paul Erdős" (PDF).*The Mathematical Intelligencer*.**21**(3): 51–63. CiteSeerX 10.1.1.33.6972. doi:10.1007/BF03025416. MR 1709679. S2CID 120046886. Archived from the original (PDF) on 24 September 2015. Original Spanish version in*Rev. Acad. Colombiana Cienc. Exact. Fís. Natur.***23**(89) 563–582, 1999, MR1744115.**^**"Facts about Erdös Numbers and the Collaboration Graph - The Erdös Number Project- Oakland University".*OU-Main-Page*. Retrieved 6 October 2019.**^**"Erdös Numbers in Finance".**^**"Erdos2".**^**"Paths to Erdös - The Erdös Number Project- Oakland University".*OU-Main-Page*. Retrieved 6 October 2019.**^**"mathscinet/collaborationDistance".*ams.org*. Retrieved 2 December 2017.**^**Michael Golomb. "Obituary of Paul Erdös at Purdue".*www.math.purdue.edu*. 
Retrieved 4 May 2022.**^**from the Erdos Number Project**^**Goffman, Casper (1969). "And what is your Erdős number?".*American Mathematical Monthly*.**76**(7): 791. doi:10.2307/2317868. JSTOR 2317868.**^**Grossman, Jerry. "Items of Interest Related to Erdös Numbers".**^**"The Extended Erdös Number Project".*harveycohen.net*. Retrieved 2 December 2017.**^**Chern, Shiing-Shen; Hirzebruch, Friedrich, eds. (2 September 2023).*Wolf Prize in Mathematics*. Vol. 1. World Scientific. p. 293. ISBN 9789814723930.**^**J.J. O'Connor; E.F. Robertson (December 2008). "Biography of Alfréd Rényi".*Maths History*. Retrieved 4 May 2022.**^**Schechter 1998, pp. 155.**^**Erdős, Paul (1995). "Child Prodigies" (PDF).*Mathematics Competitions*.**8**(1): 7–15. Archived from the original (PDF) on 24 March 2012. Retrieved 17 July 2012.**^**Hill, J. Paul Erdos, Mathematical Genius, Human (In That Order)- ^ **a****b****c**Paul, Hoffman. "1. The Story of Paul Erdös and the Search for Mathematical Truth".**d***The Man Who Loved Only Numbers*. Retrieved 4 May 2022. **^**Mulcahy, Colm (26 March 2013). "Centenary of Mathematician Paul Erdős – Source of Bacon Number Concept".*Huffington Post*. Retrieved 13 April 2013.In his own words, "I'm not qualified to say whether or not God exists. I kind of doubt He does. Nevertheless, I'm always saying that the SF has this transfinite Book that contains the best proofs of all mathematical theorems, proofs that are elegant and perfect...You don't have to believe in God, but you should believe in the Book.". **^**Huberman, Jack (2008).*Quotable Atheist: Ammunition for Nonbelievers, Political Junkies, Gadflies, and Those Generally Hell-Bound*. Nation Books. p. 107. ISBN 9781568584195.I kind of doubt He [exists]. Nevertheless, I'm always saying that the SF has this transfinite Book ... that contains the best proofs of all theorems, proofs that are elegant and perfect.... You don't have to believe in God, but you should believe in the Book. **^**Nathalie Sinclair, William Higginson, ed. (2006).*Mathematics and the Aesthetic: New Approaches to an Ancient Affinity*. Springer. p. 36. ISBN 9780387305264.Erdös, an atheist, named 'the Book' the place where God keeps aesthetically perfect proofs. **^**Schechter 1998, pp. 70–71.**^**Raman, Varadaraja (2005).*Variety in Religion And Science: Daily Reflections*. iUniverse. p. 256. ISBN 9780595358403.**^**Strick, Heinz. "Paul Erdős" (PDF).- ^ **a**Bollobás 1996, pp. 6.**b** **^**Schechter 1998, pp. 41.**^**Paul Erdös: N is a number on YouTube, a documentary film by George Paul Csicsery, 1991.**^**Silver, Nate (12 July 2013). "Children's Books Beautiful Minds 'The Boy Who Loved Math' and 'On a Beam of Light'".*The New York Times*. Retrieved 29 October 2014.**^**Csicsery, George Paul,*N Is a Number: A Portrait of Paul Erdös*(Documentary, Biography), retrieved 4 May 2022**^**Working Group on Small Body Nomenclature of the International Astronomical Union (14 May 2021). "WGSBN Bulletin" (PDF).*WGSBN Bulletin*.**1**(1): 29. Retrieved 16 May 2021. ## Sources [edit]- Bruno, Leonard C. (2003) [1999]. *Math and mathematicians : the history of math discoveries around the world*. Baker, Lawrence W. Detroit, Mich.: U X L. ISBN 978-0787638139. OCLC 41497065. - Schechter, Bruce (1998). *My Brain is Open: The Mathematical Journeys of Paul Erdős*. New York: Simon & Schuster. ISBN 978-0-684-84635-4. - Bollobás, Béla (December 1996). "A Life of Mathematics – Paul Erdos, 1913-1996" (PDF). *Focus*.**16**(6). Washington, D.C.: Mathematical Association of America: 4. 
Retrieved 6 May 2022. - Hoffman, Paul (1998). *The Man Who Loved Only Numbers: The Story of Paul Erdős and the Search for Mathematical Truth*. London: Fourth Estate Ltd. ISBN 978-1-85702-811-9.

## Further reading

- Aigner, Martin; Ziegler, Günther (2014). *Proofs from THE BOOK*. Berlin; New York: Springer. doi:10.1007/978-3-662-44205-0. ISBN 978-3-662-44204-3.

## External links

- Erdős's Google Scholar profile
- Searchable collection of (almost) all papers of Erdős
- Database of problems proposed by Erdős
- O'Connor, John J.; Robertson, Edmund F., "Paul Erdős", *MacTutor History of Mathematics Archive*, University of St Andrews
- Paul Erdős at the Mathematics Genealogy Project
- Jerry Grossman at Oakland University. *The Erdös Number Project*
- The Man Who Loved Only Numbers – Royal Society public lecture by Paul Hoffman (video)
- Radiolab: Numbers, with a story on Paul Erdős
- Fan Chung, "Open problems of Paul Erdős in graph theory"
true
true
true
null
2024-10-12 00:00:00
2004-04-17 00:00:00
https://upload.wikimedia…28cropped%29.jpg
website
wikipedia.org
Wikimedia Foundation, Inc.
null
null
12,564,057
https://medium.com/@fagnerbrack/how-to-create-a-hack-with-style-16e42b44d03
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
27,906,658
https://www.nccoe.nist.gov/news/nccoe-announces-technology-collaborators-demonstrate-zero-trust-architectures
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
28,279,981
https://github.com/observablehq/plot/blob/main/CHANGELOG.md
plot/CHANGELOG.md at main · observablehq/plot
Observablehq
Year: **Current (2024)** · 2023 · 2022 · 2021 The new waffle mark 🧇 displays a quantity (or quantitative extent) for a given category; unlike a bar, a waffle is subdivided into cells that allow easier counting, making waffles useful for reading and comparing exact quantities. Plot’s waffle mark is highly configurable: it supports stacking, positive and negative values, rounded corners, partial cells for fractional counts, automatic row or column size determination (with optional override), and more! ``` Plot.plot({ fx: {interval: 10}, color: {legend: true}, marks: [Plot.waffleY(olympians, Plot.groupZ({y: "count"}, {fill: "sex", sort: "sex", fx: "weight", unit: 10}))] }) ``` All marks now support GeoJSON data and GeoJSON property shorthand, making it easier to work with GeoJSON. For example, below the data `counties` is a GeoJSON FeatureCollection, and `unemployment` refers to a property on each feature; the **fill** option is thus shorthand for `(d) => d.properties.unemployment`. The geo mark now also supports the **tip** option (via an implicit centroid transform), making it easier to use Plot’s interactive tooltips. ``` Plot.plot({ projection: "albers-usa", color: { type: "quantile", n: 9, scheme: "blues", label: "Unemployment (%)", legend: true }, marks: [ Plot.geo(counties, { fill: "unemployment", title: (d) => `${d.properties.name} ${d.properties.unemployment}%`, tip: true }) ] }) ``` All marks now also support column name channel shorthand when using Apache Arrow tables as data, and we’ve added detection of Arrow date-type columns. (Arrow represents temporal data using BigInt rather than Date.) `Plot.dot(gistemp, {x: "Date", y: "Anomaly"}).plot() // gistemp is an Arrow Table!` The rect-like marks (rect, bar, cell, and frame) now support individual rounding options for each side (**rx1**, **ry1**, *etc.*) and corner (**rx1y1**, **rx2y1**, *etc.*). This allows you to round just the top side of rects. You can even use a negative corner radius on the bottom side for seamless stacking, as in the histogram of Olympic athletes below. ``` Plot.plot({ color: {legend: true}, marks: [ Plot.rectY(olympians, Plot.binX({y: "count"}, {x: "weight", fill: "sex", ry2: 4, ry1: -4, clip: "frame"})), Plot.ruleY([0]) ] }) ``` Plot now respects the projection **domain** when determining the default plot height. Previously, the map below would use a default square aspect ratio for the *conic-conformal* projection regardless of the specified **domain**, but now the map is perfectly sized to fit North Carolina. (Plot also now chooses a smarter default plot height when the ordinal *y* scale domain is empty.) ``` Plot.plot({ projection: { type: "conic-conformal", parallels: [34 + 20 / 60, 36 + 10 / 60], rotate: [79, 0], domain: state }, marks: [ Plot.geo(counties, {strokeOpacity: 0.2}), Plot.geo(state) ] }) ``` The marker options now render as intended on marks with varying aesthetics, such as the spiraling arrows of varying thickness and color below.
``` Plot.plot({ inset: 40, axis: null, marks: [ Plot.line(d3.range(400), { x: (i) => i * Math.sin(i / 100 + ((i % 5) * 2 * Math.PI) / 5), y: (i) => i * Math.cos(i / 100 + ((i % 5) * 2 * Math.PI) / 5), z: (i) => i % 5, stroke: (i) => -i, strokeWidth: (i) => i ** 1.1 / 100, markerEnd: "arrow" }) ] }) ``` This release includes a few more new features, bug fixes, and improvements: The new **className** mark option specifies an optional `class` attribute for rendered marks, allowing styling of marks via external stylesheets or easier selection via JavaScript (see the sketch at the end of this entry); thanks, @RLesser! Plot now reuses `clipPath` elements, when possible, when the **clip** mark option is set to *frame* or *projection*. The difference mark now supports a horizontal orientation via differenceX, and the shift transform now likewise supports shiftY. The Voronoi mark is now compatible with the pointer transform: only the pointed Voronoi cell is rendered; the Voronoi mark now also renders as intended with non-exclusive facets (as when using the *exclude* facet mode). The tip mark no longer displays channels containing literal color values by default. Changes the default categorical color scheme to *Observable10*. The group transform now preserves the input order of groups by default, making it easier to sort groups by using the **sort** option. The group and bin transforms now support the *z* reducer. Improves the accessibility of axes by hiding tick marks and grid lines from the accessibility tree. Upgrades D3 to 7.9.0. For earlier changes, continue to the 2023 CHANGELOG.
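To make the **className** option above concrete, here is a minimal sketch; the class name `emphasis` is our own illustrative choice rather than anything Plot defines:

```js
// className attaches the given CSS class to the mark's rendered element,
// so external stylesheets and scripts can target this mark specifically.
Plot.plot({
  marks: [
    Plot.dot(olympians, {x: "weight", y: "height", className: "emphasis"})
  ]
})
```

A stylesheet rule such as `.emphasis { stroke-opacity: 0.5 }` would then apply only to that mark's rendered elements.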
true
true
true
A concise API for exploratory data visualization implementing a layered grammar of graphics - observablehq/plot
2024-10-12 00:00:00
2020-10-29 00:00:00
https://repository-images.githubusercontent.com/308464842/f3d103d8-3ee2-44bc-a6d1-5dfa08376fec
object
github.com
GitHub
null
null
1,002,427
http://ozmm.org/posts/2009_open_source_top_ten.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
34,592,633
https://www.bloomberg.com/opinion/articles/2023-01-30/economists-have-failed-middle-class-americans-on-inflation
Bloomberg
null
null
true
true
true
null
2024-10-12 00:00:00
null
null
null
null
null
null
null
34,520,031
https://dlnews.com/articles/coinbase-chief-product-officer-surojit-chatterjee-gets-105-million-payday
News - DL News
null
404 Article Not Found. The article you are looking for could not be found. If you think this is an error, let us know.
true
true
true
null
2024-10-12 00:00:00
2023-03-08 00:00:00
https://www.dlnews.com/r…15e45&width=1200
article
dlnews.com
DL News
null
null
11,674,477
https://gist.github.com/larahogan/b681da601e3c94fdd3a6#gistcomment-1409228
Native app performance metrics
Larahogan
This is a draft list of what we're thinking about measuring in Etsy's native apps. Currently we're looking at how to measure these things with Espresso and Kif (or if each metric is even possible to measure in an automated way). We'd like to build internal dashboards and alerts around regressions in these metrics using automated tests. In the future, we'll want to measure most of these things with RUM too.

- **App launch time** - how long does it take between tapping the icon and being able to interact with the app?
- **Time to complete critical flows** - using automated testing, how long does it take a user to finish the checkout flow, etc.?
- **Battery usage**, including radio usage and GPS usage
- **Peak memory allocation**
- **Frame rate** - we need to figure out where we're dropping frames (and introducing scrolling jank). We should be able to dig into render, execute, and draw times.
- **Memory leaks** - using automated testing, can we find flows or actions that trigger a memory leak?
- **An app version of Speed Index** - visible completion of the above-the-fold screen over time.
- **Time it takes for remote images to appear on the screen**
- **Time between tapping a link and being able to do something on the next screen**
- **Average time looking at spinners**
- **API performance**
- **Webview Performance**

If you work out a way to get video off the device then this will calculate the SpeedIndex - https://github.com/WPO-Foundation/visualmetrics ADB can be used to get the video in Android but don't think the libimobile people have worked out how to do it for iOS8 yet
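For example, a rough sketch of that Android capture step (the 30-second limit and output path are illustrative; `screenrecord` ships with Android 4.4+):

```
# Record the device screen during the automated test run, then pull the
# video off the device for Speed Index analysis with visualmetrics.
adb shell screenrecord --time-limit 30 /sdcard/perf-run.mp4
adb pull /sdcard/perf-run.mp4 .
```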
true
true
true
Native app performance metrics. GitHub Gist: instantly share code, notes, and snippets.
2024-10-12 00:00:00
2015-03-09 00:00:00
https://github.githubass…54fd7dc0713e.png
article
github.com
Gist
null
null
39,198,472
https://medicalxpress.com/news/2024-01-mouse-brain.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
6,988,948
http://www.macrumors.com/2013/12/30/nsa-was-able-to-capture-live-data-from-compromised-iphones-in-2008-including-live-camera-gps-and-more/
NSA Was Able to Capture Live Data From Compromised iPhones in 2008, Including Live Camera, GPS, and More
Jordan Golson
The U.S. National Security Agency could retrieve a vast array of data from compromised iPhones according to an NSA document from 2008 leaked by German magazine *Der Spiegel* and security researcher Jacob Appelbaum. (via *Forbes*) According to the report, the NSA could install special software onto iPhones as part of a program called DROPOUTJEEP that provides significant access to user data and other relevant information. DROPOUTJEEP is a software implant for the Apple iPhone that utilizes modular mission applications to provide specific SIGINT functionality. This functionality includes the ability to remotely push/pull files from the device. SMS retrieval, contact list retrieval, voicemail, geolocation, hot mic, camera capture, cell tower location, etc. Command, control and data exfiltration can occur over SMS messaging or a GPRS data connection. All communications with the implant will be covert and encrypted. The NSA in 2008 claimed a 100 percent success rate in installing the software on phones it had physical access to, and it's possible that the spy agency has improved its software so it can be installed remotely or via some sort of social engineering, something that was specifically mentioned in the documents. It's also possible that Apple has closed the security holes the NSA was using, making it more difficult to compromise iOS devices in this manner. A separate report says that American spy agencies have intercepted shipping packages -- a method the NSA calls interdiction -- containing new electronic devices destined for specific targets, installed special spy software on those devices, and then sent them on their way. One report calls the shipping disruptions some of the "most productive operations" conducted by the NSA. Appelbaum said in a talk at the Chaos Communication Congress this weekend that he believes Apple assisted the NSA in its spying efforts though he cannot prove it, and he hopes Apple will clarify what assistance they do or do not give the NSA. In addition, the NSA has targeted and cracked a number of different smartphones including those running the Android and BlackBerry operating systems. The relevant portion of his talk begins at 44:30 in the below video. Earlier in December, Apple CEO Tim Cook and more than a dozen other tech executives met with President Obama to discuss NSA surveillance tactics, following an open letter that Apple and seven other technology companies sent to the President and Congress asking the Government to reform its surveillance tactics. Note: Due to the political nature of the discussion regarding this topic, the comment thread is located in our Politics, Religion, Social Issues forum. All MacRumors forum members and site visitors are welcome to read and follow the thread, but posting is limited to forum members with at least 100 posts.
true
true
true
The U.S. National Security Agency could retrieve a vast array of data from compromised iPhones according to an NSA document from 2008 leaked by...
2024-10-12 00:00:00
2013-12-30 00:00:00
https://images.macrumors…/dropoutjeep.jpg
article
macrumors.com
MacRumors.com
null
null
22,703,201
https://remote.tools/trump-chatbot
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
1,831,703
http://abovethelaw.com/2010/10/mark-cuban-wants-to-pay-government-attorneys-to-get-off-their-ass/
Mark Cuban Wants to Pay Government Attorneys to Get Off Their Butts
Elie Mystal
# Mark Cuban Wants to Pay Government Attorneys to Get Off Their Butts I’d love for Mark Cuban to own my basketball team. He’s a self-made billionaire who focuses on the fans and (for all the bluster) leaves the basketball decisions to basketball people. Compare that to current Knicks Owner James Dolan — a man living off of his daddy’s success, who thinks he’s smarter than he really is, who has run the once-proud Knicks franchise into the ground, and who may be in romantic love with Isiah Thomas. You’d take Cuban any day of the week over little Jimmy. You’d probably take Cuban as a client as well. Stephen Best, the Dewey & LeBoeuf attorney currently representing Cuban in his SEC insider trading case, seems to be happy with his client. And we haven’t even seen his legal fees. But if you are one of Cuban’s adversaries, it must be brutal. To paraphrase Rory Breaker, if the milk’s sour, Mark Cuban ain’t the kind of pussy to drink it. NBA referees know that. And SEC attorneys are about to learn the same lesson… Frustrated by the snail’s pace of the SEC investigation into insider trading allegations, Cuban offered to pay government attorneys to work faster. The WSJ Law Blog reports: Cuban, the billionaire owner of the Dallas Mavericks, has offered to pay lawyers to help the SEC move forward its case against him… An attorney for Cuban, Dewey & LeBoeuf’s Stephen Best, told U.S. District Judge Reggie Walton in Washington last week that he was seeking a creative solution to moving the case along. My friends, that is what we used to call “balls.” *You wanna investigate me? Well bring it on and speed it up, you federal mutherf***ers*. Cuban should have been in that Machete movie. The SEC appeared caught off guard by the suggestion, releasing a tepid response emphasizing egalitarian concerns: Melinda Hardy, an SEC attorney, said in court that Cuban’s offer was unprecedented. She said the agency is concerned people with “deep pockets would come in and go to the front of the line.” So everybody, rich or poor, gets the same crappy service from the SEC. Amazing that this sentiment hasn’t found its way into a “Come Work for the SEC” brochure. And speaking of work, can you even call what the SEC does “work,” at least in the Biglaw sense of the word? The Cuban file is reportedly big. Very big. So voluminous, according to Bloomberg, that an attorney assigned to review it would have to spend more than eight months, assuming a pace of four pages a minute and eight-hour days without breaks, the SEC lawyers said. Eight hours a day? *Eight hours a day?* Does that include the time SEC lawyers spend surfing the web for porn? I spend more than eight hours a day *on this blog*. Do you mean to tell me that the SEC lawyers don’t even have their own SeamlessWeb accounts? No wonder Cuban is trying to throw money at the problem; clearly “professional commitment” hasn’t been enough to motivate these people. What kind of weak-ass, lifestyle operation is the SEC running over there?
I can’t get over it: you’ve got the high-profile Mark Cuban insider trading case sitting on your desk, and you’re working eight hours a day? Boy did I choose the wrong career path. SEC attorneys should be paying Mark Cuban for the right to work eight hours and get home in time to have dinner with their kids, while Cuban’s pressing legal matter is left to twist in the wind. I guess it would be a bad precedent to allow Cuban to somehow roll in there and pay government workers to perform like they’d have to in the private sector. But his point is well-taken. SEC attorneys: you’re making the government look bad (and let’s not even talk about how you missed Madoff). Throw some extra staff into the effort, find some ambitious types who are willing to come in on a Saturday, and get this baby done. Christ monkeys. When Biglaw types make jokes about their government friends, this is why. Man alive, there are Biglaw paralegals who regularly work more than eight hours a day. Mark Cuban’s New and Entirely Novel Defense Approach [WSJ Law Blog] **Earlier**: Securities Law Not Sexy Enough, So SEC Attorneys Turn To Porn
true
true
true
Frustrated by the snail’s pace of the SEC investigation into insider trading allegations, Mark Cuban offered to pay government attorneys to work faster.
2024-10-12 00:00:00
2010-10-25 00:00:00
https://abovethelaw.com/…/Mark-Cuban.jpeg
article
abovethelaw.com
Above the Law
null
null
2,604,810
http://www.benhuh.com/2011/05/23/why-are-we-still-consuming-the-news-like-its-1899
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
17,089,971
http://grail.cs.washington.edu/projects/AudioToObama/siggraph17_obama.pdf
null
null
null
true
true
true
null
2024-10-12 00:00:00
null
null
null
null
null
null
null
21,804,383
https://has-a.name
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
38,166,687
https://www.datadoghq.com/blog/how-datadog-manages-incidents/
How we manage incidents at Datadog
Laura de Vesine
Incidents put systems and organizations to the test. They pose particular challenges at scale: in complex distributed environments overseen by many different teams, managing incidents requires extensive structure and planning. But incidents, by definition, break structures and foil plans. As a result, they demand carefully orchestrated yet highly flexible forms of response.

This post will provide a look into how we manage incidents at Datadog. We’ll cover our entire process, including:

- identifying incidents through monitoring, declaration, and triage
- coordinating incident response
- building resilience and maintaining transparency after an incident is resolved

Along the way, we’ll provide insights into the tools we’ve developed for handling incidents, such as Datadog Incident Management, Teams, Service Catalog, and Workflow Automation—each of which plays an integral part in our own processes.

## Identifying incidents

There are two core components to incident management at Datadog. One is our culture of resilience and blameless organizational accountability. These values are deeply rooted in our products and our sense of responsibility towards our customers, and we uphold them in part through regular incident training for all of our engineers, as well as through continual review of our incident management processes. We’ll cover building resilience and maintaining transparency later in this post. We’ll start where incidents themselves start, which brings us to the other core component of incident management at Datadog: monitoring our own systems.

### Monitoring our systems

Datadog manages operations according to a “you build it, you own it” model. That means that every component of our systems is monitored via Datadog by the team that builds and manages it. Teams define service level objectives (SLOs), collect a wide range of telemetry data on their services, and configure monitors to alert them to potentially urgent events in that data around the clock. Many teams also rely on Datadog Real User Monitoring (RUM), Synthetic Monitoring, and Error Tracking to ensure fulfillment of their SLOs.

Datadog Teams and Service Catalog, which help centralize information about our services and simplify their collective management, are essential for clarifying ownership and enabling collaboration among our teams. All services used in our production environment must be registered in the Service Catalog; their ownership must be registered using Teams. We use automated checks to guarantee this, as well as to verify that key data on each team and service (Slack channels, URLs, etc.) is valid and up to date. All of this helps ensure that the overall picture of our production services and their ownership stays complete and current.

Because we monitor Datadog using Datadog, we also use some last-resort out-of-band monitoring tools in order to ensure that we are alerted in the exceptionally rare case of an incident that renders our platform broadly unavailable.
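To give a concrete sense of what that registration involves, here is a minimal sketch of a Service Catalog definition file. The service name, team, channel, and URLs are hypothetical placeholders, and the exact field set may vary with the schema version:

```yaml
# Minimal, hypothetical Service Catalog entry (service.datadog.yaml).
# Every name, channel, and URL below is a placeholder, not a real service.
schema-version: v2.2
dd-service: checkout-api   # hypothetical service name
team: payments             # owning team, as registered via Datadog Teams
contacts:
  - type: slack
    contact: https://example.slack.com/archives/C0123ABCDEF
links:
  - name: Runbook
    type: runbook
    url: https://example.com/runbooks/checkout-api
tags:
  - tier:1
```

Automated checks like the ones described above can then verify that every production service has such a definition and that its contacts and links are valid.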
### Incident declaration and triage

Once we’ve detected any potentially urgent issue—namely, anything that might impact our customers—we declare an incident. Our low threshold for declaring incidents means that we run through our incident management process frequently. This helps us refine our processes and keep our engineers up to date. We use our own Incident Management tool to declare incidents, which can be done directly from many points within our UI, such as any monitor, dashboard, or Security Signal, as well as from our Slack integration. Our Incident Management tool plays a vital role throughout every incident at Datadog. At the outset, it enables us to quickly assign a severity level based on initial triage, set up communications channels, and designate and page first-line responders.

Our goal in triaging incidents is to quickly gauge and communicate the nature and scale of their impact. Precision is a secondary concern, especially during initial triage: above all, mitigating customer impact is always our top priority. The point is to rapidly convey urgency and put a proportional response in motion. When in doubt, we use the highest severity level that might potentially apply, and we regularly reassess impact and adjust incident severity levels (we train our incident commanders to do so at least once an hour during incidents with major customer impact).

We use a five-level severity scale for incidents, with SEV-1 designating critical incidents affecting many users and SEV-5 designating a minor issue.

| Severity | External Factors | Internal Factors |
|---|---|---|
| SEV-1 | Impacts a large number of customers or a broad feature<br>Warrants public and executive communications | Threatens production stability or halts productivity<br>Blocks most teams |
| SEV-2 | Major functionality unavailable | Impacts most teams’ ability to work |
| SEV-3 | Partial loss of functionality | Blocks or delays many or most teams |
| SEV-4 | Does not impact product usability but has the potential to | Blocks or delays one or two teams |
| SEV-5 | Cosmetic issue or minor bug<br>Planned operational tasks | Planned operational work<br>Does not block any users |

Initial triage helps determine our response team for each incident. When a high-severity incident is declared, our Incident Management tool automatically pages members of our on-call rotation for major incident response. This around-the-clock rotation comprises senior engineers in multiple time zones who specialize in incident command. A member of this rotation will step into the incident command role in case of a severe incident, in which the customer impact is extensive, or one in which many different teams are involved, making coordination especially complex.
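Incidents can also be declared programmatically. As a rough illustration only, a declaration through the public v2 incidents API could look something like the sketch below; the title, severity value, and payload shape are assumptions to be checked against the current API reference:

```python
# Hypothetical sketch: declaring an incident via the Datadog API (v2).
# The title and field values are illustrative; verify the payload shape
# against the current API documentation before relying on it.
import os

import requests

payload = {
    "data": {
        "type": "incidents",
        "attributes": {
            "title": "Elevated error rate on checkout-api",  # hypothetical
            "customer_impacted": True,
            "fields": {
                # Severity is assigned quickly and revisited often; when in
                # doubt, start with the highest level that might apply.
                "severity": {"type": "dropdown", "value": "SEV-2"},
            },
        },
    }
}

resp = requests.post(
    "https://api.datadoghq.com/api/v2/incidents",
    headers={
        "DD-API-KEY": os.environ["DD_API_KEY"],
        "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
        "Content-Type": "application/json",
    },
    json=payload,
    timeout=10,
)
resp.raise_for_status()
print("declared incident:", resp.json()["data"]["id"])
```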
## Coordinating incident response

As a rule, we steer clear of ready-made recovery procedures, which are effectively impossible to maintain for dynamic, enterprise-scale systems such as our own. Instead, our incident management process is designed to help those who know our systems best guide remediation. We look to facilitate collaboration and enable responders to focus on containing customer impact, above all, as well as to investigate root causes (with the primary aim of preventing recurrence).

To coordinate our incident response, we rely on incident commanders to drive the decision-making and manage communications both internally—among responders and with executives—and with our customers. We also rely on a range of tooling that helps us keep responders on the same page and paves the way for effective collaboration.

### Incident command

Incident commanders steer our incident response by setting clear priorities and determining an appropriate overall approach to the incident. This may entail gauging risks and weighing them against impact—for example, deciding whether or not to bypass normal rollout safety mechanisms in order to expedite remediation, given the perceived safety of the fix and the severity of the impact. Steering incident response also means facilitating the work of responders. This means:

- **Assembling a response team**. Incident commanders must determine and page the right people for the response.
- **Resolving technical debates**. The perfect may be the enemy of the good during incident response. As guiding decision-makers, incident commanders help avoid prolonged debates and indecision, which can cost precious time.
- **Keeping stress levels down and preventing exhaustion**. Incident commanders are in charge of keeping the incident response even-keeled and sustainable. They are responsible for coordinating shifts and breaks in order to ensure that responders stay alert and don’t get fatigued or overwhelmed.
- **Providing status reports**. We’ll delve into how we maintain communications with diverse stakeholders later in this post.

Incident commanders may also delegate various aspects of their work to auxiliary support roles, which can be integral to our response depending on the nature of the incident:

- **Workstream leads** help coordinate our incident response when it involves many responders operating on multiple fronts.
- **Communications leads** help manage internal communications and status updates.
- **Executive leads** are engineering executives who work alongside **customer liaisons**, managers from our customer support team, to manage communications with customers.

### Guiding remediation

Once we declare an incident using our Incident Management tool, it automatically generates an incident timeline as well as a dedicated Slack channel. Incident timelines enable us to construct a chronology of key data pulled from across Datadog and our integrations, as well as the steps taken in our response. Each timeline automatically incorporates everything from changes in incident status to responders’ deliberation in dedicated Slack channels.

When we page our responders, we use the notification templates provided by our Incident Management tool to automatically direct them to the relevant incident Slack channel. These channels help us maintain a focused, concerted response, keeping all responders on the same page. Whenever a responder joins an incident channel, our Bits AI copilot automatically provides a summary of the incident and our response so far, helping them quickly get up to speed. And since messages from these channels are automatically mirrored to our incident timelines, they also help us build a clear picture of the response after the fact, during postmortem analysis.

Incident commanders use our Incident Management tool to define and delegate specific tasks for remediation and follow-up. When a task is created, this tool automatically notifies assignees. We also use our Workflow Automation tool to send regular reminders of tasks such as updating incident status pages and, later on, for completing follow-up items such as incident postmortems.

Especially severe and complex incidents may necessitate multiple paths of response by teams of responders. Under these circumstances, we rely on the Workstreams feature of Datadog Incident Management, which enables us to clearly define and delegate various facets of our response. Incident Management Workstreams enable us to maintain an organized response while pursuing multiple avenues of mitigation—such as recovering separate services in cases where multiple services are compromised—and exploring different solutions in parallel, helping us contain impact faster.

### Communicating with stakeholders

Maintaining communication with customers and executives is essential during high-severity incidents. Incident commanders, executive engineering leads, and customer liaisons manage this communication in order to ensure that responders can focus on investigating incidents and containing their impact.

At Datadog, we are as transparent and proactively communicative as possible with our customers during and after incidents. As a rule, we notify customers of any incident affecting them without waiting for them to notify us. During major incidents, we provide them with regularly updated status pages.

### Declaring stabilization and resolution

Once an incident’s impact on customers is completely contained, we declare it stable by updating its status with our Incident Management tool or via our Slack integration. This marks the end of customer impact on the incident timeline and automatically posts notifications to the associated Slack channels. In cases of high-severity incidents, we then notify our customers that the impact has been contained and that they can expect more information soon.

Once the effects of an incident have been contained and its root causes are sufficiently well-understood to justify confidence that it will not immediately recur, we declare the incident resolved and our emergency response stands down.
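Status changes can likewise be made programmatically. The sketch below shows roughly what marking an incident stable might look like through the v2 API; the incident ID is a placeholder, and the state field name and accepted values are assumptions that should be checked against the current API reference:

```python
# Hypothetical sketch: marking an incident stable via the Datadog API (v2).
# The "state" field name and its "stable" value are assumptions here.
import os

import requests

incident_id = "00000000-0000-0000-0000-000000000000"  # placeholder ID

payload = {
    "data": {
        "id": incident_id,
        "type": "incidents",
        "attributes": {
            # "stable" marks the end of customer impact; "resolved" comes
            # later, once recurrence can be confidently ruled out.
            "fields": {"state": {"type": "dropdown", "value": "stable"}},
        },
    }
}

resp = requests.patch(
    f"https://api.datadoghq.com/api/v2/incidents/{incident_id}",
    headers={
        "DD-API-KEY": os.environ["DD_API_KEY"],
        "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
        "Content-Type": "application/json",
    },
    json=payload,
    timeout=10,
)
resp.raise_for_status()
```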
## Building resilience and maintaining transparency

We treat the resolution of every incident as an opportunity to take stock of and absorb its lessons through documentation and analysis. This is a moment to demonstrate our accountability to our customers, and often—in big ways or small—to update our engineering roadmap.

### Learning from incidents

Our engineers use incident timelines and Datadog Notebooks, which let us incorporate real-time or historical graphs into Markdown documents, to write a detailed postmortem for every high-severity incident. Postmortems are an important way to maintain transparency with our customers. They are equally important as internal tools, helping us understand how and why our systems have failed and correct our course as we move ahead.

We treat every incident as a systemic failure—never an individual one. Even if an incident is triggered by human error, we know that it has ultimately occurred because our systems could not prevent the issue in the first place. This philosophy is part of the bedrock of incident management at Datadog. In the short term, it helps our incident response: incidents are high-pressure situations, and eschewing personal blame helps to alleviate pressure on responders and encourage them to find creative solutions. In the long term, it helps us build resilience. Human error is inevitable, making blameless incident analysis the only true path to more reliable systems.

### Reinforcing resilience

In order to maintain the culture of resilience that drives incident management at Datadog, we conduct incident trainings on an ongoing basis. All Datadog engineers are required to complete comprehensive training before going on call as responders, and follow up with refresher training sessions every six months.

The purpose of our incident training is not to impose rigidly prescriptive recovery procedures. As we covered earlier in this post, incidents are inherently unpredictable, and such procedures tend to be difficult or impossible to maintain at scale. Instead, our incident trainings have several overarching goals:

- To empower those who know our systems intimately—component by component, service by service—to guide mitigation.
- To establish standards of availability in order to ensure a timely response to every incident. On-call responders are expected to make sure that they have cell service and can get to a keyboard quickly, respond to alerts within minutes, and hand off their work to subsequent responders as needed.
- To delineate steps and guidelines for declaring and triaging incidents, as well as for declaring stabilization and resolution.
- To establish our protocol for incident command and other coordinating roles.
- To emphasize our blameless incident culture.
- To clarify our priorities in incident remediation, which we’ll discuss in more detail in the next section of this post.

### Gauging success

We prioritize several metrics in order to clarify our priorities and gauge the success of our incident management. Mean time to repair (MTTR) is often cited as a gauge of successful incident response. But we find that prioritization of MTTR risks motivating the wrong behavior by encouraging quick fixes that may not address an incident’s underlying causes. What’s more, in most cases, the sample size of incidents is too small and the variability of incidents too great to make MTTR a meaningful value.

Here are the metrics we most value as indicators of successful incident management, and which we use to guide us in refining our process in the long term:

- **Low rates of recurrence**. These testify to the effectiveness of past remediation efforts.
- **Increasing levels of incident complexity**. These testify to the effectiveness of the cumulative safeguards developed in the course of managing previous incidents (consider the Swiss Cheese Model).
- **Decreased time to detection**. This testifies to the effectiveness of our monitoring and alerting.
- **Low rate of spurious alerts**. Together with decreased time to detection, this speaks to the quality of our monitoring and a lower potential for alert fatigue.

We also rely on qualitative surveys for incident responders, which help us gauge engineers’ confidence in handling incidents and guide our incident training.
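As a small worked example of how two of these indicators can be computed, here is an illustrative sketch over hypothetical incident records; the field names and the grouping of recurrence by root-cause label are assumptions, not a description of Datadog's internal tooling:

```python
# Illustrative sketch: computing time to detection and recurrence rate
# from hypothetical incident records. Field names are placeholders.
from datetime import datetime
from statistics import median

incidents = [
    # started_at: when impact began; detected_at: when monitoring alerted us.
    {"started_at": datetime(2023, 9, 1, 10, 0),
     "detected_at": datetime(2023, 9, 1, 10, 4),
     "root_cause": "config-change"},
    {"started_at": datetime(2023, 9, 8, 2, 30),
     "detected_at": datetime(2023, 9, 8, 2, 33),
     "root_cause": "dependency-outage"},
    {"started_at": datetime(2023, 9, 20, 16, 0),
     "detected_at": datetime(2023, 9, 20, 16, 9),
     "root_cause": "config-change"},
]

# Median time to detection, in minutes: a proxy for monitoring quality.
ttd = [(i["detected_at"] - i["started_at"]).total_seconds() / 60 for i in incidents]
print(f"median time to detection: {median(ttd):.1f} min")

# Recurrence rate: share of incidents whose root-cause label was seen before.
seen, recurrences = set(), 0
for incident in incidents:
    if incident["root_cause"] in seen:
        recurrences += 1
    seen.add(incident["root_cause"])
print(f"recurrence rate: {recurrences / len(incidents):.0%}")
```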
## Reinforcing reliability

Effective incident management hinges on in-depth, real-time awareness of systems, consistent communication, and creative adaptability. At Datadog, we seek to meet these criteria through effective monitoring, the cultivation of a healthy and proactive culture around incident management, and the development of purpose-built tools.

Complex systems like our own are always evolving. Our incident management process helps us respond to unexpected turns in this evolution, and ensure that we are steering our systems towards greater reliability.

To manage incidents with Datadog, you can get started today with Incident Management, Teams, Service Catalog, Notebooks, and any of the other tools discussed in this post. If you’re new to Datadog, you can sign up for a 14-day free trial.
true
true
true
A look into our incident management process, from initial identification and triage through postmortem analysis.
2024-10-12 00:00:00
2023-11-06 00:00:00
https://imgix.datadoghq.…rop&w=1200&h=630
article
datadoghq.com
Datadog
null
null
37,478,569
https://arxiv.org/abs/2308.07870
Brain-Inspired Computational Intelligence via Predictive Coding
Tommaso Salvatori; Ankur Mali; Christopher L. Buckley; Thomas Lukasiewicz; Rajesh P. N. Rao; Karl Friston; Alexander Ororbia
# Computer Science > Artificial Intelligence

[Submitted on 15 Aug 2023]

# Title: Brain-Inspired Computational Intelligence via Predictive Coding

Abstract: Artificial intelligence (AI) is rapidly becoming one of the key technologies of this century. The majority of results in AI thus far have been achieved using deep neural networks trained with the error backpropagation learning algorithm. However, the ubiquitous adoption of this approach has highlighted some important limitations such as substantial computational cost, difficulty in quantifying uncertainty, lack of robustness, unreliability, and biological implausibility. It is possible that addressing these limitations may require schemes that are inspired and guided by neuroscience theories. One such theory, called predictive coding (PC), has shown promising performance in machine intelligence tasks, exhibiting exciting properties that make it potentially valuable for the machine learning community: PC can model information processing in different brain areas, can be used in cognitive control and robotics, and has a solid mathematical grounding in variational inference, offering a powerful inversion scheme for a specific class of continuous-state generative models. With the hope of foregrounding research in this direction, we survey the literature that has contributed to this perspective, highlighting the many ways that PC might play a role in the future of machine learning and computational intelligence at large.

## Submission history

From: Tommaso Salvatori [view email]

**[v1]** Tue, 15 Aug 2023 16:37:16 UTC (3,053 KB)
true
true
true
Artificial intelligence (AI) is rapidly becoming one of the key technologies of this century. The majority of results in AI thus far have been achieved using deep neural networks trained with the error backpropagation learning algorithm. However, the ubiquitous adoption of this approach has highlighted some important limitations such as substantial computational cost, difficulty in quantifying uncertainty, lack of robustness, unreliability, and biological implausibility. It is possible that addressing these limitations may require schemes that are inspired and guided by neuroscience theories. One such theory, called predictive coding (PC), has shown promising performance in machine intelligence tasks, exhibiting exciting properties that make it potentially valuable for the machine learning community: PC can model information processing in different brain areas, can be used in cognitive control and robotics, and has a solid mathematical grounding in variational inference, offering a powerful inversion scheme for a specific class of continuous-state generative models. With the hope of foregrounding research in this direction, we survey the literature that has contributed to this perspective, highlighting the many ways that PC might play a role in the future of machine learning and computational intelligence at large.
2024-10-12 00:00:00
2023-08-15 00:00:00
/static/browse/0.3.4/images/arxiv-logo-fb.png
website
arxiv.org
arXiv.org
null
null
23,873,710
https://www.youtube.com/watch?v=cyz3CKN-ViQ
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null