https://en.wikipedia.org/wiki/Stephen%20Brobst
Stephen Brobst
Stephen Brobst (born September 21, 1962) is an American technology executive. Early life and education Stephen Brobst was born September 21, 1962 at the hospital on Stanford University campus where both of his parents did their undergraduate studies. In his childhood years he participated in chess tournaments sponsored by the United States Chess Federation (USCF) and in league competitions between high schools in Silicon Valley. He was president of his high school chess club. He graduated as valedictorian from Milpitas High School in 1980. For his undergraduate work he studied Electrical Engineering and Computer Science at University of California, Berkeley where he graduated in just three years and was bestowed the Bechtel Engineering Award as the highest honor for a graduating senior in the college of engineering for academic excellence and leadership. Brobst performed masters and PhD research at the Massachusetts Institute of Technology (MIT) at the Laboratory for Computer Science where his dissertation work focused on load balancing and resource allocation for massively parallel computing architectures. He also holds an MBA with joint course and thesis work at the Harvard Business School and the MIT Sloan School of Management. At MIT he was bestowed the William Stewart Award for contributions to student life during his nearly ten years as a graduate resident and tutor at the Baker House undergraduate dormitory. Career Early career After working at Lawrence Livermore National Laboratory, IBM Research Division in San Jose, and Hewlett-Packard Laboratories in Palo Alto, Brobst founded multiple start-up companies focused on data management products and services. He founded Strategic Technologies & Systems (STS) in 1983 while he was a graduate student at MIT. STS was acquired by NCR Corporation in 1999. From 1993 through 2000 he was co-founder and chief technology officer at Tanning Technology Corporation, a services firm focusing primarily on the implementation of Oracle databases for transaction processing. Tanning executed an initial public offering in 1999 and was later acquired by Platinum Technologies. He co-founded NexTek Software in 1994, a firm that created a software product for workload management for relational database management systems, as a spinoff from Tanning Technology Corporation. IBM acquired technology from NexTek in 1998 which provided the software foundation for early versions of the DB2 Query Patroller. Brobst was involved in the creation of eHealthDirect, a software start-up for automated claims adjudication using rule-based systems for the health care industry, between 1999 and 2002. eHealthDirect (later renamed to DeNovis) was acquired by HealthEdge in 2003. Teradata Simultaneous with the acquisition of Strategic Technologies & Systems in 1999, NCR Corporation created a separate division for the Teradata relational database management system. Brobst was appointed as Chief Technology Officer for the newly formed Teradata Division and continues to serve in this capacity today. Teradata was spun off as a separate company and went public on the New York Stock Exchange on October 1, 2007. PCAST During Barack Obama's first term Brobst was appointed to the United States President's Council of Advisors on Science and Technology (PCAST) in the working group on Networking and Information Technology Research and Development (NITRD). 
As part of this work he co-authored a report, “Designing a Digital Future: Federally Funded Research and Development in Networking and Information Technology”, delivered to Obama and the United States Congress in December, 2010. This report recommended that all federal agencies should have a Big Data strategy and initiated government investment in this area. Brobst served as an advisor to the National Academy of Sciences in the area of IT workforce development in 1998 and 1999. Teaching Brobst lectured at Boston University in the computer science department between 1984 and 1992 while working toward his PhD at MIT. He taught undergraduate courses in operating system design, data structures and algorithms. He taught graduate courses in advanced database design as well as parallel computer architecture. Brobst has taught at the Data Warehouse Institute (later renamed Transforming Data With Intelligence) since 1996. In 2001 Brobst worked with a team of academics in Pakistan to develop a course curriculum for database design and analytics. He participates in the Girls Who Code initiative, teaching computer science concepts to high school girls. Recognition In 2014 Brobst was ranked by Advisory Cloud as the fourth best CTO in the United States. He is an elected member of the Eta Kappa Nu, Tau Beta Pi, and Sigma Pi engineering honor societies. He is also a nominated member of the New York Academy of Sciences. Publications and patents Brobst co-authored the chapter on big data for the Handbook of Computer Science (published by the Association for Computing Machinery in 2014). He also co-authored a report, “Designing a Digital Future: Federally Funded Research and Development in Networking and Information Technology”, delivered to President Barack Obama and the United States Congress in December, 2010. In addition, he co-authored “Building a Data Warehouse for Decision Support” (published by Prentice Hall PTR in both English and Polish in 1997 and 1999, respectively). Brobst authored journal and conference papers in the fields of data management and parallel computing environments. He was a contributing editor for Intelligent Enterprise Magazine and published technical articles in The International Journal of High Speed Computing, Communications of the ACM, The Journal of Data Warehousing, Enterprise Systems Journal, DM Review, Database Programming and Design, DBMS Tools & Techniques, DB2 Magazine, Oracle Magazine, Teradata Magazine and many others. Brobst holds patents in the area of advanced data management primarily in areas of workload management for database systems, advanced algorithms for cost-based optimization and SQL query re-writes, and health care analytics. References 1962 births American chief technology officers Living people MIT Sloan School of Management alumni NCR Corporation people Teradata UC Berkeley College of Engineering alumni
https://en.wikipedia.org/wiki/Printer%20%28computing%29
Printer (computing)
In computing, a printer is a peripheral machine which makes a persistent representation of graphics or text, usually on paper. While most output is human-readable, bar code printers are an example of an expanded use for printers. Different types of printers include 3D printers, inkjet printers, laser printers, and thermal printers. History The first computer printer designed was a mechanically driven apparatus by Charles Babbage for his difference engine in the 19th century; however, his mechanical printer design was not built until 2000. The first patented printing mechanism for applying a marking medium to a recording medium, or more particularly an electrostatic inking apparatus and a method for electrostatically depositing ink on controlled areas of a receiving medium, was in 1962 by C. R. Winston, Teletype Corporation, using continuous inkjet printing. The ink was a red stamp-pad ink manufactured by Phillips Process Company of Rochester, NY under the name Clear Print. This patent (US3060429) led to the Teletype Inktronic Printer product delivered to customers in late 1966. The first compact, lightweight digital printer was the EP-101, invented by Japanese company Epson and released in 1968, according to Epson. The first commercial printers generally used mechanisms from electric typewriters and Teletype machines. The demand for higher speed led to the development of new systems specifically for computer use. In the 1980s there were daisy wheel systems similar to typewriters, line printers that produced similar output but at much higher speed, and dot-matrix systems that could mix text and graphics but produced relatively low-quality output. The plotter was used for those requiring high-quality line art like blueprints. The introduction of the low-cost laser printer in 1984, with the first HP LaserJet, and the addition of PostScript in next year's Apple LaserWriter set off a revolution in printing known as desktop publishing. Laser printers using PostScript mixed text and graphics, like dot-matrix printers, but at quality levels formerly available only from commercial typesetting systems. By 1990, most simple printing tasks like fliers and brochures were now created on personal computers and then laser printed; expensive offset printing systems were being dumped as scrap. The HP Deskjet of 1988 offered the same advantages as a laser printer in terms of flexibility, but produced somewhat lower-quality output (depending on the paper) from much less-expensive mechanisms. Inkjet systems rapidly displaced dot-matrix and daisy-wheel printers from the market. By the 2000s, high-quality printers of this sort had fallen under the $100 price point and became commonplace. The rapid improvement of internet email through the 1990s and into the 2000s has largely displaced the need for printing as a means of moving documents, and a wide variety of reliable storage systems means that a "physical backup" is of little benefit today. Starting around 2010, 3D printing became an area of intense interest, allowing the creation of physical objects with the same sort of effort as an early laser printer required to produce a brochure. These devices are in their earliest stages of development and have not yet become commonplace. Types Personal printers are primarily designed to support individual users, and may be connected to only a single computer. These printers are designed for low-volume, short-turnaround print jobs, requiring minimal setup time to produce a hard copy of a given document. 
However, they are generally slow devices ranging from 6 to around 25 pages per minute (ppm), and the cost per page is relatively high. However, this is offset by the on-demand convenience. Some printers can print documents stored on memory cards or from digital cameras and scanners. Networked or shared printers are "designed for high-volume, high-speed printing". They are usually shared by many users on a network and can print at speeds of 45 to around 100 ppm. The Xerox 9700 could achieve 120 ppm. A virtual printer is a piece of computer software whose user interface and API resembles that of a printer driver, but which is not connected with a physical computer printer. A virtual printer can be used to create a file which is an image of the data which would be printed, for archival purposes or as input to another program, for example to create a PDF or to transmit to another system or user. A barcode printer is a computer peripheral for printing barcode labels or tags that can be attached to, or printed directly on, physical objects. Barcode printers are commonly used to label cartons before shipment, or to label retail items with UPCs or EANs. A 3D printer is a device for making a three-dimensional object from a 3D model or other electronic data source through additive processes in which successive layers of material (including plastics, metals, food, cement, wood, and other materials) are laid down under computer control. It is called a printer by analogy with an inkjet printer which produces a two-dimensional document by a similar process of depositing a layer of ink on paper. Technology The choice of print technology has a great effect on the cost of the printer and cost of operation, speed, quality and permanence of documents, and noise. Some printer technologies do not work with certain types of physical media, such as carbon paper or transparencies. A second aspect of printer technology that is often forgotten is resistance to alteration: liquid ink, such as from an inkjet head or fabric ribbon, becomes absorbed by the paper fibers, so documents printed with liquid ink are more difficult to alter than documents printed with toner or solid inks, which do not penetrate below the paper surface. Cheques can be printed with liquid ink or on special cheque paper with toner anchorage so that alterations may be detected. The machine-readable lower portion of a cheque must be printed using MICR toner or ink. Banks and other clearing houses employ automation equipment that relies on the magnetic flux from these specially printed characters to function properly. Modern print technology The following printing technologies are routinely found in modern printers: Toner-based printers A laser printer rapidly produces high quality text and graphics. As with digital photocopiers and multifunction printers (MFPs), laser printers employ a xerographic printing process but differ from analog photocopiers in that the image is produced by the direct scanning of a laser beam across the printer's photoreceptor. Another toner-based printer is the LED printer which uses an array of LEDs instead of a laser to cause toner adhesion to the print drum. Liquid inkjet printers Inkjet printers operate by propelling variably sized droplets of liquid ink onto almost any sized page. They are the most common type of computer printer used by consumers. 
Solid ink printers Solid ink printers, also known as phase-change ink or hot-melt ink printers, are a type of thermal transfer printer, graphics sheet printer or 3D printer. They use solid sticks, crayons, pearls or granular ink materials. Common inks are CMYK-colored ink, similar in consistency to candle wax, which are melted and fed into a piezo crystal operated print-head. A thermal transfer printhead jets the liquid ink onto a rotating, oil-coated drum. The paper then passes over the print drum, at which time the image is immediately transferred, or transfixed, to the page. Solid ink printers are most commonly used as color office printers and are excellent at printing on transparencies and other non-porous media. Solid ink, also called phase-change or hot-melt ink, was first used by Data Products and Howtek, Inc., in 1984. Solid ink printers can produce excellent results with text and images. Some solid ink printers have evolved to print 3D models; for example, Visual Impact Corporation of Windham, NH was started by retired Howtek employee Richard Helinski, whose 3D patents US4721635 and then US5136515 were licensed to Sanders Prototype, Inc., later named Solidscape, Inc. Acquisition and operating costs are similar to laser printers. Drawbacks of the technology include high energy consumption and long warm-up times from a cold state. Also, some users complain that the resulting prints are difficult to write on, as the wax tends to repel inks from pens, and are difficult to feed through automatic document feeders, but these traits have been significantly reduced in later models. This type of thermal transfer printer is available from only one manufacturer, Xerox, as part of its Xerox Phaser office printer line. Previously, solid ink printers were manufactured by Tektronix, but Tektronix sold the printing business to Xerox in 2001. Dye-sublimation printers A dye-sublimation printer (or dye-sub printer) is a printer that employs a printing process using heat to transfer dye to a medium such as a plastic card, paper, or canvas. The process is usually to lay one colour at a time using a ribbon that has colour panels. Dye-sub printers are intended primarily for high-quality colour applications, including colour photography, and are less well suited for text. While once the province of high-end print shops, dye-sublimation printers are now increasingly used as dedicated consumer photo printers. Thermal printers Thermal printers work by selectively heating regions of special heat-sensitive paper. Monochrome thermal printers are used in cash registers, ATMs, gasoline dispensers and some older inexpensive fax machines. Colours can be achieved with special papers and different temperatures and heating rates for different colours; these coloured sheets are not required in black-and-white output. One example is Zink (a portmanteau of "zero ink"). Obsolete and special-purpose printing technologies The following technologies are either obsolete or limited to special applications, though most were, at one time, in widespread use. Impact printers Impact printers rely on a forcible impact to transfer ink to the media. The impact printer uses a print head that either hits the surface of the ink ribbon, pressing the ink ribbon against the paper (similar to the action of a typewriter), or, less commonly, hits the back of the paper, pressing the paper against the ink ribbon (the IBM 1403 for example). 
All but the dot matrix printer rely on the use of fully formed characters, letterforms that represent each of the characters that the printer was capable of printing. In addition, most of these printers were limited to monochrome, or sometimes two-color, printing in a single typeface at one time, although bolding and underlining of text could be done by "overstriking", that is, printing two or more impressions either in the same character position or slightly offset. Impact printers varieties include typewriter-derived printers, teletypewriter-derived printers, daisywheel printers, dot matrix printers, and line printers. Dot-matrix printers remain in common use in businesses where multi-part forms are printed. An overview of impact printing contains a detailed description of many of the technologies used. Typewriter-derived printers Several different computer printers were simply computer-controllable versions of existing electric typewriters. The Friden Flexowriter and IBM Selectric-based printers were the most-common examples. The Flexowriter printed with a conventional typebar mechanism while the Selectric used IBM's well-known "golf ball" printing mechanism. In either case, the letter form then struck a ribbon which was pressed against the paper, printing one character at a time. The maximum speed of the Selectric printer (the faster of the two) was 15.5 characters per second. Teletypewriter-derived printers The common teleprinter could easily be interfaced with the computer and became very popular except for those computers manufactured by IBM. Some models used a "typebox" that was positioned, in the X- and Y-axes, by a mechanism, and the selected letter form was struck by a hammer. Others used a type cylinder in a similar way as the Selectric typewriters used their type ball. In either case, the letter form then struck a ribbon to print the letterform. Most teleprinters operated at ten characters per second although a few achieved 15 CPS. Daisy wheel printers Daisy wheel printers operate in much the same fashion as a typewriter. A hammer strikes a wheel with petals, the "daisy wheel", each petal containing a letter form at its tip. The letter form strikes a ribbon of ink, depositing the ink on the page and thus printing a character. By rotating the daisy wheel, different characters are selected for printing. These printers were also referred to as letter-quality printers because they could produce text which was as clear and crisp as a typewriter. The fastest letter-quality printers printed at 30 characters per second. Dot-matrix printers The term dot matrix printer is used for impact printers that use a matrix of small pins to transfer ink to the page. The advantage of dot matrix over other impact printers is that they can produce graphical images in addition to text; however the text is generally of poorer quality than impact printers that use letterforms (type). Dot-matrix printers can be broadly divided into two major classes: Ballistic wire printers Stored energy printers Dot matrix printers can either be character-based or line-based (that is, a single horizontal series of pixels across the page), referring to the configuration of the print head. In the 1970s and '80s, dot matrix printers were one of the more common types of printers used for general use, such as for home and small office use. Such printers normally had either 9 or 24 pins on the print head (early 7 pin printers also existed, which did not print descenders). 
There was a period during the early home computer era when a range of printers were manufactured under many brands such as the Commodore VIC-1525 using the Seikosha Uni-Hammer system. This used a single solenoid with an oblique striker that would be actuated 7 times for each column of 7 vertical pixels while the head was moving at a constant speed. The angle of the striker would align the dots vertically even though the head had moved one dot spacing in the time. The vertical dot position was controlled by a synchronized longitudinally ribbed platen behind the paper that rotated rapidly with a rib moving vertically seven dot spacings in the time it took to print one pixel column. 24-pin print heads were able to print at a higher quality and started to offer additional type styles and were marketed as Near Letter Quality by some vendors. Once the price of inkjet printers dropped to the point where they were competitive with dot matrix printers, dot matrix printers began to fall out of favour for general use. Some dot matrix printers, such as the NEC P6300, can be upgraded to print in colour. This is achieved through the use of a four-colour ribbon mounted on a mechanism (provided in an upgrade kit that replaces the standard black ribbon mechanism after installation) that raises and lowers the ribbons as needed. Colour graphics are generally printed in four passes at standard resolution, thus slowing down printing considerably. As a result, colour graphics can take up to four times longer to print than standard monochrome graphics, or up to 8-16 times as long at high resolution mode. Dot matrix printers are still commonly used in low-cost, low-quality applications such as cash registers, or in demanding, very high volume applications like invoice printing. Impact printing, unlike laser printing, allows the pressure of the print head to be applied to a stack of two or more forms to print multi-part documents such as sales invoices and credit card receipts using continuous stationery with carbonless copy paper. It also has security advantages as ink impressed into a paper matrix by force is harder to erase invisibly. Dot-matrix printers were being superseded even as receipt printers after the end of the twentieth century. Line printers Line printers print an entire line of text at a time. Four principal designs exist. Drum printers, where a horizontally mounted rotating drum carries the entire character set of the printer repeated in each printable character position. The IBM 1132 printer is an example of a drum printer. Drum printers are also found in adding machines and other numeric printers (POS), the dimensions are compact as only a dozen characters need to be supported. Chain or train printers, where the character set is arranged multiple times around a linked chain or a set of character slugs in a track traveling horizontally past the print line. The IBM 1403 is perhaps the most popular and comes in both chain and train varieties. The band printer is a later variant where the characters are embossed on a flexible steel band. The LP27 from Digital Equipment Corporation is a band printer. Bar printers, where the character set is attached to a solid bar that moves horizontally along the print line, such as the IBM 1443. A fourth design, used mainly on very early printers such as the IBM 402, features independent type bars, one for each printable position. Each bar contains the character set to be printed. 
The bars move vertically to position the character to be printed in front of the print hammer. In each case, to print a line, precisely timed hammers strike against the back of the paper at the exact moment that the correct character to be printed is passing in front of the paper. The paper presses forward against a ribbon which then presses against the character form and the impression of the character form is printed onto the paper. Each system could have slight timing issues, which could cause minor misalignment of the resulting printed characters. For drum or typebar printers, this appeared as vertical misalignment, with characters being printed slightly above or below the rest of the line. In chain or bar printers, the misalignment was horizontal, with printed characters being crowded closer together or farther apart. This was much less noticeable to human vision than vertical misalignment, where characters seemed to bounce up and down in the line, so they were considered as higher quality print. Comb printers, also called line matrix printers, represent the fifth major design. These printers are a hybrid of dot matrix printing and line printing. In these printers, a comb of hammers prints a portion of a row of pixels at one time, such as every eighth pixel. By shifting the comb back and forth slightly, the entire pixel row can be printed, continuing the example, in just eight cycles. The paper then advances, and the next pixel row is printed. Because far less motion is involved than in a conventional dot matrix printer, these printers are very fast compared to dot matrix printers and are competitive in speed with formed-character line printers while also being able to print dot matrix graphics. The Printronix P7000 series of line matrix printers are still manufactured as of 2013. Line printers are the fastest of all impact printers and are used for bulk printing in large computer centres. A line printer can print at 1100 lines per minute or faster, frequently printing pages more rapidly than many current laser printers. On the other hand, the mechanical components of line printers operate with tight tolerances and require regular preventive maintenance (PM) to produce a top quality print. They are virtually never used with personal computers and have now been replaced by high-speed laser printers. The legacy of line printers lives on in many operating systems, which use the abbreviations "lp", "lpr", or "LPT" to refer to printers. Liquid ink electrostatic printers Liquid ink electrostatic printers use a chemical coated paper, which is charged by the print head according to the image of the document. The paper is passed near a pool of liquid ink with the opposite charge. The charged areas of the paper attract the ink and thus form the image. This process was developed from the process of electrostatic copying. Color reproduction is very accurate, and because there is no heating the scale distortion is less than ±0.1%. (All laser printers have an accuracy of ±1%.) Worldwide, most survey offices used this printer before color inkjet plotters become popular. Liquid ink electrostatic printers were mostly available in width and also 6 color printing. These were also used to print large billboards. It was first introduced by Versatec, which was later bought by Xerox. 3M also used to make these printers. Plotters Pen-based plotters were an alternate printing technology once common in engineering and architectural firms. 
Pen-based plotters rely on contact with the paper (but not impact, per se) and special purpose pens that are mechanically run over the paper to create text and images. Since the pens output continuous lines, they were able to produce technical drawings of higher resolution than was achievable with dot-matrix technology. Some plotters used roll-fed paper, and therefore had a minimal restriction on the size of the output in one dimension. These plotters were capable of producing quite sizable drawings. Other printers A number of other sorts of printers are important for historical reasons, or for special purpose uses. Digital minilab (photographic paper) Electrolytic printers Spark printer Barcode printer multiple technologies, including: thermal printing, inkjet printing, and laser printing barcodes Billboard / sign paint spray printers Laser etching (product packaging) industrial printers Microsphere (special paper) Attributes Connectivity Printers can be connected to computers in many ways: directly by a dedicated data cable such as the USB, through a short-range radio like Bluetooth, a local area network using cables (such as the Ethernet) or radio (such as WiFi), or on a standalone basis without a computer, using a memory card or other portable data storage device. More than half of all printers sold at U.S. retail in 2010 were wireless-capable, but nearly three-quarters of consumers who have access to those printers weren't taking advantage of the increased access to print from multiple devices according to the new Wireless Printing Study. Printer control languages Most printers other than line printers accept control characters or unique character sequences to control various printer functions. These may range from shifting from lower to upper case or from black to red ribbon on typewriter printers to switching fonts and changing character sizes and colors on raster printers. Early printer controls were not standardized, with each manufacturer's equipment having its own set. The IBM Personal Printer Data Stream (PPDS) became a commonly used command set for dot-matrix printers. Today, most printers accept one or more page description languages (PDLs). Laser printers with greater processing power frequently offer support for variants of Hewlett-Packard's Printer Command Language (PCL), PostScript or XML Paper Specification. Most inkjet devices support manufacturer proprietary PDLs such as ESC/P. The diversity in mobile platforms have led to various standardization efforts around device PDLs such as the Printer Working Group (PWG's) PWG Raster. Printing speed The speed of early printers was measured in units of characters per minute (cpm) for character printers, or lines per minute (lpm) for line printers. Modern printers are measured in pages per minute (ppm). These measures are used primarily as a marketing tool, and are not as well standardised as toner yields. Usually pages per minute refers to sparse monochrome office documents, rather than dense pictures which usually print much more slowly, especially colour images. Speeds in ppm usually apply to A4 paper in most countries in the world, and letter paper size, about 6% shorter, in North America. Printing mode The data received by a printer may be: A string of characters A bitmapped image A vector image A computer program written in a page description language, such as PCL or PostScript Some printers can process all four types of data, others not. 
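To make the idea of printer control characters and page description languages more concrete, the sketch below assembles a tiny raw print job from a few classic ESC/P escape sequences (initialize, bold on/off, form feed). It is an illustrative sketch only, not taken from any particular driver or vendor documentation; the device path it writes to is an assumption (a typical Linux USB printer node), and on Windows the same bytes would instead be handed to a raw printing API.

```python
# Illustrative only: a few classic ESC/P control sequences sent as raw bytes.
# The device path below is an assumption (a common Linux USB printer node);
# a Windows system would pass the same bytes through a raw print interface.
ESC = b"\x1b"

job = b"".join([
    ESC + b"@",               # ESC @ : reset/initialize the printer
    b"Plain text line\r\n",
    ESC + b"E",               # ESC E : select bold
    b"Bold text line\r\n",
    ESC + b"F",               # ESC F : cancel bold
    b"\x0c",                  # form feed: eject the page
])

with open("/dev/usb/lp0", "wb") as printer:   # hypothetical device path
    printer.write(job)
```

Page description languages such as PCL and PostScript operate at a higher level than this, describing the layout of whole pages rather than streaming character-by-character control codes.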
Character printers, such as daisy wheel printers, can handle only plain text data or rather simple point plots. Pen plotters typically process vector images. Inkjet based plotters can adequately reproduce all four. Modern printing technology, such as laser printers and inkjet printers, can adequately reproduce all four. This is especially true of printers equipped with support for PCL or PostScript, which includes the vast majority of printers produced today. Today it is possible to print everything (even plain text) by sending ready bitmapped images to the printer. This allows better control over formatting, especially among machines from different vendors. Many printer drivers do not use the text mode at all, even if the printer is capable of it. Monochrome, colour and photo printers A monochrome printer can only produce monochrome images, with only shades of a single colour. Most printers can produce only two colors, black (ink) and white (no ink). With half-tonning techniques, however, such a printer can produce acceptable grey-scale images too A colour printer can produce images of multiple colours. A photo printer is a colour printer that can produce images that mimic the colour range (gamut) and resolution of prints made from photographic film. Page yield The page yield is number of pages that can be printed from a toner cartridge or ink cartridge—before the cartridge needs to be refilled or replaced. The actual number of pages yielded by a specific cartridge depends on a number of factors. For a fair comparison, many laser printer manufacturers use the ISO/IEC 19752 process to measure the toner cartridge yield. Economics In order to fairly compare operating expenses of printers with a relatively small ink cartridge to printers with a larger, more expensive toner cartridge that typically holds more toner and so prints more pages before the cartridge needs to be replaced, many people prefer to estimate operating expenses in terms of cost per page (CPP). Retailers often apply the "razor and blades" model: a company may sell a printer at cost and make profits on the ink cartridge, paper, or some other replacement part. This has caused legal disputes regarding the right of companies other than the printer manufacturer to sell compatible ink cartridges. To protect their business model, several manufacturers invest heavily in developing new cartridge technology and patenting it. Other manufacturers, in reaction to the challenges from using this business model, choose to make more money on printers and less on ink, promoting the latter through their advertising campaigns. Finally, this generates two clearly different proposals: "cheap printer – expensive ink" or "expensive printer – cheap ink". Ultimately, the consumer decision depends on their reference interest rate or their time preference. From an economics viewpoint, there is a clear trade-off between cost per copy and cost of the printer. Printer steganography Printer steganography is a type of steganography – "hiding data within data" – produced by color printers, including Brother, Canon, Dell, Epson, HP, IBM, Konica Minolta, Kyocera, Lanier, Lexmark, Ricoh, Toshiba and Xerox brand color laser printers, where tiny yellow dots are added to each page. The dots are barely visible and contain encoded printer serial numbers, as well as date and time stamps. 
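As a worked illustration of the cost-per-page comparison described above, the short sketch below simply divides a cartridge's price by its rated page yield. The prices and yields are hypothetical placeholders, not figures from any manufacturer or from an ISO/IEC 19752 test.

```python
# Hypothetical figures, purely to illustrate the cost-per-page (CPP) comparison;
# real cartridge prices and rated ISO/IEC yields vary widely by model.
def cost_per_page(cartridge_price, rated_yield_pages):
    """Cost per page = consumable price divided by its rated page yield."""
    return cartridge_price / rated_yield_pages

inkjet_cpp = cost_per_page(25.00, 300)    # small, inexpensive ink cartridge
laser_cpp  = cost_per_page(80.00, 2500)   # larger, pricier toner cartridge

print(f"Inkjet: ${inkjet_cpp:.3f}/page, Laser: ${laser_cpp:.3f}/page")
```

Under these assumed numbers the ink cartridge works out to roughly 8 cents per page against about 3 cents for the toner cartridge, which is exactly the trade-off the "cheap printer – expensive ink" model relies on.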
See also: History of printing, 3D printing, Cardboard modeling, List of printer companies, Print (command), Printer driver, Print screen, Print server, Label printer, Printer friendly (also known as a printable version), Printer point, Printer (publishing), Printmaking
https://en.wikipedia.org/wiki/Central%20Queensland%20University
Central Queensland University
Central Queensland University (alternatively known as CQUniversity) is an Australian public university based in central Queensland. CQUniversity is the only Australian university with a campus presence in every mainland state. Its main campus is at Norman Gardens in Rockhampton, however, it also has campuses in Adelaide (Wayville), Brisbane, Bundaberg (Branyan), Cairns, Emerald, Gladstone (South Gladstone and Callemondah), Mackay (central business district and Ooralea), Melbourne, Noosa, Perth, Rockhampton City, Sydney and Townsville. CQUniversity also has delivery sites to support distance education in Biloela, Broome, Busselton, Charters Towers, Karratha and Yeppoon, and partners with university centres in Cooma, Geraldton and Port Pirie. History CQUniversity began as the Queensland Institute of Technology (Capricornia) in 1967, and after two years under the name of the University College of Central Queensland, in 1992 became an official university named the University of Central Queensland. In 1994, it adopted the name Central Queensland University. In 2008, it became CQUniversity in recognition of the institutions' expansion beyond the Central Queensland region. Beginnings CQUniversity's antecedent institution, the Queensland Institute of Technology (Capricornia), was established in Rockhampton in 1967 as a regional branch of the Queensland Institute of Technology (Brisbane). However, the first steps to establish a university in Rockhampton were taken as early as the 1940s. In 1941, the Queensland Labor Premier, William Forgan Smith, introduced section 17 of the National Education Co-ordination and University of Queensland Amendment Act, which provided for the creation of university colleges outside Brisbane. In 1944 and 1945, a series of Rockhampton delegations lobbied the Queensland government for a university college, but after the University of Queensland established a network of provincial study centres in the late 1940s the issue became dormant. Rockhampton's university campaign resumed in the 1950s as Central Queensland became an emerging heavy industry base, with developing coal mines and Gladstone emerging as a light metals centre. In the Queensland parliament in November 1956, the local member for Rockhampton (H R Gardner) stated "more adequate facilities for technical education" were required for the region and, appealing to the philosophy of a "fair go", he urged that Rockhampton people be given "the same opportunities as those in Brisbane". In 1958, P J Goldston, an engineer (later, Commissioner for Railways,) mooted the possibility of a Central Queensland university with Rockhampton engineers and after further community discussion, the Rockhampton Mayor, Alderman R B J Pilbeam, called the first public meeting on 3 March 1959 at which the Central Queensland University Development Association (UDA) was constituted. The UDA presented university proposals to government and, in 1961, the Queensland government reserved 161 hectares (400 acres) of government land at Parkhurst (North Rockhampton) on the Bruce Highway near the Yeppoon turnoff as a tertiary education site. Establishment finally was resolved in March 1965, when the Commonwealth government's Martin Report (on expansion of tertiary education) was tabled in parliament by Prime Minister Menzies―who announced the foundation of a new style of tertiary institution at both Rockhampton and Toowoomba. 
The new institutes―Rockhampton's was named The Queensland Institute of Technology, Capricornia (QITC)―were affiliated with the main Queensland Institute of Technology campus in Brisbane and lacked the autonomy of universities, being controlled by the Queensland Education Department. When the QITC first opened in February 1967, there was no extensive campus to greet the handful of staff and initial intake of 71 full-time and part-time students. While building progressed at Parkhurst, the first classes held on the top floor of the Technical College in Bolsover Street were a makeshift affair with no laboratories, library facilities or stock. By 1969, most staff and students had transferred to the Parkhurst campus, still a bushland site in progress―in the summer months, the campus was often ringed by spectacular bush fires or deluged with torrential rain: cars slid in the mud or were bogged and the QITC's foundation Principal, Dr Allan Skertchly, ferried people in his 4WD across floodwaters. Some students slept temporarily on mattresses in the canteen while waiting for the first residential college to open. 1970s onwards After the passage of the amended Education Act in 1971, QITC became an autonomous, multi-functional college under the control of its own council and took the name of Capricornia Institute of Advanced Education (CIAE). Along with creating a traditional university campus experience in a natural setting, the CIAE also developed engineering and science projects. The CIAE became the first college in Australia to introduce a Bachelor of Science externally in 1974. By 1979, external enrolments at the CIAE had increased to 825 and by 1985 distance education had become a major campus operation, exceeding internal enrolments and offering 12 courses involving some 100 subjects and processing 23,980 study packages annually. Between 1978 and 1989, the CIAE established branch campuses in Central Queensland at Gladstone, Mackay, Bundaberg and Emerald. Expansion in the 1990s The CIAE became the University College of Central Queensland in 1990 and gained full university status in 1992. At that time it was known as the University of Central Queensland. The name was changed in 1994 to Central Queensland University. After the Australian government approved the enrolment of full-fee paying students in Australian institutions in 1986, the CIAE (and subsequently the university) began trans national education ventures with many countries, including Singapore, Hong Kong, Dubai and Fiji. Through a public-private partnership with CMS (which CQU fully acquired in 2011) the university opened its first international campus in Sydney in 1994, followed by international campuses in Melbourne in 1997, Brisbane in 1998 and the Gold Coast in 2001. 2000 onwards In 2001, the university appointed Queensland's first female Vice-Chancellor, Professor Glenice Hancock, who retired in 2004. From 2009 onward, CQUniversity launched a new strategic plan to grow student numbers and expand course offerings, especially within the health disciplines. New course offerings included physiotherapy, podiatry, occupational therapy, speech pathology, oral health sonography, and medical imaging. 
CQUniversity also delivers courses in discipline areas including apprenticeships, trades and training, business, accounting and law, creative, performing and visual arts, education and humanities, engineering and built environment, health, information technology and digital media, psychology, social work and community services, science and environment, and work and study preparation. In 2014, CQUniversity merged with CQ TAFE to establish Queensland's first dual sector university. CQUniversity is now the public provider of TAFE in the central Queensland region and also delivers vocational courses at other locations across Australia and online. Following the merger CQUniversity now delivers more than 300 courses from short courses through to PhDs. CQUniversity is the only Australian university to be accredited as a Changemaker Campus by global social innovation group Ashoka U. Medicine In March 2018 the university announced it was in talks to establish a Medical school at its Rockhampton and Bundaberg campuses. Discussions are with the Hospital and Health Services of Central Queensland and Wide Bay, the main physical organisation of Queensland Health in the two regions. Organisation and governance Governance CQUniversity is governed by the CQUniversity Council, comprising the Chancellor, Vice-Chancellor and various elected and appointed representatives. Operationally, CQUniversity is managed by the Vice-Chancellor and five Deputy Vice-Chancellors who oversee portfolios including: International and Services, Research, Tertiary Education, Student Experience and Governance, Engagement and Campuses, Strategic Development and Finance and Planning. The Vice-Chancellor is appointed by the University Council and reports to the Council through the Chancellor. Associate Vice-Chancellors manage the regions in which the university operates including Rockhampton, Mackay and Whitsunday, Wide Bay Burnett, Gladstone, Central Highlands, South East Queensland, Townsville and North West Queensland, Far North Queensland, Victoria, New South Wales, South Australia and Western Australia. Pro Vice-Chancellors manage the areas of learning and teaching, Indigenous engagement and vocational education. The Schools within the university are managed by Deans, within the Tertiary Education Division. Vice-Chancellor CQUniversity is led by Professor Nick Klomp who was appointed as Vice-Chancellor and President in 2018. He officially commenced his appointment on Monday, 4 February 2019. Professor Klomp is the university's sixth Vice-Chancellor, replacing Professor Scott Bowman who served in the role from 2009 - 2019. University Council The CQUniversity Council is the governing body of CQUniversity and was established under the Central Queensland University Act (1998). Mr John Abbott is the Chancellor of CQUniversity. Tertiary Education Division The Tertiary Education Division is led by the university's Provost and overseas the delivery of higher education and vocational education through the university's schools. Research The Research Division is led by the Deputy Vice-Chancellor, Research who is responsible for shaping and implementing the university's research strategy. International & Services Division The Senior Deputy Vice-Chancellor (International & Services) is responsible for oversight and strategic management of the facilities and services which support the overall operations of the university. 
The Vice President and Senior Deputy Vice Chancellor is responsible, as part of the Senior Executive for overall strategic planning, commercial operations and leadership of the business operations for the university. Within the University Services Portfolio lie the Directorates of Marketing, Facilities Management, People and Culture, Library Services, Information Technology, and Commercial Services.  The International Portfolio is responsible for management of the university's global operations including recruitment; delivery of programs; compliance; and government relations through embassies across the globe. Student Experience and Governance Division The Student Experience and Governance Division is led by the Deputy Vice-Chancellor (Student Experience & Governance) and is responsible for the management of governance processes within the university through the Council and sub-committees. The division is made up of three directorates including Governance, the Student Experience and Communications. The Governance Directorate has day to day carriage of governance activities. The Internal Audit Directorate operates as an independent appraisal function which forms an integral part of the university's internal control framework. The Student Experience and Communications Directorate is responsible for promoting, supporting and enhancing the university's reputation, activities and achievements, through strategic communications. Schools CQUniversity has six schools, each of which are managed by specialist Deans. The schools are: School of Education and the Arts School of Business & Law School of Engineering & Technology School of Medical and Applied Sciences School of Human, Health and Social Sciences School of Nursing and Midwifery Major areas of study CQUniversity runs programs in a wide range of disciplines, including apprenticeships, trades and training; business, accounting and law; creative, performing and visual arts; education and humanities; engineering and building environment; health; information technology and digital media; psychology, social work and community services, science and environment; and English (quality endorsed by NEAS Australia), work and study preparation. Campuses CQUniversity has the following campuses: CQUniversity Adelaide CQUniversity Brisbane CQUniversity Bundaberg CQUniversity Emerald CQUniversity Gladstone, City CQUniversity Gladstone, Marina CQUniversity Mackay, City CQUniversity Mackay, Ooralea CQUniversity Melbourne CQUniversity Noosa CQUniversity Rockhampton, North CQUniversity Rockhampton, City CQUniversity Sydney CQUniversity Cairns CQUniversity Townsville CQUniversity also operates delivery sites in Biloela, Cairns, Yeppoon, Cannonvale, Geraldton, Charters Towers and Edithvale (Melbourne) Rockhampton campuses Two campuses operate in the Rockhampton region: Rockhampton, City (formerly CQ TAFE) and Rockhampton, North. The Rockhampton City campus is centrally located and offers a wide range of study options from certificates and diplomas to undergraduate programs. It also offers short courses in a range of areas including business, hospitality and beauty. Key facilities include Wilby's Training Restaurant, Hair Essence Hair Salon, Engineering Technology Centre, Trade training workshops and an Adult Learning Centre. The Rockhampton North campus is the university's headquarters. The campus has facilities including an Engineering Precinct, Health Clinic, Student Residence, food court and Sports Centre. 
The Engineering Precinct has labs for fluids, thermodynamics, thermofluids, geotech, concrete and structures, and electronics. There is also a new lecture theatre, a postgraduate area, a materials-testing area, an acoustic test cell, a soils store, and a multi-purpose project-based learning lab. The public-access health clinic on campus caters for up to 160 clients per day. The clinic allows students to work with qualified health professionals in the areas of oral health, occupational therapy, physiotherapy, podiatry and speech pathology. Mackay campuses Two campuses operate in the Mackay region: CQUniversity Mackay, City (formerly CQ TAFE) and CQUniversity Mackay, Ooralea, including a Trades Training Centre. The Mackay City campus, located on Sydney Street in the Mackay CBD, delivers both vocational and academic courses. Facilities on the campus include 24-hour computer labs, training restaurants, a hairdressing salon, a beauty salon, a canteen and a library. The Mackay Ooralea campus is located on Mackay's southern outskirts, about six kilometres from the city centre. The campus includes lecture theatres, a performance theatre, tutorial rooms, computer laboratories, a nursing laboratory, video-conference rooms, recording studios, student accommodation, a bookshop, a refectory and a library. On-site accommodation is provided at the Mackay Residential College. The Trade Training Centre caters for 1500 students doing apprenticeship programs in electrical, plumbing, carpentry, furnishing, metal fabrication, mechanical fitting and light and heavy automotive training, as well as skills training for the building, construction, mines, minerals and energy sectors. Bundaberg campus CQUniversity's Bundaberg campus is located on a 23-hectare site on Bundaberg's southern outskirts. The campus specialises in small class sizes and individually focused learning and teaching. Campus facilities include a library, bookshop, campus refectory, a 200-seat and a 100-seat lecture theatre, four computer laboratories, nursing clinical laboratories and videoconferencing rooms. In 2012, Bundaberg Regional Council and CQUniversity signed an accord as a formal expression of their commitment to have Bundaberg recognised as a 'University City'. The campus has an academic and research building which includes a 64-seat scientific laboratory, a sound studio and multi-media and science research facilities. The campus also hosts a forensic crash lab to support learning for students enrolled in the Bachelor of Accident Forensics. From 2013, CQUniversity Bundaberg has also offered commercial pilot training through a partnership with the Australian Flight Academy. Gladstone campuses Two campuses operate in the Gladstone region: CQUniversity Gladstone, City (formerly CQ TAFE) and CQUniversity Gladstone, Marina. The Gladstone City campus is located in the CBD. It offers specialist training for the gas industry, instrumentation and business studies. Key facilities include a canteen, Engineering Technology Centre, computer labs, Adult Learning Centre, Hair Essence Hair Salon, beauty facilities and a sports oval. The Gladstone Marina campus is located within the Gladstone Marina precinct. It is home to the Gladstone Environmental Science Centre and the Gladstone Engineering Centre. Students at the campus use lecture theatre and training facilities, computer labs, the Cyril Golding Library, a bookshop and a range of career counselling and support services. 
Noosa campus CQUniversity Noosa was first established in 2003 as a hub in the small Sunshine Coast village of Pomona, offering courses in Learning Management. In 2007, the Campus relocated to Goodchap Street, Noosaville and its capacity was doubled to accommodate 1200 students. The Noosa campus offers modern facilities and surrounds including clinical nursing laboratories, library and student resource centre facilities, state of the art collaborative learning spaces and is home to the Learning and Teaching Education Research Centre (LTERC). Emerald campus CQUniversity Emerald (formerly CQ TAFE) is located on the Capricorn Highway, 275 km west of Rockhampton, and delivers trade based apprenticeships. Campus facilities include workshops for apprenticeship training, student common room and an afterhours computer lab. Brisbane campus CQUniversity Brisbane is located in the heart of the CBD at 160 Ann Street, Brisbane. The campus comprises nine floors of facilities including lecture rooms, multimedia labs, bookshop, library and a student lounge. Sydney campus CQUniversity Sydney is located on 400 Kent Street. With over 2000 international students, Sydney campus has the largest student population. The campus comprises lecture theatres, multimedia labs, bookshop, library, café and a student lounge. In 2013 the basement of the campus building was renovated and is now used as a dedicated space for students to relax and socialise. Melbourne campus CQUniversity Melbourne is a city campus. the Campus comprises multimedia labs, CQUni Bookshop, library, student lounge, and presentation and audio-visual equipment. Adelaide Campus CQUniversity Adelaide is located in the south-west of the city in close proximity to the Adelaide Showgrounds. The Campus is home to The Appleton Institute, a multidisciplinary research hub formerly Adelaide's Centre for Sleep Research. The Institute specialises in research, teaching and community engagement in a wide range of areas including safety science, sleep and fatigue, human factors and safety management, applied psychology, human-animal interaction and cultural anthropology. Cairns Distance Education Study Centre A Cairns study centre was established in July 2012 to cater to around 350 CQUniversity distance education students in the Far North Queensland region. The centre allows students to form study groups, access e-library and internet resources, sit exams, lodge assignments, participate in live lectures broadcast via high-speed internet, and make academic enquiries. Other sites CQUniversity also operates distance education centres, hubs and sites in Charters Towers, Cooma, Cannonvale, Townsville, Perth, Karratha, Edithvale, and Geraldton. 
Academic profile Research centres & institutes CQUniversity has numerous research centres, institutes and groups, including: the Appleton Institute; the Collaborative Research Network – Health (CRN); the Centre for Plant and Water Science; the Centre for Environmental Management; the Centre for Railway Engineering; the Centre for Intelligent and Networked Systems; the Process Engineering and Light Metals Centre (PELM); the Learning and Teaching Education Research Centre; the Queensland Centre for Domestic and Family Violence Research (CDFVR); the Centre for Physical Activity Studies (CPAS); the Centre for Mental Health Nursing Innovation; the Centre for Longitudinal and Preventative Health Research; the Capricornia Centre for Mucosal Immunology; the Institute for Health and Social Science Research (IHSSR); the Institute for Resources, Industry and Sustainability (IRIS); the Power Engineering Research Group; and the Business Research Group. The university is also a partner in the Queensland Centre for Social Science Innovation (QCSSI) together with the Queensland State Government, University of Queensland (UQ), Griffith University (GU), Queensland University of Technology (QUT) and James Cook University (JCU). The QCSSI is based at the St Lucia campus of UQ. Engagement CQUniversity's stated aim is to be Australia's most engaged university. To this end, the university has appointed a Pro Vice-Chancellor (Community & Engagement) and encourages staff to record their engagement experiences in a comprehensive engagement database known as E-DNA. The university also runs an award ceremony known as the Opal Awards, which recognise staff for excellence in engagement. In March 2012, CQUniversity appointed former Queensland University of Technology and Monash University academic Bronwyn Fredericks to the role of Pro Vice-Chancellor (Indigenous Engagement). Professor Fredericks, a Murri woman, is also the inaugural BMA Chair in Indigenous Engagement, a position funded by coal mining group BMA. Her stated aim is to pursue engagement with the Central Queensland region's numerous Indigenous communities to improve education outcomes. CQUniversity is a partner of Indian charity Salaam Baalak Trust, which rescues, cares for and educates street children. The university provides higher education scholarships to Salaam Baalak children and sponsors the charity's City Walk program. As part of its commitment to engagement, CQUniversity implemented Ucroo's Digital Campus platform and became a member of the Talloires Network. University art collection The university began collecting art in the 1970s and has since developed a collection of almost 600 art works, including international and Australian paintings, ceramics, prints and photographs. While there is not a gallery or museum space at the university, art works are displayed across the campus network and lent to other organisations, such as regional galleries and other universities, for display in temporary exhibitions. Rankings CQUniversity graduates were ahead of the national rate for graduate full-time employment according to figures compiled by Graduate Careers Australia (GCA). GCA reported a national graduate full-time employment rate of 71.3%, while a direct comparison put the CQUniversity graduate full-time employment rate at 81.1%. In 2013 CQUniversity was awarded five stars for online delivery, internationalisation and access in its first foray into the global university rating system QS Stars. It also scored a 4 for teaching and for facilities. In 2012, CQUniversity lifted its ranking in the Excellence in Research for Australia (ERA) audit from 28 (in 2010) to 21. 
The university picked up three five-star ratings in 2012, up from its 2010 result of just two three-star ratings. According to ERA 2012, CQUniversity performed at or well above world standard in four areas of research: nursing research continued to perform at 'world standard', while research in applied mathematics, agriculture and land management, and other medical and health sciences was rated at the highest level of performance, 'well above world standard'. Students As of 2014, CQUniversity had around 35,000 students enrolled across its various campuses as well as by distance education. International students can study at CQUniversity campuses located at Brisbane, Adelaide, Melbourne and Sydney, or at CQUniversity's regional campuses in Bundaberg, Gladstone, Noosa, Mackay or Rockhampton. Notable alumni Some of the notable alumni and past students of CQUniversity and its predecessor institutions include: Julian Assange, WikiLeaks founder Wayne Blair, Indigenous Australian filmmaker Martin Bowles, PSM, former Secretary of the Department of Health Tom Busby and Jeremy Marou of Australian rock duo Busby Marou Terry Effeney, chief executive officer of Energex Craig Foster, former Socceroo captain, prominent analyst, commentator, writer and advocate for human rights. Alexander Horneman-Wren SC Anna Meares, Olympic gold medal-winning track cyclist William McInnes, actor and author Peter Saide, Broadway performer Paul Ettore Tabone, opera and musical theatre performer (The Ten Tenors) Carolyn Hardy, International Board Member at Amnesty International David Battersby, Vice-Chancellor of Federation University Craig Zonca, breakfast presenter at ABC Radio Brisbane. Yohani, Sri Lankan singer, songwriter and rapper. See also List of universities in Australia Education in Australia References External links Central Queensland University Universities in Queensland Educational institutions established in 1967 Rockhampton Buildings and structures in Rockhampton 1967 establishments in Australia Chiropractic schools in Australia Schools in Queensland Central Queensland
5348289
https://en.wikipedia.org/wiki/Task%20Manager%20%28Windows%29
Task Manager (Windows)
Task Manager, previously known as Windows Task Manager, is a task manager, system monitor, and startup manager included with Microsoft Windows systems. It provides information about computer performance and running software, including the names of running processes, CPU and GPU load, commit charge, I/O details, logged-in users, and Windows services. Task Manager can also be used to set process priorities, processor affinity, start and stop services, and forcibly terminate processes. The program can be started in recent versions of Windows by pressing Win+R and then typing in taskmgr.exe, by pressing Ctrl+Alt+Delete and clicking Start Task Manager, by pressing Ctrl+Shift+Esc, by right-clicking on the Windows taskbar and selecting "Task Manager", or by typing taskmgr in the File Explorer address bar. Task Manager was introduced in its current form with Windows NT 4.0. Prior versions of Windows NT, as well as Windows 3.x, include the Task List application, which is capable of listing currently running processes and killing them, or creating new processes. Windows 9x has a program known as Close Program which lists the programs currently running and offers options to close programs as well as shut down the computer. Functionality Since Windows 8, Task Manager has two views. The first time Task Manager is invoked by a user, it shows in a simplified summary mode (described in the user experience as Fewer Details). It can be switched to a more detailed mode by clicking More Details. This setting is remembered for that user on that machine. Since at least Windows 2000, the CPU usage can be displayed as a tray icon in the taskbar for a quick glance. Summary mode In summary mode, Task Manager shows a list of currently running programs that have a main window. It has a "more details" hyperlink that activates a full-fledged Task Manager with several tabs. Right-clicking any of the applications in the list allows switching to that application or ending the application's task. Issuing an end task causes a request for a graceful exit to be sent to the application. Processes and details The Processes tab shows a list of all running processes on the system. This list includes Windows Services and processes from other accounts. The Delete key can also be used to terminate processes on the Processes tab. By default, the Processes tab shows the user account the process is running under, the amount of CPU, and the amount of memory the process is currently consuming. There are more columns that can be shown. The Processes tab divides the processes into three categories: Apps: Programs with a main window Windows processes: Components of Windows itself that do not have a main window, including services Background processes: Programs that do not have a main window, including services, and are not part of Windows itself This tab shows the name of every main window and every service associated with each process. Both a graceful exit command and a termination command can be sent from this tab, depending on whether the command is sent to the process or its window. The Details tab is a more basic version of the Processes tab, and acts similar to the Processes tab in Windows 7 and earlier. It has a more rudimentary user experience and can perform some additional actions. Right-clicking a process in the list allows changing the priority the process has, setting processor affinity (setting which CPU(s) the process can execute on), and allows the process to be ended. Choosing to End Process causes Windows to immediately kill the process.
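The priority, affinity, and termination actions exposed on the Details tab correspond to documented Win32 APIs. The following is a minimal illustrative sketch, not part of Task Manager itself, showing how a small standalone C program could perform the same three operations on a process identified by its PID; the PID value is a placeholder and error handling is reduced to a single check.

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD pid = 1234; /* placeholder PID; in practice taken from user input */

    /* Open the target process with just enough rights for each operation. */
    HANDLE h = OpenProcess(PROCESS_SET_INFORMATION | PROCESS_TERMINATE,
                           FALSE, pid);
    if (h == NULL) {
        printf("OpenProcess failed: %lu\n", GetLastError());
        return 1;
    }

    /* Equivalent of "Set priority" in the Details tab. */
    SetPriorityClass(h, BELOW_NORMAL_PRIORITY_CLASS);

    /* Equivalent of "Set affinity": restrict the process to CPUs 0 and 1. */
    SetProcessAffinityMask(h, 0x3);

    /* Equivalent of "End process": terminate with no chance to clean up. */
    TerminateProcess(h, 1);

    CloseHandle(h);
    return 0;
}
```

As with Task Manager itself, such a program can only act on processes its security context has access to; acting on another user's processes requires administrative rights.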
Choosing to "End Process Tree" causes Windows to immediately kill the process, as well as all processes directly or indirectly started by that process. Unlike choosing End Task from the Applications tab, when choosing to End Process the program is not given a warning nor a chance to clean up before ending. However, when a process that is running under a security context different from the one which issued the call to Terminate Process, the use of the KILL command-line utility is required. Performance The Performance tab shows overall statistics about the system's performance, most notably the overall amount of CPU usage and how much memory is being used. A chart of recent usage for both of these values is shown. Details about specific areas of memory are also shown. There is an option to break the CPU usage graph into two sections: kernel mode time and user mode time. Many device drivers, and core parts of the operating system run in kernel mode, whereas user applications run in user mode. This option can be turned on by choosing Show kernel times from the View menu. When this option is turned on the CPU usage graph will show a green and a red area. The red area is the amount of time spent in kernel mode, and the green area shows the amount of time spent in user mode. The Performance tab also shows statistics relating to each of the network adapters present in the computer. By default, the adapter name, percentage of network utilization, link speed, and state of the network adapter are shown, along with a chart of recent activity. App history The App history tab shows resource usage information about Universal Windows Platform apps. Windows controls the life cycle of these apps more tightly. This tab is where the data that Windows has collected about them can be viewed. Startup The Startup tab manages software that starts with Windows shell. Users The Users tab shows all users that currently have a session on the computer. On server computers, there may be several users connected to the computer using Terminal Services (or the Fast User Switching service, on Windows XP). Users can be disconnected or logged off from this tab. History Task Manager was originally an external side project developed at home by Microsoft developer David Plummer; encouraged by Dave Cutler and coworkers to make it part of the main product "build", he donated the project in 1995. The original task manager design featured a different Processes page with information being taken from the public Registry APIs rather than the private internal operating system metrics. Windows 9x A Close Program dialog box comes up when is pressed in Windows 9x. Also, in Windows 9x, there is a program called Tasks (TASKMAN.EXE) located in the Windows directory. It is rudimentary and has fewer features. The System Monitor utility in Windows 9x contains process and network monitoring functionality similar to that of the Windows Task Manager. Also, the Tasks program is called by clicking twice on the desktop if Explorer process is down. Windows XP In Windows XP only, there is a Shutdown menu that provides access to Standby, Hibernate, Turn off, Restart, Log Off, and Switch User. Later versions of Windows make these options available through the start menu. On the Performance tab, the display of the CPU values was changed from a display mimicking a LED seven-segment display, to a standard numeric value. 
This was done to accommodate non-Arabic numeral systems, such as Eastern Arabic numerals, which cannot be represented using a seven-segment display. Prior to Windows XP, process names longer than 15 characters in length were truncated. This problem was resolved in Windows XP. The Users tab was introduced in Windows XP. Beginning with Windows XP, the Delete key is enabled on the Processes tab. Windows Vista Windows Task Manager has been updated in Windows Vista with new features, including: A "Services" tab to view and modify currently running Windows services and start and stop any service, as well as enable/disable the User Account Control (UAC) file and registry virtualization of a process. New "Image Path Name", "Command Line", and "Description" columns in the Processes tab. These show the full name and path of the executable image running in a process, any command-line parameters that were provided, and the image file's "Description" property. New columns showing DEP and virtualization statuses. Virtualization status refers to UAC virtualization, under which file and registry references to certain system locations will be silently redirected to user-specific areas. By right-clicking on any process, it is possible to directly open the Properties of the process's executable image file or of the directory (folder) containing the process. The Task Manager has also been made less vulnerable to attack from remote sources or viruses as it must be operating under administrative rights to carry out certain tasks, such as logging off other connected users or sending messages. The user must go into the "Processes" tab and click "Show processes from other users" in order to verify administrative rights and unlock these privileges. Showing processes from all users requires all users including administrators to accept a UAC prompt, unless UAC is disabled. If the user is not an administrator, they must enter a password for an administrator account when prompted to proceed, unless UAC is disabled, in which case the elevation does not occur. By right-clicking on any running process, it is possible to create a dump. This feature can be useful if an application or a process is not responding, so that the dump file can be opened in a debugger to get more information. The Shutdown menu containing Standby, Hibernate, Turn off, Restart, Log Off and Switch User has been removed. This was done due to low usage, and to reduce the overall complexity of Task Manager. The Performance tab shows the system uptime. Windows 8 In Windows 8, Windows Task Manager has been overhauled and the following changes were made: Starting in Windows 8, the tabs are hidden by default and Task Manager opens in summary mode (Fewer details). This view only shows applications and their associated processes. Prior to Windows 8, what is shown in the summary mode was shown in the tab named "Applications". Resource utilization in the Processes tab is shown with various shades of yellow, with darker color representing heavier use. The Performance tab is split into CPU, memory, disk, ethernet, and wireless network (if applicable) sections. There are overall graphs for each, and clicking on one reaches details for that particular resource. This includes consolidating information that previously appeared in the Networking tab from Windows XP through Windows 7. The CPU tab no longer displays individual graphs for every logical processor on the system by default. It now can show data for each NUMA node.
The CPU tab now displays simple percentages on heat-mapping tiles to display utilization for systems with many (64 up to 640) logical processors. The color used for these heat maps is blue, with darker color again indicating heavier utilization. Hovering the cursor over any logical processor's data now shows the NUMA node of that processor and its ID. A new Startup tab has been added that lists running startup applications. Previously, MSConfig was in charge of this task or, in Windows Vista only, the "Software Explorer" section of Windows Defender. The Windows Defender that shipped built into Windows 7 lacked this option, and it was not present in the downloadable Microsoft Security Essentials either. The Processes tab now lists application names, application status, and overall usage data for CPU, memory, hard disk, and network resources for each process. A new App History tab is introduced. The application status can be changed to suspended. The normal process information found in the older Task Manager can be found in the new Details tab. Windows 10 The Processes tab is divided into categories. GPU information is displayed in the Performance tab, if available. Weakness Task Manager is a common target of computer viruses and other forms of malware; typically malware will close the Task Manager as soon as it is started, so as to hide itself from users. Variants of the Zotob and Spybot worms have used this technique, for example. Using Group Policy, it is possible to disable the Task Manager. Many types of malware also enable this policy setting in the registry. Rootkits can prevent themselves from getting listed in the Task Manager, thereby preventing their detection and termination using it. See also Resource Monitor Process Explorer Taskkill Tasklist Windows Task Scheduler References External links How to use and troubleshoot issues with Windows Task Manager, Microsoft Help and Support Windows 8 Task Manager In-Depth, Gavin Gear, Blogging Windows Utilities for Windows Windows components Task managers
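The Group Policy restriction described in the Weakness section above is commonly persisted in the registry as a per-user DisableTaskMgr value under the Policies key. The following C sketch is an illustrative, standalone check (not part of any Microsoft tool) that reads this value to report whether Task Manager has been disabled for the current user; the key path and value name given are the commonly documented ones, and verifying against current Microsoft documentation is advisable.

```c
#include <windows.h>
#include <stdio.h>
/* Link with Advapi32.lib for the registry functions. */

int main(void)
{
    HKEY key;
    DWORD value = 0, size = sizeof(value), type = 0;
    const char *path = "Software\\Microsoft\\Windows\\CurrentVersion\\Policies\\System";

    /* The policy lives under HKCU; if the key is absent, no policy is set. */
    if (RegOpenKeyExA(HKEY_CURRENT_USER, path, 0, KEY_READ, &key) != ERROR_SUCCESS) {
        printf("No policy key present; Task Manager is not disabled by policy.\n");
        return 0;
    }

    if (RegQueryValueExA(key, "DisableTaskMgr", NULL, &type,
                         (LPBYTE)&value, &size) == ERROR_SUCCESS &&
        type == REG_DWORD && value == 1) {
        printf("Task Manager is disabled by policy for this user.\n");
    } else {
        printf("Task Manager is not disabled by policy for this user.\n");
    }

    RegCloseKey(key);
    return 0;
}
```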
4710169
https://en.wikipedia.org/wiki/Comparison%20of%20BSD%20operating%20systems
Comparison of BSD operating systems
There are a number of Unix-like operating systems based on or descended from the Berkeley Software Distribution (BSD) series of Unix variants. The three most notable descendants in current use are FreeBSD, OpenBSD, and NetBSD, which are all derived from 386BSD and 4.4BSD-Lite, by various routes. Both NetBSD and FreeBSD started life in 1993, initially derived from 386BSD, but in 1994 migrated to a 4.4BSD-Lite code base. OpenBSD was forked from NetBSD in 1995. Other notable derivatives include DragonFly BSD, which was forked from FreeBSD 4.8, and Apple Inc.'s iOS and macOS, with its Darwin base including a large amount of code derived from FreeBSD. Most of the current BSD operating systems are open source and available for download, free of charge, under the BSD License, the most notable exceptions being macOS and iOS. They also generally use a monolithic kernel architecture, apart from macOS, iOS, and DragonFly BSD which feature hybrid kernels. The various open source BSD projects generally develop the kernel and userland programs and libraries together, the source code being managed using a single central source repository. In the past, BSD was also used as a basis for several proprietary versions of UNIX, such as Sun's SunOS, Sequent's Dynix, NeXT's NeXTSTEP, DEC's Ultrix and OSF/1 AXP (which became the now discontinued Tru64 UNIX). Parts of NeXT's software became the foundation for macOS which, together with iOS, is among the most commercially successful BSD variants in the general market. Aims and philosophies FreeBSD FreeBSD aims to make an operating system usable for any purpose. It is intended to run a wide variety of applications, be easy to use, contain cutting-edge features, and be highly scalable on very high load network servers. FreeBSD is free software, and the project prefers the FreeBSD license. However, the project sometimes accepts non-disclosure agreements (NDAs) and includes a limited number of nonfree hardware abstraction layer (HAL) modules for specific device drivers in its source tree, to support the hardware of companies who do not provide purely libre drivers (such as HALs to program software-defined radios so that vendors do not share their nonfree algorithms). To maintain a high level of quality and provide good support for "production quality commercial off-the-shelf (COTS) workstation, server, and high-end embedded systems", FreeBSD focuses on a narrow set of architectures. A significant focus of development since 2000 has been fine-grained locking and SMP scalability. From 2007 on, most of the kernel used fine-grained locking and scaling improvements started to be seen. Other recent work includes Common Criteria security functionality, such as mandatory access control and security event audit support. Derivatives: TrueNAS/FreeNAS – a network-attached storage (NAS) operating system based on FreeBSD. FuryBSD – a FreeBSD-based operating system, founded after Project Trident decided to build on Void Linux instead of TrueOS. Discontinued in October 2020. GhostBSD – a FreeBSD-based operating system with OpenRC and OS packages. Junos OS – a FreeBSD-based nonfree operating system distributed with Juniper Networks hardware. NomadBSD – a persistent live system for USB flash drives, based on FreeBSD. ClonOS – a virtual hosting platform/appliance based on FreeBSD. pfSense – an open source firewall/router computer software distribution based on FreeBSD. OPNsense – an open source firewall/router computer software distribution based on FreeBSD.
BSDRP – BSD Router Project: an open source router distribution based on FreeBSD. HardenedBSD – a security-enhanced fork of FreeBSD. StarBSD – a Unix-like, server-oriented operating system based on FreeBSD for mission-critical enterprise environments. TrueOS (previously PC-BSD) – a FreeBSD-based server operating system, previously a desktop operating system. The project was officially discontinued in May 2020. XigmaNAS – network-attached storage (NAS) server software with a dedicated management web interface. helloSystem – a GUI-focused system with a macOS-like interface. NetBSD NetBSD aims to provide a freely redistributable operating system that professionals, hobbyists, and researchers can use in any manner they wish. The main focus is portability, through the use of clear distinctions between machine-dependent and machine-independent code. It runs on a wide variety of 32-bit and 64-bit processor architectures and hardware platforms, and is intended to interoperate well with other operating systems. NetBSD places emphasis on correct design, well-written code, stability, and efficiency. Where practical, close compliance with open API and protocol standards is also aimed for. In June 2008, the NetBSD Foundation moved to a two-clause BSD license, citing changes at UCB and industry applicability. NPF is a project spawned by NetBSD. Derivatives: OS108 – a system with a graphical desktop environment, based on NetBSD. OpenBSD OpenBSD is a security-focused BSD known for its developers' insistence on extensive, ongoing code auditing for security and correct functionality, a "secure by default" philosophy, good documentation, and adherence to strictly open source licensing. The system incorporates numerous security features that are absent or optional in other versions of BSD. The OpenBSD policy on openness extends to hardware documentation and drivers, since without these, there can be no trust in the correct operation of the kernel and its security, and vendor software bugs would be hard to resolve. OpenBSD emphasizes very high standards in all areas. Security policies include disabling all non-essential services and having sane initial settings; integrated cryptography (originally made easier due to relaxed Canadian export laws relative to the United States); full public disclosure of all security flaws discovered; thorough auditing of code for bugs and security issues; and various security features, including the W^X page protection technology and heavy use of randomization to mitigate attacks. Coding approaches include an emphasis on searching for similar issues throughout the code base if any code issue is identified. Concerning software freedom, OpenBSD prefers the BSD or ISC license, with the GPL acceptable only for existing software which is impractical to replace, such as the GNU Compiler Collection. NDAs are never considered acceptable. In common with its parent, NetBSD, OpenBSD strives to run on a wide variety of hardware. Where licenses conflict with OpenBSD's philosophy, the OpenBSD team has re-implemented major pieces of software from scratch, which have often become the standard used within other versions of BSD. Examples include the pf packet filter, new privilege separation techniques used to safeguard tools such as tcpdump and tmux, much of the OpenSSH codebase, and replacing GPL-licensed tools such as diff, grep and pkg-config with ISC or BSD licensed equivalents. OpenBSD prominently notes the success of its security approach on its website home page.
Only two vulnerabilities have ever been found in its default install (an OpenSSH vulnerability found in 2002, and a remote network vulnerability found in 2007) in a period of almost 22 years. According to OpenBSD expert Michael W. Lucas, OpenBSD "is widely regarded as the most secure operating system available anywhere, under any licensing terms." OpenBSD has spawned numerous child projects such as OpenSSH, OpenNTPD, OpenBGPD, OpenSMTPD, PF, CARP, and LibreSSL. Many of these are designed to replace restricted alternatives. Derivatives: LibertyBSD – aimed to be a 'deblobbed' version of OpenBSD. There are a number of reasons as to why blobs can be problematic, according to the project. LibertyBSD began going through the process to become Free Software Foundation FSDG certified, but ultimately never was accepted. LibertyBSD is no longer actively developed, and the project page directs people instead to HyperbolaBSD. DragonFly BSD DragonFly BSD aims to be inherently easy to understand and develop for multi-processor infrastructures. The main goal of the project, forked from FreeBSD 4.8, is to radically change the kernel architecture, introducing microkernel-like message passing which will enhance scaling and reliability on symmetric multiprocessing (SMP) platforms while also being applicable to NUMA and clustered systems. The long-term goal is to provide a transparent single system image in clustered environments. DragonFly BSD originally supported both the IA-32 and x86-64 platforms; however, support for IA-32 was dropped in version 4.0. Matthew Dillon, the founder of DragonFly BSD, believes supporting fewer platforms makes it easier for a project to do a proper, ground-up symmetric multiprocessing implementation. Popularity In September 2005, the BSD Certification Group, after advertising on a number of mailing lists, surveyed 4,330 BSD users, 3,958 of whom took the survey in English, to assess the relative popularity of the various BSD operating systems. About 77% of respondents used FreeBSD, 33% used OpenBSD, 16% used NetBSD, 2.6% used Dragonfly, and 6.6% used other (potentially non-BSD) systems. Other languages offered were Brazilian and European Portuguese, German, Italian, and Polish. Note that there was no control group or pre-screening of the survey takers. Those who checked "Other" were asked to specify that operating system. Because survey takers were permitted to select more than one answer, the percentages shown in the graph, which are out of the number of survey participants, add up to greater than 100%. If a survey taker filled in more than one choice for "Other", this was still only counted as one vote for "Other" on this chart. Another attempt to profile worldwide BSD usage is the *BSDstats Project, whose primary goal is to demonstrate to hardware vendors the penetration of BSD and the viability of hardware drivers for the operating system. The project collects data monthly from any BSD system administrators willing to participate, and currently records the BSD market share of participating FreeBSD, OpenBSD, NetBSD, DragonflyBSD, Debian GNU/kFreeBSD, TrueOS, and MirBSD systems. In 2020, a new independent project was introduced to collect statistics with the goal of significantly increasing the number of observed parameters. DistroWatch, well known in the Linux community and often used as a rough guide to free operating system popularity, publishes page hits for each of the Linux distributions and other operating systems it covers.
As of 27 March 2020, using a data span of the last six months it placed FreeBSD in 21st place with 452 hits per day, GhostBSD in 51st place with 243 hits, TrueOS in 54th place with 182 hits per day, DragonflyBSD in 75th place with 180 hits, OpenBSD in 80th place with 169 hits per day and NetBSD in 109th place with 105 hits per day. Names, logos, slogans The names FreeBSD and OpenBSD are references to software freedom: both in cost and open source. NetBSD's name is a tribute to the Internet, which brought the original developers together. The first BSD mascot was the BSD daemon, named after a common type of Unix software program, a daemon. FreeBSD still uses the image, a red cartoon daemon named Beastie, wielding a pitchfork, as its mascot today. In 2005, after a competition, a stylized version of Beastie's head designed and drawn by Anton Gural was chosen as the FreeBSD logo. The FreeBSD slogan is "The Power to Serve." The NetBSD flag, designed in 2004 by Grant Bissett, is inspired by the original NetBSD logo, designed in 1994 by Shawn Mueller, portraying a number of BSD daemons raising a flag on top of a mound of computer equipment. This was based on a World War II photograph, Raising the Flag on Iwo Jima. The Board of Directors of The NetBSD Foundation believed this was too complicated, too hard to reproduce and had negative cultural ramifications and was thus not a suitable image for NetBSD in the corporate world. The new, simpler flag design replaced this. The NetBSD slogan is "Of course it runs NetBSD", referring to the operating system's portability. Originally, OpenBSD used the BSD daemon as a mascot, sometimes with an added halo as a distinguishing mark, but OpenBSD later replaced its BSD daemon with Puffy. Although Puffy is usually referred to as a pufferfish, the spikes on the cartoon images give him a closer likeness to the porcupinefish. The logo is a reference to the fish's defensive capabilities and to the Blowfish cryptography algorithm used in OpenSSH. OpenBSD also has a number of slogans including "Secure by default", which was used in the first OpenBSD song, "E-railed", and "Free, Functional & Secure", and OpenBSD has released at least one original song with every release since 3.0. The DragonFly BSD logo, designed by Joe Angrisano, is a dragonfly named Fred. A number of unofficial logos by various authors also show the dragonfly or stylized versions of it. DragonFly BSD considers itself to be "the logical continuation of the FreeBSD 4.x series." FireflyBSD has a similar logo, a firefly, showing its close relationship to DragonFly BSD. In fact, the FireflyBSD website states that proceeds from sales will go to the development of DragonFly BSD, suggesting that the two may in fact be very closely related. PicoBSD's slogan is "For the little BSD in all of us," and its logo includes a version of FreeBSD's Beastie as a child, showing its close connection to FreeBSD, and the minimal amount of code needed to run as a Live CD. A number of BSD OSes use stylized version of their respective names for logos. This includes macOS, TrueOS, GhostBSD, DesktopBSD, ClosedBSD, and MicroBSD. TrueOS's slogan is "Personal computing, served up BSD style!", GhostBSD's "A simple, secure BSD served on a Desktop." DesktopBSD's "A Step Towards BSD on the Desktop." MicroBSD's slogan is "The small secure unix like OS." MirOS's site collects a variety of BSD mascots and Tux, the Linux mascot, together, illustrating the project's aim of supporting both BSD and Linux kernels. 
MirOS's slogan is "a wonderful operating system for a world of peace." General information See also List of BSD operating systems BSD license Comparison of open source operating systems Comparison of operating systems Notes and references Other sources A semi-official download page. Comparison BSD operating systems
15167
https://en.wikipedia.org/wiki/ICQ
ICQ
ICQ New is a cross-platform instant messaging (IM) and VoIP client. The name ICQ derives from the English phrase "I Seek You". Originally developed by the Israeli company Mirabilis in 1996, the client was bought by AOL in 1998, and then by Mail.Ru Group (now VK) in 2010. The ICQ client application and service were initially released in November 1996, freely available to download. ICQ was among the first stand-alone instant messengers (IM) — while real-time chat was not in itself new (Internet Relay Chat [IRC] being the most common platform at the time), the concept of a fully centralized service with individual user accounts focused on one-on-one conversations set the blueprint for later instant messaging services like AIM, and its influence is seen in modern social media applications. ICQ became the first widely adopted IM platform. At its peak around 2001, ICQ had more than 100 million accounts registered. At the time of the Mail.Ru acquisition in 2010, there were around 42 million daily users. Since 2013, ICQ has had 11 million monthly users. In 2020, the Mail.Ru Group, which owns ICQ, decided to launch its new ICQ New software, based on its messenger. The updated messenger was presented to the general public on April 6, 2020. During the second week of January 2021, ICQ saw a renewed increase in popularity in Hong Kong, spurred on by the controversy over WhatsApp's privacy policy update. The number of downloads for the application increased 35-fold in the region. Features Private chats are conversations between two users. After logging into an account, a user can access the chat from any device thanks to cloud synchronization. A user can delete a sent message at any time either in their own chat or in their conversation partner's, and a notification will be received instead indicating that the message has been deleted. Any important messages from group or private chats, as well as an unlimited number and size of media content, can be sent to the conversation with oneself. Essentially, this chat acts as free cloud storage. Group chats are special chats that can include up to 25 thousand participants at the same time. Any user can create a group. A user can hide their phone number from other participants; there is an advanced polling feature; there is the possibility to see which group members have read a message, and notifications can be switched off for messages from specific group members. Channels are an alternative to blogs. Channel authors can publish posts as text messages and also attach media files. Once the post is published, subscribers receive a notification as they would from regular and group chats. The channel author can remain anonymous and does not have to show any information in the channel description. A special bot API is available and can be used by anyone to create a bot, i.e. a small program which performs specific actions and interacts with the user. Bots can be used in a variety of ways ranging from entertainment to business services. Stickers (small images or photos expressing some form of emotion) are available to make communication via the application more emotive and personalized. Users can use the sticker library already available or upload their own. In addition, thanks to machine learning, the software will itself recommend a sticker during communication. Masks are images that are superimposed onto the camera in real-time. They can be used during video calls, superimposed onto photos and sent to other users. A nickname is a name made up by a user.
It can replace a phone number when searching for and adding user contact. By using a nickname, users can share their contact details without providing a phone number. Smart answers are short phrases that appear above the message box which can be used to answer messages. ICQ NEW analyzes the contents of a conversation and suggests a few pre-set answers. ICQ NEW makes it possible to send audio messages. However, for people who do not want to or cannot listen to the audio, the audio can be automatically transcribed into text. All the user needs to do is click the relevant button and they will see the message in text form. Aside from text messaging, users can call each other as well as arrange audio or video calls for up to five people. During the video call, AR-masks can be used. UIN ICQ users are identified and distinguished from one another by UIN, or User Identification Numbers, distributed in sequential order. The UIN was invented by Mirabilis, as the user name assigned to each user upon registration. Issued UINs started at '10,000' (5 digits) and every user receives a UIN when first registering with ICQ. As of ICQ6 users are also able to log in using the specific e-mail address they associated with their UIN during the initial registration process. Unlike other instant messaging software or web applications, on ICQ the only permanent user info is the UIN, although it is possible to search for other users using their associated e-mail address or any other detail they have made public by updating it in their account's public profile. In addition the user can change all of his or her personal information, including screen name and e-mail address, without having to re-register. Since 2000 ICQ and AIM users were able to add each other to their contact list without the need for any external clients. (The AIM service has since been discontinued.) As a response to UIN theft or sale of attractive UINs, ICQ started to store email addresses previously associated with a UIN. As such UINs that are stolen can sometimes be reclaimed. This applies only if (since 1999 onwards) a valid primary email address was entered into the user profile. History The founding company of ICQ, Mirabilis, was established in June 1996 by five Israeli developers: Yair Goldfinger, Sefi Vigiser, Amnon Amir, Arik Vardi, and Arik's father Yossi Vardi. They recognized that many people were accessing the internet through non-UNIX operating systems, such as Microsoft Windows, and those users were unfamiliar with established chat technologies, e.g. IRC. ICQ was one of the first text-based messengers to reach a wide range of users. The technology Mirabilis developed for ICQ was distributed free of charge. The technology's success encouraged AOL to acquire Mirabilis on June 8, 1998, for $287 million up front and $120 million in additional payments over three years based on performance levels. In 2002 AOL successfully patented the technology. After the purchase the product was initially managed by Ariel Yarnitsky and Avi Shechter. ICQ's management changed at the end of 2003. Under the leadership of the new CEO, Orey Gilliam, who also assumed the responsibility for all of AOL's messaging business in 2007, ICQ resumed its growth; it was not only a highly profitable company, but one of AOL's most successful businesses. Eliav Moshe replaced Gilliam in 2009 and became ICQ's managing director. In April 2010, AOL sold ICQ to Digital Sky Technologies, headed by Alisher Usmanov, for $187.5 million. 
While ICQ was displaced by AOL Instant Messenger, Google Talk, and other competitors in the U.S. and many other countries over the 2000s, it remained the most popular instant messaging network in Russian-speaking countries, and an important part of online culture. Popular UINs commanded prices of over 11,000₽ in 2010. In September of that year, Digital Sky Technologies changed its name to Mail.Ru Group. Since the acquisition, Mail.ru has invested in turning ICQ from a desktop client into a mobile messaging system. As of 2013, around half of ICQ's users were using its mobile apps, and in 2014, the number of users began growing for the first time since the purchase. In March 2016 the source code of the client was released under the Apache license on github.com. Development history ICQ 99a/b were the first releases that were widely available. ICQ 2000 incorporated Notes and Reminder features. ICQ 2001 included server-side storage of the contact list. This provided synchronization between multiple computers and enforced obtaining consent before adding UINs to the contact list by preventing clients from modifying the local contact list directly. On December 19, 2002, AOL Time Warner announced that ICQ had been issued a United States patent for instant messaging. ICQ 2002 was the last completely advertising-free ICQ version. ICQ Pro 2003b was the first ICQ version to use the ICQ protocol version 10. However, ICQ 5 and 5.1 use version 9 of the protocol. ICQ 2002 and 2003a used version 8 of the ICQ protocol. Earlier versions (ICQ 2001b and all ICQ clients before it) used ICQ protocol version 7. ICQ 4 and later ICQ 5 (released on Monday, February 7, 2005), were upgrades of ICQ Lite. One addition was Xtraz, which offers games and features intended to appeal to younger users of the Internet. ICQ Lite was originally an idea to offer the lighter users of instant messaging an alternative client which was a smaller download and less resource-hungry for relatively slow computers. ICQ 5 introduced skins support. There are few official skins available for the current ICQ 5.1 at the official website; however, a number of user-generated skins have been made available for download. ICQ 6, released on April 17, 2007, was the first major update since ICQ 4. The user interface has been redesigned using Boxely, the same rendering engine used in AIM Triton. This change adds new features such as the ability to send IMs directly from the client's contact list. ICQ has recently started forcing users of v5.1 to upgrade to version 6 (and XP). Those who do not upgrade will find their older version of ICQ does not start up. Although the upgrade to version 6 should be seen as a positive thing, some users may find that useful features such as sending multiple files at one time are no longer supported in the new version. At the beginning of July 2008, a network upgrade forced users to stop using ICQ 5.1 - applications that identified themselves as ICQ 5, such as Pidgin, were forced to identify themselves as ICQ 6. There seems to be no alternative for users other than using a different IM program or patching ICQ 5.1 with a special application. ICQ 7.0, released on January 18, 2010. This update includes integration with Facebook and other websites. It also allows custom personal status similar to Windows Live Messenger (MSN Messenger). ICQ 7.0 does not support traditional Chinese on standard installation or with the addition of an official language pack.
This has made its adoption difficult with the established user base from Hong Kong and Taiwan, where traditional Chinese is the standard script. ICQ 8, released on February 5, 2012 - "Meet the new generation of ICQ, Enjoy free video calls, messages and SMS, social networks support and more." ICQ 10.0, released January 18, 2016. The newest update is 10.0 Build 12393, released on November 8, 2018. Criticism AOL pursued an aggressive policy regarding alternative ("unauthorized") ICQ clients. In July 2008 changes were implemented on ICQ servers causing many unofficial clients to stop working. These users received an official notification from "ICQ System". On December 9, 2008, another change to the ICQ servers occurred: clients sending Client IDs not matching ICQ 5.1 or higher stopped working. On December 29, 2008, the ICQ press service distributed a statement characterizing alternative clients as dangerous. On January 21, 2009, ICQ servers started blocking all unofficial clients in Russia and Commonwealth of Independent States countries. Users in Russia and Ukraine received a message from UIN 1: "Системное сообщение ICQ не поддерживает используемую вами версию. Скачайте бесплатную авторизованную версию ICQ с официального web-сайта ICQ. System Message The version you are using is not supported by ICQ. Download a free authorized ICQ version from ICQ's official website." On icq.com there was an "important message" for Russian-speaking ICQ users: "ICQ осуществляет поддержку только авторизированных версий программ: ICQ Lite и ICQ 6.5." ("ICQ supports only authorized versions of programs: ICQ Lite and ICQ 6.5.") On February 3, 2009, the events of January 21 were repeated. On December 27, 2018, ICQ announced it was to stop supporting unofficial clients, affecting many users who preferred compact clients such as Miranda. On December 28, 2018, ICQ stopped working on some unofficial clients. In late March 2019, ICQ stopped working on the Pidgin client, as initiated in December 2018. Cooperation with Russian intelligence services According to a Novaya Gazeta article published in May 2018, Russian intelligence agencies have access to online reading of ICQ users' correspondence. The article examined 34 sentences handed down by Russian courts in cases where evidence of the defendants' guilt was obtained by reading correspondence on a PC or mobile devices. Of the fourteen cases in which ICQ was involved, in six the information was captured before the device was seized. The article was prompted by the blocking of the Telegram service and the recommendation by Herman Klimenko, Advisor to the President of the Russian Federation, to use ICQ instead. Clients AOL's OSCAR network protocol used by ICQ is proprietary and using a third-party client is a violation of ICQ Terms of Service. Nevertheless, a number of third-party clients have been created by using reverse-engineering and protocol descriptions.
These clients include: Adium: supports ICQ, Yahoo!, AIM, MSN, Google Talk, XMPP, and others, for macOS Ayttm: supports ICQ, Yahoo!, AIM, MSN, IRC, and XMPP bitlbee: IRC gateway, supports ICQ, Yahoo!, AIM, MSN, Google Talk, and XMPP centericq: supports ICQ, Yahoo!, AIM, MSN, IRC and XMPP, text-based climm (formerly mICQ): text-based Fire: supports ICQ, Yahoo!, AIM, MSN, IRC, and XMPP, for macOS Jimm: supports ICQ, for Java ME mobile devices Kopete: supports AIM, ICQ, MSN, Yahoo, XMPP, Google Talk, IRC, Gadu-Gadu, Novell GroupWise Messenger and others, for Unix-like Meetro: IM and social networking combined with location; supports AIM, ICQ, MSN, Yahoo! Miranda IM: supports ICQ, Yahoo!, AIM, MSN, IRC, Google Talk, XMPP, Gadu-Gadu, BNet and others, for Windows Naim: ncurses-based Pidgin (formerly Gaim): supports ICQ, Yahoo!, AIM, Gtalk, MSN, IRC, XMPP, Gadu-Gadu, SILC, Meanwhile, (IBM Lotus Sametime) and others QIP: supports ICQ, AIM, XMPP and XIMSS stICQ: supports ICQ, for Symbian OS Trillian: supports ICQ, IRC, Google Talk, XMPP and others AOL supported clients include: AOL Instant Messenger (discontinued in 2017) Messages/iChat: uses ICQ's UIN as an AIM screenname, for macOS See also Comparison of instant messaging clients Comparison of instant messaging protocols LAN messenger Online chat Windows Live Messenger Tencent QQ References External links Official ICQ Website Instant messaging clients 1996 software AIM (software) clients AOL BlackBerry software IOS software Symbian software 2010 mergers and acquisitions Mergers and acquisitions of Israeli companies Android (operating system) software Formerly proprietary software 1996 establishments in Israel
23954565
https://en.wikipedia.org/wiki/IAcademy
IAcademy
Information and Communications Technology Academy, better known as iAcademy (stylized as iACADEMY), is a private, non-sectarian educational institution in the Philippines. The college offers specialized senior high school and undergraduate programs in fields relating to computer science, game development, multimedia arts, animation, and business management. The college has two campuses: the more recent iACADEMY Nexus along Yakal Street and the previous Buendia campus, which is currently being renovated, both in the Central Business District of Makati. History Founded in 2002, iACADEMY offers specialized degree programs in BS Computer Science with specializations in Software Engineering, Cloud Computing (in partnership with Amazon Web Services), and Data Science; BS Information Technology with specialization in Web Development; BS Entertainment and Multimedia Computing (Game Development); BS Business Administration with specialization in Marketing Management; BS in Accountancy; BS in Real Estate Management; BA in Psychology; BA in Fashion Design and Technology; BA in Multimedia Arts and Design; BS in Animation; and BA in Film and Visual Effects. In 2007, iACADEMY was granted permission by the Commission on Higher Education to offer the first Bachelor of Science animation program in the Philippines, making it one of the first college institutions in the country to offer BS Animation. iACADEMY's School of Continuing Education offers similar short courses aimed at working professionals. iACADEMY uses an industry-aligned curriculum in its degree programs that is focused on Computing, Business, and Design. The four-year programs culminate in a six-month, 960-hour internship program that the students have to go through before graduating. By 2014, iACADEMY moved to a new and bigger campus in Buendia to house its growing student population. In the same year, iACADEMY signed a Memorandum of Understanding with Aboitiz Weather Philippines to develop a website and application for a faster and more user-friendly experience on weather reports. iACADEMY was also able to secure a study tour partnership with Polimoda, Italy's leading school of fashion and marketing, as well as a transfer program with DePaul University in Chicago, USA. The school, together with the Animation Council of the Philippines (ACPI), also hosted Animahenasyon 2014, the biggest animation festival in the Philippines, drawing thousands of students and professionals to the event. In 2018, due to its growing population, iACADEMY opened its second campus, iACADEMY Nexus, on Yakal Street, Makati. Partnership In 2009, iACADEMY became an Authorized Training Partner of Wacom, a Japanese company that produces graphics tablets and related products. In 2010, iACADEMY partnered with TV5 during the first automated elections in the Philippines. Together with DZRH Manila Broadcasting Radio, Manila Broadcasting Company (MBC), Legal Network for Truthful Elections (LENTE), Stratbase, Inc. Public Affairs and Research Consultancy Group, ePLDT, Inc., and Social Weather Stations (SWS), they worked to bring up-to-date election coverage. In the same year, the college was appointed the first IBM Software Center of Excellence in the ASEAN Region and the first Lotus Academic Institute.
In 2011, iACADEMY was chosen by Solar Entertainment Corporation to be the official partner-school and workspace of the third season of Project Runway Philippines. In 2019, the school became the first and only Toon Boom Center of Excellence in Asia. Academics The college offers four Senior High School tracks comprising ten programs, and has three schools that offer ten undergraduate degree programs in computer science, business management, and the arts. The programs offered cover fields mainly in arts, computer science, and business management. SHS Strands There are four tracks offered at iACADEMY. The first is the Academic Track, which offers two strands, Accountancy, Business and Management (ABM) and Humanities and Social Sciences (HUMSS); the second is the Technical Vocational (Tech-Voc) Track, which offers four strands, Computer Programming (Software Development), Animation, Fashion Design, and Graphic Illustration; the third is the Arts and Design Track, which offers two strands, Multimedia Arts and Audio Production; and the last is the Science, Technology, Engineering, and Math (STEM) Track, which only offers Robotics as its strand. School of Computing The School of Computing is the first IBM Center of Excellence (CoE) in the ASEAN region and the official Microsoft Training Center in the Philippines. The school offers three Bachelor of Science degrees: Computer Science with specialization tracks in Software Engineering, Cloud Computing (in partnership with Amazon Web Services), or Data Science; Entertainment and Multimedia Computing (Game Development); and IT (Web Development). School of Business and Liberal Arts The School of Business and Liberal Arts offers three Bachelor of Science degrees, in Business Administration with specialization in Marketing Management, Accountancy, and Real Estate Management, and one Bachelor of Arts degree in Psychology. School of Design The School of Design is the first to offer an animation program in the Philippines and the only Toon Boom Center of Excellence in Asia. It is also a Wacom Authorized Training Partner, providing students with the latest technologies. The school offers a Bachelor of Science degree in Animation and Bachelor of Arts degrees in Multimedia Arts and Design, Fashion Design and Technology, Film and Visual Effects, and Music Production, the last of which began in the school year 2021-2022. Student life The senior high school program follows the semester calendar which usually starts from August and ends in June, while the college follows a trimester calendar starting from July. New students are invited to join SOAR (Student Orientation and Registration), an event held a week before the start of classes to introduce students to the campus.
Traditions SOAR (Student Orientation and Registration) - An event to orient new students to the school Creative Camp - Free art workshops Battle League - Gaming competition to promote the Game Development industry Student organizations Senior High School Anime Habu Basic Integrated Theater Arts Guild of iACT (BiTAG) CTRL Dance Troupe iACADEMY Contribute, Connect, Continuum (iCON) iACADEMY Junior Software Developers Association (iJSDA) iACADEMY Student Council (CS) Juniors Games Developers Association of iACADEMY (JGDA) Magnates - SHS Chapter OCTAVE - SHS Chapter Prima - SHS Chapter Sining na Nakglilikha ng Buhay (SinLikHay) Student Athletes Society - SHS Chapter The Spines Vektor VELOCiTY - SHS Chapter Young Filmmakers Society of iACADEMY (YFS) College RHYTHM Creative Society Filmmakers Society of iACADEMY (FSi) iACADEMY Making Positive Action (iMPACT) iACADEMY Photography Society (Optics) iACADEMY Student Council (CSO) iACT International Games Developers Association of iACADEMY (IGDA) Magnates - College Chapter OCTAVE - College Chapter Pikzel Graphic Design Prima - College Chapter Software Engineering through Academics and Leadership (SEAL) Student Athletes Society - College Chapter References External links Official website Art schools in the Philippines Design schools Information technology institutes Educational institutions established in 2002 Universities and colleges in Makati 2002 establishments in the Philippines
54398950
https://en.wikipedia.org/wiki/2004%20Troy%20State%20Trojans%20football%20team
2004 Troy State Trojans football team
The 2004 Troy State Trojans football team represented Troy State University in the 2004 NCAA Division I-A football season. The Trojans played their home games at Movie Gallery Stadium in Troy, Alabama and competed in the Sun Belt Conference. The 2004 season was Troy State's first season as a member of the Sun Belt Conference. Troy State also made its first appearance in a Division I-A bowl game during this season, having transitioned to I-A just three years prior, in 2001. The Trojans lost 34–21 to Northern Illinois in the Silicon Valley Football Classic. Schedule References Troy State Troy Trojans football seasons Sun Belt Conference football champion seasons Troy State Trojans football
23983155
https://en.wikipedia.org/wiki/Personal%20Antivirus
Personal Antivirus
Personal Antivirus is rogue anti-virus software created by a company named Innovagest (sometimes referred to as "Innovagest 2000"), and is related to other rogue software. It claims to be an anti-virus program, but instead merely displays false warnings about virus and spyware infections, and demands money to clean these infections. Description A common way that Personal Antivirus installs itself on a computer is through a malicious pop-up ad (though it may also be installed as part of a malicious video codec package). When a user visits a website hosting a Personal Antivirus ad, a pop-up window appears, claiming to be scanning the computer for virus infections. This "scan" inevitably finds a number of virus infections. Afterward, the user is told that they need to buy Personal Antivirus to clean these infections, and is directed to a site that accepts payments. If the user decides to buy and install the program, Personal Antivirus claims to have repaired the infections, but also regularly advertises additional programs or demands more money at regular intervals. New York Times Web Site In September 2009, the New York Times web site unwittingly started to randomly display ads related to Personal Antivirus. The New York Times uses a mix of in-house advertising and advertising networks to display ads on their web site. The person responsible for the ads originally requested that the New York Times run ads for Vonage VoIP service. Because Vonage had previously advertised directly with the New York Times, the ads were approved and were delivered via a third-party ad network that was unfamiliar to the Times. On September 11, 2009, the Vonage ads that were originally approved switched to Personal Antivirus ads. These ads continued to be displayed throughout the following weekend. The ads were eventually stopped when the New York Times temporarily disabled ads displayed by third-party networks and investigated the source of the Personal Antivirus ads. The New York Times later advised readers that using a reputable, properly updated anti-virus program would likely resolve any lingering infections from Personal Antivirus. They also discovered that during the same weekend, other sites had experienced similar malicious ads, possibly including the web site of the San Francisco Chronicle. References Rogue software
8485448
https://en.wikipedia.org/wiki/Integrated%20modular%20avionics
Integrated modular avionics
Integrated modular avionics (IMA) are real-time computer network airborne systems. This network consists of a number of computing modules capable of supporting numerous applications of differing criticality levels. In contrast to traditional federated architectures, the IMA concept proposes an integrated architecture with application software portable across an assembly of common hardware modules. An IMA architecture imposes multiple requirements on the underlying operating system. History It is believed that the IMA concept originated with the avionics design of fourth-generation jet fighters. It has been in use in fighters such as the F-22 and F-35, or the Dassault Rafale, since the beginning of the 1990s. Standardization efforts were ongoing at this time (see ASAAC or STANAG 4626), but no final documents were issued then. First uses of this concept were in development for business jets and regional jets at the end of the 1990s, and were seen flying at the beginning of the 2000s, but it had not yet been standardized. The concept was then standardized and migrated to the commercial airliner arena at the end of the 2000s (Airbus A380, then Boeing 787). Architecture IMA modularity simplifies the development process of avionics software: As the structure of the modules network is unified, it is mandatory to use a common API to access the hardware and network resources, thus simplifying the hardware and software integration. The IMA concept also allows application developers to focus on the application layer, reducing the risk of faults in the lower-level software layers. As modules often share an extensive part of their hardware and lower-level software architecture, maintenance of the modules is easier than with previous specific architectures. Applications can be reconfigured on spare modules if the primary module that supports them is detected faulty during operations, increasing the overall availability of the avionics functions. Communication between the modules can use an internal high-speed computer bus, or can share an external network, such as ARINC 429 or ARINC 664 (part 7). However, much complexity is added to the systems, which thus require novel design and verification approaches, since applications with different criticality levels share hardware and software resources such as CPU and network schedules, memory, inputs and outputs. Partitioning is generally used in order to help segregate mixed-criticality applications and thus ease the verification process. ARINC 650 and ARINC 651 provide general-purpose hardware and software standards used in an IMA architecture. However, parts of the API involved in an IMA network have been standardized, such as: ARINC 653 for the software avionics partitioning constraints on the underlying real-time operating system (RTOS), and the associated API Certification considerations RTCA DO-178C and RTCA DO-254 form the basis for flight certification today, while DO-297 gives specific guidance for integrated modular avionics. ARINC 653 contributes by providing a framework that enables each software building block (called a partition) of the overall integrated modular avionics system to be tested, validated, and qualified independently (up to a certain measure) by its supplier. The FAA CAST-32A position paper provides information (not official guidance) for certification of multicore systems, but does not specifically address IMA with multicore.
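To give a concrete sense of the ARINC 653 partitioning API mentioned above, the following C fragment is a schematic sketch of a partition's start-up code using APEX services defined by the standard (CREATE_PROCESS, START, SET_PARTITION_MODE, PERIODIC_WAIT). It is illustrative only: the header name, attribute values, and entry-point name are assumptions, since real projects use a vendor-supplied ARINC 653 RTOS with its own bindings and configuration tables.

```c
#include "apex.h"   /* assumed vendor-supplied ARINC 653 (APEX) header */

/* A periodic task running inside one partition. */
static void sensor_task(void)
{
    RETURN_CODE_TYPE rc;
    for (;;) {
        /* ... read sensors, publish data on the partition's ports ... */
        PERIODIC_WAIT(&rc);           /* suspend until the next period */
    }
}

void partition_main(void)             /* partition entry point; name is illustrative */
{
    PROCESS_ATTRIBUTE_TYPE attr;
    PROCESS_ID_TYPE pid;
    RETURN_CODE_TYPE rc;

    /* attr.NAME would also be filled in on a real system. */
    attr.ENTRY_POINT   = (SYSTEM_ADDRESS_TYPE)sensor_task;
    attr.STACK_SIZE    = 4096;
    attr.BASE_PRIORITY = 10;
    attr.PERIOD        = 50000000;    /* 50 ms period, in nanoseconds */
    attr.TIME_CAPACITY = 10000000;    /* 10 ms budget per period */
    attr.DEADLINE      = HARD;

    CREATE_PROCESS(&attr, &pid, &rc); /* declare the process to the RTOS */
    START(pid, &rc);                  /* make it ready to run */

    /* Switch the partition to NORMAL mode so scheduling begins. */
    SET_PARTITION_MODE(NORMAL, &rc);
}
```

The point of the sketch is the structure the standard imposes: each partition declares its processes and time budgets up front, and the partition-level schedule itself is fixed in configuration data rather than in application code.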
ARINC 650 and ARINC 651 provide general-purpose hardware and software standards used in an IMA architecture. In addition, parts of the API involved in an IMA network have been standardized, such as: ARINC 653 for the software avionics partitioning constraints on the underlying real-time operating system (RTOS), and the associated API Certification considerations RTCA DO-178C and RTCA DO-254 form the basis for flight certification today, while DO-297 gives specific guidance for integrated modular avionics. ARINC 653 contributes by providing a framework that enables each software building block (called a partition) of the overall integrated modular avionics system to be tested, validated, and qualified independently (up to a certain measure) by its supplier. The FAA CAST-32A position paper provides information (not official guidance) for certification of multicore systems, but does not specifically address IMA with multicore. A research paper by VanderLeest and Matthews addresses the implementation of IMA principles for multicore. Examples of IMA architecture Examples of aircraft avionics that use an IMA architecture: Airbus A220: Rockwell Collins Pro Line Fusion Airbus A350 Airbus A380 Airbus A400M ATR 42 ATR 72 BAE Hawk (Hawk 128 AJT) Boeing 777: includes AIMS avionics from Honeywell Aerospace Boeing 787: the GE Aviation Systems (formerly Smiths Aerospace) IMA architecture is called the Common Core System Boeing 777X: will include the Common Core System from GE Aviation Bombardier Global 5000 / 6000: Rockwell Collins Pro Line Fusion Dassault Falcon 900, Falcon 2000, and Falcon 7X: Honeywell's IMA architecture is called MAU (Modular Avionics Units), and the overall platform is called EASy F-22 Raptor Gulfstream G280: Rockwell Collins Pro Line Fusion Rafale: the Thales IMA architecture is called MDPU (Modular Data Processing Unit) Sukhoi Superjet 100 COMAC C919 See also Annex: Acronyms and abbreviations in avionics OSI model Cockpit display system ARINC 653: a standard API for avionics applications Def Stan 00-74: ASAAC standard for IMA Systems Software STANAG 4626 References IMA Publications & Whitepapers "Transitioning from Federated Avionics Architectures to Integrated Modular Avionics", Christopher B. Watkins, Randy Walter, 26th Digital Avionics Systems Conference (DASC), Dallas, Texas, October 2007. "Advancing Open Standards in Integrated Modular Avionics: An Industry Analysis", Justin Littlefield-Lawwill, Ramanathan Viswanathan, 26th Digital Avionics Systems Conference (DASC), Dallas, Texas, October 2007. "Application of a Civil Integrated Modular Architecture to Military Transport Aircraft", R. Ramaker, W. Krug, W. Phebus, 26th Digital Avionics Systems Conference (DASC), Dallas, Texas, October 2007. "Integrating Modular Avionics: A New Role Emerges", Richard Garside, Joe F. Pighetti, 26th Digital Avionics Systems Conference (DASC), Dallas, Texas, October 2007. "Integrated Modular Avionics: Managing the Allocation of Shared Intersystem Resources", Christopher B. Watkins, 25th Digital Avionics Systems Conference (DASC), Portland, Oregon, October 2006. "Modular Verification: Testing a Subset of Integrated Modular Avionics in Isolation", Christopher B. Watkins, 25th Digital Avionics Systems Conference (DASC), Portland, Oregon, October 2006. "Certification Concerns with Integrated Modular Avionics (IMA) Projects", J. Lewis, L. Rierson, 22nd Digital Avionics Systems Conference (DASC), October 2003. Other External links What is integrated avionics? Avionics Aircraft instruments Modularity
46664449
https://en.wikipedia.org/wiki/SnakeHead%20Software%2C%20LLC
SnakeHead Software, LLC
SnakeHead Software is an American mobile software development studio based in Austin, Texas. History SnakeHead Software was founded in October 2008 by Gerald Bailey. In December 2008 their combat flight simulator Flying Aces was released in Apple's App Store where it remained in the top 100 games for nearly two years. In February 2014, Austin App House (AAH) was created to develop mobile and web applications as an out-sourcing agency. The company has built applications and games for clients including Volkswagen, Ford, Nissan, Toyota, RAM, FieldSolutions, Walton & Johnson, Byte and numerous other companies. SnakeHead Software has built several augmented reality apps for use within the automotive industry as well as social media, personal security and educational apps. Games On November 7, 2009, the company released Air Assault, a twist on the 1985 hit Airborne. In February 2010, Air Assault became the #2 app in the Apple App Store and now has over 5 million downloads. Other games by SnakeHead Software include iBob, Texas Tea and Guardian AlertME. Acquisitions The company has made a series of acquisitions including Techarati and Agile Poet. In October 2012, it acquired Techarati, an Austin-based mobile app developer founded in 2008 by Drew Moynihan, and in 2013, purchased Agile Poet, a mobile app development company. Agile Poet, which was founded by Joshua McClure, previously specialized in mobile payment gateways and building mobile apps for corporate clients. In April 2013, the transaction was reversed. References Rob Heidrick, "SnakeHead Software". Community Impact, April 2, 2010. Christoper Calnan, "Game On". Austin Business Journal, May 21, 2010. Companies based in Austin, Texas
4260161
https://en.wikipedia.org/wiki/EAX%20mode
EAX mode
EAX mode (encrypt-then-authenticate-then-translate) is a mode of operation for cryptographic block ciphers. It is an Authenticated Encryption with Associated Data (AEAD) algorithm designed to simultaneously provide both authentication and privacy of the message (authenticated encryption) with a two-pass scheme: one pass for achieving privacy and one for authenticity over each block. EAX mode was submitted to NIST on October 3, 2003 as a candidate to replace CCM as the standard AEAD mode of operation, since CCM mode lacks some desirable attributes of EAX and is more complex. Encryption and authentication EAX is a flexible nonce-using two-pass AEAD scheme with no restrictions on the block cipher primitive to be used, nor on the block size, and it supports messages of arbitrary length. The authentication tag length is arbitrarily sizeable, up to the block size of the underlying cipher. The block cipher primitive is used in CTR mode for encryption and as OMAC for authentication over each block through the EAX composition method, which may be seen as a particular case of a more general algorithm called EAX2, described in "The EAX Mode of Operation". The reference implementation in the aforementioned paper uses AES in CTR mode for encryption combined with AES OMAC for authentication. Performance Being a two-pass scheme, EAX mode is slower than a well-designed one-pass scheme based on the same primitives. EAX mode has several desirable attributes, notably: provable security (dependent on the security of the underlying primitive cipher); minimal message expansion, limited to the overhead of the tag length; using CTR mode means the cipher need be implemented only for encryption, simplifying the implementation of some ciphers (an especially desirable attribute for hardware implementation); the algorithm is "on-line", meaning it can process a stream of data using constant memory, without knowing the total data length in advance; the algorithm can pre-process static Associated Data (AD), which is useful for encryption/decryption of communication session parameters (where the session parameters may represent the Associated Data). Notably, CCM mode lacks the last two attributes (CCM can process Associated Data, but it cannot pre-process it). Patent status The authors of EAX mode, Mihir Bellare, Phillip Rogaway, and David Wagner, placed the work in the public domain and have stated that they were unaware of any patents covering this technology. Thus, the EAX mode of operation is believed to be free and unencumbered for any use. Use A modification of EAX mode, so-called EAX′ or EAXprime, is used in the ANSI C12.22 standard for transport of meter-based data over a network. In 2012, Kazuhiko Minematsu, Stefan Lucks, Hiraku Morita, and Tetsu Iwata published a paper that proves the security of the mode with messages longer than the key, but demonstrates a trivial attack against short messages using this mode. It is not possible to create vulnerable short messages that comply with the ANSI C12.22 standard, but in other contexts in which such short messages are possible, EAXprime cannot be used securely.
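As a usage illustration of the standard (non-prime) EAX composition described above, the following sketch uses the PyCryptodome library, which provides EAX as a built-in AES mode; the key, nonce, header, and message values are placeholders chosen for the example.

# Minimal EAX usage sketch with PyCryptodome (pip install pycryptodome).
# The associated data ("header") is authenticated but not encrypted.
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

key = get_random_bytes(16)      # AES-128 key (placeholder)
nonce = get_random_bytes(16)    # must be unique per message under the same key

# Encrypt and authenticate.
cipher = AES.new(key, AES.MODE_EAX, nonce=nonce)
cipher.update(b"header")        # associated data, pre-processed before the message
ciphertext, tag = cipher.encrypt_and_digest(b"attack at dawn")

# Decrypt and verify; raises ValueError if the tag does not match.
verifier = AES.new(key, AES.MODE_EAX, nonce=nonce)
verifier.update(b"header")
plaintext = verifier.decrypt_and_verify(ciphertext, tag)
assert plaintext == b"attack at dawn"

The tag length can be shortened with the library's mac_len parameter, reflecting the arbitrary tag sizing noted above.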
See also Authenticated Encryption with Associated Data (AEAD) Authenticated Encryption (AE) CCM mode CTR mode OMAC References External links NIST: Block Cipher Modes A Critique of CCM (February 2003) Software implementations C++: Dr. Brian Gladman's crypto library implementing EAX mode of operation Pascal / Delphi: Wolfgang Ehrhardt's crypto library implementing EAX mode of operation Java: BouncyCastle crypto library implementing EAX mode of operation C: libtomcrypt implementing EAX mode of operation Hardware implementations Block cipher modes of operation Authenticated-encryption schemes
3678080
https://en.wikipedia.org/wiki/Steve%20Riley%20%28American%20football%29
Steve Riley (American football)
Steven Bruce Riley (November 23, 1952 – September 16, 2021) was an American professional football player who was an offensive tackle for 11 seasons with the Minnesota Vikings of the National Football League (NFL). High school career Riley went to Castle Park High School in Chula Vista, California, where he was a standout athlete on both the varsity basketball and football teams. Castle Park High School fielded very strong football teams. In his senior year, he was one of the co-captains and helped his team win the San Diego CIF championship. He also made first-team all-CIF and was voted offensive lineman of the year in his conference. College career Riley was recruited by Notre Dame, Colorado, New Mexico State, and San Diego State, among others. He played college football at the University of Southern California from 1970 to 1974. As a junior he started at tackle and was a part of the historic undefeated 1972 USC Trojans team. He played in two Rose Bowls, one of those being the 1973 Rose Bowl, where the USC Trojans defeated the Ohio State Buckeyes, 42-17, to become the national champions. The 1972 USC Trojans team is regarded by some as the best college football team ever. Professional career Riley was picked 25th overall in the first round of the 1974 NFL draft by the Minnesota Vikings. He played 11 seasons, all with the Vikings, from 1974 to 1984. He appeared in 138 games, making 128 starts. He was a part of the 1974 and 1976 NFC championship teams. He played in Super Bowls IX and XI. Riley took over the starting left tackle position in 1976 and started every game until early 1978, when a neck injury put him on injured reserve for 11 games. Riley returned to full-time duty in 1979 and started every game for the next five seasons. Over his career, Riley helped the Vikings reach the playoffs seven times. He started in Super Bowl XI to cap that season and helped Minnesota advance to the 1977 NFC Championship game. In his last year, Riley started every game despite playing the entire season with his left hand in a cast because of a broken thumb. He started the first six games at left tackle; however, in an attempt to decrease the amount of contact with his injured hand, he was moved to right tackle for games 7 through 11. Riley went back to left tackle for the remainder of the season, as his teammate said the blind-side position was too challenging. Awards and honors While at USC, he earned first-team All-American honors as a senior in 1973, as the Trojans returned to the Rose Bowl. He then played in the 1974 College All-Star game. Riley was voted by his teammates to be the recipient of the Ed Block Courage Award in 1984, largely because he played his entire last year with a broken thumb. His total of games played ranks fourth in Vikings history among tackles. Personal life Riley resided in Southern California after his retirement, where he owned a commercial property maintenance business in Irvine. He appeared as an extra in the movies "The Bear Bryant Story", "North Dallas Forty", and "Against All Odds". He was married to his wife, Jan, for 40 years. He had four daughters and ten grandchildren. References 1952 births 2021 deaths Sportspeople from Chula Vista, California Players of American football from California American football offensive tackles USC Trojans football players All-American college football players Minnesota Vikings players
198584
https://en.wikipedia.org/wiki/Laptop
Laptop
A laptop, laptop computer, or notebook computer is a small, portable personal computer (PC) with a screen and alphanumeric keyboard. These typically have a clam shell form factor with the screen mounted on the inside of the upper lid and the keyboard on the inside of the lower lid, although 2-in-1 PCs with a detachable keyboard are often marketed as laptops or as having a laptop mode. Laptops are folded shut for transportation, and thus are suitable for mobile use. Its name comes from lap, as it was deemed practical to be placed on a person's lap when being used. Today, laptops are used in a variety of settings, such as at work, in education, for playing games, web browsing, for personal multimedia, and general home computer use. As of 2021, in American English, the terms 'laptop computer' and 'notebook computer' are used interchangeably; in other dialects of English one or the other may be preferred. Although the terms 'notebook computers' or 'notebooks' originally referred to a specific size of laptop (originally smaller and lighter than mainstream laptops of the time), the terms have come to mean the same thing and notebook no longer refers to any specific size. Laptops combine all the input/output components and capabilities of a desktop computer, including the display screen, small speakers, a keyboard, data storage device, sometimes an optical disc drive, pointing devices (such as a touch pad or pointing stick), with an operating system, a processor and memory into a single unit. Most modern laptops feature integrated webcams and built-in microphones, while many also have touchscreens. Laptops can be powered either from an internal battery or by an external power supply from an AC adapter. Hardware specifications, such as the processor speed and memory capacity, significantly vary between different types, models and price points. Design elements, form factor and construction can also vary significantly between models depending on the intended use. Examples of specialized models of laptops include rugged notebooks for use in construction or military applications, as well as low production cost laptops such as those from the One Laptop per Child (OLPC) organization, which incorporate features like solar charging and semi-flexible components not found on most laptop computers. Portable computers, which later developed into modern laptops, were originally considered to be a small niche market, mostly for specialized field applications, such as in the military, for accountants, or traveling sales representatives. As portable computers evolved into modern laptops, they became widely used for a variety of purposes. History As the personal computer (PC) became feasible in 1971, the idea of a portable personal computer soon followed. A "personal, portable information manipulator" was imagined by Alan Kay at Xerox PARC in 1968, and described in his 1972 paper as the "Dynabook". The IBM Special Computer APL Machine Portable (SCAMP) was demonstrated in 1973. This prototype was based on the IBM PALM processor. The IBM 5100, the first commercially available portable computer, appeared in September 1975, and was based on the SCAMP prototype. As 8-bit CPU machines became widely accepted, the number of portables increased rapidly. The first "laptop-sized notebook computer" was the Epson HX-20, invented (patented) by Suwa Seikosha's Yukio Yokozawa in July 1980, introduced at the COMDEX computer show in Las Vegas by Japanese company Seiko Epson in 1981, and released in July 1982. 
It had an LCD screen, a rechargeable battery, and a calculator-size printer in a chassis the size of an A4 notebook. It was described as a "laptop" and "notebook" computer in its patent. The portable microcomputer Portal of the French company R2E Micral CCMC officially appeared in September 1980 at the Sicob show in Paris. It was a portable microcomputer designed and marketed by the studies and developments department of R2E Micral at the request of the company CCMC, which specialized in payroll and accounting. It was based on an 8-bit Intel 8085 processor clocked at 2 MHz. It was equipped with 64 KB of central RAM, a keyboard with 58 alphanumeric keys and 11 numeric keys (in separate blocks), a 32-character screen, a floppy disk drive with a capacity of 140,000 characters, a thermal printer with a speed of 28 characters per second, an asynchronous channel, and a 220 V power supply. It weighed 12 kg and its dimensions were 45 x 45 x 15 cm. It provided total mobility. Its operating system was aptly named Prologue. The Osborne 1, released in 1981, was a luggable computer that used the Zilog Z80 and weighed . It had no battery, a cathode ray tube (CRT) screen, and dual single-density floppy drives. Both Tandy/RadioShack and Hewlett Packard (HP) also produced portable computers of varying designs during this period. The first laptops using the flip form factor appeared in the early 1980s. The Dulmont Magnum was released in Australia in 1981–82, but was not marketed internationally until 1984–85. The US$8,150 GRiD Compass 1101, released in 1982, was used at NASA and by the military, among others. The Sharp PC-5000, Ampere, and Gavilan SC were released in 1983. The Gavilan SC was described as a "laptop" by its manufacturer, while the Ampere had a modern clamshell design. The Toshiba T1100 won acceptance not only among PC experts but also in the mass market as a way to have PC portability. From 1983 onward, several new input techniques were developed and included in laptops, including the touch pad (Gavilan SC, 1983), the pointing stick (IBM ThinkPad 700, 1992), and handwriting recognition (Linus Write-Top, 1987). Some CPUs, such as the 1990 Intel i386SL, were designed to use minimum power to increase the battery life of portable computers and were supported by dynamic power management features such as Intel SpeedStep and AMD PowerNow! in some designs. Displays reached 640x480 (VGA) resolution by 1988 (Compaq SLT/286), and color screens started becoming a common upgrade in 1991, with increases in resolution and screen size occurring frequently until the introduction of 17" screen laptops in 2003. Hard drives started to be used in portables, encouraged by the introduction of 3.5" drives in the late 1980s, and became common in laptops starting with the introduction of 2.5" and smaller drives around 1990; capacities have typically lagged behind physically larger desktop drives. Common resolutions of laptop webcams are 720p (HD), and in lower-end laptops 480p. The earliest known laptops with 1080p (Full HD) webcams, like the Samsung 700G7C, were released in the early 2010s. Optical disc drives became common in full-size laptops around 1997; this initially consisted of CD-ROM drives, which were supplanted by CD-R, DVD, and Blu-ray drives with writing capability over time. Starting around 2011, the trend shifted against internal optical drives, and as of 2021, they have largely disappeared; they are still readily available as external peripherals.
Etymology While the terms laptop and notebook are used interchangeably today, there is some question as to the original etymology and specificity of either term—the term laptop appears to have been coined in the early 1980s to describe a mobile computer which could be used on one's lap, and to distinguish these devices from earlier and much heavier, portable computers (informally called "luggables"). The term "notebook" appears to have gained currency somewhat later as manufacturers started producing even smaller portable devices, further reducing their weight and size and incorporating a display roughly the size of A4 paper; these were marketed as notebooks to distinguish them from bulkier mainstream or desktop replacement laptops. Types Since the introduction of portable computers during the late 1970s, their form has changed significantly, spawning a variety of visually and technologically differing subclasses. Except where there is a distinct legal trademark around a term (notably, Ultrabook), there are rarely hard distinctions between these classes and their usage has varied over time and between different sources. Since the late 2010s, the use of more specific terms has become less common, with sizes distinguished largely by the size of the screen. Smaller and Larger Laptops There were in the past a number of marketing categories for smaller and larger laptop computers; these included "subnotebook" models, low cost "netbooks", and "Ultra-mobile PCs" where the size class overlapped with devices like smartphone and handheld tablets, and "Desktop replacement" laptops for machines notably larger and heavier than typical to operate more powerful processors or graphics hardware. All of these terms have fallen out of favor as the size of mainstream laptops has gone down and their capabilities have gone up; except for niche models, laptop sizes tend to be distinguished by the size of the screen, and for more powerful models, by any specialized purpose the machine is intended for, such as a "gaming laptop" or a "mobile workstation" for professional use. Convertible, hybrid, 2-in-1 The latest trend of technological convergence in the portable computer industry spawned a broad range of devices, which combined features of several previously separate device types. The hybrids, convertibles, and 2-in-1s emerged as crossover devices, which share traits of both tablets and laptops. All such devices have a touchscreen display designed to allow users to work in a tablet mode, using either multi-touch gestures or a stylus/digital pen. Convertibles are devices with the ability to conceal a hardware keyboard. Keyboards on such devices can be flipped, rotated, or slid behind the back of the chassis, thus transforming from a laptop into a tablet. Hybrids have a keyboard detachment mechanism, and due to this feature, all critical components are situated in the part with the display. 2-in-1s can have a hybrid or a convertible form, often dubbed 2-in-1 detachable and 2-in-1 convertibles respectively, but are distinguished by the ability to run a desktop OS, such as Windows 10. 2-in-1s are often marketed as laptop replacement tablets. 2-in-1s are often very thin, around , and light devices with a long battery life. 2-in-1s are distinguished from mainstream tablets as they feature an x86-architecture CPU (typically a low- or ultra-low-voltage model), such as the Intel Core i5, run a full-featured desktop OS like Windows 10, and have a number of typical laptop I/O ports, such as USB 3 and Mini DisplayPort. 
2-in-1s are designed to be used not only as a media consumption device but also as valid desktop or laptop replacements, due to their ability to run desktop applications, such as Adobe Photoshop. It is possible to connect multiple peripheral devices, such as a mouse, keyboard, and several external displays to a modern 2-in-1. Microsoft Surface Pro-series devices and Surface Book are examples of modern 2-in-1 detachable, whereas Lenovo Yoga-series computers are a variant of 2-in-1 convertibles. While the older Surface RT and Surface 2 have the same chassis design as the Surface Pro, their use of ARM processors and Windows RT do not classify them as 2-in-1s, but as hybrid tablets. Similarly, a number of hybrid laptops run a mobile operating system, such as Android. These include Asus's Transformer Pad devices, examples of hybrids with a detachable keyboard design, which do not fall in the category of 2-in-1s. Rugged laptop A rugged laptop is designed to reliably operate in harsh usage conditions such as strong vibrations, extreme temperatures, and wet or dusty environments. Rugged laptops are bulkier, heavier, and much more expensive than regular laptops, and thus are seldom seen in regular consumer use. Hardware The basic components of laptops function identically to their desktop counterparts. Traditionally they were miniaturized and adapted to mobile use, although desktop systems increasingly use the same smaller, lower-power parts which were originally developed for mobile use. The design restrictions on power, size, and cooling of laptops limit the maximum performance of laptop parts compared to that of desktop components, although that difference has increasingly narrowed. In general, laptop components are not intended to be replaceable or upgradable by the end-user, except for components that can be detached; in the past, batteries and optical drives were commonly exchangeable. This restriction is one of the major differences between laptops and desktop computers, because the large "tower" cases used in desktop computers are designed so that new motherboards, hard disks, sound cards, RAM, and other components can be added. Memory and storage can often be upgraded with some disassembly, but with the most compact laptops, there may be no upgradeable components at all. Intel, Asus, Compal, Quanta, and some other laptop manufacturers have created the Common Building Block standard for laptop parts to address some of the inefficiencies caused by the lack of standards and inability to upgrade components. The following sections summarizes the differences and distinguishing features of laptop components in comparison to desktop personal computer parts. Display Internally, a display is usually an LCD panel, although occasionally OLEDs are used. These interface to the laptop using the LVDS or embedded DisplayPort protocol, while externally, it can be a glossy screen or a matte (anti-glare) screen. As of 2021, mainstream consumer laptops tend to come with either 13" or 15"-16" screens; 14" models are more popular among business machines. Larger and smaller models are available, but less common – there is no clear dividing line in minimum or maximum size. Machines small enough to be handheld (screens in the 6–8" range) can be marketed either as very small laptops or "handheld PCs," while the distinction between the largest laptops and "All-in-One" desktops is whether they fold for travel. 
Sizes In the past, there was a broader range of marketing terms (both formal and informal) to distinguish between different sizes of laptops. These included netbooks, subnotebooks, ultra-mobile PCs, and desktop replacement computers; these terms are sometimes still used informally, although they are essentially dead in terms of manufacturer marketing. Resolution Having a higher resolution display allows more items to fit onscreen at a time, improving the user's ability to multitask, although at the higher resolutions on smaller screens, the resolution may only serve to display sharper graphics and text rather than increasing the usable area. Since the introduction of the MacBook Pro with Retina display in 2012, there has been an increase in the availability of "HiDPI" (or high pixel density) displays; as of 2021, this is generally considered to be anything higher than 1920 pixels wide. This has increasingly converged around 4K (3840-pixel-wide) resolutions. External displays can be connected to most laptops, and models with a Mini DisplayPort can handle up to three. Refresh rates and 3D The earliest laptops known to feature a display with a doubled refresh rate of 120 Hz and an active shutter 3D system were released in 2011 by Dell (M17x) and Samsung (700G7A). Central processing unit A laptop's central processing unit (CPU) has advanced power-saving features and produces less heat than one intended purely for desktop use. Mainstream laptop CPUs made after 2018 have four processor cores, although some inexpensive models still have 2-core CPUs, and 6-core and 8-core models are also available. At low-end and mainstream performance levels, there is no longer a significant performance difference between laptop and desktop CPUs, but at the high end, the fastest desktop CPUs still substantially outperform the fastest laptop processors, at the expense of massively higher power consumption and heat generation; the fastest laptop processors top out at 56 watts of heat, while the fastest desktop processors top out at 150 watts. A wide range of CPUs designed for laptops has been available from Intel, AMD, and other manufacturers. On non-x86 architectures, Motorola and IBM produced the chips for the former PowerPC-based Apple laptops (iBook and PowerBook). Between around 2000 and 2014, most full-size laptops had socketed, replaceable CPUs; on thinner models, the CPU was soldered on the motherboard and was not replaceable or upgradable without replacing the motherboard. Since 2015, Intel has not offered new laptop CPU models with interchangeable pins, preferring ball grid array chip packages that have to be soldered; as of 2021, only a few rare models use desktop parts. In the past, some laptops have used a desktop processor instead of the laptop version and have had high-performance gains at the cost of greater weight, heat, and limited battery life; this is not unknown as of 2021, but since around 2010, the practice has been restricted to small-volume gaming models. Laptop CPUs are rarely able to be overclocked; most use locked processors. Even on gaming models where unlocked processors are available, the cooling system in most laptops is often very close to its limits and there is rarely headroom for an overclocking-related operating temperature increase. Graphical processing unit On most laptops, a graphical processing unit (GPU) is integrated into the CPU to conserve power and space.
This was introduced by Intel with the Core i-series of mobile processors in 2010, and similar accelerated processing unit (APU) processors by AMD later that year. Before that, lower-end machines tended to use graphics processors integrated into the system chipset, while higher-end machines had a separate graphics processor. In the past, laptops lacking a separate graphics processor were limited in their utility for gaming and professional applications involving 3D graphics, but the capabilities of CPU-integrated graphics have converged with the low-end of dedicated graphics processors since the mid-2010s. Higher-end laptops intended for gaming or professional 3D work still come with dedicated and in some cases even dual, graphics processors on the motherboard or as an internal expansion card. Since 2011, these almost always involve switchable graphics so that when there is no demand for the higher performance dedicated graphics processor, the more power-efficient integrated graphics processor will be used. Nvidia Optimus and AMD Hybrid Graphics are examples of this sort of system of switchable graphics. Memory Since around the year 2000, most laptops have used SO-DIMM RAM, although, as of 2021, an increasing number of models use memory soldered to the motherboard. Before 2000, most laptops used proprietary memory modules if their memory was upgradable. In the early 2010s, high end laptops such as the 2011 Samsung 700G7A have passed the 10 GB RAM barrier, featuring 16 GB of RAM. When upgradeable, memory slots are sometimes accessible from the bottom of the laptop for ease of upgrading; in other cases, accessing them requires significant disassembly. Most laptops have two memory slots, although some will have only one, either for cost savings or because some amount of memory is soldered. Some high-end models have four slots; these are usually mobile engineering workstations, although a few high-end models intended for gaming do as well. As of 2021, 8 GB RAM is most common, with lower-end models occasionally having 4GB. Higher-end laptops may come with 16 GB of RAM or more. Internal storage The earliest laptops most often used floppy disk for storage, although a few used either RAM disks or tape, by the late 1980s hard disk drives had become the standard form of storage. Between 1990 and 2009, almost all laptops typically had a hard disk drive (HDD) for storage; since then, solid-state drives (SSD) have gradually come to supplant hard drives in all but some inexpensive consumer models. Solid-state drives are faster and more power-efficient, as well as eliminating the hazard of drive and data corruption caused by a laptop's physical impacts, as they use no mechanical parts such as a rotational platter. In many cases, they are more compact as well. Initially, in the late 2000s, SSDs were substantially more expensive than HDDs, but as of 2021 prices on smaller capacity (under 1 terabyte) drives have converged; larger capacity drives remain more expensive than comparable-sized HDDs. Since around 1990, where a hard drive is present it will typically be a 2.5-inch drive; some very compact laptops support even smaller 1.8-inch HDDs, and a very small number used 1" Microdrives. Some SSDs are built to match the size/shape of a laptop hard drive, but increasingly they have been replaced with smaller mSATA or M.2 cards. SSDs using the newer and much faster NVM Express standard for connecting are only available as cards. 
As of 2021, many laptops no longer contain space for a 2.5" drive, accepting only M.2 cards; a few of the smallest have storage soldered to the motherboard. Laptops that still accept a 2.5-inch drive typically hold only one, but a small number of laptops with a screen wider than 15 inches can house two drives. A variety of external HDDs or NAS data storage servers with support for RAID technology can be attached to virtually any laptop over such interfaces as USB, FireWire, eSATA, or Thunderbolt, or over a wired or wireless network, to further increase space for the storage of data. Many laptops also incorporate a card reader that allows for the use of memory cards, such as those used for digital cameras, which are typically SD or microSD cards. This enables users to download digital pictures from an SD card onto a laptop, thus enabling them to delete the SD card's contents to free up space for taking new pictures. Removable media drive Optical disc drives capable of playing CD-ROMs, compact discs (CD), DVDs, and in some cases, Blu-ray discs (BD), were nearly universal on full-sized models between the mid-1990s and the early 2010s. As of 2021, such drives are uncommon in compact or premium laptops; they remain available in some bulkier models, but the trend towards thinner and lighter machines is gradually eliminating these drives and players – when needed, they can be connected via USB instead. Inputs An alphanumeric keyboard is used to enter text, data, and other commands (e.g., function keys). A touchpad (also called a trackpad), a pointing stick, or both, are used to control the position of the cursor on the screen. Some touchpads have buttons separate from the touch surface, while others share the surface. A quick double-tap is typically registered as a click, and operating systems may recognize multi-finger touch gestures. An external keyboard and mouse may be connected using a USB port or wirelessly, via Bluetooth or similar technology. Some laptops have multitouch touchscreen displays, available either as an option or as standard. Most laptops have webcams and microphones, which can be used to communicate with other people with both moving images and sound, via web conferencing or video-calling software. Laptops typically have USB ports and a combined headphone/microphone jack, for use with headphones, a combined headset, or an external mic. Many laptops have a card reader for reading digital camera SD cards. Input/output (I/O) ports On a typical laptop there are several USB ports; if they use only the older USB connectors instead of USB-C, they will typically also have an external monitor port (VGA, DVI, HDMI, or Mini DisplayPort, occasionally more than one) and an audio in/out port (often in the form of a single socket). It is possible to connect up to three external displays to a 2014-era laptop via a single Mini DisplayPort, using multi-stream transport technology. Apple, in a 2015 version of its MacBook, transitioned from a number of different I/O ports to a single USB-C port. This port can be used both for charging and for connecting a variety of devices through the use of aftermarket adapters. Google, with its updated version of the Chromebook Pixel, shows a similar transition trend towards USB-C, although it keeps older USB Type-A ports for better compatibility with older devices.
Although common until the end of the 2000s, Ethernet network ports are rarely found on modern laptops, due to the widespread use of wireless networking such as Wi-Fi. Legacy ports such as a PS/2 keyboard/mouse port, serial port, parallel port, or FireWire are provided on some models, but they are increasingly rare. On Apple's systems, and on a handful of other laptops, there are also Thunderbolt ports, but Thunderbolt 3 uses USB-C. Laptops typically have a headphone jack, so that the user can connect external headphones or amplified speaker systems for listening to music or other audio. Expansion cards In the past, a PC Card (formerly PCMCIA) or ExpressCard slot for expansion was often present on laptops to allow adding and removing functionality, even when the laptop is powered on; these have become increasingly rare since the introduction of USB 3.0. Some internal subsystems such as Ethernet, Wi-Fi, or a wireless cellular modem can be implemented as replaceable internal expansion cards, usually accessible under an access cover on the bottom of the laptop. The standard for such cards is PCI Express, which comes in both mini and even smaller M.2 sizes. In newer laptops, it is not uncommon to also see Micro SATA (mSATA) functionality on PCI Express Mini or M.2 card slots, allowing the use of those slots for SATA-based solid-state drives. Battery and power supply Since the late 1990s, laptops have typically used lithium ion or lithium polymer batteries. These replaced the older nickel–metal hydride batteries typically used in the 1990s, and the nickel–cadmium batteries used in most of the earliest laptops. A few of the oldest laptops used non-rechargeable batteries, or lead–acid batteries. Battery life is highly variable by model and workload and can range from one hour to nearly a day. A battery's performance gradually decreases over time; a substantial reduction in capacity is typically evident after one to three years of regular use, depending on the charging and discharging pattern and the design of the battery. Innovations in laptops and batteries have seen situations in which the battery can provide up to 24 hours of continued operation, assuming average power consumption levels. An example is the HP EliteBook 6930p when used with its ultra-capacity battery. Laptops with removable batteries may support larger replacement batteries with extended capacity. A laptop's battery is charged using an external power supply, which is plugged into a wall outlet. The power supply outputs a DC voltage typically in the range of 7.2–24 volts. The power supply is usually external and connected to the laptop through a DC connector cable. In most cases, it can charge the battery and power the laptop simultaneously. When the battery is fully charged, the laptop continues to run on power supplied by the external power supply, avoiding battery use. If the power supply used is not strong enough to power the computing components and charge the battery simultaneously, the battery may charge in a shorter period of time if the laptop is turned off or sleeping. The charger typically adds about to the overall transporting weight of a laptop, although some models are substantially heavier or lighter. Most 2016-era laptops use a smart battery, a rechargeable battery pack with a built-in battery management system (BMS). The smart battery can internally measure voltage and current, and deduce charge level and State of Health (SoH) parameters, indicating the state of the cells.
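As an illustration of how the figures reported by a smart battery's BMS surface to software, the sketch below reads the charge level and estimates State of Health through the Linux kernel's power-supply interface. The battery name BAT0 and the exact attribute files vary between systems, so this is a sketch under those assumptions rather than a portable utility.

# Sketch: read smart-battery charge level and State of Health on Linux
# via /sys/class/power_supply. Assumes a battery exposed as BAT0.
from pathlib import Path

BAT = Path("/sys/class/power_supply/BAT0")

def read_int(name: str) -> int:
    return int((BAT / name).read_text().strip())

status = (BAT / "status").read_text().strip()   # e.g. "Charging" or "Discharging"
charge_pct = read_int("capacity")               # current charge level in percent

# State of Health: remaining full-charge capacity relative to the design
# capacity. Some batteries report energy_* values, others charge_* values.
try:
    full, design = read_int("energy_full"), read_int("energy_full_design")
except FileNotFoundError:
    full, design = read_int("charge_full"), read_int("charge_full_design")

soh_pct = 100.0 * full / design
print(f"{status}: {charge_pct}% charged, state of health ~{soh_pct:.0f}%")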
Power connectors Historically, DC connectors, typically cylindrical/barrel-shaped coaxial power connectors, have been used in laptops. Some vendors, such as Lenovo, made intermittent use of a rectangular connector. Some connector heads feature a center pin to allow the end device to determine the power supply type by measuring the resistance between it and the connector's negative pole (outer surface). Vendors may block charging if a power supply is not recognized as an original part, which could deny the legitimate use of universal third-party chargers. With the advent of USB-C, portable electronics made increasing use of it for both power delivery and data transfer. Its support for 20 V (a common laptop power supply voltage) and 5 A typically suffices for low- to mid-range laptops, but some with higher power demands, such as gaming laptops, depend on dedicated DC connectors to handle currents beyond 5 A without risking overheating, some even above 10 A. Additionally, dedicated DC connectors are more durable and less prone to wear and tear from frequent reconnection, as their design is less delicate. Cooling Waste heat from operation is difficult to remove in the compact internal space of a laptop. The earliest laptops used passive cooling; this gave way to heat sinks placed directly on the components to be cooled, but when these hot components are deep inside the device, a large space-wasting air duct is needed to exhaust the heat. Modern laptops instead rely on heat pipes to rapidly move waste heat towards the edges of the device, to allow for a much smaller and more compact fan and heat sink cooling system. Waste heat is usually exhausted away from the device operator towards the rear or sides of the device. Multiple air intake paths are used since some intakes can be blocked, such as when the device is placed on a soft conforming surface like a chair cushion. Secondary device temperature monitoring may reduce performance or trigger an emergency shutdown if the device is unable to dissipate heat, such as if the laptop were to be left running and placed inside a carrying case. Aftermarket cooling pads with external fans can be used with laptops to reduce operating temperatures. Docking station A docking station (sometimes referred to simply as a dock) is a laptop accessory that contains multiple ports and, in some cases, expansion slots or bays for fixed or removable drives. A laptop connects to and disconnects from a docking station, typically through a single large proprietary connector. A docking station is an especially popular laptop accessory in a corporate computing environment, because a docking station can transform a laptop into a full-featured desktop replacement while still allowing for its easy release. This ability can be advantageous to "road warrior" employees who have to travel frequently for work, and yet who also come into the office. If more ports are needed, or their position on a laptop is inconvenient, one can use a cheaper passive device known as a port replicator. These devices mate to the connectors on the laptop, such as through USB or FireWire. Charging trolleys Laptop charging trolleys, also known as laptop trolleys or laptop carts, are mobile storage containers used to charge multiple laptops, netbooks, and tablet computers at the same time. The trolleys are used in schools that have replaced their traditional static computer labs (suites of desks equipped with "tower" computers), but do not have enough plug sockets in an individual classroom to charge all of the devices.
The trolleys can be wheeled between rooms and classrooms so that all students and teachers in a particular building can access fully charged IT equipment. Laptop charging trolleys are also used to deter and protect against opportunistic and organized theft. Schools, especially those with open plan designs, are often prime targets for thieves who steal high-value items. Laptops, netbooks, and tablets are among the highest–value portable items in a school. Moreover, laptops can easily be concealed under clothing and stolen from buildings. Many types of laptop–charging trolleys are designed and constructed to protect against theft. They are generally made out of steel, and the laptops remain locked up while not in use. Although the trolleys can be moved between areas from one classroom to another, they can often be mounted or locked to the floor or walls to prevent thieves from stealing the laptops, especially overnight. Solar panels In some laptops, solar panels are able to generate enough solar power for the laptop to operate. The One Laptop Per Child Initiative released the OLPC XO-1 laptop which was tested and successfully operated by use of solar panels. Presently, they are designing an OLPC XO-3 laptop with these features. The OLPC XO-3 can operate with 2 watts of electricity because its renewable energy resources generate a total of 4 watts. Samsung has also designed the NC215S solar–powered notebook that will be sold commercially in the U.S. market. Accessories A common accessory for laptops is a laptop sleeve, laptop skin, or laptop case, which provides a degree of protection from scratches. Sleeves, which are distinguished by being relatively thin and flexible, are most commonly made of neoprene, with sturdier ones made of low-resilience polyurethane. Some laptop sleeves are wrapped in ballistic nylon to provide some measure of waterproofing. Bulkier and sturdier cases can be made of metal with polyurethane padding inside and may have locks for added security. Metal, padded cases also offer protection against impacts and drops. Another common accessory is a laptop cooler, a device that helps lower the internal temperature of the laptop either actively or passively. A common active method involves using electric fans to draw heat away from the laptop, while a passive method might involve propping the laptop up on some type of pad so it can receive more airflow. Some stores sell laptop pads that enable a reclining person on a bed to use a laptop. Modularity Some of the components of earlier models of laptops can easily be replaced without opening completely its bottom part, such as keyboard, battery, hard disk, memory modules, CPU cooling fan, etc. Some of the components of recent models of laptops reside inside. Replacing most of its components, such as keyboard, battery, hard disk, memory modules, CPU cooling fan, etc., requires removal of its either top or bottom part, removal of the motherboard, and returning them. In some types, solder and glue are used to mount components such as RAM, storage, and batteries, making repairs additionally difficult. 
Obsolete features Features that certain early models of laptops used to have that are not available in most current laptops include: Reset ("cold restart") button in a hole (needed a thin metal tool to press) Instant power off button in a hole (needed a thin metal tool to press) Integrated charger or power adapter inside the laptop Floppy disk drive Serial port Parallel port Modem Shared PS/2 input device port IrDA S-video port S/PDIF audio port PC Card / PCMCIA slot ExpressCard slot CD/DVD Drives (starting with 2013 models) VGA port (starting with 2013 models) Comparison with desktops Advantages Portability is usually the first feature mentioned in any comparison of laptops versus desktop PCs. Physical portability allows a laptop to be used in many places—not only at home and the office but also during commuting and flights, in coffee shops, in lecture halls and libraries, at clients' locations or a meeting room, etc. Within a home, portability enables laptop users to move their devices from the living room to the dining room to the family room. Portability offers several distinct advantages: Productivity: Using a laptop in places where a desktop PC cannot be used can help employees and students to increase their productivity on work or school tasks, such as an office worker reading their work e-mails during an hour-long commute by train, or a student doing their homework at the university coffee shop during a break between lectures, for example. Immediacy: Carrying a laptop means having instant access to information, including personal and work files. This allows better collaboration between coworkers or students, as a laptop can be flipped open to look at a report, document, spreadsheet, or presentation anytime and anywhere. Up-to-date information: If a person has more than one desktop PC, a problem of synchronization arises: changes made on one computer are not automatically propagated to the others. There are ways to resolve this problem, including physical transfer of updated files (using a USB flash memory stick or CD-ROMs) or using synchronization software over the Internet, such as cloud computing. However, transporting a single laptop to both locations avoids the problem entirely, as the files exist in a single location and are always up-to-date. Connectivity: In the 2010s, a proliferation of Wi-Fi wireless networks and cellular broadband data services (HSDPA, EVDO and others) in many urban centers, combined with near-ubiquitous Wi-Fi support by modern laptops meant that a laptop could now have easy Internet and local network connectivity while remaining mobile. Wi-Fi networks and laptop programs are especially widespread at university campuses. Other advantages of laptops: Size: Laptops are smaller than desktop PCs. This is beneficial when space is at a premium, for example in small apartments and student dorms. When not in use, a laptop can be closed and put away in a desk drawer. Low power consumption: Laptops are several times more power-efficient than desktops. A typical laptop uses 20–120 W, compared to 100–800 W for desktops. This could be particularly beneficial for large businesses, which run hundreds of personal computers thus multiplying the potential savings, and homes where there is a computer running 24/7 (such as a home media server, print server, etc.). Quiet: Laptops are typically much quieter than desktops, due both to the components (quieter, slower 2.5-inch hard drives) and to less heat production leading to the use of fewer and slower cooling fans. 
Battery: a charged laptop can continue to be used in case of a power outage and is not affected by short power interruptions and blackouts. A desktop PC needs an uninterruptible power supply (UPS) to handle short interruptions, blackouts, and spikes; achieving on-battery time of more than 20–30 minutes for a desktop PC requires a large and expensive UPS. All-in-One: designed to be portable, most 2010-era laptops have all components integrated into the chassis (however, some small laptops may not have an internal CD/CDR/DVD drive, so an external drive needs to be used). For desktops (excluding all-in-ones) this is usually divided into the desktop "tower" (the unit with the CPU, hard drive, power supply, etc.), keyboard, mouse, display screen, and optional peripherals such as speakers. Disadvantages Compared to desktop PCs, laptops have disadvantages in the following areas: Performance While the performance of mainstream desktops and laptops are comparable, and the cost of laptops has fallen less rapidly than desktops, laptops remain more expensive than desktop PCs at the same performance level. The upper limits of performance of laptops remain much lower than the highest-end desktops (especially "workstation class" machines with two processor sockets), and "leading-edge" features usually appear first in desktops and only then, as the underlying technology matures, are adapted to laptops. For Internet browsing and typical office applications, where the computer spends the majority of its time waiting for the next user input, even relatively low-end laptops (such as Netbooks) can be fast enough for some users. Most higher-end laptops are sufficiently powerful for high-resolution movie playback, some 3D gaming and video editing and encoding. However, laptop processors can be disadvantaged when dealing with a higher-end database, maths, engineering, financial software, virtualization, etc. This is because laptops use the mobile versions of processors to conserve power, and these lag behind desktop chips when it comes to performance. Some manufacturers work around this performance problem by using desktop CPUs for laptops. Upgradeability The upgradeability of laptops is very limited compared to thoroughly standardized desktops. In general, hard drives and memory can be upgraded easily. Optical drives and internal expansion cards may be upgraded if they follow an industry standard, but all other internal components, including the motherboard, CPU, and graphics, are not always intended to be upgradeable. Intel, Asus, Compal, Quanta and some other laptop manufacturers have created the Common Building Block standard for laptop parts to address some of the inefficiencies caused by the lack of standards. The reasons for limited upgradeability are both technical and economic. There is no industry-wide standard form factor for laptops; each major laptop manufacturer pursues its own proprietary design and construction, with the result that laptops are difficult to upgrade and have high repair costs. Moreover, starting with 2013 models, laptops have become increasingly integrated (soldered) with the motherboard for most of its components (CPU, SSD, RAM, keyboard, etc.) to reduce size and upgradeability prospects. 
Devices such as sound cards, network adapters, hard and optical drives, and numerous other peripherals are available, but these upgrades usually impair the laptop's portability, because they add cables and boxes to the setup and often have to be disconnected and reconnected when the laptop is on the move. Ergonomics and health effects Wrists Prolonged use of laptops can cause repetitive strain injury because of their small, flat keyboard and trackpad pointing devices. Usage of separate, external ergonomic keyboards and pointing devices is recommended to prevent injury when working for long periods of time; they can be connected to a laptop easily by USB, by Bluetooth, or via a docking station. Some health standards require ergonomic keyboards at workplaces. Neck and spine A laptop's integrated screen often requires users to lean over for a better view, which can cause neck or spinal injuries. A larger and higher-quality external screen can be connected to almost any laptop to alleviate this and to provide additional screen space for more productive work. Another solution is to use a computer stand. Possible effect on fertility A study by State University of New York researchers found that heat generated from laptops can increase the temperature of the lap of male users when the computer is balanced on the lap, potentially putting sperm count at risk. The study, which included roughly two dozen men between the ages of 21 and 35, found that the sitting position required to balance a laptop can increase scrotum temperature by as much as . However, further research is needed to determine whether this directly affects male sterility. A later 2010 study of 29 males published in Fertility and Sterility found that men who kept their laptops on their laps experienced scrotal hyperthermia (overheating) in which their scrotal temperatures increased by up to . The resulting heat increase, which could not be offset by a laptop cushion, may increase male infertility. A common practical solution to this problem is to place the laptop on a table or desk, or to use a book or pillow between the body and the laptop. Another solution is to obtain a cooling unit for the laptop. These are usually USB-powered and consist of a hard, thin plastic case housing one, two, or three cooling fans – with the entire assembly designed to sit under the laptop in question – which results in the laptop remaining cool to the touch and greatly reduces laptop heat buildup. Thighs Heat generated from using a laptop on the lap can also cause skin discoloration on the thighs known as "toasted skin syndrome". Durability Laptops are less durable than desktop PCs. However, a laptop's durability depends on the user; with proper maintenance, a laptop can work longer. Equipment wear Because of their portability, laptops are subject to more wear and physical damage than desktops. Components such as screen hinges, latches, power jacks, and power cords deteriorate gradually from ordinary use and may have to be replaced. A liquid spill onto the keyboard, a rather minor mishap with a desktop system (given that a basic keyboard costs about US$20), can damage the internals of a laptop and destroy the computer, resulting in a costly repair or the replacement of the entire laptop. One study found that a laptop is three times more likely to break during the first year of use than a desktop. To maintain a laptop, it is recommended to clean it every three months for dirt, debris, dust, and food particles.
Most cleaning kits consist of a lint-free or microfiber cloth for the LCD screen and keyboard, compressed air for getting dust out of the cooling fan, and a cleaning solution. Harsh chemicals such as bleach should not be used to clean a laptop, as they can damage it. Heating and cooling Laptops rely on extremely compact cooling systems involving a fan and heat sink that can fail from blockage caused by accumulated airborne dust and debris. Most laptops do not have any type of removable dust collection filter over the air intake for these cooling systems, resulting in a system that gradually conducts more heat and noise as the years pass. In some cases, the laptop starts to overheat even at idle load levels. This dust is usually stuck inside where the fan and heat sink meet, where it can not be removed by a casual cleaning and vacuuming. Most of the time, compressed air can dislodge the dust and debris but may not entirely remove it. After the device is turned on, the loose debris is reaccumulated into the cooling system by the fans. Complete disassembly is usually required to clean the laptop entirely. However, preventative maintenance such as regular cleaning of the heat sink via compressed air can prevent dust build-up on the heat sink. Many laptops are difficult to disassemble by the average user and contain components that are sensitive to electrostatic discharge (ESD). Battery life Battery life is limited because the capacity drops with time, eventually requiring replacement after as little as a year. A new battery typically stores enough energy to run the laptop for three to five hours, depending on usage, configuration, and power management settings. Yet, as it ages, the battery's energy storage will dissipate progressively until it lasts only a few minutes. The battery is often easily replaceable and a higher capacity model may be obtained for longer charging and discharging time. Some laptops (specifically ultrabooks) do not have the usual removable battery and have to be brought to the service center of their manufacturer or a third-party laptop service center to have their battery replaced. Replacement batteries can also be expensive. Security and privacy Because they are valuable, commonly used, portable, and easy to hide in a backpack or other type of travel bag, laptops are often stolen. Every day, over 1,600 laptops go missing from U.S. airports. The cost of stolen business or personal data, and of the resulting problems (identity theft, credit card fraud, breach of privacy), can be many times the value of the stolen laptop itself. Consequently, the physical protection of laptops and the safeguarding of data contained on them are both of great importance. Most laptops have a Kensington security slot, which can be used to tether them to a desk or other immovable object with a security cable and lock. In addition, modern operating systems and third-party software offer disk encryption functionality, which renders the data on the laptop's hard drive unreadable without a key or a passphrase. As of 2015, some laptops also have additional security elements added, including eye recognition software and fingerprint scanning components. Software such as LoJack for Laptops, Laptop Cop, and GadgetTrack have been engineered to help people locate and recover their stolen laptops in the event of theft. 
Setting one's laptop with a password on its firmware (protection against going to firmware setup or booting), internal HDD/SSD (protection against accessing it and loading an operating system on it afterward), and every user account of the operating system are additional security measures that a user should do. Fewer than 5% of lost or stolen laptops are recovered by the companies that own them, however, that number may decrease due to a variety of companies and software solutions specializing in laptop recovery. In the 2010s, the common availability of webcams on laptops raised privacy concerns. In Robbins v. Lower Merion School District (Eastern District of Pennsylvania 2010), school-issued laptops loaded with special software enabled staff from two high schools to take secret webcam shots of students at home, via their students' laptops. Sales Manufacturers There are many laptop brands and manufacturers. Several major brands that offer notebooks in various classes are listed in the adjacent box. The major brands usually offer good service and support, including well-executed documentation and driver downloads that remain available for many years after a particular laptop model is no longer produced. Capitalizing on service, support, and brand image, laptops from major brands are more expensive than laptops by smaller brands and ODMs. Some brands specialize in a particular class of laptops, such as gaming laptops (Alienware), high-performance laptops (HP Envy), netbooks (EeePC) and laptops for children (OLPC). Many brands, including the major ones, do not design and do not manufacture their laptops. Instead, a small number of Original Design Manufacturers (ODMs) design new models of laptops, and the brands choose the models to be included in their lineup. In 2006, 7 major ODMs manufactured 7 of every 10 laptops in the world, with the largest one (Quanta Computer) having 30% of the world market share. Therefore, identical models are available both from a major label and from a low-profile ODM in-house brand. Market share Battery-powered portable computers had just 2% worldwide market share in 1986. However, laptops have become increasingly popular, both for business and personal use. Around 109 million notebook PCs shipped worldwide in 2007, a growth of 33% compared to 2006. In 2008 it was estimated that 145.9 million notebooks were sold, and that the number would grow in 2009 to 177.7 million. The third quarter of 2008 was the first time when worldwide notebook PC shipments exceeded desktops, with 38.6 million units versus 38.5 million units. May 2005 was the first time notebooks outsold desktops in the US over the course of a full month; at the time notebooks sold for an average of $1,131 while desktops sold for an average of $696. When looking at operating systems, for Microsoft Windows laptops the average selling price (ASP) showed a decline in 2008/2009, possibly due to low-cost netbooks, drawing an average US$689 at U.S. retail stores in August 2008. In 2009, ASP had further fallen to $602 by January and to $560 in February. While Windows machines ASP fell $129 in these seven months, Apple macOS laptop ASP declined just $12 from $1,524 to $1,512. Disposal The list of materials that go into a laptop computer is long, and many of the substances used, such as beryllium (used in beryllium-copper alloy contacts in some connectors and sockets), lead (used in lead-tin solder), chromium, and mercury (used in CCFL LCD backlights) compounds, are toxic or carcinogenic to humans. 
Although these toxins are relatively harmless when the laptop is in use, concerns that discarded laptops cause a serious health risk and toxic environmental damage were so strong that the Waste Electrical and Electronic Equipment Directive (WEEE Directive) in Europe specified that all laptop computers must be recycled by law. Similarly, the U.S. Environmental Protection Agency (EPA) has outlawed landfill dumping or the incinerating of discarded laptop computers. Most laptop computers begin the recycling process with a method known as demanufacturing, which involves the physical separation of the components of the laptop. These components are then either grouped into materials for recycling (e.g. plastic, metal and glass) or into more complex items that require more advanced materials separation (e.g. circuit boards, hard drives and batteries). Corporate laptop recycling can require an additional process known as data destruction. The data destruction process ensures that all information or data that has been stored on a laptop hard drive can never be retrieved again. Below is an overview of some of the data protection and environmental laws and regulations applicable to laptop recycling and data destruction: Data Protection Act 1998 (DPA) EU Privacy Directive (Due 2016) Financial Conduct Authority Sarbanes-Oxley Act PCI-DSS Data Security Standard Waste, Electronic & Electrical Equipment Directive (WEEE) Basel Convention Bank Secrecy Act (BSA) FACTA FDA Security Regulations (21 C.F.R. part 11) Gramm-Leach-Bliley Act (GLBA) HIPAA (Health Insurance Portability and Accountability Act) NIST SP 800–53 NIST SP 800–171 Identity Theft and Assumption Deterrence Act Patriot Act of 2002 US Safe Harbor Provisions Various state laws JAN 6/3 DCID Extreme use The ruggedized Grid Compass computer was used since the early days of the Space Shuttle program. The first commercial laptop used in space was a Macintosh Portable in 1991 aboard Space Shuttle mission STS-43. Apple and other laptop computers continue to be flown aboard crewed spaceflights, though the only long-duration flight certified computer for the International Space Station is the ThinkPad. As of 2011, over 100 ThinkPads were aboard the ISS. Laptops used aboard the International Space Station and other spaceflights are generally the same ones that can be purchased by the general public, but modifications are made to allow them to be used safely and effectively in a weightless environment, such as updating the cooling systems to function without relying on hot air rising and accommodating the lower cabin air pressure. Laptops operating in harsh usage environments and conditions, such as strong vibrations, extreme temperatures, and wet or dusty conditions, differ from those used in space in that they are custom designed for the task and do not use commercial off-the-shelf hardware. See also List of computer size categories List of laptop brands and manufacturers Netbook Smartbook Chromebook Ultrabook Smartphone Subscriber Identity Module Mobile broadband Mobile Internet device (MID) Personal digital assistant VIA OpenBook Tethering XJACK Open-source computer hardware Novena Portal laptop computer Mobile modem Stereoscopy glasses Notes References Classes of computers Japanese inventions Mobile computers Office equipment Personal computers 1980s neologisms
61496206
https://en.wikipedia.org/wiki/1913%20Auckland%20Rugby%20League%20season
1913 Auckland Rugby League season
The 1913 Auckland Rugby League season was the 5th season of the Auckland Rugby League. The first grade competition began on 3 May with the same 6 teams that had competed in the 1912 season; however, Manukau Rovers pulled out of the competition midway through the season as they struggled to put a full team on the field. The Eden Ramblers also pulled out at the same time. North Shore Albions were crowned champions for the first time. Other clubs competing in lower grades were Otahuhu, Northcote Ramblers now known as the Northcote Tigers, and Ellerslie Wanderers, who later became known as the Ellerslie Eagles. A match was also played between Avondale and New Lynn in Avondale on 13 September. The match was won by New Lynn by 23 points to 8. Switching codes Karl Ifwersen switched from rugby union, where he had been playing in Auckland, and made his debut appearance for North Shore Albions. He was to go on to have a remarkable rugby league career and his scoring feats were unrivalled through the 1910s in Auckland rugby league. Meanwhile, New Zealand representatives Graham Cook and Cecil King had moved from Wellington and made debut appearances for Newton Rangers. Charles Savory controversy In a match involving Ponsonby and Manukau in Onehunga, Charles Savory was accused of kicking an opponent. The incident was not seen by the referee but an Auckland Rugby League official claimed to have seen it, and as a result Savory was banned for life by Auckland Rugby League. Savory had been selected to play for New Zealand on their tour of Australia and as a result of the ban was unable to make the trip. When the evidence was presented to the New Zealand League, they said that it was not sufficient to justify the penalty and refused to confirm it. Auckland Rugby League then decided to strike Savory off the list of registered players, thus making him ineligible to play in Auckland. Auckland selector Ronald MacDonald chose Savory to play against Wellington in their match on 23 August but was told at an Auckland Rugby League meeting that he was ineligible and was questioned as to why he had chosen him to play. MacDonald replied, "One reason is because he is one of the best forwards in the Dominion. What was he suspended for?" A lengthy discussion followed and MacDonald refused to withdraw Savory's name from selection; a motion was then passed that MacDonald be removed from his position as Auckland selector. This was carried unanimously, with Mr Angus Campbell appointed selector and Morgan Hayward chosen to replace Savory in the side for the match with Wellington. Death of Adolphus Theodore Bust whilst playing Tragedy struck in May in a 3rd grade match between Ellerslie and Ponsonby when 26-year-old Adolphus Theodore Bust was severely injured and later died as a result of his injuries. The death occurred at the Ellerslie Domain. He was said to have collided with an opposing player and the two of them fell to the ground with a third player falling on top of them. The other two men rose to their feet to carry on playing but Bust remained stationary on the ground. Dr. Baber was called to attend from his residence in Remuera; he found that Bust's spinal cord was fractured near the base of the skull and recommended he be taken to hospital, but Bust's father decided to have him taken to his home in Ellerslie. He was unable to be revived and died at 8.30am the following morning. After the incident the deceased's father said he had witnessed the incident and was satisfied that it was an accident. 
Martin Ellis, the player involved in the tackle, said that he was running down the field and Bust, who was waiting to tackle him, had dived and caught Ellis by the legs, but Bust's neck struck Ellis on the hip and they both fell to the ground. The coroner returned the verdict that nobody was to blame for the death. None of the Ellerslie teams took the field the following weekend out of respect. Myers Cup (first grade competition) Eighteen regular season matches were played before North Shore Albions were awarded the title with a 5 win, 1 draw, 1 loss record. Myers Cup standings Myers Cup fixtures Round 1 Round 2 Round 3 Round 4 In the Ponsonby match with North Shore, Harry Fricker was ordered off for striking an opponent. The act was missed by the referee but seen by the line umpire. The match between Manukau and Eden was reported as a win to Manukau and a win to Eden in differing reports. Round 5 Manukau defaulted their match to North Shore Albions. The latter arrived in Onehunga to find that their opponents could not muster a team. Jim Rukutai and other prominent players were said to be suffering from influenza. This was to be Manukau's last game in the senior grade for decades as they forfeited the following week along with Eden and dropped out of the senior competition. Rukutai was diagnosed with smallpox and was put into isolation in a Point Chevalier hospital. However, it was soon realised that he was actually suffering from a severe case of chicken pox and he made a full recovery soon after. Eden were to cease playing as a club a few seasons later and never returned. Round 6 A somewhat unusual event occurred in the match between Ponsonby and North Shore when it was briefly suspended after a player from North Shore dropped his false teeth. He was inevitably subjected to some “good-natured banter from the crowd”. Round 7 With Manukau and Eden both disbanding their senior teams, Pullen from Manukau transferred to North Shore and played for them, while Don Kenealy of Eden transferred and played for City. Knockout competition After North Shore had won the championship, the league decided to play a knockout competition between the four remaining teams. Newton and City both won their matches and progressed to the final. Round 1 Knockout final City were joined by Jim Rukutai for the match following Manukau's senior team disbanding. Top try and point scorers Scoring included both the first grade championship and the knockout matches. A large number of matches did not have the scorers named, meaning the following lists are incomplete. Points missing are as follows: Newton Rangers (22), City Rovers (18), Ponsonby United (25), Eden Ramblers (15), and Manukau Rovers (18). Exhibition Match Hamilton v City Rovers On July 19 City Rovers travelled to Hamilton to play the local side. Avondale v New Lynn On September 13 Avondale played New Lynn in their "annual football match". Several of the players including Bert and John Denyer, Kenealy, Bond, and Bob Biggs had played for the recently folded Eden Ramblers who were based in the Avondale/Point Chevalier area. Thacker Shield On 7 September North Shore Albions journeyed to Christchurch to play Sydenham for the Thacker Shield. At the start of the season Dr. Thacker, president of the Canterbury league, had presented the shield for competition amongst the senior clubs of Christchurch, but he had stipulated that it was open to competition from any club in New Zealand. 
When North Shore won the Auckland championship they immediately issued a challenge to Sydenham. North Shore sent a strong team south but were without Karl Ifwersen and Stan Walters who were representing New Zealand against the touring New South Wales side. Lower grades Grades were made of the following teams with the winning team in bold: Second Grade: City Rovers, Ellerslie United, Newton Rangers, North Shore Albions, Northcote Ramblers, Otahuhu United (runner up), Ponsonby United Third Grade: City Rovers (runner up), Ellerslie United, Eden Ramblers A, Eden Ramblers B, Manukau Rovers A, Manukau Rovers B, North Shore Albions, Northcote Ramblers, Otahuhu United, Ponsonby United Fourth Grade: City Rovers, Manukau Rovers, Newton Rangers, North Shore Albions, Otahuhu United, Ponsonby United Representative season 1913 was a very busy year for the Auckland representative team as they played 10 matches recording a 7 win, 3 loss record. Their three defeats were against the touring New South Wales team and then on a two match end of season tour to Taranaki and Wellington. The first representative fixture of the season was played on 28 June against a Country selection at Victoria Park, Auckland. Three thousand spectators attended and 117 pounds was collected. Further matches were played against Taranaki, Hawke's Bay, Nelson, Canterbury, Wellington, and New South Wales. Auckland also played an exhibition match in Pukekohe against the Auckland club champions North Shore Albions. On August 9 Auckland Juniors beat Waikato Juniors 33-5 in Huntly. Representative matches Auckland v Waikato Country Auckland v Taranaki (Northern Union C.C.) Auckland V Hawke’s Bay (Northern Union C.C.) Auckland V Nelson (Northern Union C.C.) Auckland v Canterbury (Northern Union C.C.) Auckland v North Shore Albions (exhibition match) Auckland v Wellington (Northern Union C.C.) Auckland v New South Wales Auckland v Taranaki Thomas McClymont injured his arm late in the first half and went off but came back on. Then early in the second half he retired permanently meaning Auckland only had 12 players. Bob Mitchell and Stan Walters joined the team in New Plymouth having left Wellington after the New Zealand match there. Karl Ifwersen was supposed to also join but he had been injured in New Zealand's match so went directly back to Auckland. George Seagar who had gone on tour was refereeing at late notice as Taranaki had been unable to organise a suitable referee. The Taranaki forwards were said to have dominated the match and while the Auckland backs played brilliantly they failed to finish many chances. Auckland v Wellington A player named 'Murdoch' appeared for Auckland and this is likely to have been the treasurer/manager of the Auckland side Adam Murdoch. There were no team lists in any of the newspapers and only 12 players were mentioned by name in the match reports. When Murdoch passed away in September of 1944 the Auckland Rugby League sent their condolences to his family. Those were Mansell, Cook, Woodward, Kenealy, Tobin, Seagar, Webb, Murdoch, Mitchell, Walters, Rukutai, and Denize. The other one who may have played is Clark, Manning, or Fricker who had all been with the touring side in Taranaki. Auckland representative matches played and scorers Adam Murdoch was a member of the Ponsonby United club but non-playing. Was on tour as manager for the Taranaki and Wellington games. References External links Auckland Rugby League Official Site Auckland Rugby League seasons Auckland Rugby League
261654
https://en.wikipedia.org/wiki/IBM%20System/36
IBM System/36
The IBM System/36 (often abbreviated as S/36) was a midrange computer marketed by IBM from 1983 to 2000 - a multi-user, multi-tasking successor to the System/34. Like the System/34 and the older System/32, the System/36 was primarily programmed in the RPG II language. One of the machine's optional features was an off-line storage mechanism (on the 5360 model) that utilized "magazines" – boxes of 8-inch floppies that the machine could load and eject in a nonsequential fashion. The System/36 also had many mainframe features such as programmable job queues and scheduling priority levels. While these systems were similar to other manufacturer's minicomputers, IBM themselves described the System/32, System/34 and System/36 as "small systems" and later as midrange computers along with the System/38 and succeeding IBM AS/400 range. The AS/400 series and IBM Power Systems running IBM i can run System/36 code in the System/36 Environment, although the code needs to be recompiled on IBM i first. Overview of the IBM System/36 The IBM System/36 was a popular small business computer system, first announced on 16 May 1983 and shipped later that year. It had a 17-year product lifespan. The first model of the System/36 was the 5360. In the 1970s, the US Department of Justice brought an antitrust lawsuit against IBM, claiming it was using unlawful practices to knock out competitors. At this time, IBM had been about to consolidate its entire line (System/370, 4300, System/32, System/34, System/38) into one "family" of computers with the same ISAM database technology, programming languages, and hardware architecture. After the lawsuit was filed, IBM decided it would have two families: the System/38 line, intended for large companies and representing IBM's future direction, and the System/36 line, intended for small companies who had used the company's legacy System/32/34 computers. In the late 1980s the lawsuit was dropped, and IBM decided to recombine the two product lines, creating the AS/400 - which replaced both the System/36 and System/38. The System/36 used virtually the same RPG II, Screen Design Aid, OCL, and other technologies that the System/34 used, though it was object-code incompatible. The S/36 was a small business computer; it had an 8-inch diskette drive, between one and four hard drives in sizes of 30 to 716 MB, and memory from 128K up to 7MB. Tape drives were available as backup devices; the 6157 QIC (quarter-inch cartridge) and the reel-to-reel 8809 both had capacities of roughly 60MB. The Advanced/36 9402 tape drive had a capacity of 2.5GB. The IBM 5250 series of terminals were the primary interface to the System/36. System architecture Processors S/36s had two sixteen-bit processors, the CSP or Control Storage Processor, and the MSP or Main Storage Processor. The MSP was the workhorse; it performed the instructions in the computer programs. The CSP was the governor; it performed system functions in the background. Special utility programs were able to make direct calls to the CSP to perform certain functions; these are usually system programs like $CNFIG which was used to configure the computer system. As with the earlier System/32 and System/34 hardware, the execution of so-called "scientific instructions" (i.e. floating point operations) was implemented in software on the CSP. The primary purpose of the CSP was to keep the MSP busy; as such, it ran at slightly more than 4X the speed of the MSP. The first System/36 models (the 5360-A) had a 4 MHz CSP and a 1 MHz MSP. 
The CSP would load code and data into main storage behind the MSP's program counter. As the MSP was working on one process, the CSP was filling storage for the next process. The 5360 processors came in four models, labeled 5360-A through 5360-D. The later "D" model was about 60 percent faster than the "A" model. Front panel The 5360, 5362, and 5363 processors had a front panel display with four hexadecimal LEDs. If the operator "dialed up" the combination F-F-0-0 before performing an Initial Program Load (IPL, or system boot), many diagnostics were skipped, causing the duration of the IPL to be about a minute instead of about 10 minutes. Part of the IPL was typically a keysort of the indexed files; if the machine had been shut down without a keysort (performed as part of the P S, or STOP SYSTEM, command), then depending on the number of indexed files (and their sizes) it could take upwards of an hour to come back up. Memory and disk The smallest S/36 had 128K of RAM and a 30 MB hard drive. The largest configured S/36 could support 7MB of RAM and 1478MB of disk space. This cost over US$200,000 back in the early 1980s. S/36 hard drives contained a feature called "the extra cylinder," so that bad spots on the drive were detected and dynamically mapped out to good spots on the extra cylinder. It is therefore possible for the S/36 to use more space than it can technically address. Disk address sizes limit the size of the active S/36 partition to about 2GB; however, the Advanced/36 Large Package had a 4GB hard drive which could contain up to three (emulated) S/36s, and Advanced/36 computers had more memory than SSP could address (32MB to 96MB), which was used to increase disk caching. Disk space on the System/36 was organized by blocks, with one block consisting of 2560 bytes. A high-end 5360 system would ship with about 550,000 blocks of disk space available. System objects could be allocated in blocks or records, but internally it was always blocks. The System/36 supported memory paging, referred to as "swapping". Software The System Support Program (SSP) was the only operating system of the S/36. It contained support for multiprogramming, multiple processors, 80 devices, job queues, printer queues, security, and indexed file support; fully installed, it was about 10MB. On the Advanced/36, the number of workstations/printers was increased to 160. In the Guest/36 environment of certain OS/400 releases, up to 216 devices were supported. The S/36 could compile and run programs up to 64 kB in size, although most were not this large. This became a bottleneck issue only for the largest screen programs. With the Advanced/36, features were added to the SSP operating system, including the ability to call other programs from within a program. So a program that was, say, 60 kB could call another program that was 30 kB or 40 kB. This call/parm capability had been available with third-party packages on the System/36 but was not widely used until the feature was put into releases 7.1 and 7.5 of SSP on the Advanced/36. Hardware models Main line System/36 Model 5360 The System/36 5360 was the first model of System/36. It weighed 700 lb (318 kg), cost $140,000 and is believed to have had processor speeds of about 2MHz and 8MHz for its two processors. The system ran on 208 or 240 volts AC. The five red lights on the System/36 were as follows: (1) Power check. (2) Processor check. (3) Program check. (4) Console check. (5) Temperature check. If any light other than #4 ever came on, the system needed to be rebooted. 
The console can be restored if it has been powered off, but the other conditions are unrecoverable. There were various models of the 5360, including a C and D model that dealt with speed and the ability to support an additional frame to house two additional drives. System/36 Model 5362 IBM introduced the 5362 or "Compact 36" in 1984 as a system targeted at the lower end of their market. It had a deskside tower form factor. It was designed to operate in a normal office environment, requiring little special consideration. It differed from the 5360 by having a more limited card cage, capable of supporting fewer peripherals. It used 14" fixed disks (30 or 60MB) and could support up to two; main storage ranged from 128KB to 512 KB. One 8" floppy diskette drive was built in. The 5362 also allowed the use of a channel-attached external desktop 9332-200, 400, & 600 DASD, effectively allowing a maximum of 720MB. The 5362 weighed 150 pounds (68 kg) and cost $20,000. System/36 Model 5364 The model 5364 was called the "System/36 PC" or "Desktop 36" (and also, informally, the "Baby/36" by some – but this name was later attached to a software program produced by California Software Products, Inc.). The 5364 was a June 1985 attempt by IBM to implement a System/36 on PC-sized hardware. Inside, there were IBM chips, but the cabinet size was reminiscent of an IBM PC/AT of the period. The machine had a 1.2 MB 5.25-inch diskette drive, which was incompatible with PCs and with other S/36s. The control panel/system console (connected via an expansion card) was an IBM PC with at least 256KB RAM. System/36 Model 5363 The model 5363 was positioned as a replacement for the 5364, and was announced in October 1987. It used a deskside tower style enclosure like that of the 5362, but was only 2/3 the size. It featured updated hardware using newer, smaller hard drive platters, a 5" diskette drive, and a revised distribution of the SSP. AS/400-based backports The System/36 Environment of IBM i (previously OS/400) is a feature which provides a number of SSP utilities, as well as RPG II and OCL support. It does not implement binary compatibility with the System/36 - instead it allows programmers to port System/36 applications to IBM i by recompiling the code on top of the System/36 Environment, generating programs which use the native IBM i APIs. From V3R6 to V4R4, OS/400 was capable of running up to three instances of SSP inside virtual machines known as "Guest/36" or "M36". This relied on emulation of the MSP implemented by the OS/400 SLIC, and thus provided binary compatibility with SSP programs. AS/Entry (9401) The AS/Entry was a stripped-down AS/400; the first model was based on an AS/400 9401-P03. The operating system was SSP Release 6. This machine was offered c.1991 to target customers who had an S/36 and wanted to one day migrate to an AS/400, but did not want a large investment in an AS/400. In this regard, the AS/Entry was a failure because IBM decided the machine's architecture was not economically feasible and the older model 5363 that the 9401 was based on was a much more reliable system. The entry line was later upgraded to AS/400 9401-150 hardware. Advanced/36 (9402, 9406) In 1994, IBM released the AS/400 Advanced/36 with two models (9402-236 and 9402-436). Priced as low as $7995, it was a machine that allowed System/36 users to get faster and more modern hardware while "staying 36." 
Based on standard AS/400 hardware, the Advanced/36 could run SSP, the operating system of the System/36, alone, or within AS/400's OS/400 as a virtual machine so that it could be upgraded to a full-blown AS/400 for just extra licensing costs. The A/36 was packaged in a black enclosure which was slightly larger than a common PC cabinet. The Advanced/36 bought the world of System/36 and SSP about five more years in the marketplace, but by the end of the 20th century, the marketplace for the System/36 was almost unrecognizable. The IBM printers and displays that had completely dominated the marketplace in the 80s were replaced by a PC or a third-party monitor with an attached PC-type printer. Twinaxial cable had disappeared in favor of cheap adapters and standard telephone wire. The System/36 was eventually replaced by AS/400s at the high end and PCs at the low end. The Advanced line was later upgraded to AS/400 9406-170 hardware. By 2000, the Advanced/36 was withdrawn from marketing. References Further reading News 3X/400's Desktop Guide to the S/36 Midrange Computing's Power Tools Everything You Always Wanted to Know About the System/36 But Nobody Told You by Charlie Massoglia Writing and Using System/36 Procedures Effectively by Charlie Massoglia Everything You Always Wanted to Know About POP But Nobody Told You by Merikay Lee System/3, System/34, and System/36 Disk Sort as a Programming Language by Charlie Massoglia External links IBM Archives: IBM System/36 Bitsavers' Archive of System/36 Documentation IBM System/36 brochures and Manuals System 36 Computer-related introductions in 1983 16-bit computers
131381
https://en.wikipedia.org/wiki/Talker
Talker
A talker is a chat system that people use to talk to each other over the Internet. Dating back to the 1980s, they were a predecessor of instant messaging. A talker is a communication system that was a precursor to MMORPGs and other virtual worlds such as Second Life. Talkers are a form of online virtual world in which multiple users are connected at the same time to chat in real-time. People log in to the talkers remotely (usually via Telnet), and have a basic text interface with which to communicate with each other. The early talkers were similar to MUDs with most of the complex game machinery stripped away, leaving just the communication-level commands – hence the name "talker". ew-too was, in fact, a MUD server with the game elements removed. Most talkers are free and based on open source software. Many of the online metaphors used on talkers, such as "rooms" and "residency", were established by these early pioneering services and remain in use by modern 3D interfaces such as Second Life. History of talkers Early Internet talkers In the school year of 1983–1984, Mark Jenks and Todd Krause, two students at Washington High School in Milwaukee, wrote a software program for talking among a group of people. They used the PDP-11 at the Milwaukee Public Schools (MPS) central office. After searching around the PDP-11 files and directories, Mark found the PDP-11 program talk, and decided that they could do better. The system had approximately 40 modems, running at 300 to 2,400 bits per second, attached to it, with a single phone number and a hunt group. The talk program was named TALK and was written to handle many options that are seen in IRC today: tables, private messages, actions, moderators and inviting to tables. Another talk server called NUTS, which stood for Neil's Unix Talk Server, was released in 1993 and became fairly popular on Unix systems. Its command system was broadly based on the Unaxcess BBS and, being room-based, it took a lot of inspiration from MUDs as well. The source code was given away and became the basis of a huge number of variants and rewrites during the 1990s. Cat Chat was the first Internet / JANET talker, created in 1990. Talker hosting In 1996, talker.com, the first server to sell space for talkers, was formed; it was later given the name Dragonroost. The server had over 90 talkers on it at one time, during the mid-1990s boom of talkers. A number of other hosts started up as alternative hosting companies to talker.com. Talker.com ceased hosting any other talkers besides its owners' on September 28, 2009. See also Instant messaging IRC ICQ MUD Online chat LPMud Talk, a Unix text chat program Telnet References Further reading - an ethnographic study of youth online, analyzes textual interactions, including at Middle Earth-related talkers External links BBC h2g2 (wikipedia-style) Article on talkers Cheeseplant's House History, of some historical significance. Playground Plus Code Base Chat Rooms Online chat
60105148
https://en.wikipedia.org/wiki/Deep%20reinforcement%20learning
Deep reinforcement learning
Deep reinforcement learning (deep RL) is a subfield of machine learning that combines reinforcement learning (RL) and deep learning. RL considers the problem of a computational agent learning to make decisions by trial and error. Deep RL incorporates deep learning into the solution, allowing agents to make decisions from unstructured input data without manual engineering of the state space. Deep RL algorithms are able to take in very large inputs (e.g. every pixel rendered to the screen in a video game) and decide what actions to perform to optimize an objective (e.g. maximizing the game score). Deep reinforcement learning has been used for a diverse set of applications including but not limited to robotics, video games, natural language processing, computer vision, education, transportation, finance and healthcare. Overview Deep learning Deep learning is a form of machine learning that utilizes an artificial neural network to transform a set of inputs into a set of outputs. Deep learning methods, often using supervised learning with labeled datasets, have been shown to solve tasks that involve handling complex, high-dimensional raw input data such as images, with less manual feature engineering than prior methods, enabling significant progress in several fields including computer vision and natural language processing. Reinforcement learning Reinforcement learning is a process in which an agent learns to make decisions through trial and error. This problem is often modeled mathematically as a Markov decision process (MDP), where an agent at every timestep is in a state s, takes action a, receives a scalar reward r and transitions to the next state s' according to environment dynamics p(s' | s, a). The agent attempts to learn a policy π(a | s), or map from observations to actions, in order to maximize its returns (expected sum of rewards). In reinforcement learning (as opposed to optimal control) the algorithm only has access to the dynamics through sampling. Deep reinforcement learning In many practical decision-making problems, the states of the MDP are high-dimensional (e.g. images from a camera or the raw sensor stream from a robot), and such problems cannot be solved by traditional RL algorithms. Deep reinforcement learning algorithms incorporate deep learning to solve such MDPs, often representing the policy or other learned functions as a neural network, and developing specialized algorithms that perform well in this setting. History Along with rising interest in neural networks beginning in the mid-1980s, interest grew in deep reinforcement learning, where a neural network is used in reinforcement learning to represent policies or value functions. Because in such a system the entire decision-making process from sensors to motors in a robot or agent involves a single neural network, it is also sometimes called end-to-end reinforcement learning. One of the first successful applications of reinforcement learning with neural networks was TD-Gammon, a computer program developed in 1992 for playing backgammon. Four inputs were used for the number of pieces of a given color at a given location on the board, totaling 198 input signals. With zero knowledge built in, the network learned to play the game at an intermediate level by self-play and TD(λ). Seminal textbooks by Sutton and Barto on reinforcement learning, Bertsekas and Tsitsiklis on neuro-dynamic programming, and others advanced knowledge and interest in the field. 
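As a rough sketch of the temporal-difference learning used by TD-Gammon described above, the following Python example shows a TD(λ) update for a linear value function with eligibility traces. The feature count, learning rate and trace-decay values are illustrative assumptions, and TD-Gammon itself used a multilayer neural network rather than a linear model.

import numpy as np

n_features = 198                   # TD-Gammon encoded a position as 198 input signals
alpha, gamma, lam = 0.1, 1.0, 0.7  # learning rate, discount factor, trace decay (illustrative)

w = np.zeros(n_features)           # weights of a linear value function V(x) = w . x
e = np.zeros(n_features)           # eligibility traces

def value(x):
    return float(w @ x)

def td_lambda_step(x, reward, x_next, terminal):
    # One TD(lambda) update after observing a single transition.
    global w, e
    target = reward if terminal else reward + gamma * value(x_next)
    delta = target - value(x)      # temporal-difference error
    e = gamma * lam * e + x        # decay the traces and add the gradient of V (here simply x)
    w = w + alpha * delta * e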
Katsunari Shibata's group showed that various functions emerge in this framework, including image recognition, color constancy, sensor motion (active recognition), hand-eye coordination and hand reaching movement, explanation of brain activities, knowledge transfer, memory, selective attention, prediction, and exploration. Starting around 2012, the so-called deep learning revolution led to an increased interest in using deep neural networks as function approximators across a variety of domains. This led to a renewed interest among researchers in using deep neural networks to learn the policy, value, and/or Q functions present in existing reinforcement learning algorithms. Beginning around 2013, DeepMind showed impressive learning results using deep RL to play Atari video games. The computer player was a neural network trained using a deep RL algorithm, a deep version of Q-learning they termed deep Q-networks (DQN), with the game score as the reward. They used a deep convolutional neural network to process four frames of 84×84 pixels as inputs. All 49 games were learned using the same network architecture and with minimal prior knowledge, outperforming competing methods on almost all the games and performing at a level comparable or superior to a professional human game tester. Deep reinforcement learning reached another milestone in 2015 when AlphaGo, a computer program trained with deep RL to play Go, became the first computer Go program to beat a human professional Go player without handicap on a full-sized 19×19 board. In a subsequent project in 2017, AlphaZero improved performance on Go while also demonstrating that the same algorithm could be used to learn to play chess and shogi at a level competitive or superior to existing computer programs for those games, and again improved in 2019 with MuZero. Separately, another milestone was achieved by researchers from Carnegie Mellon University in 2019 with the development of Pluribus, a computer program to play poker that was the first to beat professionals at multiplayer games of no-limit Texas hold 'em. OpenAI Five, a program for playing five-on-five Dota 2, beat the previous world champions in a demonstration match in 2019. Deep reinforcement learning has also been applied to many domains beyond games. In robotics, it has been used to let robots perform simple household tasks and solve a Rubik's cube with a robot hand. Deep RL has also found sustainability applications, being used to reduce energy consumption at data centers. Deep RL for autonomous driving is an active area of research in academia and industry. Loon explored deep RL for autonomously navigating their high-altitude balloons. Algorithms Various techniques exist to train policies to solve tasks with deep reinforcement learning algorithms, each having its own benefits. At the highest level, there is a distinction between model-based and model-free reinforcement learning, which refers to whether the algorithm attempts to learn a forward model of the environment dynamics. In model-based deep reinforcement learning algorithms, a forward model of the environment dynamics is estimated, usually by supervised learning using a neural network. Then, actions are obtained by model predictive control using the learned model. Since the true environment dynamics will usually diverge from the learned dynamics, the agent re-plans often when carrying out actions in the environment. 
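A minimal sketch of the re-planning loop just described is given below. It assumes a learned dynamics model, a known reward function and a Gym-style environment interface (all hypothetical names here), and it scores randomly sampled action sequences; the cross-entropy method mentioned next refines this kind of sampling iteratively.

import numpy as np

def plan(state, dynamics_model, reward_fn, horizon=10, n_candidates=500, action_dim=2):
    # Score candidate action sequences by rolling them out through the learned model,
    # then return only the first action of the best sequence.
    candidates = np.random.uniform(-1.0, 1.0, size=(n_candidates, horizon, action_dim))
    returns = np.zeros(n_candidates)
    for i, sequence in enumerate(candidates):
        s = state
        for a in sequence:
            s_next = dynamics_model(s, a)         # learned, therefore approximate, dynamics
            returns[i] += reward_fn(s, a, s_next)
            s = s_next
    return candidates[np.argmax(returns)][0]

def control_loop(env, dynamics_model, reward_fn, max_steps=200):
    # Re-plan from the newly observed state at every step, as described above.
    state = env.reset()
    for _ in range(max_steps):
        action = plan(state, dynamics_model, reward_fn)
        state, _, done, _ = env.step(action)
        if done:
            break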
The actions selected may be optimized using Monte Carlo methods such as the cross-entropy method, or a combination of model-learning with model-free methods. In model-free deep reinforcement learning algorithms, a policy is learned without explicitly modeling the forward dynamics. A policy can be optimized to maximize returns by directly estimating the policy gradient, but this approach suffers from high variance, making it impractical for use with function approximation in deep RL. Subsequent algorithms have been developed for more stable learning and are widely applied. Another class of model-free deep reinforcement learning algorithms relies on dynamic programming, inspired by temporal difference learning and Q-learning. In discrete action spaces, these algorithms usually learn a neural network Q-function Q(s, a) that estimates the future returns of taking action a from state s. In continuous spaces, these algorithms often learn both a value estimate and a policy. Research Deep reinforcement learning is an active area of research, with several lines of inquiry. Exploration An RL agent must balance the exploration/exploitation tradeoff: the problem of deciding whether to pursue actions that are already known to yield high rewards or explore other actions in order to discover higher rewards. RL agents usually collect data with some type of stochastic policy, such as a Boltzmann distribution in discrete action spaces or a Gaussian distribution in continuous action spaces, inducing basic exploration behavior. The idea behind novelty-based, or curiosity-driven, exploration is giving the agent a motive to explore unknown outcomes in order to find the best solutions. This is done by "modify[ing] the loss function (or even the network architecture) by adding terms to incentivize exploration". An agent may also be aided in exploration by utilizing demonstrations of successful trajectories, or by reward-shaping, giving an agent intermediate rewards that are customized to fit the task it is attempting to complete. Off-policy reinforcement learning An important distinction in RL is the difference between on-policy algorithms, which require evaluating or improving the policy that collects data, and off-policy algorithms, which can learn a policy from data generated by an arbitrary policy. Generally, value-function-based methods such as Q-learning are better suited for off-policy learning and have better sample efficiency: the amount of data required to learn a task is reduced because data is re-used for learning. At the extreme, offline (or "batch") RL considers learning a policy from a fixed dataset without additional interaction with the environment. Inverse reinforcement learning Inverse RL refers to inferring the reward function of an agent given the agent's behavior. Inverse reinforcement learning can be used for learning from demonstrations (or apprenticeship learning) by inferring the demonstrator's reward and then optimizing a policy to maximize returns with RL. Deep learning approaches have been used for various forms of imitation learning and inverse RL. Goal-conditioned reinforcement learning Another active area of research is in learning goal-conditioned policies, also called contextual or universal policies, which take in an additional goal as input to communicate a desired aim to the agent. Hindsight experience replay is a method for goal-conditioned RL that involves storing and learning from previous failed attempts to complete a task. 
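A minimal sketch of the hindsight relabeling idea behind hindsight experience replay, elaborated in the paragraph that follows, is shown below. The transition format, replay buffer and goal-based reward function are simplified assumptions made for illustration.

def relabel_episode(episode, replay_buffer, reward_fn):
    # episode: list of (state, action, next_state, goal) tuples from one, possibly failed, attempt.
    # Each transition is stored twice: once with the original goal and once with a goal that was
    # actually achieved, so even a failed episode yields useful learning signal.
    achieved_goal = episode[-1][2]   # e.g. treat the final state reached as the substitute goal
    for state, action, next_state, goal in episode:
        replay_buffer.append((state, action, reward_fn(next_state, goal), next_state, goal))
        replay_buffer.append((state, action, reward_fn(next_state, achieved_goal),
                              next_state, achieved_goal))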
While a failed attempt may not have reached the intended goal, it can serve as a lesson for how to achieve the unintended result through hindsight relabeling. Multi-agent reinforcement learning Many applications of reinforcement learning do not involve just a single agent, but rather a collection of agents that learn together and co-adapt. These agents may be competitive, as in many games, or cooperative, as in many real-world multi-agent systems. Multi-agent learning studies the problems introduced in this setting. Generalization The promise of using deep learning tools in reinforcement learning is generalization: the ability to operate correctly on previously unseen inputs. For instance, neural networks trained for image recognition can recognize that a picture contains a bird even if they have never seen that particular image or even that particular bird. Since deep RL allows raw data (e.g. pixels) as input, there is a reduced need to predefine the environment, allowing the model to be generalized to multiple applications. With this layer of abstraction, deep reinforcement learning algorithms can be designed in a way that allows them to be general, and the same model can be used for different tasks. One method of increasing the ability of policies trained with deep RL to generalize is to incorporate representation learning. References Machine learning algorithms Reinforcement learning Deep learning Artificial intelligence
45204749
https://en.wikipedia.org/wiki/United%20States%20Special%20Operations%20Command
United States Special Operations Command
The United States Special Operations Command (USSOCOM or SOCOM) is the unified combatant command charged with overseeing the various special operations component commands of the Army, Marine Corps, Navy, and Air Force of the United States Armed Forces. The command is part of the Department of Defense and is the only unified combatant command created by an Act of Congress. USSOCOM is headquartered at MacDill Air Force Base in Tampa, Florida. The idea of an American unified special operations command had its origins in the aftermath of Operation Eagle Claw, the disastrous attempted rescue of hostages at the American embassy in Iran in 1980. The ensuing investigation, chaired by Admiral James L. Holloway III, the retired Chief of Naval Operations, cited lack of command and control and inter-service coordination as significant factors in the failure of the mission. Since its activation on 16 April 1987, U.S. Special Operations Command has participated in many operations, from the 1989 invasion of Panama to the current War on Terror. USSOCOM is involved with clandestine activity, such as direct action, special reconnaissance, counter-terrorism, foreign internal defense, unconventional warfare, psychological warfare, civil affairs, and counter-narcotics operations. Each branch has a distinct Special Operations Command that is capable of running its own operations, but when the different special operations forces need to work together for an operation, USSOCOM becomes the joint component command of the operation, instead of a SOC of a specific branch. History The unwieldy command and control structure of separate U.S. military special operations forces (SOF), which led to the failure of Operation Eagle Claw in 1980, highlighted the need within the US Department of Defense for reform and reorganization. The US Army Chief of Staff, General Edward C. "Shy" Meyer, had already helped create the U.S. Delta Force in 1977. Following Eagle Claw, he called for a further restructuring of special operations capabilities. Although unsuccessful at the joint level, Meyer nevertheless went on to consolidate Army SOF units under the new 1st Special Operations Command in 1982. By 1983, there was a small but growing sense in the US Congress of the need for military reforms. In June, the Senate Armed Services Committee (SASC) began a two-year-long study of the Defense Department, which included an examination of SOF spearheaded by Senator Barry Goldwater (R-AZ). With concern mounting on Capitol Hill, the Department of Defense created the Joint Special Operations Agency on 1 January 1984; this agency, however, had neither operational nor command authority over any SOF. The Joint Special Operations Agency thus did little to improve SOF readiness, capabilities, or policies, and therefore was deemed insufficient. Within the Defense Department, there were a few staunch SOF supporters. Noel Koch, Principal Deputy Assistant Secretary of Defense for International Security Affairs, and his deputy, Lynn Rylander, both advocated SOF reforms. At the same time, a few on Capitol Hill were determined to overhaul United States Special Operations Forces. They included Senators Sam Nunn (D-GA) and William Cohen (R-ME), both members of the Armed Services Committee, and Representative Dan Daniel (D-VA), the chairman of the United States House Armed Services Subcommittee on Readiness. Congressman Daniel had become convinced that the U.S. 
military establishment was not interested in special operations, that the country's capability in this area was the second rate, and that SOF operational command and control was an endemic problem. Senators Nunn and Cohen also felt strongly that the Department of Defense was not preparing adequately for future threats. Senator Cohen agreed that the U.S. needed a clearer organizational focus and chain of command for special operations to deal with low-intensity conflicts. In October 1985, the Senate Armed Services Committee published the results of its two-year review of the U.S. military structure, entitled "Defense Organization: The Need For Change." James R. Locher III, the principal author of this study, also examined past special operations and speculated on the most likely future threats. This influential document led to the 1986 Goldwater-Nichols Act. By spring 1986, SOF advocates had introduced reform bills in both houses of Congress. On 15 May, Senator Cohen introduced the Senate bill, co-sponsored by Senator Nunn and others, which called for a joint military organization for SOF and the establishment of an office in the Defense Department to ensure adequate funding and policy emphasis for low-intensity conflict and special operations. Representative Daniel's proposal went even further—he wanted a national special operations agency headed by a civilian who would bypass the Joint Chiefs and report directly to the US Secretary of Defense; this would keep Joint Chiefs and the Services out of the SOF budget process. Congress held hearings on the two bills in the summer of 1986. Admiral William J. Crowe Jr., Chairman of the Joint Chiefs of Staff, led the Pentagon's opposition to the bills. As an alternative, he proposed a new Special Operations Forces command led by a three-star general. This proposal was not well received on Capitol Hill—Congress wanted a four-star general in charge to give SOF more clout. A number of retired military officers and others testified in favor of the need for reform. By most accounts, retired Army Major General Richard Scholtes gave the most compelling reasons for the change. Scholtes, who commanded the joint special operations task force during Operation Urgent Fury, explained how conventional force leaders misused SOF during the operation, not allowing them to use their unique capabilities, which resulted in high SOF casualties. After his formal testimony, Scholtes met privately with a small number of Senators to elaborate on the problems that he had encountered in Grenada. Both the House and Senate passed SOF reform bills, and these went to a conference committee for reconciliation. Senate and House conferees forged a compromise. The bill called for a unified combatant command headed by a four-star general for all SOF, an Assistant Secretary of Defense for Special Operations and Low-Intensity Conflict, a coordinating board for low-intensity conflict within the National Security Council, and a new Major Force Program (MFP-11) for SOF (the so-called "SOF checkbook"). The final bill, attached as a rider to the 1987 Defense Authorization Act, amended the Goldwater-Nichols Act and was signed into law in October 1986. This was interpreted as Congress forcing the hand of the DOD and the Reagan administration regarding what it saw as the past failures and emerging threats. The DOD and the administration were responsible for implementing the law, and Congress subsequently passed two additional bills to ensure implementation. 
The legislation promised to improve SOF in several respects. Once implemented, MFP-11 provided SOF with control over its own resources, better enabling it to modernize the force. Additionally, the law fostered interservice cooperation: a single commander for all SOF promoted interoperability among the same command forces. The establishment of a four-star commander-in-chief and an Assistant Secretary of Defense for Special Operations and Low-Intensity Conflict eventually gave SOF a voice in the highest councils of the Defense Department. However, implementing the provisions and mandates of the Nunn-Cohen Amendment to the National Defense Authorization Act for Fiscal Year 1987 was neither rapid nor smooth. One of the first issues to arise was the appointment of an Assistant Secretary of Defense for Special Operations/Low-Intensity Conflict & Interdependent Capabilities, whose principal duties included monitorship of special operations activities and the low-intensity conflict activities of the Department of Defense. Congress increased the number of assistant secretaries of defense from 11 to 12, but the Department of Defense still did not fill this new billet. In December 1987, Congress directed Secretary of the Army John O. Marsh to carry out the ASD (SO/LIC) duties until the Senate approved a suitable replacement. Not until 18 months after the legislation passed did Ambassador Charles Whitehouse assume the duties of ASD (SO/LIC). Meanwhile, the establishment of USSOCOM provided its own measure of excitement. A quick solution to manning and basing a brand new unified command was to abolish an existing command. United States Readiness Command (USREDCOM), with an often misunderstood mission, did not appear to have a viable mission in the post-Goldwater-Nichols era, and its commander-in-chief, General James Lindsay, had had some special operations experience. On 23 January 1987, the Joint Chiefs of Staff recommended to the Secretary of Defense that USREDCOM be disestablished to provide billets and facilities for USSOCOM. President Ronald Reagan approved the establishment of the new command on 13 April 1987. The Department of Defense activated USSOCOM on 16 April 1987 and nominated General Lindsay to be the first Commander in Chief Special Operations Command (USCINCSOC). The Senate accepted him without debate. Operation Earnest Will USSOCOM's first tactical operation involved 160th Special Operations Aviation Regiment (Airborne) ("Night Stalkers") aviators, SEALs, and Special Boat Teams (SBT) working together during Operation Earnest Will in September 1987. During Operation Earnest Will, the United States ensured that neutral oil tankers and other merchant ships could safely transit the Persian Gulf during the Iran–Iraq War. Iranian attacks on tankers prompted Kuwait to ask the United States in December 1986 to register 11 Kuwaiti tankers as American ships so that they could be escorted by the U.S. Navy. President Reagan agreed to the Kuwaiti request on 10 March 1987, hoping it would deter Iranian attacks. The protection offered by U.S. naval vessels, however, did not stop Iran, which used mines and small boats to harass the convoys steaming to and from Kuwait. In late July 1987, Rear Admiral Harold J. Bernsen, commander of the Middle East Force, requested NSW assets. Special Boat Teams deployed with six Mark III Patrol Boats and two SEAL platoons in August. The Middle East Force decided to convert two oil servicing barges, Hercules and Wimbrown VII, into mobile sea bases. 
The mobile sea bases allowed SOF in the northern Persian Gulf to thwart clandestine Iranian mining and small boat attacks. On 21 September, Nightstalkers flying MH-60 and Little Birds took off from the frigate USS Jarrett to track an Iranian ship, Iran Ajr. The Nightstalkers observed Iran Ajr turn off her lights and begin laying mines. After receiving permission to attack, the helicopters fired guns and rockets, stopping the ship. As Iran Ajrs crew began to push mines over the side, the helicopters resumed firing until the crew abandoned the ship. Special Boat Teams provided security while a SEAL team boarded the vessel at first light and discovered nine mines on the vessel's deck, as well as a logbook revealing areas where previous mines had been laid. The logbook implicated Iran in mining international waters. Within a few days, the Special Operations forces had determined the Iranian pattern of activity; the Iranians hid during the day near oil and gas platforms in Iranian waters and at night they headed toward the Middle Shoals Buoy, a navigation aid for tankers. With this knowledge, SOF launched three Little Bird helicopters and two patrol craft to the buoy. The Little Bird helicopters arrived first and were fired upon by three Iranian boats anchored near the buoy. After a short but intense firefight, the helicopters sank all three boats. Three days later, in mid-October, an Iranian Silkworm missile hit the tanker Sea Isle City near the oil terminal outside Kuwait City. Seventeen crewmen and the American captain were injured in the missile attack. During Operation Nimble Archer, four destroyers shelled two oil platforms in the Rostam oil field. After the shelling, a SEAL platoon and a demolition unit planted explosives on one of the platforms to destroy it. The SEALs next boarded and searched a third-platform away. Documents and radios were taken for intelligence purposes. On 14 April 1988, east of Bahrain, the frigate USS Samuel B. Roberts hit a mine, blowing an immense hole in its hull. Ten sailors were injured. During Operation Praying Mantis the U.S. retaliated fiercely, attacking the Iranian frigate Sahand and oil platforms in the Sirri and Sassan oil fields. After U.S. warships bombarded the Sirri platform and set it ablaze, a UH-60 with a SEAL platoon flew toward the platform but was unable to get close enough because of the roaring fire. Secondary explosions soon wrecked the platform. Thereafter, Iranian attacks on neutral ships dropped drastically. On 18 July, Iran accepted the United Nations cease-fire; on 20 August 1988, the Iran–Iraq War ended. The remaining SEALs, patrol boats, and helicopters then returned to the United States. Special operations forces provided critical skills necessary to help CENTCOM gain control of the northern Persian Gulf and balk Iran's small boats and minelayers. The ability to work at night proved vital because Iranian units used darkness to conceal their actions. Additionally, because of Earnest Will operational requirements, USSOCOM would acquire new weapons systems—the patrol coastal ships and the Mark V Special Operations Craft. Somalia Special Operations Command first became involved in Somalia in 1992 as part of Operation Provide Relief. C-130s circled over Somali airstrips during the delivery of relief supplies. Special Forces medics accompanied many relief flights into the airstrips throughout southern Somalia to assess the area. They were the first U.S. soldiers in Somalia, arriving before U.S. 
forces who supported the expanded relief operations of Restore Hope. The first teams into Somalia were CIA Special Activities Division paramilitary officers working with elements of JSOC. They conducted very high-risk advance force operations prior to the entry of the follow-on forces. The first casualty of the conflict came from this team: a paramilitary officer and former Delta Force operator named Larry Freedman. Freedman was awarded the Intelligence Star for "extraordinary heroism" for his actions. The earliest missions during Operation Restore Hope were conducted by Navy SEALs. The SEALs performed several hydrographic reconnaissance missions to find suitable landing sites for Marines. On 7 December, the SEALs swam into Mogadishu Harbor, where they found suitable landing sites, assessed the area for threats, and concluded that the port could support offloading ships. This was a tough mission because the SEALs swam against a strong current, which left many of them overheated and exhausted. Furthermore, they swam through raw sewage in the harbor, which made them sick. When the first SEALs hit the shore the following night, they were surprised to meet members of the news media. The first Marines came ashore soon thereafter, and the press redirected their attention to them. Later, the SEALs provided personal security for President George Bush during a visit to Somalia. In December 1992, Special Forces assets in Kenya moved to Somalia and joined Operation Restore Hope. In January 1993, a Special Forces command element deployed to Mogadishu as Joint Special Operations Forces-Somalia (JSOFOR), which would command and control all special operations for Restore Hope. JSOFOR's mission was to make initial contact with indigenous factions and leaders, provide information for force protection, and provide reports on the area for future relief and security operations. Before redeploying in April, JSOFOR elements drove long distances on patrol, captured 277 weapons, and destroyed large quantities of explosives. In August 1993, Secretary of Defense Les Aspin directed the deployment of a Joint Special Operations Task Force (JSOTF) to Somalia in response to attacks made by General Mohamed Farrah Aidid's supporters upon U.S. and UN forces. The JSOTF, named Task Force (TF) Ranger, was charged with a mission named Operation Gothic Serpent: to capture Aidid. This was an especially arduous mission, for Aidid had gone underground after several Lockheed AC-130 air raids and UN assaults on his strongholds. While Marines from the 24th MEU provided an interim QRF (a Force Recon detachment and helicopters from HMM-263), the task force arrived in the country and began training exercises. The Marines were asked to take on the Aidid snatch mission, but, having the advantage of more than two months in the area, they decided after mission analysis that the mission was a "no-go" due to several factors, centered on the inability to rescue the crew of a downed helicopter (the local forces' technique of using RPGs against helicopters and blocking the narrow streets would restrict the movement of a ground rescue force). This knowledge was not passed on to the Rangers, because the Marines were operating from the USS Wasp while the Rangers remained on land. TF Ranger was made up of operators from Delta Force, the 75th Ranger Regiment, the 160th SOAR, SEALs from the Naval Special Warfare Development Group, and Air Force special tactics units. During August and September 1993, the task force conducted six missions into Mogadishu, all of which were successes.
Although Aidid remained free, the effect of these missions seriously limited his movements. On 3 October, TF Ranger launched its seventh mission, this time into Aidid's stronghold, the Bakara Market, to capture two of his key lieutenants. The mission was expected to take only one or two hours. Helicopters carrying the assault force and a ground convoy of security teams launched in the late afternoon from the TF Ranger compound at Mogadishu airport. The task force came under increasingly heavy fire, more intense than during previous missions. The assault team captured 24 Somalis, including Aidid's lieutenants, and was loading them onto the convoy trucks when an MH-60 Black Hawk was hit by a rocket-propelled grenade (RPG). A small element from the security forces, as well as an MH-6 assault helicopter and an MH-60 carrying a fifteen-man combat search and rescue (CSAR) team, rushed to the crash site. The battle grew steadily worse. An RPG struck another MH-60, which crashed to the south of the first downed helicopter. The task force now faced overwhelming Somali mobs converging on the crash sites, creating a dire situation. A mob overran the second site and, despite a heroic defense, killed everyone except the pilot, whom they took prisoner. Two defenders of this crash site, Master Sergeant Gary Gordon and Sergeant First Class Randall Shughart, were posthumously awarded the Medal of Honor. About this time, the mission's quick reaction force (QRF) also tried to reach the second crash site. This force too was pinned down by Somali fire and required the fire support of two AH-6 helicopters before it could break contact and make its way back to the base. The assault and security elements moved on foot toward the first crash area, passing through heavy fire, and occupied buildings south and southwest of the downed helicopter. They fought to establish defensive positions so as not to be pinned down by the very heavy enemy fire, treating their wounded and working to free the pilot's body from the downed helicopter. With the detainees loaded on trucks, the ground convoy force attempted to reach the first crash site. Unable to find it among the narrow, winding alleyways, the convoy came under devastating small arms and RPG fire and had to return to base after suffering numerous casualties and sustaining substantial damage to its vehicles. Reinforcements, consisting of elements from the QRF, 10th Mountain Division soldiers, Rangers, SEALs, Pakistan Army tanks, and Malaysian armored personnel carriers, finally arrived at 1:55 am on 4 October. The combined force worked until dawn to free the pilot's body, receiving RPG and small arms fire throughout the night. The casualties were loaded onto the armored personnel carriers, and the remainder of the force had no choice but to move out on foot. AH-6 gunships raked the streets with fire to support the movement. The main body of the convoy arrived at the Pakistani Stadium, the compound for the QRF, at 6:30 am, concluding one of the bloodiest and fiercest urban firefights since the Vietnam War. Task Force Ranger suffered a total of 17 killed in action and 106 wounded. Various estimates placed Somali casualties above 1,000. Although Task Force Ranger's individual missions were successes, the overall outcome of Operation Gothic Serpent was deemed a failure because the task force never accomplished its stated objective of capturing Mohamed Farrah Aidid. Most U.S. forces pulled out of Somalia by March 1994.
The withdrawal from Somalia was completed in March 1995. Even though Operation Gothic Serpent failed, USSOCOM still made significant contributions to operations in Somalia. SOF performed reconnaissance and surveillance missions, assisted with humanitarian relief, protected American forces, and conducted riverine patrols. Additionally, they ensured the safe landing of the Marines and safeguarded the arrival of merchant ships carrying food.
Iraq
USSOCOM's 10th Special Forces Group, elements of JSOC, and CIA Special Activities Division paramilitary officers linked up again and were the first to enter Iraq prior to the invasion. Their efforts organized the Kurdish Peshmerga to defeat Ansar al-Islam in northern Iraq before the invasion. This battle was for control of territory in northeastern Iraq that was completely occupied by Ansar al-Islam, an ally of Al-Qaeda. It was a significant engagement that led to the death of a substantial number of terrorists and the uncovering of a chemical weapons facility at Sargat; these fighters would have joined the subsequent insurgency had they not been eliminated during the battle. Sargat was the only facility of its type discovered in the Iraq War. This battle may have been the Tora Bora of Iraq, but it was a sound defeat for Al-Qaeda and its ally Ansar al-Islam. The combined team then led the Peshmerga against Saddam's northern army. This effort kept Saddam's forces in the north and denied them the ability to redeploy to contest the invasion force coming from the south, which may have saved the lives of hundreds if not thousands of coalition servicemen and women. At the launch of the Iraq War, dozens of 12-member Special Forces teams infiltrated southern and western Iraq to hunt for Scud missiles and pinpoint bombing targets. Scores of Navy SEALs seized oil terminals and pumping stations on the southern coast. Air Force combat controllers flew combat missions in MC-130H Combat Talon IIs and established austere desert airstrips to begin the flow of soldiers and supplies deep into Iraq. This was notably different from the Persian Gulf War of 1991, in which special operations forces were largely kept from participating. But it would not be a replay of Afghanistan, where Army Special Forces and Navy SEALs led the fighting. After their star turn in Afghanistan, many special operators were disappointed to play a supporting role in Iraq, and many felt restricted by cautious commanders. Since then, USSOCOM has killed or captured hundreds of insurgents and Al-Qaeda terrorists and has conducted several successful foreign internal defense missions training the Iraqi security forces.
Afghanistan
United States Special Operations Command played a pivotal role in fighting and toppling the Taliban government of Afghanistan in 2001, as well as in combating the insurgency and capturing Saddam Hussein in Iraq. By 2004 USSOCOM was developing plans for an expanded and more complex role in the global campaign against terrorism, and that role continued to emerge before and after the killing of Osama bin Laden in Pakistan in 2011. In 2010, "of about 13,000 Special Operations forces deployed overseas, about 9,000 [were] evenly divided between Iraq and Afghanistan." In the initial stages of the War in Afghanistan, USSOCOM forces linked up with CIA paramilitary officers from the Special Activities Division to defeat the Taliban without the need for large-scale conventional forces. This was one of the biggest successes of the global War on Terrorism.
These units linked up several times during the war and fought a number of furious battles with the enemy. One such battle occurred during Operation Anaconda, the mission to squeeze the life out of a Taliban and Al-Qaeda stronghold dug deep into the Shah-i-Kot mountains of eastern Afghanistan. The operation was seen as one of the heaviest and bloodiest fights of the War in Afghanistan. The battle on an Afghan mountaintop called Takur Ghar featured special operations forces from all four services and the CIA. Navy SEALs, Army Rangers, Air Force Combat Controllers, and Pararescuemen fought against entrenched Al-Qaeda fighters atop the mountain, and the entrenched fighters subsequently became targets of every asset in the sky. According to an executive summary, the Battle of Takur Ghar was the most intense firefight American special operators had been involved in since 18 U.S. Army Rangers were killed in Mogadishu, Somalia, in 1993. During Operation Red Wings on 28 June 2005, four Navy SEALs, pinned down in a firefight, radioed for help. A Chinook helicopter carrying 16 service members responded but was shot down. All members of the rescue team and three of the four SEALs on the ground died. It was the worst loss of life in Afghanistan since the invasion in 2001. Only the Navy SEAL Marcus Luttrell survived. Team leader Michael P. Murphy was awarded the Medal of Honor for his actions in the battle.
Global presence
In 2010, special operations forces were deployed in 75 countries, compared with about 60 at the beginning of 2009. In 2011, USSOCOM spokesman Colonel Tim Nye (U.S. Army) was reported to have said that the number of countries with a special operations presence would likely reach 120, and that joint training exercises would be carried out in most or all of those countries during the year. One study identified joint training exercises in Belize, Brazil, Bulgaria, Burkina Faso, Germany, Indonesia, Mali, Norway, Panama, and Poland in 2010 and also, through mid-2011, in the Dominican Republic, Jordan, Romania, Senegal, South Korea, and Thailand, among other nations. In addition, special operations forces executed the high-profile killing of Osama bin Laden in Pakistan in 2011. In November 2009, The Nation reported on a covert JSOC/Blackwater anti-terrorist operation in Pakistan. In 2010, White House counterterrorism director John O. Brennan said that the United States "will not merely respond after the fact" of a terrorist attack but will "take the fight to al-Qaeda and its extremist affiliates whether they plot and train in Afghanistan, Pakistan, Yemen, Somalia and beyond." Admiral Eric Olson, the USSOCOM commander, said, "In some places, in deference to host-country sensitivities, we are lower in profile. In every place, Special Operations forces activities are coordinated with the U.S. ambassador and are under the operational control of the four-star regional commander." The conduct of operations by special operations forces outside the Iraq and Afghan war zones has been the subject of internal U.S. debate, including between representatives of the Bush administration such as John B. Bellinger III, on the one hand, and the Obama administration on the other. The United Nations in 2010 also "questioned the administration's authority under international law to conduct such raids, particularly when they kill innocent civilians. One possible legal justification – the permission of the country in question – is complicated in places such as Pakistan and Yemen, where the governments privately agree but do not publicly acknowledge approving the attacks," as one report put it.
Subordinate Commands
Joint Special Operations Command
Joint Special Operations Command (JSOC) is a component command of USSOCOM charged to study special operations requirements and techniques to ensure interoperability and equipment standardization, to plan and conduct special operations exercises and training, and to develop joint special operations tactics. It was established in 1980 on the recommendation of Colonel Charlie Beckwith, in the aftermath of the failure of Operation Eagle Claw.
Units
The U.S. Army's 1st Special Forces Operational Detachment-Delta, popularly known as Delta Force, is the first of the two counter-terrorism special mission units that fall under the Joint Special Operations Command. Modeled after the British Special Air Service, Delta Force is regarded as one of the premier special operations forces in the world and maintains a stringent selection and training process. Delta recruits primarily from the most proficient and highly skilled soldiers of the U.S. Army Special Operations Command, although it can recruit throughout the U.S. Armed Forces. Recruits must pass a rigid selection course before beginning training, known as the Operators' Training Course (OTC). Delta has received training from numerous U.S. government agencies and other tier one SOF units and has created a curriculum based on this training and on techniques it has developed itself. Delta conducts clandestine and covert special operations all over the world. It has the capability to conduct a myriad of special operations missions but specializes in counter-terrorism and hostage rescue operations. The Intelligence Support Activity (ISA, "The Activity") is the support branch of JSOC and USSOCOM. Its primary mission is to provide human intelligence (HUMINT) and signals intelligence (SIGINT), mainly for Delta's and DEVGRU's operations. Before the establishment of the Strategic Support Branch in 2001, the ISA required the permission of the CIA to conduct covert operations, which considerably lessened its effectiveness in supporting JSOC operations as a whole. The U.S. Navy's Naval Special Warfare Development Group (DEVGRU, SEAL Team Six) is the second of the two counter-terrorism special mission units that fall under the Joint Special Operations Command. DEVGRU is the U.S. Navy's counterpart to Delta, specializing in maritime counter-terrorism. DEVGRU recruits the most proficient operators from Naval Special Warfare, specifically the U.S. Navy SEALs. Like Delta, DEVGRU can conduct a variety of special operations missions but trains primarily for maritime counter-terrorism and hostage rescue operations. DEVGRU has gained wide public attention in recent years because of high-profile hostage rescue operations and its role in the killing of Osama bin Laden. The Air Force 24th Special Tactics Squadron (24th STS) is the AFSOC component of JSOC. The 24th STS consists of specially selected AFSOC personnel, including Pararescuemen, Combat Controllers, and TACPs. These special operators usually serve alongside Delta Force and DEVGRU because of the 24th STS's ability to synchronize and control the different elements of airpower and to enhance air operations deep in enemy territory; its Pararescuemen also provide needed medical assistance.
The Joint Communications Unit (JCU) is a technical unit of the United States Special Operations Command charged to standardize and ensure interoperability of the communication procedures and equipment of the Joint Special Operations Command and its subordinate units. The JCU was activated at Fort Bragg, North Carolina in 1980, after the failure of Operation Eagle Claw, and has earned the reputation of "DoD's Finest Communicators". Portions of JSOC units have made up the constantly changing special operations task force operating in the U.S. Central Command area of operations. Task Force 11, Task Force 121, Task Force 6-26, and Task Force 145 were creations of the Pentagon's post-11 September campaign against terrorism, and the task force quickly became the model for how the military would gain intelligence and battle insurgents in the future. Originally known as Task Force 121, it was formed in the summer of 2003 when the military merged two existing special operations units, one hunting Osama bin Laden in and around Afghanistan and the other tracking Saddam Hussein in Iraq.
Special Operations Command – Joint Capabilities
Special Operations Command – Joint Capabilities (SOC-JC) was transferred to USSOCOM from the soon-to-be disestablished United States Joint Forces Command in 2011. Its primary mission was to train conventional and SOF commanders and their staffs, to support USSOCOM international engagement training requirements, and to support the implementation of capability solutions in order to improve strategic and operational warfighting readiness and joint interoperability. SOC-JC also had to be prepared to support a deployed Special Operations Joint Task Force (SOJTF) headquarters. The Government Accountability Office wrote that SOC-JC was disestablished in 2013, with its positions to be zeroed out in 2014.
Army Special Operations Command
On 1 December 1989, the United States Army Special Operations Command (USASOC) was activated as the 16th major Army command. These special operations forces have been America's spearhead for unconventional warfare for more than 40 years. USASOC commands such well-known units as the Special Forces (SF, or the "Green Berets") and the Rangers, as well as such relatively unknown units as two psychological operations groups, a special aviation regiment, a civil affairs brigade, and a special sustainment brigade. These units are among USSOCOM's main weapons for waging unconventional warfare and counter-insurgency, and their significance has grown as conventional conflicts become less prevalent and insurgent and guerrilla warfare increases. (528th Special Operations Sustainment Brigade Organizational Chart 2020, 528th Sustainment Brigade History Handbook, published by the U.S. Army Special Operations Command History Office, Fort Bragg, North Carolina, 2020, by Chris Howard, ARSOF Support Historian, dated 5 December 2020, last accessed 12 December 2020.)
Units
United States Army Special Forces (SF), also known as the Green Berets, perform several doctrinal missions: unconventional warfare, foreign internal defense, special reconnaissance, direct action, and counter-terrorism. These missions make Special Forces unique in the U.S. military because they are employed throughout the three stages of the operational continuum: peacetime, conflict, and war.
Foreign internal defense operations, SF's main peacetime mission, are designed to help friendly developing nations by working with their military and police forces to improve their technical skills and their understanding of human rights issues, and to assist with humanitarian and civic action projects. Special Forces' unconventional warfare capabilities provide a viable military option for a variety of operational taskings that are inappropriate or infeasible for conventional forces; Special Forces are the U.S. military's premier unconventional warfare force. Foreign internal defense and unconventional warfare missions are the bread and butter of Special Forces soldiers. For this reason, SF candidates are trained extensively in weapons, engineering, communications, and medicine. SF soldiers are taught to be warriors first and teachers second because they must be able to train both their own team and their allies during an FID or UW mission. Often SF units are required to perform additional, or collateral, activities outside their primary missions. These collateral activities include coalition warfare and support, combat search and rescue, security assistance, peacekeeping, humanitarian assistance, humanitarian de-mining, and counter-drug operations. The 1st Special Forces Operational Detachment-Delta (1st SFOD-D), commonly referred to as Delta Force, Combat Applications Group ("CAG"), "The Unit", Army Compartmented Element, or within JSOC as Task Force Green, is an elite special mission unit of the United States Army, organized under USASOC but controlled by the Joint Special Operations Command (JSOC). It is used for hostage rescue and counterterrorism, as well as direct action and reconnaissance against high-value targets. 1st SFOD-D and its U.S. Navy counterpart, DEVGRU ("SEAL Team 6"), perform many of the most highly complex and dangerous missions in the U.S. military. These units are also often referred to by the U.S. government as "Tier One" special mission units. The 75th Ranger Regiment (the U.S. Army Rangers) is the premier light infantry unit of the United States Army and is headquartered at Fort Benning, Georgia. The 75th Ranger Regiment's mission is to plan and conduct special missions in support of U.S. policy and objectives. The Rangers are a flexible, rapidly deployable force: each battalion can deploy anywhere in the world within 18 hours of notification. The Army places much importance on the 75th Ranger Regiment and its training; it possesses the capabilities to conduct conventional and most special operations missions. Rangers are capable of infiltrating by land, sea, or air and of conducting direct action operations such as raids and assaults on buildings or airfields. The 160th Special Operations Aviation Regiment (Night Stalkers), headquartered at Fort Campbell, Kentucky, provides aviation support to units within USSOCOM. The regiment flies MH-6 and AH-6 light helicopters, MH-60 helicopters, and MH-47 heavy assault helicopters. The capabilities of the 160th SOAR (A) have been evolving since the early 1980s. Its focus on night operations resulted in the nickname "Night Stalkers". The primary mission of the Night Stalkers is to conduct overt or covert infiltration, exfiltration, and resupply of special operations forces across a wide range of environmental conditions. 4th Psychological Operations Group (Airborne) and 8th Psychological Operations Group (Airborne) soldiers use persuasion to influence perceptions and encourage desired behavior.
PSYOP soldiers support national objectives at the tactical, operational, and strategic levels of operations. Strategic psychological operations advance broad or long-term objectives; global in nature, they may be directed toward large audiences or at key communicators. Operational psychological operations are conducted on a smaller scale. The 4th POG(A) is employed by theater commanders to target groups within the theater of operations; its purpose can range from gaining support for U.S. operations to preparing the battlefield for combat. Tactical psychological operations are more limited, used by commanders to secure immediate and near-term goals. In this environment, these force-enhancing activities serve as a means to lower the morale and efficiency of enemy forces. 95th Civil Affairs Brigade (Airborne) specialists identify critical requirements needed by local citizens in war or disaster situations. They also locate civilian resources to support military operations, help minimize civilian interference with operations, support national assistance activities, plan and execute noncombatant evacuations, support counter-drug operations, and establish and maintain liaison with civilian aid agencies and other non-governmental organizations. In support of special operations, these culturally oriented, linguistically capable soldiers may also be tasked to provide functional expertise for foreign internal defense operations, unconventional warfare operations, and direct action missions. The 528th Sustainment Brigade (Special Operations) (Airborne) has the demanding mission of supporting USASOC. In their respective fields, signal, intelligence, medical, and support soldiers provide communications, focused intelligence, Role II medical support, supplies, maintenance, equipment, and expertise, allowing ARSOF to "shoot, move and communicate" on a continuous basis. Because USASOC often uses ARSOF-unique items, soldiers assigned to these units are taught to operate and maintain a vast array of specialized equipment not normally used by their conventional counterparts. The 528th also provides USASOC with centralized and integrated materiel management of property, equipment maintenance, logistical automation, and repair parts and supplies. The John F. Kennedy Special Warfare Center and School (USAJFKSWCS) trains USSOCOM and Army special operations forces through the development and evaluation of special operations concepts, doctrine, and training.
Marine Forces Special Operations Command
In October 2005, the Secretary of Defense directed the formation of United States Marine Forces Special Operations Command (MARSOC), the Marine component of United States Special Operations Command. It was determined that the Marine Corps would initially form a unit of approximately 2,500 personnel to serve with USSOCOM. On February 24, 2006, MARSOC was activated at Camp Lejeune, North Carolina. MARSOC initially consisted of a small staff and the Foreign Military Training Unit (FMTU), which had been formed to conduct foreign internal defense; the FMTU is now designated the Marine Special Operations Advisor Group (MSOAG). As a service component of USSOCOM, MARSOC is tasked by the Commander, USSOCOM to train, organize, equip, and deploy responsive U.S. Marine Corps special operations forces worldwide in support of combatant commanders and other agencies. MARSOC has been directed to conduct foreign internal defense, direct action, and special reconnaissance.
MARSOC has also been directed to develop capabilities in unconventional warfare, counter-terrorism, and information operations. MARSOC deployed its first units in August 2006, six months after its initial activation, and reached full operational capability in October 2008.
Units
The Marine Raider Regiment (Marine Raiders) consists of a headquarters company and three Marine Raider Battalions, the 1st, 2nd, and 3rd. The regiment provides tailored military combat-skills training and advisor support for identified foreign forces in order to enhance their tactical capabilities and to prepare the environment as directed by USSOCOM, and it can also form the nucleus of a Joint Special Operations Task Force. Marines and sailors of the MRR train, advise, and assist friendly host-nation forces, including naval and maritime military and paramilitary forces, to enable them to support their governments' internal security and stability, to counter subversion, and to reduce the risk of violence from internal and external threats. MRR deployments are coordinated by MARSOC, through USSOCOM, in accordance with engagement priorities for overseas contingency operations. The Marine Raider Support Group (MRSG) trains, equips, structures, and provides specially qualified Marine forces, including operational logistics, intelligence, military working dogs, firepower control teams, and communications support, in order to sustain worldwide special operations missions as directed by the Commander, U.S. Marine Forces Special Operations Command (COMMARFORSOC). The Marine Raider Training Center (MRTC) performs the screening, recruiting, training, assessment, and doctrinal development functions for MARSOC. It includes two subordinate Special Missions Training Branches (SMTBs), one on each coast.
Naval Special Warfare Command
The United States Naval Special Warfare Command (NAVSPECWARCOM, NAVSOC, or NSWC) was commissioned on April 16, 1987, at Naval Amphibious Base Coronado in San Diego as the naval component of the United States Special Operations Command. Naval Special Warfare Command provides vision, leadership, doctrinal guidance, resources, and oversight to ensure its component special operations forces are ready to meet the operational requirements of combatant commanders. Today, SEAL Teams and Special Boat Teams comprise the elite combat units of Naval Special Warfare. These teams are organized, trained, and equipped to conduct a variety of missions, including direct action, special reconnaissance, counter-terrorism, foreign internal defense, unconventional warfare, and support to psychological and civil affairs operations. Their highly trained operators are deployed worldwide in support of National Command Authority objectives, conducting operations with other conventional and special operations forces.
Units
United States Navy SEALs have distinguished themselves as an individually reliable, collectively disciplined, and highly skilled special operations force. The most important trait that distinguishes Navy SEALs from all other military forces is that SEALs are a maritime special operations force: they strike from and return to the sea. SEALs (SEa, Air, Land) take their name from the elements in and from which they operate. SEALs are experts in direct action and special reconnaissance missions. Their stealth and clandestine methods of operation allow them to conduct multiple missions against targets that larger forces cannot approach undetected.
Because of the dangers inherent in their missions, prospective SEALs go through what is considered by many military experts to be the toughest training regimen in the world. The Naval Special Warfare Development Group (DEVGRU) is often referred to as SEAL Team Six, the name of its predecessor, which was officially disbanded in 1987. SEAL Delivery Vehicle Teams are SEAL teams with an added underwater delivery capability; they use the SDV MK VIII and the Advanced SEAL Delivery System (ASDS), submersibles that provide NSW with an unprecedented capability combining clandestine underwater mobility with the combat swimmer. Special Warfare Combatant-craft Crewmen (SWCC) operate and maintain state-of-the-art vessels and high-tech equipment to conduct coastal patrol and interdiction and to support special operations missions. Focusing on the infiltration and exfiltration of SEALs and other SOF, SWCCs provide dedicated rapid mobility in shallow-water areas where larger ships cannot operate. They also bring a unique SOF capability: the Maritime Combatant Craft Aerial Delivery System, the ability to deliver combat craft via parachute drop. Like SEALs, SWCCs must be physically fit, highly motivated, combat-focused, and responsive in high-stress situations.
Air Force Special Operations Command
Air Force Special Operations Command (AFSOC) was established on May 22, 1990, with headquarters at Hurlburt Field, Florida. AFSOC is one of the ten Air Force major commands (MAJCOMs) and the Air Force component of United States Special Operations Command. It holds operational and administrative oversight of subordinate special operations wings and groups in the regular Air Force, Air Force Reserve Command, and the Air National Guard. AFSOC provides Air Force special operations forces for worldwide deployment and assignment to regional unified commands. The command's SOF are composed of highly trained, rapidly deployable airmen conducting global special operations missions ranging from the precision application of firepower via airstrikes or close air support, to infiltration, exfiltration, resupply, and refueling of SOF operational elements. AFSOC's unique capabilities include airborne radio and television broadcast for psychological operations, as well as aviation foreign internal defense instructors who provide other governments with military expertise for their internal development. The command's core missions include battlefield air operations; agile combat support; aviation foreign internal defense; information operations; precision aerospace fires; psychological operations; specialized air mobility; specialized refueling; and intelligence, surveillance, and reconnaissance.
Components
Combat Controllers (CCT) are ground combat forces specialized in a traditional pathfinder role, with a heavy emphasis on simultaneous air traffic control, fire support (via airstrikes and close air support), and command, control, and communications in covert or austere environments. Pararescuemen (PJ) are the only Department of Defense specialty specifically trained and equipped to conduct conventional and unconventional personnel recovery operations. A PJ's primary function is as a personnel recovery specialist with emergency trauma medical capabilities in humanitarian and combat environments. Special Reconnaissance (SR) operators conduct long-range interdiction, surveillance, and intelligence gathering.
A subset of their responsibilities is to assess and interpret weather and environmental intelligence from forward-deployed locations, working alongside special operations forces.
Organization
The 1st Special Operations Wing (1 SOW) is located at Hurlburt Field, Florida. Its mission focus is unconventional warfare: counter-terrorism, combat search and rescue, personnel recovery, psychological operations, aviation assistance to developing nations, "deep battlefield" resupply, interdiction, and close air support. The wing's core missions include aerospace surface interface, agile combat support, combat aviation advisory operations, information operations, personnel recovery and recovery operations, precision aerospace fires, psychological operations dissemination, specialized aerospace mobility, and specialized aerial refueling. Among its aircraft is the MC-130 Combat Talon II, a low-level, terrain-following special missions transport that can evade radar detection and slip into enemy territory at low altitude for infiltration and exfiltration missions, even in zero visibility, dropping off or recovering personnel or supplies with pinpoint accuracy. It also operates the AC-130 Spooky and Spectre gunships, which provide highly accurate airborne gunfire for close air support of conventional and special operations forces on the ground. The 24th Special Operations Wing (24 SOW) is located at Hurlburt Field, Florida. It is composed of the 720th Special Tactics Group, the 724th Special Tactics Group, the Special Tactics Training Squadron, and 16 recruiting locations across the United States. The Special Tactics Squadrons, under the 720th STG and 724th STG, are made up of Special Tactics Officers, Combat Controllers, Combat Rescue Officers, Pararescuemen, Special Operations Weather Officers and Airmen, Air Liaison Officers, Tactical Air Control Party operators, and a number of combat support airmen, who together represent 58 Air Force specialties. The 27th Special Operations Wing (27 SOW) is located at Cannon AFB, New Mexico. Its primary mission includes infiltration, exfiltration, and resupply of special operations forces; air refueling of special operations rotary-wing and tiltrotor aircraft; and precision fire support. These capabilities support a variety of special operations missions, including direct action, unconventional warfare, special reconnaissance, counter-terrorism, personnel recovery, psychological operations, and information operations. The 193d Special Operations Wing (193 SOW) is an Air National Guard (ANG) unit, operationally gained by AFSOC, located at Harrisburg International Airport/Air National Guard Station (the former Olmsted Air Force Base), Pennsylvania. Under Title 32 USC, the 193 SOW performs state missions for the Governor of Pennsylvania as part of the Pennsylvania Air National Guard. Under Title 10 USC, the 193 SOW is part of the Air Reserve Component (ARC) of the United States Air Force. Its primary wartime and contingency operations mission as an AFSOC-gained unit is psychological operations (PSYOP). The 193 SOW is unique in that it is the only unit in the U.S. Air Force to fly and maintain the Lockheed EC-130J Commando Solo aircraft. The 919th Special Operations Wing (919 SOW) is an Air Force Reserve Command (AFRC) unit, operationally gained by AFSOC, located at Eglin AFB Auxiliary Field #3/Duke Field, Florida. The 919 SOW flies and maintains the MC-130E Combat Talon I and MC-130P Combat Shadow special operations aircraft designed for covert operations.
The 352d Special Operations Wing (352 SOW) at RAF Mildenhall, United Kingdom, serves as the core of the United States European Command's standing Joint Special Operations Air Component headquarters. The wing provides support for three flying squadrons, one special tactics squadron, and one maintenance squadron in exercise, logistics, and war planning; aircrew training; communications; aerial delivery; medical; intelligence; security and force protection; weather; information technologies and transformation support; and current operations. The 353d Special Operations Group (353 SOG) is the focal point for all U.S. Air Force special operations activities throughout the United States Pacific Command (USPACOM) theater. Headquartered at Kadena AB, Okinawa, Japan, the group is prepared to conduct a variety of high-priority, low-visibility missions. Its mission is air support of joint and allied special operations forces in the Pacific. It maintains a worldwide mobility commitment, participates in Pacific theater exercises as directed, and supports humanitarian and relief operations. The United States Air Force Special Operations School (USAFSOS) at Hurlburt Field, Florida is a primary support unit of the Air Force Special Operations Command. The USAFSOS prepares special operations airmen to successfully plan, organize, and execute global special operations by providing indoctrination and education for AFSOC, other USSOCOM components, and joint, interagency, and coalition partners.
Order of Battle
List of commanders
USSOCOM medal
The United States Special Operations Command Medal was introduced in 1994 to recognize individuals for outstanding contributions to, and in support of, special operations. Some notable recipients include:
Lieutenant General Samuel V. Wilson
Colonel Ralph Puckett
SCPO Kristin Beck
Since it was created, there have been more than 50 recipients, only six of whom were not American:
General Benoît Puga (France)
† Kaptein Gunnar Sønsteby, 2008 (Norway)
† Generał broni Włodzimierz Potasiński, 2010 (Poland)
Generał dywizji Piotr Patalong, 2014 (Poland)
Generał brygady Jerzy Gut, 2014 (Poland)
Jungjang (Lieutenant General) Chun In-bum, 2016 (Republic of Korea)
(† posthumously)
External links
U.S. Special Operations Command
U.S. Army Special Operations Command
U.S. Marine Corps Forces Special Operations Command
U.S. Naval Special Warfare Command
Air Force Special Operations Command
Department of Defense
Joint Special Operations University
10400382
https://en.wikipedia.org/wiki/Gernot%20Heiser
Gernot Heiser
Gernot Heiser (born 1957) is a Scientia Professor and the John Lions Chair for operating systems at the University of New South Wales (UNSW). He is also leader of the Software Systems Research Group (SSRG) at NICTA. In 2006, he co-founded Open Kernel Labs (OK Labs, acquired in 2012 by General Dynamics) to commercialise his L4 microkernel technology.
Life
Heiser was born in 1957. He earned a BSc in physics at the University of Freiburg in Germany, an MSc at Brock University in Canada, and a PhD at ETH Zurich in Switzerland.
Research
Heiser's research focuses on microkernels, microkernel-based systems, and virtual machines, with an emphasis on performance and reliability. His group produced Mungi, a single-address-space operating system for clusters of 64-bit computers, and implementations of the L4 microkernel with very fast inter-process communication. His Gelato@UNSW team was a founding member of the Gelato Federation and focused on the performance and scalability of Linux on Itanium; it established theoretical and practical performance limits of message-passing inter-process communication (IPC) on Itanium. Since joining NICTA at its creation in 2002, his research has shifted away from high-end computing platforms and toward embedded systems, with the aim of improving security, safety, and reliability through the use of microkernel technology. This led to the development of a new microkernel called seL4 and its formal verification, claimed to be the first-ever complete proof of the functional correctness of a general-purpose OS kernel. His work on virtualization was motivated by the need to provide a complete OS environment on his microkernels. His Wombat project followed the approach taken with the L4Linux project at Dresden, but was a multi-architecture paravirtualized Linux running on x86, ARM, and MIPS hardware. The Wombat work later formed the basis for the OKL4 hypervisor of his company Open Kernel Labs. The desire to reduce the engineering effort of paravirtualization led to the development of the soft-layering approach to automated paravirtualization, which was demonstrated on x86 and Itanium hardware. His vNUMA work demonstrated a hypervisor that presents a distributed system as a shared-memory multiprocessor, as a possible model for many-core chips with large numbers of processor cores. Device drivers are another focus of his work, including the first demonstration of user-mode drivers with a performance overhead of less than 10%, an approach to driver development that eliminates most typical driver bugs by design, device drivers produced from device test benches, and a demonstration of the feasibility of generating device drivers automatically from formal specifications. Recent research also includes power management. In the past, he also worked on semiconductor device simulation, where he pioneered the use of multi-dimensional modeling to optimize silicon-based solar cells.
Operating system projects
seL4, a 3rd-generation microkernel
L4.verified, the formal verification of seL4
Dingo and Termite, frameworks for reliable device drivers
Koala, a framework for OS-level energy management
vNUMA, a hypervisor providing shared virtual memory on a cluster
Mungi and Iguana, single address space operating systems
Wombat, portable Linux on the L4 microkernel
Gelato@UNSW, performance and scalability of Linux on Itanium
L4/MIPS, a 64-bit L4 microkernel on the MIPS architecture
Teaching
Advanced Operating Systems at UNSW
Awards
Australian Academy of Technology and Engineering (ATSE) Fellow (2016).
Institute of Electrical and Electronics Engineers (IEEE) Fellow (2016), "for contributions to security and safety of operating systems".
Australian Computer Society (ACS) ICT Researcher of the Year (2015).
Association for Computing Machinery (ACM) Fellow (2014), "for contributions demonstrating that provably correct operating systems are feasible and suitable for real-world use".
Scientia Professor of the University of New South Wales
2010 Innovation Hero of The Warren Centre for Advanced Engineering at the University of Sydney
NSW Scientist of the Year 2009, category Engineering, Mathematics and Computer Sciences
Best Paper at the 22nd ACM SIGOPS Symposium on Operating Systems Principles, 2009
Best Paper at the 13th IEEE Asia-Pacific Computer Systems Architecture Conference, 2008
Best Student Paper at the 2005 USENIX Annual Technical Conference
External links
Gernot Heiser's blog
Bio at CSIRO with full publication list
65378782
https://en.wikipedia.org/wiki/Communication%20Troops%20of%20the%20Ministry%20of%20Defense%20of%20the%20Soviet%20Union
Communication Troops of the Ministry of Defense of the Soviet Union
The Communication Troops of the Ministry of Defense of the Soviet Union was the general name for the special troops responsible for deploying and operating communication systems in order to provide command and control of the troops and forces subordinate to the Ministry of Defense of the Soviet Union in all types of their activities. As a branch of the special troops, the Communication Troops were an integral part of all five branches of the Armed Forces of the Soviet Union (the Ground Forces, the Navy, the Air Force, the Air Defense Forces, and the Strategic Missile Forces). The general command of the Communication Troops of all five branches of the armed forces was exercised by the Chief of the Communication Troops of the Ministry of Defense of the Soviet Union. The communication troops that were part of the Internal Troops of the Ministry of Internal Affairs of the Soviet Union, the Border Troops, and the Government Communication Troops of the State Security Committee of the Soviet Union were not part of the Communication Troops of the Ministry of Defense of the Soviet Union.
History
Civil War
After the October Revolution, amid the outbreak of civil war and foreign military intervention, the creation of the first units of the Red Army began in order to protect the Soviet Republic. In the first half of 1918, measures were taken to create a system of governing bodies for the Red Army. On April 20, 1918, Order No. 294 of the People's Commissariat for Military and Naval Affairs was issued, which set the establishment of the rifle division. This establishment allocated a separate communications battalion with a strength of 977 personnel, and communications teams in the rifle regiments. The battalion commander also held the position of divisional communications commander, and the commander of a regimental communications team likewise served as the regiment's communications commander. A shortage of personnel, transport, and equipment prevented the implementation of these steps. In November 1918, new establishments were introduced for the communications battalion of the rifle division, the communications company of the rifle brigade, and the communications team of the rifle regiment. Under the new establishments, the division's communications battalion and the communications teams of the rifle units had fewer communications assets, less transport, and fewer personnel, which made it possible to put them into practice. In December 1918, communications units began to be created in military aviation and the cavalry. The difference from the organizational structure of the troops of the Russian Empire was the independence of the communications battalions and teams, which, as in the tsarist period, were not part of the engineering troops. In October 1918, the issue of centralized management of radio communications in the Red Army was resolved: the position of radiotelegraph inspector was established, subordinate to the headquarters of the Revolutionary Military Council of the Republic in operational terms and to the chief of the Main Military Engineering Directorate in technical terms. At the same time, the post of front radiotelegraph inspector was introduced on all fronts, and that of chief of the army radiotelegraph in the combined-arms armies. At the headquarters of the fronts, post and telegraph departments of the People's Commissariat of Post and Telegraph were created; they provided postal communication and communication via permanent lines.
The supply of the Red Army with communications equipment was assigned to the Main Military Engineering Directorate. On October 20, 1919, by Order No. 1736/362 of the Revolutionary Military Council, the Communications Directorate of the Red Army was created, headed by the Chief of Communications of the Red Army, along with communications directorates of the fronts and armies and communications departments in divisions and brigades. This formally unified the leadership of Red Army communications into a coherent system, and October 20, 1919 became the founding date of the communication troops of the country's armed forces as an independent branch of special troops. The Communications Directorate of the Red Army was responsible for organizing and maintaining communications between the Revolutionary Military Council of the Republic and the Field Headquarters of the Red Army on the one hand and the fronts and armies on the other, as well as for the formation of communications units, their staffing, training, and provision with equipment and other materiel. Artemy Lyubovich (formerly People's Commissar of Posts and Telegraphs) was appointed the first head of the Red Army's Communications Directorate; from September 1920 to April 1924 the directorate was headed by Innokenty Khalepsky (formerly chief of communications of the Caucasian Front), who did much for the formation and development of the communication troops. By December 1920, the communication troops consisted of 13 separate battalions and 46 communications battalions of divisions and brigades, a large number of companies and communications teams, warehouses, workshops, and other units and subunits. The personnel of the troops exceeded 100,000 people. During the hostilities of the Civil War, general provisions for organizing communications at all levels of the Red Army were worked out, the main responsibilities of communications officials were defined, and ways of organizing communications by various means were developed. The organizational structure of the line and node communications units was continuously improved. For the first time in the history of military communications, communications trains were used to control the troops of the Red Army. The work of the communication troops during the Civil War was highly praised in a special order of the Revolutionary Military Council of the Republic of February 17, 1921, which noted: "The heroic Red Army, which covered itself with unfading glory, owes much to the communication troops, who carried out great and responsible tasks during the long struggle against its enemies." At the end of hostilities, the communication troops were reduced to 32,600 people. Their armament consisted mainly of outdated and worn-out communication equipment of foreign manufacture. The wide variety and deterioration of this equipment caused great difficulties in organizing communications, and the question of improving military communications became pressing.
By order of the Revolutionary Military Council of June 6, 1920, a full-time Military-Technical Communications Council was established under the Chief of Communications of the Red Army. It was responsible for decisions on all the main issues of organizing and developing military communications, including the management of scientific research and the creation of new technical means, as well as the resolution of current pressing issues.
Interwar period
Following the persistent efforts of the Chief of Communications of the Red Army, Innokenty Khalepsky, the Revolutionary Military Council of the Republic on April 15, 1923 established the Research Institute of the Military-Technical Communications Council of the Workers' and Peasants' Red Army (now the 16th Central Research and Testing Institute). The communication troops thus acquired a scientific center which, on the basis of constant analysis of scientific and technical achievements in the country and abroad, began to identify specific ways of applying them to military communications and to provide their military-technical justification. From the first days of its formation to the present, the institute has been a reliable support for the leadership of the communication troops in forming and implementing technical policy for improving and developing military communications systems and equipment. On the basis of research by the institute and by the country's communications industry (including its own personnel), the first generation of military field radio stations, telephone and telegraph apparatus, switching devices, communication cables, and ground electronic reconnaissance equipment was developed in the pre-war period, and it was with this equipment that the Red Army entered the Great Patriotic War. In terms of technical level, these means basically met the requirements of the troops of the time, but there were not enough of them, and a significant amount of obsolete communications equipment remained in the Red Army. The problem of providing the troops with communications equipment became especially acute with the start of the mass expansion of the army and navy in the fall of 1939. By the end of the 1920s, the communication troops remained qualitatively at the level of the final stage of the Civil War. The subsequent industrialization of the country led to organizational and staff changes and an overall increase in the size of the Red Army, which was reflected in changes in the communication troops. In 1924, the First Congress of the chiefs of communications of military districts and formations and of commanders of communications units was held. The congress examined theoretical and practical issues of the development of military communications, and its recommendations were incorporated into the Field Manual of the Red Army of 1925, which outlined the main principles and methods of organizing communications by various means and the responsibilities of commanders and staffs for command and control of troops and for communications. In the early 1930s, the communication troops had in their composition (excluding corps and division battalions and communications squadrons):
9 separate communications regiments;
1 separate radio regiment;
12 separate radio battalions;
20 separate companies of rifle corps;
71 separate companies of rifle divisions;
4 communications squadrons of cavalry corps;
12 squadrons of cavalry divisions.
By June 1941, the communication troops comprised:
19 regiments (14 district and 5 army);
25 separate line communications battalions;
16 separate radio divisions (including special-purpose divisions);
4 separate companies.
By mid-1941, the provision of the troops with radio equipment was up to 35% in the General Staff–front link, 11% in the army–corps link, 62% in divisions, 77% in regiments, and 58% in battalions. Obsolete radio stations made up 75% of the sets in front-level radio networks, 24% in army networks, 89% in divisional networks, and 63% in regimental networks. By this time, the set of communications units of central and district subordination alone consisted of 19 separate communications regiments, 25 separate communications battalions, and other units and organizations. In the pre-war period, signalmen took part in hostilities on the Chinese Eastern Railway (1929), near Lake Khasan (1938), near the Khalkhin Gol River (1939), in the liberation of Western Belarus and Western Ukraine (1939), and in the Soviet–Finnish War (1939–1940). From April 1924 to June 1941, the communication troops were led successively by Nikolai Sinyavsky, Roman Longva, Alexey Aksyonov, Ivan Naydenov, and Nikolay Gapich.
Great Patriotic War
With the outbreak of hostilities in the Great Patriotic War, and the urgent need to provide communications at all levels of the Red Army's command and control, the number of communication troops increased sharply. For this reason, on August 5, 1941, the Communications Directorate of the Red Army was reorganized into the Main Communications Directorate of the Red Army. In 1941, during the Great Patriotic War, the post of Chief of the Communication Troops of the Workers' and Peasants' Red Army was created by order of the People's Commissariat of Defense. In the first and most difficult period of the war, major shortcomings became obvious in the communications preparedness and technical equipment of the border districts and in the preparedness of the communication troops of the Workers' and Peasants' Red Army. Because wire communications relied on the network of permanent state overhead lines, enemy aviation and saboteurs were able to disable them. Radio communications were neither organizationally nor materially prepared to ensure stable command and control of troops. In the conditions of retreat and the most difficult defensive battles, the formations and units of the Red Army were not fully provided with communications units and subunits, and the staffing and equipment of those units and subunits were extremely insufficient. These circumstances were among the reasons for the failures of the Workers' and Peasants' Red Army in the initial period of the war. At the same time, the scale of the unfolding battles demanded from the very beginning the use of all the country's capabilities in the interests of ensuring communication with the troops. In November 1942, special radio divisions engaged in electronic reconnaissance were detached from the Communication Troops of the Red Army and transferred to the subordination of the People's Commissariat of Internal Affairs of the Soviet Union. The experience of combat operations brought changes to the organization of the communication troops.
So, in the period from May to August 1943, separate communication divisions of the Reserve of the Supreme Command and special–purpose communication centers were created to provide communication between representatives of the Headquarters of the Supreme High Command with the General Staff and with the headquarters of the fronts. In order to centralize the leadership of communications in the country and the army, by the decision of the State Defense Committee of July 23, 1941, Colonel Ivan Peresypkin (since February 1944 – Marshal of the Communication Troops) was appointed Chief of Communications of the Red Army, who also retained the post of People's Commissar of Communications of the Soviet Union. As new front–line and army directorates were formed, the need for communication troops and technical means for them continuously increased. During the first year of the war, under the energetic leadership of Ivan Peresypkin, over 1,000 new communications units were formed, schools and courses were created for the accelerated training of various specialists to meet the needs of the front in them. Ivan Peresypkin managed to use all the resources and opportunities available in the country to establish mass production of communications equipment and supply them to the troops. As a result of all these efforts, it was possible to reverse the situation with providing communications for the active forces. In 1942, the first portable domestic ultrashort–wave radio station A–7 with frequency modulation for rifle and artillery regiments was developed, which received very high praise in the troops. A noticeable increase in the role of radio communications occurred already during the operations of the summer–autumn campaign of 1942. The experience of military operations has convincingly shown that radio, especially in an offensive, is becoming the main, and often the only, means of communication providing command and control of troops. In the course of the war, the equipment of troops with radio communications equipment increased sharply. In 1944, the industry supplied more than 64 thousand radio stations of all types to the troops. Further improvement of communications control bodies, organizational and staff structure of formations, units and subdivisions of communications, an increase in their number took place. New elements were introduced into the communications system of the General Staff – special–purpose communications centers, through which direct wire communications of the Supreme High Command Headquarters were provided with 2–4 fronts. Communication centers for special purposes were located 50–200 kilometers from the front line. Through them communication was also provided between adjacent fronts. Throughout the war, the proportion of signalmen in the total number of army personnel increased continuously. By the end of 1944, separate communications brigades were created, consisting of several separate battalions, and additional communications centers were deployed. Due to the increase in the number of active fronts and the increase in the distance between the General Staff and the headquarters of the fronts, the number of communication units of the Supreme High Command Reserve increased significantly, and communication brigades of the Supreme High Command Reserve were formed. By the end of the war, the Red Army had in its composition a large number of communication formations, the largest type of which was the communication regiment. 
In total, by May 1945 the Communication Troops of the Red Army had: 125 communications regiments (including 10 Air Defence and 20 Air Force communications regiments); 300 separate communications battalions (excluding corps and divisional battalions); about 500 separate communications companies. Six regiments were awarded Guards status. 294 signalmen became Heroes of the Soviet Union, and more than 100 became full holders of the Order of Glory; many thousands of military signalmen were awarded orders and medals. During the war, almost 600 communications units were decorated with orders, and a number of front–line and army communications units were awarded the title of Guards. In 1944, the rank of Marshal of the Communications Troops was introduced (three more generals were appointed marshals of the troops after the war).

Post–war period

In connection with the post–war mass demobilization and the reduction of the armed forces between 1945 and 1946, more than 300 communications units were disbanded (not counting those that were part of corps and divisions). In March 1946, the Main Directorate of Communications of the Red Army was reorganized into the Directorate of the Chief of Communications of the Ground Forces of the Armed Forces of the Soviet Union. Also in 1946, the special–purpose forces of the Intelligence Directorate of the Workers' and Peasants' Red Army that carried out radio intelligence were returned from the structure of the People's Commissariat of Internal Affairs – People's Commissariat of State Security to the subordination of the War Ministry. In April 1948, by a directive of the Minister of Defense of the Soviet Union, the Office of the Chief of Communication Troops of the Ground Forces was transformed into the Directorate of Communication Troops of the Soviet Army. In October 1958, the Directorate of the Communication Troops of the Soviet Army was transformed into the Directorate of the Chief of the Communication Troops of the Ministry of Defense of the Soviet Union. The main part of the formations and units of the Communication Troops supported the activities of the ground forces.

The generalization and analysis of the experience of the combat use of communication troops showed convincingly that success in operations and battles depends to a decisive extent on the quality of command and control, and command and control in turn on the state of technical equipment, the capabilities and the level of preparedness of the communication troops. In the first post–war years much attention in the communication troops of the Soviet Army was paid to developing and putting into practice new principles for organizing the communications of operational formations and combined–arms formations on the basis of the rich experience of the Great Patriotic War, as well as to developing and substantiating operational–tactical requirements for new communications means and complexes capable of providing command and control in the new conditions of warfare. At the end of 1944, Marshal Ivan Peresypkin set the task of beginning work on the first post–war weapons system for military communications. In the late 1940s and 1950s, the troops began to receive new communications equipment with qualitatively new tactical and technical characteristics. Short–wave vehicle–mounted radio stations were created for the radio networks of the General Staff, for front–line and for army (corps) radio networks, as well as for divisional networks, together with a tank radio station. 
Portable ultra–short–wave radio stations were created, which provided search–free and tuningless communication in the tactical control link. At the same time, technical means were created for a fundamentally new type of communication for the Soviet Army – radio relay communication (multichannel station R–400 and low–channel station R–401), as well as frequency multiplexing and channelization complexes, qualitatively new samples of telephone and telegraph equipment, switching devices, several types of field communication cables. Equipping troops with radio relay stations was a qualitatively new stage in the development of communication systems of operational formations and combined–arms formations, increased their reliability, survivability and noise immunity, and also improved a number of other indicators. The introduction of new technology into the troops required a revision of the organizational and technical structure of communication centers. Based on the use of new means of communication, standard complexes of automobile control rooms were created for the formation of mobile field communication centers for various control points. For the first time, mobile communications units of industrial production began to enter service with the troops. The time for the deployment of such communication centers was sharply reduced, the mobility of communication systems in general increased significantly. In the second half of the 50s, the rapid development of nuclear missile weapons began, and the qualitative improvement of other means of armed struggle began, which led to significant changes in the structure of the Armed Forces of the Soviet Union. These circumstances, in turn, necessitated the development of new methods of command and control of troops and weapons. The period of the 60s, in general, is characterized by the beginning of practical work on the creation of automated complexes for command and control of troops and weapons (anti–aircraft, artillery and missile forces) and design work in the field of automation of control of the armed forces. Increased requirements for communication systems and channels began to appear in terms of their stability, noise immunity, secrecy and timeliness in the transmission of information. The Communication Troops were successfully solving these complex new tasks. With the retirement of Marshal of the Communication Troops Ivan Peresypkin in 1957, Alexei Leonov began to lead the Communication Troops (since 1961, Marshal of the Communication Troops). Under his leadership, work continued to improve the structure of the troops and create new means of communication. The introduction of new short–wave and ultra–short–wave single–band radio stations of high and medium power has significantly increased the quality characteristics of radio communication channels in operational and operational–tactical levels of command and control. Radio relay communications were further developed. Means were created for a new type of communication – tropospheric communication, which made it possible to provide high–quality multichannel communication directly between control points at a distance of up to 150–250 kilometers from each other (without retransmission). In the 60s, the first practical work on the creation of satellite communication lines was launched. 
Complexes of unified multiplexing and channel–forming equipment common to cable, radio relay and tropospheric communication lines were created, along with new telephone, telegraph and facsimile equipment, data transmission equipment and equipment complexes for secure (classified) communications for various purposes. On the basis of this new generation of communications technology, a new generation of hardware vehicles for field communication centers was created, as well as several types of command and staff vehicles on automobile and armored chassis for commanders of motorized rifle (tank) regiments and battalions. Corresponding adjustments were also made to the organizational structure of the communication troops and to the system of training highly qualified command and engineering personnel.

The next stage in the development of the communication troops, from 1970, is associated with the activities of Andrei Belov (awarded the military rank of Marshal of the Communication Troops in 1973). At the beginning of the 1970s, on his initiative, a system of routine maintenance and controlled operation of communications equipment was developed and introduced into the troops. Vigorous measures were taken to solve the problem of managing the communication system itself and its elements. Industry at that time did not produce technical means for equipping communications control posts, so the 16th Central Scientific Research Testing Institute of the Ministry of Defense of the Soviet Union was instructed to promptly develop and manufacture non–standard complex equipment for communications control posts. Proceeding from the theory and practice of the communication troops, it was concluded that unified communication systems of operational formations and formations had to be created while preserving the communication subsystems of the combat arms, special troops and services whose combat activities had particular specifics (reconnaissance, air defense and aviation, missile troops and artillery, the rear and others). For this reason, and also in view of the increasing role of communication systems and complexes in the command and control of the Armed Forces of the Soviet Union, General Staff directive No. 314/3/0534 of May 26, 1977 incorporated the Office of the Chief of Communications of the Ministry of Defense of the Soviet Union into the General Staff as the Office of the Chief of Communications of the Armed Forces of the Soviet Union. At the same time, the post of the head of this office was renamed "Chief of Communications of the Armed Forces – Deputy Chief of the General Staff". At the end of the 1970s, comprehensive research was developed in scientific organizations of the Ministry of Defense and industry to substantiate conceptual issues of the creation and operation of a prospective automated communication system of the Armed Forces. Based on the results of these studies, the Central Committee of the Communist Party of the Soviet Union and the Council of Ministers of the Soviet Union in 1980 issued a special decree on the creation of a large cooperation of industrial organizations and research organizations of the Ministry of Defense in order to launch work on the creation of a United Automated Communication System of the Armed Forces and the technical equipment for it. 
At the same time, an automated front communications system and a unified satellite communications system of the Ministry of Defense (separate from the system of the Ministry of Communications, while retaining the common system for launching spacecraft and the command and measurement complexes) were created, together with promising technical means for them. To ensure the functioning of the automated control systems being developed for the Armed Forces, troops and weapons, special data transmission systems were created. The creation of automated control systems significantly increased the requirements for the technical characteristics of communication facilities and of the communication system as a whole, so constant attention was paid to creating new generations of basic general–purpose communication means and to modernizing some of the existing means. At the end of the 1980s, to ensure reliable radio communication at the tactical level, the automated R–163 complex of short–wave and ultra–short–wave radio stations (12 types) was created. At the end of the 1990s it was replaced by the more advanced R–168 complex of noise–immune tactical radio communications (17 types). New promising radio relay stations for multichannel and low–channel communications were created, including the first domestic microwave radio relay links, as well as new effective means of tropospheric communications.

Communication Troops at the last stage of the existence of the Soviet Union

Classification of troops

The communication troops were classified according to the following main features.

By the control system they provided with communications: communication troops of the General Staff (central subordination); communication troops of the main headquarters of the branches of the Armed Forces; communication troops of operational–strategic commands (fronts, groups of forces, districts); of operational commands (armies and corps); of formations of the branches and services of the Armed Forces; and communication units of formations and subunits of the arms and services of the Armed Forces.

By organizational composition: formations – brigades; separate units – regiments, battalions (field communication centers), companies (centers, platoons, squads and crews); institutions – research institutes and the like; educational establishments; repair factories, storage bases and warehouses.

By the functional purpose of communications formations, units and subunits: nodal; line; territorial; courier–postal service; communications security control; communications technical support and automated control systems. 
Tasks of the Communication Troops by their place in the control system

The main tasks of the Communication Troops of the Armed Forces of the Soviet Union were: operation of the existing communication systems; carrying out measures to maintain communication systems at the established degrees of combat readiness; creation, development and improvement of communication systems and ensuring their reliable operation; strengthening and building up communication systems during the transfer of the Armed Forces from peacetime to wartime; and deployment of the field component of the communications system during the operational deployment of troops.

The communication troops of central subordination allowed the leadership of the Armed Forces to respond in real time to changes in the military–political and operational–strategic situation in the world and to deliver decisions and orders on the combat use of formations and units reliably and on time. These troops included separate communications brigades and regiments, field and stationary communications centers, communications security control units, research institutions, educational institutions, repair plants, storage bases and warehouses.

The communication troops of the main headquarters of the branches of the Armed Forces provided command and control for the commanders and headquarters of the branches of the Armed Forces over groupings of troops (forces) in their daily activities, during periods of military danger and during the performance of combat missions. Their structure was similar to that of the communication troops of central subordination.

The communication troops of the operational–strategic commands (strategic commands of directions, fronts, groups of forces, districts, fleets) provided the commanders and headquarters of the operational–strategic commands with control of subordinate formations and units in their daily activities, during periods of military danger and during the performance of combat missions. At this level the troops included: a nodal communications brigade, a territorial communications brigade, a separate regiment (or battalion) of rear communications, a headquarters communications center, a center for automated control systems, a command post (communications and automated command and control system), a courier–postal communications center, communications security control centers (points), a communications equipment repair base, and a storage and repair base for military equipment.

Communications formations of the operational commands (armies and corps) ensured command and control of subordinate formations and units in daily activities, during a threatened period and during combat missions. They consisted of: a separate communications regiment, a communications node, a headquarters communications center, a courier–postal communications center, a communications warehouse, and a storage and repair base for military equipment.

Communications formations and units of the central, operational–strategic and operational levels of control were intended for deploying and maintaining the stationary and field communication centers of the command posts of the General Staff, the main headquarters of the branches of the Armed Forces and of operational formations, for deploying communication lines by various means, and for the mutual exchange of communication channels with the country's interconnected communications network. 
Staff structure of troops

By the final period of the existence of the Soviet Union, the communication troops of central subordination (the Supreme High Command) and the communication troops of the Ground Forces were the most numerous. Since the end of World War II the largest formations of the signal troops had been brigades, of which there were about 50. The mass creation of brigades began in the second half of the 1970s, when the existing regiments and separate battalions of central and district (group) subordination were enlarged and reorganized into brigades. Brigades of central subordination differed by task: communications brigades of the Supreme High Command; nodal brigades; line (territorial) brigades; rear communications brigades; reserve communications brigades; and training communications brigades. Each brigade (except the nodal ones) was a formation of 3 to 5 separate battalions of various types (radio, tropospheric communications, long–distance communications, radio relay, radio relay cable, underground cable, line, nodal, construction and operational), as well as field and stationary communications centers. Field and stationary communications centers were battalion–level communications formations consisting of communications subunits of the types listed above; the nodal brigades consisted of field communications centers.

At the tactical level of command and control (motorized rifle, tank and airborne divisions), as well as in the combat arms, special troops, technical support and rear services, the regular structure included separate communications battalions and companies, communications platoons (command platoons), courier and postal stations, technical support platoons, and workshops for the repair of communications equipment. The main commands of each of the four strategic directions (Western, Southwestern, Southern and Far Eastern) had 2 brigades and 2–3 separate communications battalions, and each of the border districts and the Group of Soviet Forces in Germany likewise had at its disposal 2–3 communications brigades, a regiment and 2–4 separate battalions, including a rear communications regiment or battalion. Each tank or combined–arms army had an army communications regiment and a radio relay–cable battalion, and each army corps had a separate battalion. The basis of the army regiment was 2 field communications centers and a communications company.

In the communication troops of the ground forces the main combat unit was the battalion, of various types, which formed both an integral part of brigades and regiments and part of combined–arms formations. These included the following types of battalions: as part of an army corps or of tank, motorized rifle or airborne divisions – a separate communications battalion; as part of the communications brigades – a separate (nodal) communications battalion, a separate line (line–cable) communications battalion, a separate long–distance communications battalion, a separate construction and operational communications battalion, and a separate heavy underground cable battalion; as part of a combined–arms (tank) army or army corps – a separate radio relay (or tropospheric communications) battalion and a separate radio relay and cable battalion. 
Communication Troops armament

During the Great Patriotic War the ground forces were supplied with the 12–RT, RBM, A–7, RSB–F, RAF–KV–3 and other radio stations, as well as many types of telegraph and telephone equipment, including the 2BDA–43 field telegraph devices made at that time. At the end of 1944, the signal troops began to receive the RAF–KV–4 radio station with the "Karbid" equipment, which allowed direct–printing telegraph devices to operate over radio links with protection against interference. Soviet industry mastered the production of ultra–short–wave radio stations, and the saturation of the troops with radio equipment constantly increased at the various levels of command and control. For example, at the start of the war a rifle division had only 22 radio stations, but by the end of the war their number had grown to 130.

From the late 1940s and into the 1950s, the communication troops began to receive more advanced means of communication. The following shortwave vehicle–mounted radio stations were created: for the radio networks of the General Staff – the R–100 and R–110; for front–line radio networks – the R–101 and R–102; for army and corps networks – the R–118 and R–103; for divisional networks – the R–104 (in portable and transportable versions); and for the tank troops – the R–112. To provide search–free and tuning–free communication at the tactical level, the troops were supplied with small ultra–shortwave radio stations: the R–105, R–106, R–108, R–109, R–114, R–116 and R–113 (tank). In the same period a fundamentally new form of communication for the Soviet Army, radio relay, was introduced (the multichannel R–400 and low–channel R–401 stations), along with frequency multiplexing and channelization complexes (P–310, P–304, P–311, P–312, P–313, P–314). Improved telephone and telegraph equipment, switching devices and several types of field communication cable were delivered. The first command–staff vehicles R–125 "Alphabet", radio stations R–118 and radio relay stations R–403 and R–405 appeared in the troops, installed on GAZ–69 and GAZ–63 vehicles and later on the UAZ–469 and GAZ–66.

In the postwar years emphasis was placed on the mobility of field communication centers. In the 1950s and 1960s the troops were supplied with complexes of mobile communication centers for command posts at different levels: Mobile Communication Center No. 1 – front command post, on 22 vehicles; No. 2 – front–line, on 6 vehicles; No. 3 – army, on 9 vehicles; No. 4 – corps, on 4 vehicles; No. 5 – divisional, on 1 vehicle. For the operational and operational–tactical levels of command and control, new shortwave and ultra–shortwave single–band radio stations of high and medium power were delivered: the R–135, R–136, R–137 and R–140. For the tactical level of control, the portable and transportable broadband ultra–shortwave radio stations R–107 and R–111, with automatic tuning to previously prepared frequencies, were developed. Radio relay communications also made progress: new stations such as the R–121, R–122 and R–408 made it possible to provide high–quality multichannel communication directly between control points up to 150–250 kilometers apart (without retransmission), including over hard–to–reach terrain. 
From the beginning of the 1970s the signal forces began a radical modernization and rearmament with more advanced models, associated with Colonel–General Andrei Belov's assumption of the post of Chief of the Communication Troops of the Ministry of Defense of the Soviet Union. The troops began to receive command–staff vehicles built on armored fighting vehicles (BMP–1KSh and BMD–1KSh), new models of command–staff vehicles on an automobile base (R–141, R–142, R–148), the R–146A mobile field communications center, the unified "Topaz" complex for multiplexing communication lines (P–300, P–301, P–302), and classified communication equipment (T–206–ZM). In 1972, non–standard communication equipment was developed and manufactured for the first airborne command posts of the division, army and front levels, which made it possible to control troops from aboard aircraft and helicopters. In the 1970s the equipment of the tropospheric communication units was updated: the old complexes spread over several vehicles (the R–408 on 3 ZIL–157 trucks with long trailers) were replaced by mobile, compact stations on a single vehicle (the R–410 and R–412). In 1972 the troops began to receive the R–440 "Kristall" complex of ground stations for satellite communications. The hardware rooms and stations of all types of field communication centers were improved, receiving a new automotive base and modified equipment: P–225 telephone exchanges; P–240 and P–241 complex hardware rooms; P–234, P–255 and P–257 long–distance communication equipment; P–242 and P–244 control rooms for secret communications equipment; and others. A large range of stations and hardware rooms, power plants and antenna devices of various purposes for the communication troops was installed on the chassis of off–road vehicles, including the GAZ–63, GAZ–69, GAZ–66, ZIL–157, ZIL–131, Ural–375, Ural–4320 and KamAZ–4320. For these chassis, standard box bodies were developed to house communication equipment (such as the KUNG–1M, KM–66 and KM–131). For the sake of unification, part of the communications equipment was installed on armored personnel carriers and infantry fighting vehicles. For example, the following armored vehicles became the basis for the following command–staff vehicles and radio stations: BTR–50 → BTR–50PU and BTR–50PUM; BMP–1 → BMP–1KSh; BMD–1 → BMD–1KSh; BRDM → BRDM–5; BTR–60 → R–137B, R–140BM, R–145BM, R–156BM, R–238BT, R–240BT, R–241BT, R–409BM, PU–12.

Chiefs of Communications Troops

List of chiefs of the Communications Troops:
Artemy Lyubovich – 1919–1920;
Corps Commander Innokenty Khalepsky – 1920–1924;
Army Commander of the 2nd Rank Nikolai Sinyavsky – 1924–1935;
Corps Commander Roman Longva – 1935–1937;
Corps Commander Alexey Aksyonov – May 21, 1937 – December 29, 1937;
Lieutenant General of the Communications Troops Ivan Naydenov – February 1938 – July 26, 1940;
Major General of the Communications Troops Nikolay Gapich – July 26, 1940 – July 22, 1941;
Marshal of the Communications Troops Ivan Peresypkin – 1941–1957;
Colonel General of the Communications Troops Ivan Bulychev – 1957–1958;
Marshal of the Communications Troops Alexei Leonov – 1958–1970;
Marshal of the Communications Troops Andrei Belov – 1970–1987;
Colonel General Konstantin Kobets – 1987–1990;
Colonel General Oleg Lisovsky – 1990–1991.

Personnel training

Officer training

The training of junior officers took place in the higher military command and engineering schools of communications. 
These included:
Kemerovo Higher Military Command School of Communications Named After Marshal of the Communications Troops Ivan Peresypkin;
Novocherkassk Higher Military Command Red Banner School of Communications Named After Marshal of the Soviet Union Vasily Sokolovsky;
Poltava Higher Military Command School of Communications Named After Marshal of the Soviet Union Kirill Moskalenko;
Ryazan Higher Military Command School of Communications Named After Marshal of the Soviet Union Matvey Zakharov;
Tomsk Higher Military Command Order of the Red Star School of Communications;
Ulyanovsk Higher Military Command Order of the Red Star School of Communications Named After Grigory Ordzhonikidze;
Kiev Higher Military Engineering Twice Red Banner School of Communications Named After Mikhail Kalinin;
Leningrad Higher Military Engineering School of Communications Named After the Leningrad Council.
Advanced training and further training of senior communications officers was carried out at the Military Order of Lenin and Red Banner Academy of Communications Named After Marshal of the Soviet Union Semyon Budyonny in Leningrad.

Training of junior specialists

In addition to brigades and battalions, the communications troops had subordinate to them training units (both of central and of district subordination), for example:
151st Communications Training Brigade (Military Unit 52922, Samarkand) – deployed on the basis of the 1617th Battalion;
208th School of Warrant Officers of the Communications Troops of the Ground Forces (Military Unit 83320, Barybino Settlement);
31st Separate Training Communications Regiment of the Group of Soviet Forces in Germany (Military Unit – Field Mail 73046, Werder);
52nd Separate Training Communications Regiment of the Turkestan Military District (Military Unit 96699, Ashgabat);
58th Separate Training Communications Regiment of the Military Academy of Communications (Military Unit 52052, Sertolovo Settlement);
158th Separate Training Communications Regiment of the Far Eastern Military District (Military Unit 52924, Khabarovsk);
162nd Separate Training Communications Regiment (Military Unit 22165, Murom) – deployed on the basis of the 1608th Battalion;
1609th Separate Training Communications Battalion of the Northern Group of Forces (Military Unit – Field Mail 79066, Legnica);
1610th Separate Training Communications Battalion of the Moscow Military District (Military Unit 75269, Murom);
1611th Separate Training Communications Battalion of the Leningrad Military District (Military Unit 52919, Chornaya Rechka Settlement);
1612th Separate Training Communications Battalion of the Baltic Military District (Military Unit 75270, Vilnius);
1613th Separate Training Communications Battalion of the Belarusian Military District (Military Unit 52920, Minsk);
1614th Separate Training Communications Battalion of the Carpathian Military District (Military Unit 75271, Zhitomir);
1615th Separate Training Communications Battalion of the Odessa Military District (Military Unit 52921, Odessa);
1616th Separate Training Communications Battalion of the Transcaucasian Military District (Military Unit 75272, Hoktemberyan);
1618th Separate Training Communications Battalion of the Kiev Military District (Military Unit 75273, Gostomel);
1619th Separate Training Communications Battalion of the North Caucasus Military District (Military Unit 52923, Rostov–on–Don);
1620th Separate Training Communications Battalion of the Trans–Baikal Military District (Military Unit 75274, Ulan–Ude);
1686th Separate Training Communications Battalion of the Ural Military District (Military Unit 07170, Sverdlovsk).

Junior specialists for combined–arms formations and units were trained in the separate communications training battalions of the training motorized rifle and tank divisions, which existed in each district.

References

Military of the Soviet Union
Military communications corps
Military units and formations established in 1919
56579889
https://en.wikipedia.org/wiki/Russian%20interference%20in%20the%202018%20United%20States%20elections
Russian interference in the 2018 United States elections
The United States Intelligence Community concluded in early 2018 that the Russian government was continuing the interference it had begun during the 2016 elections and was attempting to influence the 2018 United States mid-term elections by generating discord through social media. Primaries for party candidates began in some states in March and continued through September. The leaders of the intelligence agencies noted that Russia was spreading disinformation through fake social media accounts in order to divide American society and foster anti-Americanism.

Timeline

February

In February 2018, Director of National Intelligence Dan Coats stated in congressional testimony that "the United States is under attack" from Russian authorities. As of February 13, 2018, six US intelligence agencies had unanimously assessed that Russian hackers were scanning American electoral systems and using bot armies to promote partisan causes on social media. Secretary of State Rex Tillerson had previously warned that Russia was interfering in the 2018 midterm election. In testimony before the Senate Intelligence Committee on February 13, Coats noted that voting in some primary elections would begin as early as March 2018. He stated: "We need to inform the American public that this is real, that this is going to happen." At the same hearing, CIA Director Mike Pompeo told the committee that Russia had already been observed engaging in such tactics.

March

During a press conference at the White House on March 6, 2018, President Trump was questioned about possible interference in the upcoming midterm election, responding, "We won't allow that to happen. We're doing a very, very deep study, and we're coming out with, I think, some very strong suggestions on the '18 election," adding, "we'll counteract whatever they do."

April

The National Republican Congressional Committee (NRCC) discovered that the email accounts of four senior officials had been hacked and monitored for months by a probable foreign agent. The hack was kept secret, even from the GOP leadership, until the NRCC was contacted for a December story by Politico. On April 23, 2018, a possible Russian hack of the campaign website of Kendall Scudder, a Democratic State Senate candidate from Dallas, was reported. The hack attempted to redirect visitors to another site and included text in Russian. The incident was reported to the FBI.

May

On May 23, 2018, United States Secretary of State Mike Pompeo warned in a committee hearing that the US government was not protected from Russian interference in the 2018 midterm elections, saying, "No responsible government official would ever state that they have done enough to forestall any attack on the United States of America".

July

On July 10, 2018, Utah State Elections Director Justin Lee reported that Utah's voter registration database had recorded a huge uptick in hacking attempts, upwards of one billion per day (roughly 12,000 per second), after Mitt Romney announced his campaign for the Senate. Romney's view of Russia as the United States' "biggest geopolitical threat" was widely panned during the 2012 US presidential campaign, but marks him as one of the few outspoken Republican opponents of Russian threats. A July 15 Business Insider article revealed a new Russian intelligence-linked "news" site, USAReally, which follows in the footsteps of previous Russian IRA-backed troll farms and appears to be an attempt to "test the waters" ahead of the mid-terms. On July 17, commentator David A. 
Love said that the WalkAway social media campaign, originally created by New York resident Brandon Straka, had been co-opted by Russian bots in an attempt to discourage Democrats from voting in the mid-term elections, citing Hamilton 68. He cited the #WalkAway hashtag as an example of astroturfing. On July 20, Microsoft's VP for Customer Security and Trust revealed at the Aspen Security Forum in Aspen, Colorado, that Russian hackers had already specifically targeted three Congressional candidates running in the 2018 mid-term elections, using sophisticated spearphishing techniques that spoofed a Microsoft website. On July 26, Missouri's Democratic Senator Claire McCaskill revealed that Russian hackers had unsuccessfully attempted to break into her Senate email account, confirming a report in The Daily Beast. On July 31, Facebook announced it had detected and removed 32 pages and fake accounts being used for "coordinated inauthentic behavior," and that it was "working with the Federal Bureau of Investigation and other intelligence agencies".

August

On August 2, 2018, Director of National Intelligence Dan Coats, together with FBI Director Christopher A. Wray, announced at a White House press conference that Russia was actively interfering in the 2018 elections, saying, "It is real. It is ongoing." At the same time, NPR reported that Democratic Senator Jeanne Shaheen had reported to the FBI several attempts to compromise her campaign, including spearphishing attempts on her staff and a disturbing incident in which someone called her offices "impersonating a Latvian official, trying to set up a meeting to talk to me about Russian sanctions and about Ukraine." Her opposition to Russian aggression and her support of sanctions have placed her on an official Russian blacklist. On August 6, Democratic candidate Tabitha Isner, running for Alabama's 2nd congressional district, reported more than 1,300 unsuccessful attempts to break into her campaign website from Russian-sourced IP addresses, mostly occurring between July 17 and 18, prompting additional website security measures. On August 8, Florida Senator Bill Nelson told the Tampa Bay Times that Russian operatives had penetrated some of Florida's election systems ahead of the 2018 midterm elections. "They have already penetrated certain counties in the state and they now have free rein to move about," Nelson told the newspaper. He also stated that more detailed information was classified. The Russian hackers might be able to prevent some voters from casting votes by removing people from the voter rolls.

October

On October 19, the Department of Justice charged Russian accountant Elena Khusyaynova with attempting to interfere with the midterm elections. She had handled the money for the Internet Research Agency and related entities that had previously been charged with interfering in the 2016 elections.

November

On November 6, the 2018 US midterm elections took place.

December

On December 4, Politico reported that the email accounts of four senior officials at the National Republican Congressional Committee (NRCC) had been hacked and monitored for months by a probable foreign agent. The hack had been kept secret by the NRCC, even from the GOP leadership, until it was contacted by Politico for the story. On December 22, Director of National Intelligence Dan Coats reported that there was no evidence of vote tampering, but that influence operations had persisted: "The activity we did see was consistent with what we shared in the weeks leading up to the election. 
Russia, and other foreign countries, including China and Iran, conducted influence activities and messaging campaigns targeted at the United States to promote their strategic interests."

See also

Yevgeny Prigozhin
Russian espionage in the United States
Russian interference in the 2016 United States elections
Russian interference in the 2020 United States elections
Social media in the United States presidential election, 2016
Timeline of Russian interference in the 2016 United States elections
Timelines related to Donald Trump and Russian interference in United States elections

References

2018 controversies in the United States
2018 elections in the United States
Russia–United States relations
Foreign electoral intervention
Internet manipulation and propaganda
Trump administration controversies
Information operations and warfare
17689493
https://en.wikipedia.org/wiki/LRP%20ration
LRP ration
The Food Packet, Long Range Patrol or "LRP ration" (pronounced "lurp") was a U.S. Army freeze-dried, dehydrated field ration. It was developed in 1964 during the Vietnam War (1955–75) for use by Special Operations troops: small, heavily armed long-range reconnaissance teams that patrolled deep in enemy-held territory, where bulky canned MCI rations (formerly known as C-rations) proved too heavy for extended missions on foot.

Origins

Before the outbreak of World War II, Army commanders had recognized the inadequacy of heavy canned wet rations for infantry marching on long patrols, especially in extreme environments such as mountain or jungle terrain. To this end, the Jungle ration was developed and briefly issued during early World War II. The Jungle ration was a dry, lightweight, multi-component daily meal that could be stored in light waterproof bags, easily carried by a foot soldier, and would not spoil when exposed to heat and humidity for an extended period of time. Importantly, the Jungle ration was specifically designed to provide an increased amount of dietary energy despite its lighter weight, ideal for a soldier operating in difficult jungle terrain on foot while carrying all of his equipment on his back. By all accounts the Jungle ration was successful; however, cost concerns led to its replacement, first by substitution of increasingly heavier and less expensive canned components, followed by complete discontinuance in 1943. After the war, U.S. Army logisticians again re-standardized field rations, eliminating all lightweight rations in favor of heavy canned wet rations such as the C ration and the Meal, Combat, Individual ration.

The overuse of heavy canned wet rations reached a ludicrous extreme during the early years of U.S. involvement in the Vietnam War, when American soldiers on extended infantry patrol were forced to pack their canned rations in socks to minimize weight and noise. In response, the Food Packet, Individual, Combat (FPIC) was developed in the early 1960s, though not fielded until 1966. The FPIC was designed to be nutritious, lightweight, and easily portable, a descendant of the dehydrated rations used by NASA's astronauts. The ration was originally a response to complaints about the weight of the canned ration. Carrying a multi-day supply of heavy wet canned MCI or C-rations, "a special operations team could become virtually immobile due to the weight of needed supplies. Mobility and stealth are decreased when loads become too heavy, and the soldier is too often worn down by midday. Fatigue affects alertness, making him more vulnerable to detection and error." The ration's final weight was a compromise between the original packet's target weight and the base target weight of the larger experimental Meal, Ready-to-Eat, Individual (MRE-I), a forerunner of the later MRE.

The LRP ration differed from the standard wet-pack Meal, Combat, Individual (MCI) in that it was a freeze-dried, vacuum-packed individual meal packed in a waterproof grey-green canvas envelope lined with aluminum foil. Because of its tendency to spoil in a wet or humid environment (e.g., all of Southeast Asia), later ration packs came enclosed in an outer zip-lock clear-plastic bag to keep out moisture. This drawback made it less than desirable as a standard ration. 
Contents

LRP rations of the mid-1960s were packed in a large cardboard box of twenty-four meals in eight varieties: 1) Beef hash, 2) Beef and rice, 3) Beef stew, 4) Chicken and rice, 5) Chicken stew, 6) Chili con carne, 7) Pork and scalloped potatoes, and 8) Spaghetti with meat sauce. Each meal came in a tinfoil packet covered with olive-drab cloth, with a brown-foil accessory packet. The accessory packet contained instant coffee (2 packets), cream substitute (1 4-gram packet), sugar (1 6-gram packet), salt (1 packet), candy-coated gum (2 pieces), toilet paper, a book of cardboard matches, and a pack of 4 commercial-grade cigarettes (eliminated in 1975). There was also either a compressed fruitcake bar or a tropical chocolate bar. Although compact, the LRP daily ration was 'energy depleted'; that is, it supplied less energy per day than the MCI.

Criticisms

As a freeze-dried (dehydrated) ration, it required water to cook and reconstitute it. This was not a problem where water supplies were plentiful. However, water sources in Vietnam were usually teeming with parasites (e.g., blood flukes and tapeworms) and viruses, so the water had to be boiled or mixed with iodine tablets, the latter leaving an unwanted taste in the ration. Fresh water could also be collected from rainwater or, in an emergency, an LRP ration could be consumed 'dry', but the soldier doing so had to drink extra water to prevent dehydration. Some soldiers mixed its contents with canned C-rations to reduce monotony and to supply extra dietary energy, as the ration was insufficient for an active soldier; however, this defeated the purpose of deploying the LRP ration in the first place. Another complaint was the absence of the cigarettes found in C-rations.

Food Packet, Long Range Patrol

Due to these drawbacks, the original concept of wide adoption was shelved in favor of limited use by Special Operations units such as the Long Range Patrols, Special Forces, and Navy SEALs. It then acquired the new designation of Food Packet, Long Range Patrol (LRP), also known as "Lurp meals" or "long rats". Production was limited to five million units in 1967, rising to just nine million in 1968. It was considered a novelty by line soldiers, who usually "acquired" as many as they could before going on field operations. The LRP ration continued to be procured in small quantities until the mid-1980s, when it was replaced by a thermo-stabilized ration, the Meal, Ready-to-Eat (MRE). Quartermaster Command and Army Food Services viewed the new ration as a suitable replacement for issue in all combat environments. Despite the long history of operational failures previously encountered in standardizing on a single type of individual ration, the new MRE was duly adopted with the intention of replacing all the field rations and ration supplements then in use.

Revisions

While the MRE was lighter than the canned MCI and had more dietary energy than the LRP ration, it had certain problems. US Special Operations forces found it too bulky, and troops on maneuvers found that some menu items were unsuited to easy digestion in cold-weather/high-altitude or high-temperature/high-humidity environments. While the unofficial practice was to strip out items deemed "unnecessary", this also reduced the ration's dietary energy content. These problems forced the adoption of a specialized ration for light troops and commando units on extended field operations. 
In 1994, a new version of the LRP ration called the LRP-I (Food Packet, Long-Range Patrol – Improved) was created. It was a ration that came in a brown plastic retort pouch, allowing the user to reconstitute and cook the ration directly in the pouch. This was an improvement over the earlier LRP packet, which had to be boiled or soaked in a canteen cup or other cookware. In 2001, the LRP-I was merged with the Meal, Cold-Weather (MCW) ration to create the consolidated MCW/LRP ration. As in years past, this was done in order to further standardize supply and save costs, as both were compact, high-energy meals designed for use by active soldiers in the field. The meal weighs 1 pound (454 g) and comes in 12 different entrees. The meals differ only in the accessory packs: one is geared for use by light infantry and commando units operating in temperate or hot climates and comes in brown or tan packaging; the other is geared more for use in cold weather or at high elevations and comes in white packaging.

See also

C ration
K-ration
Jungle ration
Long-range reconnaissance patrol
Meal, Combat, Individual ration (MCI)
Meal, Ready-to-Eat (MRE)
Mountain ration

References

Military food of the United States
Military equipment of the Vietnam War
37652802
https://en.wikipedia.org/wiki/Source%20Filmmaker
Source Filmmaker
Source Filmmaker (often abbreviated as SFM) is a 3D computer graphics software toolset used for creating animated films, utilizing the Source game engine. The tool, created by Valve, was used to create over 50 animated shorts for its Source games, including Team Fortress 2, the Left 4 Dead series, and Half-Life 2. On June 27, 2012, Valve released a free open beta version of the SFM to the gaming community via its Steam service.

Overview

The Source Filmmaker is a tool for animating, editing and rendering 3D animated videos using assets from games which use the Source platform, including sounds, models and backdrops. The tools also allow the creation of still images, art and posters. SFM gives the user a "work camera" that enables them to preview their work without altering the scene cameras. It also uses three main user interfaces for making films:
The Clip Editor is used for recording, editing and arranging shots, which can contain recorded gameplay and user-placed assets. The Clip Editor also allows the user to place and arrange sound files and video filters.
The Motion Editor is used for motion adjustments over time, such as blending two animations together. Motion presets (e.g. jittering, smoothing) can also be applied to selected motion paths.
The Graph Editor is used for editing motion by creating keyframes, which can be used for pose-to-pose animation.
SFM allows users to record and edit motion from gameplay or from scratch, as well as to record a character many times over in the same scene, creating the illusion of multiple entities. SFM supports a wide range of cinematographic effects and techniques such as motion blur, Tyndall effects, dynamic lighting, and depth of field. It also allows manual animation of bones and facial features, letting the user create movements that don't occur in-game (in games, nearly all character animation sequences are stored as a set of predefined movements, and the number of different animation sequences is limited).

Production and updates

Pre-release

SFM was developed internally at Valve from as early as 2005, forked from the Source engine's in-game demo playback tool and used to make Day of Defeat: Source trailers with experimental effects that could not be achieved in real time. The tool's full potential was finally realized with the release of The Orange Box, particularly with the Meet the Team featurettes for Team Fortress 2. This version of SFM, which ran using Source's in-game tools framework, was inadvertently leaked during the public beta of TF2 in September 2007. By 2010, the entire interface had been re-implemented using Qt 4 and given its own engine branch for further development. Before Source Filmmaker was officially released to the public, Team Fortress 2 carried a simplified version of the tool called the Replay Editor; it is limited to capturing the actual events occurring over the course of a player's life, with no ability to modify actions, repeat segments, or apply special effects beyond those already used in-game. However, arbitrary camera angles are possible, such as tracking the actions of other players at the time. Replay incorporates the ability to upload completed videos to YouTube.

Beta versions

On June 27, 2012, the same day as the final Meet the Team video, "Meet the Pyro", was released, Source Filmmaker became available on a limited basis through the Steam network. It has been in open beta for Windows. 
On April 1, 2013, Valve implemented support for the Steam Workshop, which allows users to upload their own custom-made assets onto the Steam community; these assets range from video game models and sounds to raw animation project files. A port of Source Filmmaker to the Source 2 engine was released on May 15, 2020, alongside development tools for Half-Life: Alyx.

See also

Saxxy Awards
Machinima

References

External links

Steam Store page
Source Filmmaker in Valve Developer Community

2012 software
3D animation software
3D graphics software
3D graphics software that uses Qt
C++ software
Machinima
Proprietary software that uses Qt
Python (programming language) software
Source (game engine)
Video game development software
Windows-only freeware
52729391
https://en.wikipedia.org/wiki/YIIK%3A%20A%20Postmodern%20RPG
YIIK: A Postmodern RPG
YIIK: A Postmodern RPG is an indie role-playing video game by American developer Ackk Studios for Microsoft Windows, macOS, PlayStation 4, and Nintendo Switch. The game was released on January 17, 2019. It has received below-average reviews from critics.

Gameplay

The game is a colorful 3D Japanese-style RPG. It is set in the 1990s and is based around a mystery in a small town. There are eight characters who are message board friends. They work together to investigate the mystery around a viral video star called Semi Pak, who goes missing in a supernatural event. The player controls the characters in turn-based battles in which normal everyday objects are used as weapons. The combat consists of turn-based moves with timing-based actions that can increase the damage of an attack. There are six dungeons to explore, which include battles, puzzles to be solved and traps to be avoided. There are approximately twenty-five hours of gameplay.

Plot

On April 4, 1999, Alex Eggleston (Chris Niosi) returns to his hometown of Frankton, New Jersey after receiving his B.L.A. While out on an errand, a cat steals Alex's shopping list and leads him into a surreal abandoned factory, where he meets and befriends Semi "Sammy" Pak (Kelley Nicole Dugan). In the factory's elevator, Sammy is suddenly kidnapped by two otherworldly beings and vanishes. The next day, he returns to the factory with his neighbor Michael K. (Clifford Chapin), and they obtain photographic evidence of what Alex describes as "a being made of stars". After uploading the photos online, they are led to Vella Wilde (Melanie Ehrlich), an employee of the town's amusement arcade. She identifies the beings as "Soul Survivors", manifestations of souls that have left their realities and search for a physical form in another. To aid Alex and Michael in engaging with Soul Survivors, Vella gives them a phone number that allows them to access a metaphysical space known as the Mind Dungeon, where Vella had developed an ability to manipulate and weaponize sound using her keytar.

Alex receives a message from someone who claims that his sister vanished in circumstances similar to Sammy's and who beckons the group to investigate further. Alex, Vella, and Michael meet Rory Mancer (Andrew Fayette), who leads them into the town's sewer system, where he believes the soul of his sister Carrie resides. Instead, they encounter the Soul Survivor of an alternate version of Rory. He confesses that Carrie committed suicide, and he tells of glimpsing the Soul Survivor in the "Soul Space" – the space between realities – during a despair-induced out-of-body experience. Following a confrontation with the Soul Survivor and a bizarre golden alpaca, a resentful Alex insensitively calls out Rory for his deceit, pushing him away from the group.

Alex has a recurring dream of a motionless woman made of plastic. A Soul Survivor materializes in Alex's house and leads him to a radio tower, where he finds an empty jacket for the record Mystical Ultima LP Legend. Suspecting that the entity wants the record to be broadcast, Alex searches for a copy of it. During this time, he may or may not apologize to Rory, depending on the player's choice, and he befriends record store chain owner Claudio Unkrich (Anthony Sardinha) and his sister Chondra (Michaela Laws), who feel that the circumstances surrounding Sammy overlap with those of their missing young brother Aaron. 
After learning that Vella is the record's artist and retrieving it from her Mind Dungeon, the group broadcasts the record from the radio tower. The plastic woman from Alex's dream, the Essentia 2000 (also Ehrlich), awakens in an isolated van. Alex and Vella psychically witness the Essentia 2000 eliminate a pair of Soul Survivors, and they resolve to find her. At this point, Alex can optionally allow Rory to privately discuss his ongoing depression. When the group locates Essentia's van, she brings Alex into her Mind Dungeon, where she claims to be a parallel version of both Sammy and Vella. Sammy, who had made the decision to leave her physical body and enter the Soul Space, was taken by two thirds of her soul that had already made the transition. Prior to Sammy's disappearance, Essentia's physical body was captured and held captive by Soul Survivors, and her soul entered Alex's house when Sammy was taken. A tour of Essentia's parallel lives culminates in the revelation that Alex is the destroyer of other realities, and she foretells the end of his own upon the New Year. She suggests that Alex and his friends escape to the Soul Space to help prevent the end of other realities, but Alex insists on attempting to save his own reality despite the futility. Re-emerging from Essentia's Mind Dungeon days later, Alex gathers his friends to prepare them for their reality's impending end. During Alex's time away, Michael had accessed the Soul Space and experienced parallel lives, achieving an enlightened state named Proto-Michael. The group decides to train themselves in their Mind Dungeons during the course of a month. Depending on how Alex had treated Rory throughout the game, Rory may or may not commit suicide in a fit of misanthropy within this time. On New Year's Eve, reality begins to collapse, and Alex finds his friends supplanted by crude replications who have no recollection of their mission. The group goes to Times Square to watch the ball drop. When a meteorite-like form of Alex, Comet Alex, approaches the Square, Proto-Michael re-emerges, jogging the group's memories. Although they fight valiantly against Comet Alex, they ultimately fail, and Alex's reality is destroyed. Alex is left alone to drift in the Soul Space until he comes across a floating rock inhabited by other versions of Alex, who had abandoned their realities instead of resisting. On the other side of the rock, Alex encounters versions of Proto-Alex, the pilot of Comet Alex, planning on destroying another reality. Endings Alex can join Proto-Alex, in which case the game's credits rush by and close with a passive-aggressive message. If Alex does not join Proto-Alex, he instead follows Comet Alex, watching it destroy many other realities. Drifting away, Alex eventually finds a planet not yet destroyed by Comet Alex and taking place in the present day. He decides to recruit the player - another version of Alex - and versions of his friends from that reality in a bid to defeat Proto-Alex once and for all. Proto-Alex, who is mechanically conjoined with Essentia, reveals that Essentia is not a version of Sammy and Vella, but another version of Alex. Essentia had been manipulating Alex in the hopes of eliminating Proto-Alex and taking control of their collective soul, using Sammy as bait. Alex again fails, but is encouraged to return to life by Roy, the protagonist of Ackk Studios' first video game Two Brothers. Alex and the player simultaneously deactivate both Essentia and Proto-Alex, and all Alexes are absorbed into the player. 
If on New Year's Eve Alex reads an online message from Sammy confirming Essentia's duplicity, he can instead go to a news studio that Sammy was to intern at prior to her disappearance and perform her job. At the studio's elevator, Alex finally reunites with Sammy, who addresses him as the player and urges him to abandon this reality and explore the Soul Space with her. Development The game is built on the Unity game engine. It began development in 2013, with brothers Andrew and Brian Allanson creating a fully-featured prototype in 52 days for showcase at PAX East 2013. After seeing the game in action at the event, publisher Ysbryd Games entered an agreement to publish YIIK: A Postmodern RPG. Around this time, the Allansons' mother was diagnosed with cancer, which would later result in her death. This caused the team to rework parts of the game's plot, making an effort to "[have] a more 'humanistic' perspective" and "sand down some of the game’s more cynical edges." By the end of this period, nearly a quarter of the game had been remade, including a slight change of the game's ending. A pre-release demo was released in June 2016 on Microsoft Windows and MacOS. In 2020, Ackk Studios announced multiple free updates for the game that would improve the game's combat mechanics while adding additional quality of life features. New story content would also be added through these updates under the titles of "Deviation Perspectives". The first update was released for PC in January 2021 and included the new combat system as well as a reimagined Golden Alpaca boss battle as the first Deviation Perspective. Reception YIIK: A Postmodern RPG received "mixed or average reviews" on all versions according to review aggregator Metacritic. Reviewers had mixed reactions to the game's combat; while the presentation and variety of the attacks was commended, the steep learning curve for timing the attacks, clunky input response and lack of power balance were said to result in slow-paced battles. Derek Heemsbergen of RPGFan found the controls to be universally "slippery". Jason Faulkner of GameRevolution appreciated the absence of random encounters, which allows the player to choose when to engage in combat. Reactions to the leveling system were generally lukewarm; reviewers summarized the system as initially novel, but ultimately tedious and obtuse. Neal Ronaghan of Nintendo World Report found the puzzles to be ingenious and enjoyable. Eric Bailey of Nintendo Life, while determining the puzzles to never be random or unfair, experienced occasional frustration from the "real lateral-thinking" challenges. Cian Maher of GameSpot acknowledged that the early puzzles were enjoyable and fit the game's style, but warned that later puzzles had more arbitrary solutions and were susceptible to bugs. Ronaghan criticized the frequency of the load times and their detriment to the game's pacing. Kevin Mersereau of Destructoid was also irritated by the "archaic and sluggish" save screen. Maher and Kyle McClair of Hardcore Gamer criticized the finale for its monotonous grinding. The visuals were widely praised as bright, colorful and reminiscent of fifth-generation video games. Faulkner compared the environments to Mega Man Legends and the characters' combat animations to Final Fantasy VII. However, he considered the text font to be unpleasant. The soundtrack was also commended as catchy and eclectic, with Hiroki Kikuta and Toby Fox's contributions receiving particular notice. 
Zach Welhouse of RPGamer found the voice acting to be honest and relatable, which he said balances out the story's surreal moments, and considered Fayette, Sardinha and Ehrlich to be the standout performances. Heemsbergen also appreciated Ehrlich's performance, but faulted the voice direction for occasionally fumbled lines and "tonally dissonant" conversations. He elaborated that "Alex's lines are generally delivered well, but when he retreats into Murakami-esque introspection, he has a tendency to come across awkwardly due to the more poetic timbre of his narration". Reviewers were polarized by the game's writing. Mersereau, Faulkner and Heemsbergen lambasted the main character Alex as an unlikable hipster stereotype. While they felt this characterization made narrative sense, they nevertheless found the gameplay experience unpleasant as a result. Faulkner additionally dismissed the rest of the cast for their "hip" characteristics, but found plentiful dry humor in interpreting the characterizations as deliberate. Bailey, however, considered Alex to be relatable in his loneliness and solace in subcultures. LeClair praised the plot for its elaborate mythos, thematic abundance and humor, and he enjoyed the game's cast, singling out Claudio as a personal favorite for his enthusiasm. Maher was intrigued by the characters, but found difficulty in caring for them due to their lack of development. He also deemed the game's "post-modern" elements to be pretentious and inadequately built upon. Bailey and Ronaghan faulted the dialogue for its excessive exposition and digressions. Faulkner, LeClair and Ronaghan compared the game's presentation to EarthBound, with Faulkner remarking that it "is a lot like if the Earthbound kids had never gone and saved the world and instead grew up to be weird hipster adults", and Ronaghan interpreting the game's setting as a nihilistic and cynical counterpart to EarthBound's idealistic and upbeat depiction of Americana. The game was criticized for referencing the death of Elisa Lam. References 2019 video games Indie video games MacOS games Nintendo Switch games Role-playing video games Video games developed in the United States Video games involved in plagiarism controversies Video games scored by Hiroki Kikuta Video games scored by Toby Fox Video games set in 1999 Video games set in New Jersey Windows games
29012674
https://en.wikipedia.org/wiki/Sakhr%20Software%20Company
Sakhr Software Company
Sakhr Software Company () is an Arabic language technology company based in Kuwait. It deals with products for the Middle East in e-governance, education, wireless, and security. The Company currently has 200 employees worldwide. Its research and engineering activities are in Silicon Valley and Egypt, with sales offices in the U.S. (Washington, DC and California), Kuwait, UAE (Dubai and Abu Dhabi), Oman, Egypt and Saudi Arabia. History Sakhr was founded in 1982 by Mohammed AlSharekh after branching out of his computer hardware company Alamiah Electronics by producing Arabic versions of the MSX computers. After nearly three decades in the industry and more than US$100 million investments, it now leads the field in Arabic language machine translation, OCR, speech recognition, speech synthesis, search, and localization. In 1990, following the first events of the Gulf War, Sakhr relocated to Heliopolis, Cairo. After the relocation, the company changed its approach by terminating all computer manufacturing projects to focus exclusively on developing software products. Sakhr provided Arabic language localization services to the Saudi's Ministry of Education, Egypt's Ministry of Commerce, Oman's Ministry of Education, and the Dubai Chamber of Commerce and Industry. Dial Directions In 2009, Sakhr acquired Dial Directions, Inc., a U.S. Silicon Valley software company providing language applications for mobile cloud-computing environments, including wireless carriers, telematics, and “smartphones” such as the Apple iPhone to enhance its market position in the emerging mobile application & cloud computing market. Awards Sakhr software has won the following International awards: eContent Award – Arabic Language Buddy for smart phones (2010) Oman Portal – Arab Web Awards (2010) World Summit Award – (Screen reader for the blind) (2007) World Summit Award – Best e-content (Web Content Translation Engine – Tarjim) (2005) Arabian Business E-Achievement Award E-Visionary of the Year (2002) Internet Shopper (T4S International Group): Best Arabic Language Supported Site (GITEX Cairo 2000) Damtex Middle East Group: Best Arabic Application Development Company (1999) See also Sakhr Computers Arabic machine translation References Notes Sakhr Software – Arabic Language Technology #1 in Arabic Translation by U.S. Government metrics Arabic language app walks the talk – Government Computer News Etisalat inks deal to bring internet to the blind Getting smart on wireless in the classroom The Latest App: Smartphone Interpreters Speech Recognition iPhone App Translates Arabic On the Fly Merger Could Bring Major Tool To Military: Arabic-English Translation Via Cell Phone External links Arabic Machine Translation 1982 establishments in Kuwait Companies established in 1982 Software companies of Kuwait Kuwaiti brands MSX
21409728
https://en.wikipedia.org/wiki/SAP%20Business%20ByDesign
SAP Business ByDesign
SAP Business ByDesign (ByD) is cloud enterprise resource planning software (Cloud ERP) that is sold and operated as software as a service by SAP SE. It is designed for small and medium-sized enterprises. The software is designed to provide business processes across application areas from financials to human resources with embedded business analytics, mobility, e-learning, and support. SAP Business ByDesign is built on the principles of a service oriented architecture (SOA). Integration between business capabilities is accomplished via messages. The underlying technology stack is a multi-tenancy enabled SAP NetWeaver stack, leveraging SAP's in-memory HANA database. SAP Business ByDesign is used by almost 10,000 companies in more than 140 countries and supports 41 languages (13 standard and 28 partner-translated, including simplified Chinese, Japanese, Korean, Polish, Hebrew). It is localized for 65 countries (standard localizations, pre-localizations and partner localizations). In addition, customers and partners can create custom country and language versions using the Localization and Language Toolkit provided by SAP. Examples of the 72 localizations by customers or partners are Taiwan, Malaysia, Vietnam, Chile and Peru. Business areas and scenarios SAP Business ByDesign processes are grouped by application areas interlinked by so-called business scenarios. Scenarios provide business processes which span across companies, partners, departments and their employees. The main application areas and core business scenarios are: Financial management (FIN): Cash and Liquidity Management, Financial Closing, Fixed Asset Management Customer relationship management (CRM): Marketing to Opportunity, Field Service and Repair, Order to cash (projects, standard service, materials) Project management and professional service automation (PSA / PRO): Project Management Supplier relationship management and procurement (SRM): Procure to pay (stock and non stock) Supply chain management (SCM): Demand Planning, Strategic sourcing, Make to stock, Physical Inventory Management, Human resources management (HRM): Expense Reimbursement, Resource Management, Time and Labor Management Business analytics (BA or BI) Executive support and compliance management. History SAP announced SAP Business ByDesign on 19 September 2007 during an event in New York. It was previously known under the code name "A1S". Since its initial, generally available release in 2007 (so-called feature pack 1.2) it has been enhanced in quarterly releases. See also Cloud computing Comparison of accounting software Customer relationship management List of ERP software packages List of embedded CRM List of SAP products Project management Supply chain management References External links ERP software Customer relationship management software Accounting software Cloud applications As a service SAP SE
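The message-based integration described above can be illustrated with a small, generic sketch in Python. This is not SAP code and does not use the ByDesign API; the message type and service names (SalesOrderCreated, invoicing_service) are hypothetical and serve only to show how two business capabilities can stay decoupled by exchanging messages rather than calling each other directly.

```python
# Conceptual illustration only: not SAP code or the ByDesign API.
# Two business capabilities exchange a typed message via a bus instead of
# calling each other directly, keeping them loosely coupled.
from dataclasses import dataclass
from collections import defaultdict
from typing import Callable, DefaultDict, List

@dataclass(frozen=True)
class SalesOrderCreated:          # hypothetical message type
    order_id: str
    customer: str
    net_amount: float

class MessageBus:
    """Minimal in-process publish/subscribe bus."""
    def __init__(self) -> None:
        self._handlers: DefaultDict[type, List[Callable]] = defaultdict(list)

    def subscribe(self, message_type: type, handler: Callable) -> None:
        self._handlers[message_type].append(handler)

    def publish(self, message) -> None:
        for handler in self._handlers[type(message)]:
            handler(message)

def invoicing_service(msg: SalesOrderCreated) -> None:
    # A downstream capability reacts to the message; it never imports or
    # calls the order-entry code directly.
    print(f"Creating invoice for order {msg.order_id}: {msg.net_amount:.2f}")

bus = MessageBus()
bus.subscribe(SalesOrderCreated, invoicing_service)
bus.publish(SalesOrderCreated("SO-1001", "ACME Corp", 250.0))
```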
39241350
https://en.wikipedia.org/wiki/Resilio%20Sync
Resilio Sync
Resilio Sync (formerly BitTorrent Sync) by Resilio, Inc. is a proprietary peer-to-peer file synchronization tool available for Windows, Mac, Linux, Android, iOS, Windows Phone, Amazon Kindle Fire and BSD. It can sync files between devices on a local network, or between remote devices over the Internet via a modified version of the BitTorrent protocol. Although not touted by the developers as an intended direct replacement nor competitor to cloud-based file synchronization services, it has attained much of its publicity in this potential role. This is mainly due to the ability of Resilio Sync to address many of the concerns in existing services relating to file storage limits, privacy, cost, and performance. History On 24 January 2013, BitTorrent, Inc. announced a call for pre-alpha testers to help test a new "distributed syncing product to help manage personal files between multiple computers". Several private pre-alpha builds of "SyncApp" were subsequently made available to a limited group of alpha testers between January 2013 and April 2013. In mid-April 2013, the name "SyncApp" was dropped in favor of "BitTorrent Sync". On 23 April 2013, the previously private "alpha" was opened up to general users. As of 6 May 2013, more than a petabyte of anonymous data had been synced between users, with over 70 terabytes synced daily. As of 16 July 2013, more than eight petabytes of data had been synced using the software. On 17 July 2013, BitTorrent Sync migrated from "alpha" to "beta", released an Android app, and introduced versioning. On 27 August 2013, BitTorrent Sync for iOS was announced. On 5 November 2013, BitTorrent announced the release of BitTorrent Sync Beta API and version 1.2 of the client, along with the milestone, having over 1 million monthly active users synced over 30 petabytes of data to date. As of 26 August 2014, there have been more than 10 million user installs and more than 80 petabytes of data synced between users. On 3 March 2015, the product finally exited beta as a commercial product, with the inclusion of a paid Pro version. On 9 September 2015, with the release of Sync 2.2, in the free version, the 10 folder limit that had been introduced in 2.0 was removed. On 21 January 2016, the release of Sync 2.3 introduced the Encrypted Folder, as well as the ability to run as a Windows Service, to use SD cards on Android and, for paid users, Selective Sync support in Linux. On 1 June 2016, product and team were spun out of BitTorrent Inc. as an independent company, Resilio Inc. which will continue development of the product under the name Resilio Sync. Former Bittorrent CEO Eric Klinker became the head of the new company. Technology Resilio Sync synchronizes files using BitTorrent. The user's data is stored on the user's local device instead of in a "cloud", therefore requiring at least two user devices, or "nodes," to be online to synchronize files between them. Resilio Sync encrypts data with an Advanced Encryption Standard AES-128 key in counter mode which may either be randomly generated or set by the user. This key is derived from a "secret" which can be shared to other users to share data. Data is sent between devices directly unless the target device is unreachable (e.g. behind a firewall), in which case the data will first be relayed via an intermediary node. Many devices can be connected simultaneously and files shared between them in a mesh networking topology. 
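The encryption scheme described above, an AES-128 key used in counter mode and derived from a shareable secret, can be sketched as follows using the Python cryptography library. This is a generic illustration rather than Resilio's actual key-derivation or wire format; in particular, deriving the key by hashing the secret with SHA-256 and truncating to 128 bits is an assumption made only to keep the example self-contained. Counter mode turns AES into a stream cipher, so the same code path handles data of any length without padding.

```python
# Illustrative only: generic AES-128-CTR encryption keyed from a shared
# secret, in the spirit of the scheme described above. Not Resilio's
# actual protocol; the key derivation below is an assumption.
import os
import hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def key_from_secret(secret: str) -> bytes:
    # Assumed derivation: hash the shared secret and keep 128 bits.
    return hashlib.sha256(secret.encode("utf-8")).digest()[:16]

def encrypt(key: bytes, plaintext: bytes):
    nonce = os.urandom(16)  # initial counter block, transmitted with the data
    encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return nonce, encryptor.update(plaintext) + encryptor.finalize()

def decrypt(key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    decryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
    return decryptor.update(ciphertext) + decryptor.finalize()

key = key_from_secret("READ_WRITE_FOLDER_SECRET")           # hypothetical secret
nonce, sealed = encrypt(key, b"one chunk of a synced file")
assert decrypt(key, nonce, sealed) == b"one chunk of a synced file"
```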
There is no limit on the amount of data that can be synced, other than the available free space on each device. Compatibility Current builds of Resilio Sync are available for the following operating systems: Microsoft Windows (Windows 7 or later 32 bit and 64 bit) OS X (10.8 or later) Linux (web interface, also GUI available for Debian derived systems) FreeBSD NAS Devices Android Amazon Kindle iOS Windows Phone See also Comparison of file synchronization software References External links Data synchronization File sharing services Peer-to-peer file sharing
7557321
https://en.wikipedia.org/wiki/Deluge%20%28software%29
Deluge (software)
Deluge is a free and open-source, cross-platform BitTorrent client written in Python. Deluge uses a front-end and back-end architecture in which libtorrent, a software library written in C++ that provides the application's networking logic, is connected through the project's own Python bindings to one of several front ends, including a text console, a web interface, and a graphical desktop interface using GTK. Deluge is released under the terms of the GPL-3.0-or-later license. Features Deluge aims to be a lightweight, secure, and feature-rich client. To help achieve this, most of its features are part of plugin modules which were written by various developers. Starting with version 1.0, Deluge separated its core from its interface, running it instead in a daemon (server/service), allowing users to remotely manage the application over the web. Deluge has supported magnet links since version 1.1.0, released in January 2009. History Deluge was started by two members of ubuntuforums.org, Zach Tibbitts and Alon Zakai, who previously hosted and maintained the project at Google Code, but who subsequently moved it to its own website. In its first stages, Deluge was originally titled gTorrent, to reflect that it was targeted for the GNOME desktop environment. When the first version was released on September 25, 2006, it was renamed Deluge due to an existing project named gtorrent on SourceForge, in addition to the fact that it was finally coded to work not only on GNOME but on any platform which could support GTK. The 0.5.x release marked a complete rewrite from the 0.4.x code branch. The 0.5.x branch added support for encryption, peer exchange, binary prefix, and UPnP. Nearing the time of the 0.5.1 release, the two original developers effectively left the project, leaving Rory Mobley and Andrew "andar" Resch to continue Deluge's development. Version 0.5.4.1 introduced support for both Mac OS X (via MacPorts) and Windows. Around this time, Deluge became notable for its resistance to Comcast's bandwidth throttling without a change in code, while clients like Vuze (Azureus) and μTorrent had to borrow the method implemented by Deluge. From version 1.1.1 through version 1.1.3, Windows installers were unavailable due to the Windows packager leaving the project. Windows installers have again been unavailable since the move to GTK3 in 2019. Following 1.1.3, the developers no longer provided packages for non-Windows operating systems; instead, source tarballs and community-provided packages were released. See also Comparison of BitTorrent clients Usage share of BitTorrent clients Notes References External links BitTorrent clients for Linux Cross-platform software File sharing software that uses GTK Free BitTorrent clients Free software programmed in Python MacOS file sharing software Software that uses PyGTK Windows file sharing software
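The front-end/back-end split described above, in which thin user interfaces drive the libtorrent library through Python bindings, can be sketched with a minimal command-line client. This is not Deluge's own code (Deluge adds a daemon, an RPC layer and a plugin system on top), and the exact names exposed by the bindings can vary between libtorrent versions; it is meant only as a rough illustration of the division of labor.

```python
# Minimal sketch of a Python front end driving the libtorrent back end,
# in the spirit of Deluge's architecture. Binding details may vary
# between libtorrent versions.
import sys
import time
import libtorrent as lt

def fetch(magnet_uri: str, save_path: str = ".") -> None:
    session = lt.session()                      # back end: networking logic
    params = lt.parse_magnet_uri(magnet_uri)    # name may differ in older bindings
    params.save_path = save_path
    handle = session.add_torrent(params)
    while not handle.status().is_seeding:       # crude "front end": a progress line
        s = handle.status()
        print(f"{s.progress * 100:5.1f}%  {s.download_rate / 1000:6.1f} kB/s  "
              f"{s.num_peers} peers", end="\r")
        time.sleep(1)
    print("\ncomplete")

if __name__ == "__main__":
    fetch(sys.argv[1])
```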
55200177
https://en.wikipedia.org/wiki/IPhone%20X
IPhone X
The iPhone X (Roman numeral "X" pronounced "ten") is a smartphone designed, developed and marketed by Apple Inc. The 11th generation of the iPhone, it was available to pre-order on October 27, 2017, and was released on November 3, 2017. The iPhone X used a glass and stainless-steel form factor and "bezel-less" design, shrinking the bezels while not having a "chin", unlike many Android phones. It was the first iPhone to use an OLED screen. The home button's fingerprint sensor was replaced with a new type of authentication called Face ID, which used sensors to scan the user's face to unlock the device. This face-recognition capability also enabled emojis to be animated following the user's expression (Animoji). With a bezel-less design, iPhone user interaction changed significantly, using gestures to navigate the operating system rather than the home button used in all previous iPhones. At the time of its November 2017 launch, its price tag of US$999 also made it the most expensive iPhone ever, with even higher prices internationally due to additional local sales and import taxes. The iPhone X received positive reviews. Its display and build quality were strongly praised, and the camera also scored positively on tests. However, the sensor housing "notch" at the top of the screen and the introduction of an all-new authentication method were polarizing for critics and consumers. The notch was heavily mocked by users on social media, although app developers responded either neutrally or positively to the changes it brought to the user experience in their apps and games. Face ID facial recognition was praised for its simple setup, but criticized for requiring direct eyes on the screen, though that option can be disabled within the system preferences. Along with the iPhone 6S, its Plus variant, and the first-generation iPhone SE, the iPhone X was discontinued on September 12, 2018, following the announcement of the iPhone XS, iPhone XS Max and iPhone XR devices. On November 22, 2018, Apple reportedly resumed production of the iPhone X due to weak sales of its successors. The iPhone X remains discontinued, but as of February 2019, Apple started selling refurbished models of the iPhone X. History The technology behind the iPhone X was in development for five years, starting as far back as 2012. Rumors of a drastic iPhone redesign began circulating around the time of iPhone 7 announcement in the third quarter of 2016, and intensified when a HomePod firmware leak in July 2017 suggested that Apple would shortly release a phone with a nearly bezel-less design, lack of a physical home button, facial recognition, and other new features. A near-final development version of the iOS 11 operating system was also leaked in September 2017, confirming the new design and features. On August 31, 2017, Apple invited journalists to a September 12 press event, the first public event held at the Steve Jobs Theater on the company's new Apple Park campus in Cupertino, California. The iPhone X was unveiled during that keynote. Its US$999 starting price was the most expensive iPhone launch price ever at the time. The price is even higher in international markets due to currency fluctuations, import fees and sales taxes. An instrumental version of the song Keep On Lovin’ by MagnusTheMagnus was used in the reveal of the device, and the song "Best Friend" by Sofi Tukker was featured in the introductory film and ads. An unlocked version of the phone was made available for purchase in the United States on December 5, 2017. 
In April 2018, the Federal Communications Commission divulged images of an unreleased gold-colored iPhone X model. As opposed to the space gray and silver color options that the iPhone X ships with, it was divulged that there were initial plans to release a gold option for the device. However, it was put on hold due to production issues. Apple released a revised B model for the iPhone X that fixed NFC issues for users in Japan, China, and the United States. Specifications Hardware Display The iPhone X has a 5.85 inch (marketed as 5.8 inch) OLED color-accurate screen that supports DCI-P3 wide color gamut, sRGB, and high dynamic range, and has a contrast ratio of 1,000,000:1. The Super Retina display has the True Tone technology found on the iPad Pro, which uses ambient light sensors to adapt the display's white balance to the surrounding ambient light. Although the iPhone X does not feature the same "ProMotion" technology used in the displays of the second-generation iPad Pro, where the display delivers a refresh rate of 120 Hz, it does sample touch input at 120 Hz. OLED screen technology has a known negative trend of "burn-in" effects, in which particular elements consistently on the screen for long periods of time leave a faint trace even after new images appear. Apple acknowledged that its OLED screens were not excluded from this issue, writing in a support document that "This is also expected behavior". Greg Joswiak, Apple's vice president of product marketing, told Tom's Guide that the OLED panels Apple used in the iPhone X had been engineered to avoid the "oversaturation" of colors that using OLED panels typically results in, having made color adjustments and "subpixel"-level refinements for crisp lines and round corners. For out-of-warranty servicing for damages not relating to manufacturing defects, screen repairs of iPhone X cost US$279, while other damage repairs cost US$549. Color options The iPhone X has two color options; silver and space gray. The sides of the phone are composed of surgical-grade stainless steel to improve durability, and the front and back are made of glass. The design is intended to be IP67 water and dust resistant. Chipsets The iPhone X contains Apple's A11 Bionic system-on-chip, also used in the iPhone 8 and 8 Plus, which is a six-core processor with two cores optimized for performance (25% faster than the A10 Fusion processor), along with four cores optimized for efficiency (70% faster than the previous generation). It also features the first Apple-designed graphics processing unit and a Neural Engine, which powers an artificial intelligence accelerator. Biometric authentication Face ID replaces the Touch ID authentication system. The facial recognition sensor consists of two parts: a "Romeo" module that projects more than 30,000 infrared dots onto the user's face, and a "Juliet" module that reads the pattern. The pattern is sent to the Secure Enclave in the A11 Bionic chip to confirm a match with the phone owner's face. By default, the system will not work with eyes closed, in an effort to prevent unauthorized access but this requirement can be disabled in settings. Cameras The iPhone X has two cameras on the rear. One is a 12-megapixel wide-angle camera with f/1.8 aperture, with support for face detection, high dynamic range and optical image stabilization. It is capable of capturing 4K video at 24, 30 or 60 frames per second, or 1080p video at 30, 60, 120 or 240 frames per second. 
A secondary, telephoto lens features 2× optical zoom and 10× digital zoom with an aperture of f/2.4 and optical image stabilization. A Portrait Mode is capable of producing photos with specific depth-of-field and lighting effects. It also has a quad-LED True Tone flash with 2× better light uniformity. Still photos with 6.5 megapixels (3412×1920) can be captured during video recording. Front camera On the front of the phone, a 7-megapixel True Depth camera has an f/2.2 aperture, and features face detection and HDR. It can capture 1080p video at 30 frames per second, 720p video at 240 frames per second, and exclusively allows for the use of Animoji; animated emojis placed on top of the user's face that intelligently react to the user's facial expressions. Mono audio Criticism has been aimed at video footage being recorded with mono audio (only one audio channel), and at a low bit rate of 96 kbit/s, while earlier mobile phones by competing vendors have been recording with stereo audio (two audio channels for spatiality) and higher bit rates, such as the Samsung Galaxy S3 and Sony Xperia S, both unveiled in 2012. Wireless charging iPhone X also supports Qi-standard wireless charging. In tests conducted by MacRumors, the iPhone X's charging speeds varies significantly depending on what types of cables, powerbanks, adapters, or wireless chargers are used. Software Due to its different screen layout, iOS developers are required to update their apps to make full use of the additional screen real estate. Such changes include rounded corners, sensor "notch" at the top of the screen, and an indicator area at the bottom for accessing the home screen. Apple published a "Human Interface Guidelines" document to explain areas of focus, and discouraged developers from attempting to mask or call special attention to any of the new changes. Additionally, text within the app needs to be configured to properly reference Face ID rather than Touch ID where the authentication technology is used on iPhone X. In anticipation of the release of the phone, most major apps were quickly updated to support the new changes brought by iPhone X, though the required changes did cause delayed app updates for some major apps. The traditional home button, found on all previous devices in the iPhone lineup, has been removed entirely, replaced by touch-based gestures. To wake up the device, users can tap the display or use the side button; to access the home screen, users must swipe up from the bottom of the display; and to access the multitasking window, users must swipe up similarly to the method of accessing the home screen, but stop while the finger is in the middle of the screen, causing an app carousel to appear. Reception General reviews iPhone X's rear camera received an overall rating of 97 from DxOMark, a camera testing company, short of the highest score of 99, awarded to Samsung's Galaxy S9+ smartphone. Google's Pixel 2 received a rating of 98. Consumer Reports, a non-profit, independent organization aiming to write impartial reviews of consumer products, ranked iPhone X below iPhone 8 and iPhone 8 Plus, as well as below Samsung's Galaxy S8, S8+ and Note 8, due to less durability and shorter battery life, although it praised the X's camera as "the highest-rated smartphone camera" it had ever tested. Nilay Patel of The Verge praised the display, calling it "polished and tight" and "bright and colorful". 
He criticized the repeated lack of a headphone jack, the device's fragility despite Apple's claims of durability, and the sensor notch, calling it "ugly". Patel highlighted the fact that apps required updates to fit the new screen, writing that not all popular apps had received updates by the time of the review, resulting in some apps with "huge black borders" resembling iPhone 8. He especially criticized the positioning of the sensor notch while holding the phone in landscape mode, causing the notch to go "from being a somewhat forgettable element in the top status bar to a giant interruption on the side of the screen". The cameras were given positive feedback for maintaining detail in low light. Patel particularly praised Animoji, calling it "probably the single best feature on the iPhone X", writing that "they just work, and they work incredibly well". Finally, he wrote that Face ID was the whole foundation of iPhone X, and stated that it "generally works great", though acknowledging the occasional misstep, in which users must "actively move the phone closer to your face to compensate". He specifically criticized the limited range of Face ID, with authentication only working when holding the phone 25–50 centimeters away from the face. Chris Velazco of Engadget also praised the display, writing that, in his experience, the sensor "notch" goes from being "weird at first" to not being noticeable due to action in videos usually happening in the center. The build quality was given particular acclaim, being called "a beautifully made device" with construction that "seamlessly" connects the front and back glass with the stainless-steel frame. Velazco noted that the new gesture-based interaction takes time to get used to, particularly the Control Center being moved from the bottom to the top right of the display. The camera, processor performance, and battery life also received positive remarks. The cost of repairing the iPhone X is also much higher than for its predecessors. If the iPhone X suffers user-caused damage (not a manufacturing defect), screen repairs cost US$279, and other repairs, such as replacing the battery, are more expensive. In a heavily negative review, Dennis Green of Business Insider criticized the iPhone X as impossible to use one-handed, writing that the new gestures to use the phone, such as swiping from the top down to access notifications and the Control Center, did not work when using the phone with only one hand due to not being able to reach the top. His review sparked outrage among Twitter users, many of whom used condescending tones; Green responded, "I don't know whether the anger was directed toward me out of loyalty to Apple or to justify their own choice to spend $1,000 on a phone. It was obvious that much of the criticism came from people who had never used the phone". Macworld's Roman Loyola praised the Face ID authentication system, writing that the setup process was "easy" and that its system integration was "more seamless" than the Touch ID fingerprint authentication of the past. That said, Loyola did note the "half-second" slower unlocking time than Touch ID as well as needing to look directly at the screen, making it impossible to unlock with the phone next to the user on a desk. Face ID security and privacy concerns Face ID has raised concerns regarding the possibility of law enforcement accessing an individual's phone by pointing the device at the user's face. 
United States Senator Al Franken asked Apple to provide more information on the security and privacy of Face ID a day after the announcement, with Apple responding by highlighting the recent publication of a security white paper and knowledge base detailing answers. Inconsistent results have been shown when testing Face ID on identical twins, with some tests showing the system managing to separate the two, while other tests have failed. However, despite Apple's promise of increased security of Face ID compared to the Touch ID fingerprint authentication system, there have been multiple media reports indicating otherwise. The Verge noted that courts in the United States have granted different Fifth Amendment rights in the United States Constitution to biometric unlocking systems as opposed to keycodes. Keycodes are considered "testimonial" evidence based on the contents of users' thoughts, whereas fingerprints are considered physical evidence, with some suspects having been ordered to unlock their phones via fingerprint. Many attempts to break through Face ID with sophisticated masks have been attempted, though all have failed. A week after iPhone X was released, Vietnamese security firm Bkav announced in a blog post that it had successfully created a $150 mask that tricked Face ID, though WIRED noted that Bkav's technique was more of a "proof-of-concept" rather than active exploitation risk, with the technique requiring a detailed measurement or digital scan of the iPhone owner's face, putting the real risk of danger only to targets of espionage and world leaders. Additionally, Reuters reported in early November 2017 that Apple would share certain facial data on users with third-party app developers for more precise selfie filters and for fictional game characters to mirror real-world user facial expressions. Although developers are required to seek customer permission, are not allowed to sell the data to others nor create profiles on users nor use the data for advertising, and are limited to a more "rough map" rather than full capabilities, they still get access to over 50 kinds of facial expressions. The American Civil Liberties Union (ACLU) and the Center for Democracy and Technology raised privacy questions about Apple's enforcement of the privacy restrictions connected to third-party access, with Apple maintaining that its App Store review processes were effective safeguards. The "rough map" of facial data third-parties can access is also not enough to unlock the device, according to Reuters. However, the overall idea of letting developers access sensitive facial information was still not satisfactorily handled, according to Jay Stanley, a senior policy analyst with the ACLU, with Stanley telling Reuters that "the privacy issues around of the use of very sophisticated facial recognition technology for unlocking the phone have been overblown. ... The real privacy issues have to do with the access by third-party developers". Sensor housing controversy Much of the debate about the iPhone X has revolved around the design of the sensor housing, dubbed "notch" by the media, at the top of the display. The Outline described it as "a visually disgusting element", and The Verge posted a report focusing on public criticism and people mocking Apple's "odd design choice", but not every reviewer was equally negative in their opinions. 
Third-party iOS developers interviewed by Ars Technica said that, despite the work of restructuring design elements in their apps, the notch did not cause any problems, with some even arguing that the notch was a good push to simplify their designs. Just two weeks after iPhone X's release, Apple approved a "notch remover" app through the App Store, that places black bars across the top of the home screen to make the notch visually disappear. The approval was done despite the company's user interface guidelines discouraging developers from specifically masking the design. iPhone X was not the first device with a notch; both the Essential Phone and Sharp Aquos S2 were announced before it and had a display notch, albeit much smaller, but the iPhone X arguably popularized it. Issues Early activation issues In November 2017, early adopters of the new phone reported that they were experiencing activation issues on certain cellular carriers, most notably AT&T. AT&T announced within hours that the issue had been fixed on their end, and a spokesperson for the Verizon carrier told the media none of its customers were affected despite some reports of problems. Cold weather issues In November 2017, iPhone X users reported on Reddit that the device's screen would become unresponsive after experiencing rapid temperature drops. Apple released the iOS 11.1.2 update on November 16, 2017, fixing the issue. Forbes contributor Gordon Kelly reported in March 2018 that over 1,000 users experienced problems using camera flash in cold weather, with the problem being fixed in a later software update. Cellular modem differences Apple has been engaged in a legal battle with Qualcomm over allegedly anti-competitive practices and has been dual-sourcing cellular modem chips to reduce reliance on the semiconductor manufacturer. Starting with iPhone 7 in 2016, Apple has used about half Qualcomm modem chips and half Intel. Professional measurement tests performed by wireless signal testing firm Cellular Insights indicated that, as in the previous-gen iPhone 7, Qualcomm's chips outperform Intel's in LTE download speeds, up to 67% faster in very weak signal conditions, resulting in some sources recommending the purchase of an unlocked iPhone X or one bought through cellular carrier Verizon, in order to get the models featuring the faster Qualcomm modem. Additionally, CNET reported in September 2017 that the new iPhone models, including X, 8 and 8 Plus, do not have the ability to connect to the next-generation of wireless LTE data connection, despite 10 new Android devices, including flagships from main smartphone competitor Samsung, all having the capability to do so. While Apple's new smartphones have support for "LTE Advanced", with a theoretical peak speed of 500 megabits per second, the Android models have the ability to connect to "Gigabit LTE", allowing theoretical speeds up to 1 gigabit per second, doubling Apple's speed. NFC problems After releasing the iPhone X in Japan and China, customers experienced issues related to the phone's NFC while trying to access public transit smart card readers. In April 2018, Apple released a revision to the iPhone X, that included a vastly improved NFC chip. This solved the problem of NFC reader errors in most cases. Previously around 1 out of 3 NFC attempts would fail after initial reports. This issue also affected users in America. 
Display Module Replacement Program Apple identified an issue with certain iPhone X devices in which the display would not respond to the user's touch because of a component on the display module that might fail. Apple stated that it would repair affected devices free of charge, provided the device was less than three years old. See also List of iOS devices History of iPhone Comparison of smartphones Timeline of iPhone models References External links (archived) Computer-related introductions in 2017 IOS Mobile phones introduced in 2017 Mobile phones with multiple rear cameras Discontinued iPhones Mobile phones with 4K video recording Mobile phones with pressure-sensitive touch screen
2260420
https://en.wikipedia.org/wiki/Military%20Assistance%20Command%2C%20Vietnam%20%E2%80%93%20Studies%20and%20Observations%20Group
Military Assistance Command, Vietnam – Studies and Observations Group
Military Assistance Command, Vietnam – Studies and Observations Group (MACV-SOG) was a highly classified, multi-service United States special operations unit which conducted covert unconventional warfare operations prior to and during the Vietnam War. Established on 24 January 1964, it conducted strategic reconnaissance missions in the Republic of Vietnam (South Vietnam), the Democratic Republic of Vietnam (North Vietnam), Laos, and Cambodia; took enemy prisoners, rescued downed pilots, conducted rescue operations to retrieve prisoners of war throughout Southeast Asia, and conducted clandestine agent team activities and psychological operations. The unit participated in most of the significant campaigns of the Vietnam War, including the Gulf of Tonkin incident, which precipitated increased American involvement, Operation Steel Tiger, Operation Tiger Hound, the Tet Offensive, Operation Commando Hunt, the Cambodian Campaign, Operation Lam Son 719, and the Easter Offensive. The unit was downsized and renamed Strategic Technical Directorate Assistance Team 158 on 1 May 1972, to support the transfer of its work to the Strategic Technical Directorate of the Army of the Republic of Vietnam, as part of the Vietnamization effort. Foundation The Studies and Observations Group (also known as SOG, MACSOG, and MACV-SOG) was a top secret, joint unconventional warfare task force created on 24 January 1964 by the Joint Chiefs of Staff as a subsidiary command of the Military Assistance Command, Vietnam (MACV). It eventually consisted primarily of personnel from the United States Army Special Forces, the United States Navy SEALs, the United States Air Force (USAF), the Central Intelligence Agency (CIA), and elements of the United States Marine Corps Force Reconnaissance units. The Studies and Observation Group, as the unit was initially titled, was in fact controlled by the Special Assistant for Counterinsurgency and Special Activities (SACSA) and his staff at the Pentagon. This arrangement was necessary because SOG needed some listing in the MACV table of organization and because MACV's commander, General William Westmoreland, had no authority to conduct operations outside territorial South Vietnam. This command arrangement through SACSA also allowed tight control (up to the presidential level) of the scope and scale of the organization's operations. Its mission was: ...to execute an intensified program of harassment, diversion, political pressure, the capture of prisoners, physical destruction, acquisition of intelligence, generation of propaganda, and diversion of resources, against the Democratic Republic of Vietnam. These operations (OPLAN 34-Alpha) were conducted in an effort to convince North Vietnam to cease its sponsorship of the Viet Cong (VC) insurgency in South Vietnam. Similar operations had been under the purview of the CIA, which placed agent teams in North Vietnam with airdrops and over-the-beach insertions. Under pressure from Secretary of Defense Robert S. McNamara, the program, and all other agency para-military operations, was turned over to the military in the wake of the disastrous Bay of Pigs Invasion operation in Cuba. Colonel Clyde Russell (SOG's first commander) had difficulty creating an organization to fulfill his mission since, at the time, United States Special Forces were unprepared doctrinally or organizationally to carry it out. 
At this point the Special Forces' mission was to conduct guerrilla operations behind enemy lines in the event of an invasion by conventional forces, not to conduct agent, maritime, or psychological operations. Russell expected to take over a fully functional organization and assumed that the CIA (which would maintain a representative on SOG's staff and contribute personnel to the organization) would see the military through any teething troubles. His expectations and assumptions were incorrect. The contribution of the South Vietnamese came in the form of SOG's counterpart organization (which used a plethora of titles, and was finally called the Strategic Technical Directorate [STD]). After a slow and shaky start, the unit got its operations underway. Originally, these consisted of a continuation of the CIA's agent infiltrations. Teams of South Vietnamese volunteers were parachuted into the North, but most were quickly captured. Maritime operations against the coast of North Vietnam resumed after the delivery of Norwegian-built "Nasty" Class Fast Patrol Boats to the unit, but these operations also fell short of expectations. Gulf of Tonkin Incident On the night of 30–31 July 1964, four SOG vessels shelled two islands, Hon Me and Hon Ngu, off the coast of North Vietnam. It was the first time SOG vessels had attacked North Vietnamese shore facilities by shelling from the sea. The next afternoon, the destroyer USS Maddox began an electronic intelligence-gathering mission along the coast, in the Gulf of Tonkin. On the afternoon of 2 August, three P-4 torpedo boats of the Vietnam People's Navy came out from Hon Me and attacked the Maddox. The American vessel was undamaged, and the U.S. claimed that one of the attacking vessels had been sunk and that the others were damaged by U.S. carrier-based aircraft. On the night of 3–4 August, three SOG vessels shelled targets on the mainland of North Vietnam. On the night of 4 August, after being joined by the destroyer USS Turner Joy, Maddox reported to Washington that both ships were under attack by unknown vessels, assumed to be North Vietnamese. This second reported attack led President Lyndon B. Johnson to launch Operation Pierce Arrow, an aerial attack against North Vietnamese targets on 5 August. Johnson also went to the United States Congress that day and requested the passage of the Southeast Asia Resolution (better known as the Gulf of Tonkin Resolution), asking for the unprecedented authority to conduct military actions in Southeast Asia without a declaration of war. Johnson's announcement of the incidents involving the destroyers did not mention that SOG vessels had been conducting operations in the same area as the Maddox immediately before, and during, that cruise; nor did it mention that, on 1 and 2 August, Laotian aircraft, flown by Thai pilots, carried out bombing raids in North Vietnam itself, or that a SOG agent team had been inserted into the same relative area and been detected by the North Vietnamese. Hanoi, which may have assumed that all of these actions signaled an increased level of U.S. aggression, decided to respond in what it claimed as its territorial waters. Thus, the three P-4s were ordered to attack the Maddox. The second incident, in which Maddox and Turner Joy were claimed to be attacked, never took place. Although some confusion reigned at the time of the second attack, the facts were clear to the administration by the time it went to Congress to obtain the resolution. 
When confronted by Senator Wayne Morse, who had discovered the existence of SOG's 34-Alpha raids, McNamara lied to him, stating, "Our Navy played absolutely no part in, was not associated with, and was not aware of any South Vietnamese actions." Yet both Commander in Chief, Pacific Command (CINCPAC) and he were well aware of the possible connections, at least insofar as they might have existed in the minds of the Hanoi leadership. These events were not disclosed until the publication of the Pentagon Papers in 1970. The last aspect of SOG's original missions consisted of psychological operations conducted against North Vietnam. The unit's naval arm picked up northern fishermen during searches of coastal vessels and detained them on Cu Lao Cham Island off Da Nang, South Vietnam (the fishermen were told that they were, in fact, still within their homeland). The South Vietnamese crews and personnel on the island posed as members of a dissident northern communist group known as the Sacred Sword of the Patriot League (SSPL), which opposed the takeover of the Hanoi regime by politicians who supported the People's Republic of China (PRC). The kidnapped fishermen were well fed and treated, but they were also subtly interrogated and indoctrinated in the message of the SSPL. After a two-week stay, the fishermen were returned to northern waters. This fiction was supported by the radio broadcasts of SOG's "Voice of the SSPL", leaflet drops, and gift kits containing pre-tuned radios which could only receive broadcasts from the unit's transmitters. SOG also broadcast "Radio Red Flag," programming purportedly directed by a group of dissident communist military officers also within the north. Both stations were equally adamant in their condemnations of the PRC, the South and North Vietnamese regimes, and the U.S. and called for a return to traditional Vietnamese values. Straight news, without propaganda embellishment, was broadcast from South Vietnam via the Voice of Freedom, another SOG creation. These agent operations and propaganda efforts were supported by SOG's air arm, the First Flight Detachment. The unit consisted of four heavily modified C-123 Provider aircraft flown by Nationalist Chinese aircrews in SOG's employ. The aircraft flew agent insertions and resupply, leaflet and gift kit drops, and carried out routine logistics missions for SOG. Shining Brass On 21 September 1965, the Pentagon authorized MACV-SOG to begin cross-border operations in Laos in areas contiguous to South Vietnam's western border. MACV had sought authority for the launching of such missions (Operation Shining Brass) since 1964 in an attempt to put boots on the ground in a reconnaissance role to observe, first hand, the enemy logistical system known as the Ho Chi Minh Trail (the Truong Son Road to the North Vietnamese). MACV, through the Seventh Air Force, had begun carrying out a strategic bombardment of the logistical system in southern Laos in April (Operation Steel Tiger) and had received authorization to launch an all-Vietnamese recon effort (Operation Leaping Lena) that had proven to be a disaster. U.S. troops were necessary and SOG was given the green light. On 18 October 1965, MACV-SOG conducted its first cross-border mission against target D-1, a suspected truck terminus on Laotian Route 165, inside Laos. The team consisted of two U.S. Special Forces soldiers and four South Vietnamese. 
The mission was deemed a success with 88 bombing sorties flown against the terminus resulting in multiple secondary explosions, but also resulted in SOG's first casualty, Special Forces Captain Larry Thorne in a helicopter crash. William H. Sullivan, U.S. Ambassador to Laos, was determined that he would remain in control over decisions and operations that took place within the supposedly neutral kingdom. The Laotian Civil War that raged intermittently between the communist Pathet Lao (supported by People's Army of Vietnam (PAVN) troops) and the Royal Lao armed forces (supported by the CIA-backed Hmong army of General Vang Pao and USAF aircraft) compelled both sides to maintain as low a profile as possible. Hanoi was interested in Laos due only to the necessity of keeping its supply corridor to the south open. The U.S. was involved for the opposite reason. Both routinely operated inside Laos, but both also managed to keep their operations out of sight due to Lao's supposed neutrality pursuant to the 1962 International Agreement on the Neutrality of Laos. Ambassador Sullivan had the task of juggling the bolstering of the inept Lao government and military, the CIA and its clandestine army, the USAF and its bombing campaign, and now the incursions of the U.S.-led reconnaissance teams of SOG. His limitations on SOG's operations (depth of penetration, choice of targets, length of operations) led to immediate and continuous enmity between the embassy in Vientiane and the commander and troops of SOG, who promptly labelled Sullivan the "Field Marshal." The ambassador responded in kind. Regardless, MACV-SOG began a series of operations that would continue to grow in size and scope over the next eight years. The Laotian operations were originally run by a Command and Control (C&C) headquarters at Da Nang. The teams, usually three Americans and three to 12 indigenous mercenaries, were launched from Forward Operating Bases (FOBs) in the border areas (originally at Kham Duc, Kontum, and Khe Sanh). After in-depth planning and training, a team was airlifted over the border by aircraft provided by the U.S. Marine Corps (who operated in the I Corps area) or by dedicated Republic of Vietnam Air Force (RVNAF) H-34 Kingbee helicopters of the 219th Squadron, which would remain affiliated with MACV-SOG for its entire history. The team's mission was to penetrate the target area, gather intelligence, and remain undetected as long as possible. Communication was maintained with a forward air control (FAC) aircraft, which would communicate with USAF fighter-bombers if the necessity, or the opportunity to strike lucrative targets, arose. The FAC was also the lifeline through which the team would communicate with its FOB and through which it could call for extraction if compromised. By the end of 1965, MACV-SOG had shaken itself out into operational groups commanded from its Saigon headquarters. These included Maritime Operations (OPS-31), which continued harassment raids and support for psychological operations (via kidnapped fishermen); Airborne Operations (OPS-34), which continued to insert agent teams and supplies into the north; Psychological Operations (OPS-33), which continued its "black" radio broadcasts, leaflet and gift kit drops, and running the operation at Cu Lao Cham; the revised Shining Brass program; and Air Operations (OPS-32), which supported the others and provided logistical airlift. 
Training for SOG's South Vietnamese agents, naval action teams, and indigenous mercenaries (usually Nùng or Montagnards of various tribes) was conducted at the ARVN Airborne training center (Camp Quyet Thang) at Long Thành, southeast of Bien Hoa. Training for the U.S. personnel assigned to recon teams (RTs) was conducted at Kham Duc. Daniel Boone During 1966 and 1967, it became obvious to MACV that the North Vietnamese were using neutral Cambodia as a part of their logistical system, funneling men and supplies to the southernmost seat of battle. Unknown was the extent of that use. The answer shocked intelligence analysts. Prince Norodom Sihanouk, trying to balance the threats facing his nation, had allowed Hanoi to set up a presence in Cambodia. Although the extension of Laotian Highway 110 into Cambodia in the tri-border region was an improvement to its logistical system, North Vietnam was now unloading communist-flagged transports in the port of Sihanoukville and trucking the cargo to its base areas on the eastern border. Beginning in 1966, SOG conducted prisoner snatch missions of PAVN soldiers behind enemy lines along the Hồ Chí Minh Trail. No matter the team's primary mission, capturing enemy soldiers always remained the team's secondary mission when the opportunity presented itself due to valuable intelligence gained related to PAVN troop movements, size, and base locations. Teams also received rewards including free R&R trips to Taiwan or Thailand aboard a SOG C-130 Blackbird, a $100 bonus for each American, and a new Seiko watch and cash to each indigenous member. Recon teams succeeded in capturing 12 enemy soldiers in Laos during that year. In April 1967, MACV-SOG was ordered to commence Operation Daniel Boone, a cross-border recon effort in Cambodia. Both SOG and the 5th Special Forces Group had been preparing for just such an eventuality. The 5th SF had gone so far as to create Projects B-56 Sigma and B-50 Omega, units based on SOG's Shining Brass organization, which had been conducting in-country recon efforts on behalf of the field forces, awaiting authorization to begin the Cambodian operations. A turf war broke out between the 5th and SOG over missions and manpower. The Joint Chiefs decided in favor of MACV-SOG, since it had already successfully conducted covert cross-border operations. Operational control of Sigma and Omega was eventually handed over to SOG. The first mission was launched in September and construction was begun on a new C&C at Ban Me Thuot, in the Central Highlands. The recon teams (RTs) inserted into Cambodia faced even more restrictions than those in Laos. Initially, they had to cross the border on foot, had no tactical air support (neither helicopters nor fixed wing), and were not to be provided with FAC coverage. The teams were to rely on stealth and were usually smaller in size than those that operated in Laos. Daniel Boone was not the only addition to SOG's size and missions. During 1966, the Joint Personnel Recovery Center (JPRC) was established. The JPRC was to collect and coordinate information on POWs, escapees, and evadees, to launch missions to free U.S. and allied prisoners, and to conduct post-search and rescue (SAR) operations when all other efforts had failed. SOG provided the capability to launch Brightlight rescue missions anywhere in Southeast Asia at a moments notice. 
The Air Operations Group had been augmented in September 1966 by the addition of four specially-modified MC-130E Combat Talon (deployed under Combat Spear) aircraft, officially the 15th Air Commando Squadron, which supplemented the C-123s (Heavy Hook) of the First Flight Detachment already assigned to SOG. Another source of aerial support came from the CH-3 Jolly Green Giant helicopters of D-Flight, 20th Special Operations Squadron (20th SOS) (callsign Pony Express), which had arrived at Nakhon Phanom Royal Thai Navy Base during the year. These helicopters had been assigned to conduct operations in support of the CIA's clandestine operations in Laos and were a natural for assisting SOG in the Shining Brass area. When helicopter operations were finally authorized for Daniel Boone, they were provided by the dedicated support of the Huey gunships and transports of the 20th SOS (callsign Green Hornets). MACV-SOG reconnaissance teams were also bolstered by the creation of exploitation forces, which could either support the teams in time of need, or launch their own raids against the trail. They consisted of two (later three) Haymaker battalions (which were never used) divided into company-sized "Hatchet" forces which were, in turn, sub-divided into "Hornet" platoons. The commanders and non-commissioned officers of these forces were U.S. personnel, usually assigned on a temporary duty basis in "Snakebite" teams from the 1st Special Forces Group on Okinawa. By 1967, MACV-SOG had also been given the mission of supporting the new Muscle Shoals portion of the electronic and physical barrier system under construction along the Demilitarized Zone (DMZ) in I Corps. SOG recon teams were tasked with reconnaissance and the hand emplacement of electronic sensors both in the western DMZ (Nickel Steel) and in southeastern Laos. Due to the disclosure of the cover name Shining Brass in a U.S. newspaper article, SOG decided that new cover designations were necessary for all of its operational elements. The Laotian cross-border effort was renamed Prairie Fire and it was combined with Daniel Boone in the newly created Ground Studies Group (OPS-35). All operations conducted against North Vietnam were now designated Footboy. These included Plowman maritime missions, Humidor psychological operations, Timberwork agent operations, and Midriff air missions. Never happy with its long-term agent operations in North Vietnam, SOG decided to initiate a new program whose missions would be shorter in duration, conducted closer to South Vietnam, and carried out by smaller teams. Every effort would be expended to retrieve the teams when their missions were accomplished. This was the origin of STRATA, the all-Vietnamese Short Term Roadwatch and Target Acquisition teams. After a slow initial start, the first agent team was recovered from the north. The following missions were plagued with difficulties, but, after additional training, the team's performance improved dramatically. On 2 June 1967 SOG launched an operation against Oscar Eight, a PAVN base area located approximately south-southwest of Khe Sanh Combat Base (), believed to contain a PAVN field army headquarters. The target area was hit by nine B-52s which caused numerous secondary explosions, but an aerial observer could see PAVN troops in the area immediately afterwards. A Hatchet Force was then landed by nine H-34 Kingbees and five United States Marine Corps (USMC) CH-46s from HMM-165. 
The Hatchet Force was soon pinned down in the bomb craters and close air support aircraft were called in. One A-1 Skyraider was hit by flak and collided with another A-1, losing its tail and crashing into the ground, killing its pilot, Lieutenant Colonel Lewis M. Robinson. The fighting continued throughout the night and the next morning it was decided to pull the force out. During the extraction two USMC UH-1E helicopter gunships from VMO-3 were shot down, as was a Kingbee H-34. A CH-46 succeeded in extracting part of the force, then a USAF F-4 Phantom was shot down. Another CH-46 came and extracted more of the force, but it was hit by antiaircraft fire and crashed. The PAVN fired on the survivors in the wreckage, killing many of them. One of the survivors, Sergeant First Class Charles Wilklow, was dragged into a clearing covered by PAVN machine guns to be used as bait to attract a U.S. rescue mission. After four days Wilklow escaped into the jungle, was seen by a reconnaissance plane, and was then rescued by a Kingbee. The raid had cost seven U.S. dead and missing; only one of the missing, USMC Corporal Frank Cius, was released on 5 March 1973 as part of Operation Homecoming. More than 40 Nùngs were also killed or missing.
Black year – 1968
For MACV and SOG, 1968 was a black year. The year saw the Tet Offensive, the largest PAVN/Viet Cong offensive thus far in the conflict, as well as the collapse of SOG's northern operations. Although the Tet Offensive was contained and rolled back, and significant casualties were inflicted upon the enemy, the mood of the American people and government had turned irrevocably against an open-ended commitment by the United States. For most of the year MACV-SOG's operations centered around in-country missions in support of field forces. Since the enemy had come out of cover to launch conventional operations, the U.S. and South Vietnam lost no opportunity to engage them. General Westmoreland, encouraged by the Joint Chiefs of Staff, requested 200,000 more troops, under the stipulation that they would be used to conduct cross-border operations to pursue the foe. This was the logical military move at this point in the conflict, but it was already too late. In 1968, SOG recon teams conducted hundreds of missions gathering valuable intelligence but suffered 79 SF troops killed in action or missing. MACV-SOG captured three PAVN soldiers from Cambodia and one from Laos. President Johnson sought a way out of the commitment that he had originally escalated. Politically, this was late in coming, but Washington had finally awakened to its predicament. Johnson attempted to get Hanoi to reopen peace negotiations and the carrot he offered was the cessation of all U.S. operations against North Vietnam north of the 20th parallel. Hanoi had only sought an end to the air campaign against the north (Operation Rolling Thunder), but Johnson went one step further by calling a halt to all northern operations, both overt and covert. This order effectively ended MACV-SOG's agent team, propaganda, and aerial operations. In reality, for MACV-SOG, the point was moot. Suspicions abounded within the organization that Operation Timberwork had been penetrated by North Vietnamese dich van agents. Intelligence returns from the northern agent teams had been disappointing and more than three-quarters of the agents inserted had been captured either during or not long after their insertion. 
The fact that SOG had followed the CIA's failed formula for three years was not considered a contributing factor. The unit was more concerned over Washington's continuous rejection of one of the original goals of the operation: the formation of a resistance movement by potential dissident elements in North Vietnam. Washington's stated goal in the conflict was a free and viable South Vietnam, not the overthrow of the Hanoi regime. The conundrum was what would happen had the program succeeded. The best possible outcome would have been a repeat of the ill-fated Hungarian revolution of 1956, crushed by the Soviet Union, and about which the U.S. could do nothing. Some American writers on the subject (including many ex-SOG personnel) blamed the failure of the operations on the penetration of the unit by enemy spies – a claim not entirely unsupported by facts. Others, however, laid more of the blame on the operational ineptitude of SOG, which simply continued to repeat a failed formula. Changes to the infiltration program (in the form of the diversionary Operation Forae), spurred by suspicions at headquarters, came only in 1967. The security apparatus of North Vietnam had decades in which to learn to cope with not only the CIA's program, but with the unconventional and covert operations of its French predecessors. The CIA had been loath to conduct such operations in the north, since similar operations in the Soviet Union, Eastern Europe, and the PRC had been abject failures and North Vietnam was considered an even tougher target to penetrate. North Vietnamese security forces simply captured a team, turned its radio operator, and continued to broadcast as though nothing had happened. Supplies and reinforcements were requested, parachuted in to the requesting team's location, and were likewise captured. During the period 1960–1968 both the CIA and MACV-SOG dispatched 456 South Vietnamese agents to their deaths or long incarcerations in northern prisons. Hanoi continued this process year after year, learning SOG's operational methods and bending them to its purpose. In the end, it was running one of the most successful counterintelligence operations of the post-Second World War period. On the night of 22–23 August as part of the Phase III Offensive a company from the VC R20 Battalion and a sapper platoon infiltrated MACV-SOG's Forward Operating Base 4, a compound just south of Marble Mountain Air Facility, killing 17 Special Forces soldiers (their largest one-day loss of the war) and wounding another 125 Allied troops. Thirty-two VC were killed. Commando Hunt With the deflation of its northern operations (although the JCS demanded that SOG retain the capability of reinitiating them), SOG concentrated its efforts on supporting Commando Hunt, the Seventh/Thirteenth Air Force's anti-infiltration campaign in Laos. By 1969 the Ground Studies Group was running its operations from C&Cs at Da Nang for operations in southeastern Laos and at Ban Me Thuot for its Cambodian operations. That year they were joined by a new C&C at Kontum, for operations launched into the triborder region of the Prairie Fire and the northern area of Daniel Boone, which was renamed Salem House that year. Each of the C&Cs was now fielding battalion-size forces, and the number of missions rose proportionately. Command and Control North (CCN) at Da Nang, commanded by a lieutenant colonel, used 60 recon teams and two exploitation battalions (four companies of three platoons). 
Command and Control Central (CCC) at Kontum, also commanded by a lieutenant colonel, used 30 teams and one exploitation battalion. During 1969 404 recon missions and 48 exploitation force operations were conducted in Laos. To give an example of the cost of such operations, during the year 20 Americans were killed, 199 wounded, and nine went missing in the Prairie Fire area. Casualties among the Special Commando Units (SCUs – pronounced Sues), as the indigenous mercenaries were titled, were: 57 killed, 270 wounded, and 31 missing. Command and Control South (CCS) at Ban Me Thuot, also commanded by a lieutenant colonel, consisted of 30 teams and an exploitation battalion. Since the use of exploitation forces was forbidden in Cambodia, these troops were utilized in securing launch sites, providing installation security, and conducting in-country missions. During the year, 454 reconnaissance operations were conducted in Cambodia. The teams were ferried into action by RVNAF H-34 Kingbees and assorted U.S. Army aviation units in the Prairie Fire area, and by the USAF helicopters of the 20th SOS in the Salem House area. By the end of 1969, SOG was authorized 394 U.S. personnel, but it is useful to compare those numbers to the actual strengths of the operational elements. There were 1,041 Army, 476 USAF, 17 USMC and seven CIA personnel assigned to those units. They were supported by 3,068 SCUs, and 5,402 South Vietnamese and third-country civilian employees, leading to a total of 10,210 military personnel and civilians either assigned to or working for MACV-SOG. The mission of the Ground Studies Group was to support the sensor-driven Operation Commando Hunt, which saw the rapid expansion of the bombing of the Ho Chi Minh Trail. This was made possible by the close-out of Rolling Thunder, which freed up hundreds of aircraft for interdiction missions. Intelligence for the campaign was supplied by both the recon teams of MACV-SOG and by the strings of air-dropped electronic sensors of Operation Igloo White (the successor to Muscle Shoals), controlled from Nakhon Phanom. 1969 saw the apogee of the bombing campaign, when 433,000 tons of bombs were dropped on Laos. SOG supported the effort with ground reconnaissance, sensor emplacement, wiretap, and bomb damage assessment missions. The cessation of the bombing of the north also freed the North Vietnamese to reinforce their anti-aircraft defenses of the trail system and aircraft losses rose proportionately. By 1969, the North Vietnamese had also worked out their doctrine and techniques for dealing with the recon teams. Originally, the PAVN had been caught unprepared and had been forced to respond in whatever haphazard manner local commanders could organize. Soon, however, an early warning system was created by placing radio-equipped air watch units within the flight paths between the launch sites and Base Areas. Within the Base Areas, lookouts were placed in trees and platforms to watch likely landing zones while the roads and trails were routinely swept by security forces. The PAVN also began to organize and develop specialized units that would both drive and then fix the teams so that they could be destroyed. By 1970, they had created a layered and effective system, and SOG recon teams found their time on the ground both shortened and more dangerous. The mauling or wiping out of entire teams began to become a less uncommon occurrence. Laos and Cambodia Since his election in 1968, President Richard M. 
Nixon had been seeking a negotiated settlement to the Vietnam War. In 1970, he saw an opportunity to buy time for the Saigon government during Vietnamization, the phased withdrawal of U.S. troops that began in the previous year. He also sought to convince Hanoi that he meant business. That opportunity was provided by the overthrow of Cambodia's Prince Sihanouk by the pro-American General Lon Nol. Nixon had escalated U.S. involvement in Cambodia by authorizing the secret Operation Menu bombings and by the time of Sihanouk's ouster, the program had been in operation for 14 months. Lon Nol promptly ordered North Vietnamese personnel out of the country. North Vietnam responded with an invasion of the country launched at the explicit request of the Khmer Rouge following negotiations with Nuon Chea. Nixon then authorized a series of incursions by U.S. and South Vietnamese ground forces that began on 30 April. With intelligence on communist Base Areas in eastern Cambodia gleaned from MACV-SOG, huge stockpiles of PAVN arms, ammunition, and supplies were overrun and captured. In May, Operation Freedom Deal, a continuous aerial campaign against the PAVN/Viet Cong and the Khmer Rouge was initiated. SOG recon teams in Cambodia now had all the air support that they needed. As a result of U.S. political reaction, on 29 December the Cooper-Church Amendment was passed by Congress, prohibiting participation by U.S. ground forces in any future operations in either Cambodia or Laos. U.S. participation in Cambodian operations (which were already being turned over to all-Vietnamese teams) ended on 1 July 1970 and the same stipulation was to apply in Laos no later than 8 February 1971 (the only qualifications to the restrictions, in both operational areas, were in case of either POW rescue missions or aircraft crash site inspections). Although unknown to the U.S. public, many MACV-SOG veterans participated in Operation Ivory Coast, the Son Tay POW camp raid carried out in North Vietnam on 21 November 1970. The deputy commander of the joint rescue force was Colonel Arthur "Bull" Simons, who had created SOG's cross-border effort in 1965. By 1971 the U.S. was steadily withdrawing from Southeast Asia. As a test of Vietnamization, Washington decided to allow the South Vietnamese to launch Operation Lam Son 719, the long-sought incursion into Laos whose aim would be the cutting the Ho Chi Minh Trail. MACV and the South Vietnamese had been planning just such an operation as far back as August 1964, but the concept was continuously turned down due to the fallout that would have been incurred by the invasion of supposedly "neutral" Laos. The Laotian government (supported by Ambassador Sullivan and the State Department) was adamantly opposed to such an operation. On 8 February, 16,000 (later 20,000) South Vietnamese troops, backed by U.S. helicopter and air support, rolled into Laos along Route 9 and headed for the PAVN logistical hub at Tchepone. Unlike the Cambodian incursion, however, the North Vietnamese stood and fought, gradually mustering 60,000 troops. By 25 March, the South Vietnamese forces retreated. Ironically, MACV-SOG's role in the operation was only peripheral. Recon teams conducted diversionary operations prior to the invasion and helped cover the South Vietnamese withdrawal, but they were otherwise forbidden from participation in the very operation that both MACV-SOG and MACV had come to consider its raison d'etre. 
In Laos, the North Vietnamese cleared their logistical corridor to the west for security reasons and increased their aid and support for the Pathet Lao. Fighting that once was seasonal became continuous and conventional. The Cambodian Civil War would escalate, with the PRC-backed Khmer Rouge (also backed by the exiled Sihanouk) fighting Lon Nol's central government. Following the U.S. withdrawal from Indochina, its allies in Laos and Cambodia would fall to the North Vietnamese-backed forces.
Withdrawal
The American withdrawal from South Vietnam began to directly affect SOG in 1971. Although by early 1972 U.S. military personnel were forbidden from conducting operations in either Laos or Cambodia, its teams of mercenary SCUs continued those operations (in the newly renamed Phu Dung/Prairie Fire and Thot Not/Salem House areas). The organization did, however, maintain its strength in U.S. personnel, who continued to conduct in-country missions. It was also continuously tasked by the JCS with maintaining forces in readiness to once again take up northern operations if called upon to do so. The Easter Offensive, launched by the PAVN on 30 March 1972, made cross-border operations irrelevant. As with Tet, all of MACV-SOG/STD's efforts were concentrated on in-country missions to support the Field Forces. In late March 1971, when the 5th Special Forces Group was redeployed to the U.S., the Command and Control elements were renamed Task Force Advisory Elements (TF1AE, TF2AE and TF3AE). They originally consisted of 244 U.S. and 780 indigenous personnel each, but they were quickly drawn down by the elimination of the exploitation forces. For SOG, Vietnamization was finally nigh. On 1 May 1972, the unit was reduced in strength and renamed the Strategic Technical Directorate Assistance Team 158 (STDAT-158). The Ground Studies Group was disestablished and replaced by the Liaison Service Advisory Detachments. SOG's air elements stood down for redeployment, the JPRC was turned over to MACV and redesignated the Joint Casualty Resolution Center, while the psychological operations personnel and installations were turned over to either the STD or JUSPAO. The final casualty of SOG ground operations occurred on 11 October 1971, when Sergeant First Class Audley D. Mills was killed by a booby trap that he was trying to disarm. The function of STDAT-158 was to assist the STD in a complete takeover of SOG's operations. The operational elements had already been absorbed and were expanded by the inclusion of troops from the now-disbanded South Vietnamese Special Forces. The task of the American personnel was to provide technical support (in logistics, communications, etc.) and advice to the STD. This the unit did until its disbandment on 12 March 1973. The South Vietnamese Joint General Staff, strapped for cash and equipment in the final stand-down period, never used the STD in a strategic reconnaissance role. Instead, the STD's units were launched on in-country missions until the dissolution of their parent organization in March 1973. In January 1973, President Nixon ordered a halt to all U.S. combat operations in South Vietnam and, on the 27th of that month, the Paris Peace Accords were signed by the belligerent powers. On 21 February, a similar accord was signed on Laos, ending the bombing of that country and instituting a cease-fire. On 29 March, MACV was disestablished and remaining U.S. troops began leaving the south. On 14 August the U.S. 
Air Force ceased its bombing of Cambodia, bringing all military actions by the U.S. in Southeast Asia to an end.
Recognition
The U.S. military (and MACV-SOG personnel) kept tight security over knowledge of the unit's operations and existence until the early 1980s. Although there had been some small leaks by the media during the conflict, they were usually erroneous and easily dismissed. More specific was the release of documents dealing with the early days of the operation in the Pentagon Papers and the testimony of ex-SOG personnel during congressional investigations into the bombing campaigns in Laos and Cambodia in the early 1970s. Historians interested in the unit's activities had to wait until the early 1990s, when MACV-SOG's Annexes to the annual MACV Command Histories and a Pentagon documentation study of the organization were declassified for the Senate Select Committee on POW/MIA Affairs' hearings on the Vietnam War POW/MIA issue. One early source of information (if one read between the lines) was the citations issued for the award of the Medal of Honor to MACV-SOG personnel (although they were never recognized as such). One USAF helicopter pilot, two U.S. Navy SEALs, one U.S. Army medic, and nine Green Berets earned the nation's highest award on SOG operations:
Staff Sergeant Roy P. Benavidez (who had to wait until he received his award from President Ronald Reagan)
Staff Sergeant Jon Cavaiani
First Lieutenant James P. Fleming (USAF 20th Special Operations Squadron)
First Lieutenant Loren D. Hagen (posthumous), CCN/TF1AE
Sergeant First Class Robert L. Howard (awarded on his third separate recommendation)
Specialist 5 John J. Kedenburg (posthumous)
Staff Sergeant Franklin D. Miller (5th Special Forces Group)
Lieutenant Thomas R. Norris (Navy SEAL)
Sergeant Gary M. Rose
First Lieutenant George K. Sisler (posthumous)
Engineman Second Class Michael E. Thornton (Navy SEAL), STDAT-158
Sergeant First Class Fred W. Zabitosky
Twenty-two other members of the unit received the Distinguished Service Cross, the nation's second highest award for valor. On 4 April 2001, the U.S. Army officially recognized the bravery, integrity, and devotion to duty of its covert warriors by awarding the unit a Presidential Unit Citation during a ceremony at Fort Bragg, North Carolina, the home of U.S. Army Special Forces.
Technology
McGuire rig
Fulton surface-to-air recovery system
In popular culture
The Studies and Observations Group makes an appearance in Francis Ford Coppola's film Apocalypse Now, in which Captain Willard, assigned to SOG, is sent after Colonel Kurtz, who has taken his Montagnard (Degar) force into Cambodia. The Studies and Observations Group also appears in the 2010 Activision first-person shooter video game Call of Duty: Black Ops, as an element of the story line as well as a playable multiplayer faction. On the TV series Tour of Duty, the third season sees the main characters reassigned to SOG in order to conduct covert operations in Vietnam and in Cambodia. In the tabletop role-playing game Fall of DELTA GREEN, the eponymous organization employs Studies and Observations Group operatives and assets for its own operations. The game also suggests that MACV-SOG might have been inspired by DELTA GREEN, as both organizations have similar principles: combining operatives from various military and civilian agencies for covert operations, Top Secret classification and plausible deniability. 
The MACV-SOG is a playable faction in the ARMA 3 S.O.G. Prairie Fire DLC expansion. The co-op campaign follows SOG team Spike Team Columbia as they participate in covert operations along the Ho Chi Minh Trail, such as Project Eldest Son.
See also
North Vietnamese invasion of Laos
Central Intelligence Agency's Special Activities Division
CIA activities in Cambodia
CIA activities in Laos
Hughes–Ryan Amendment
Case-Church Amendment
Footnotes
Notes
References
Sources
Unpublished government documents
Published government documents
Memoirs and autobiographies
Secondary sources
External links
MACV SOG Presidential Unit Citation article
MACV SOG Homepage
MACV-SOG KIA Lists by year (e.g. 1971)
Viet Nam Bibliography: SOG
MACV-SOG "Over the Fence" Uniform Article
Military units and formations of the United States in the Vietnam War
Military history of the United States during the Vietnam War
Special operations units and formations of the United States
Military units and formations established in 1964
Military units and formations disestablished in 1971
1964 establishments in the United States
1971 disestablishments in the United States
54514106
https://en.wikipedia.org/wiki/Sammy%20Jo%20Prudhomme
Sammy Jo Prudhomme
Samantha Jo Prudhomme (born October 25, 1993) is an American soccer coach and former professional player who is an assistant coach for the Loyola Greyhounds women's soccer team. Prudhomme played as a goalkeeper for Reign FC, Washington Spirit, Houston Dash, and Boston Breakers in the National Women's Soccer League (NWSL).
Early life
Born in Torrance, California, and raised in Aliso Viejo, California, Prudhomme is the daughter of Jon and Jo Beth Prudhomme and has one brother, Nick. She attended Aliso Niguel High School, where she lettered all four years, started her junior and senior seasons, and was named team MVP and captain in both of those years. Her senior year she led her team to CIF and state championships, and the team was ranked the No. 1 high school soccer team by ESPN. Prudhomme was named CIF women's soccer player of the year for 2012, as well as the Orange County Register's Player of the Year for 2012.
College career
Oregon State, 2012–2013
Prudhomme began her college career as an Oregon State Beaver in 2012. During her two seasons with the Beavers, Prudhomme played a total of 31 matches, giving up 41 goals, producing 188 saves, and posting a combined record of 11–15–5.
USC Trojans, 2014–2016
Prudhomme transferred to the University of Southern California after her sophomore season at Oregon State, having been recruited by USC, and citing frustration with the Oregon State program. During her first season with the program she helped turn USC into a top Pac-12 team, and at the beginning of her second season she captured espnW Player of the Week honors. Across her combined seasons with USC, Prudhomme played a total of 48 matches, allowed 27 goals, produced 175 saves and 25 shutouts, and led the Trojans to the NCAA Championship in her senior season. That year, she was named the 2016 Pac-12 Goalkeeper of the Year, and to the Pac-12 First Team.
Club career
PALI Blues, 2014
The team won the national championship, and Prudhomme recorded the lowest goals-against average among W-League goalkeepers in the league's last season, 2014.
So Cal FC, 2015
Prudhomme led the team to the finals in its first season as a WPSL team.
Boston Breakers, 2017
Prudhomme was selected by the Boston Breakers with the 31st overall pick in the 2017 NWSL College Draft. She signed with the team on April 4, 2017. Prudhomme started five games in her rookie year, filling in for starting goalkeeper Abby Smith when she was injured. She helped the Breakers stop a seven-game losing streak. Prudhomme was the first Breaker to record three consecutive shutouts and broke the club record for shutout minutes.
Houston Dash, 2018
After the Breakers folded ahead of the 2018 NWSL season, the NWSL held a dispersal draft to distribute Breakers players across the league. Her rights were selected 24th overall by the Houston Dash. Prudhomme did not appear in any games for Houston in 2018, as Jane Campbell played every minute of the season in goal. Prudhomme was waived by the Houston Dash prior to the 2019 NWSL season so she could join the Washington Spirit preseason camp as a non-roster invitee.
Washington Spirit, 2019
Prudhomme was named to Washington's final roster ahead of the 2019 NWSL season. 
Reign FC, 2019 On July 15, 2019, Prudhomme was acquired by Reign FC in a trade with Washington Spirit in exchange for Elise Kellond-Knight. On February 13, 2020, Prudhomme announced her retirement from professional soccer. References External links Boston Breakers player profile Oregon State player profile Living people 1993 births Boston Breakers (NWSL) draft picks Boston Breakers (NWSL) players Houston Dash players USC Trojans women's soccer players Oregon State Beavers women's soccer players Soccer players from California American women's soccer players National Women's Soccer League players Women's association football goalkeepers Sportspeople from Aliso Viejo, California Aliso Niguel High School alumni Washington Spirit players OL Reign players
1310997
https://en.wikipedia.org/wiki/Demand%20paging
Demand paging
In computer operating systems, demand paging (as opposed to anticipatory paging) is a method of virtual memory management. In a system that uses demand paging, the operating system copies a disk page into physical memory only if an attempt is made to access it and that page is not already in memory (i.e., if a page fault occurs). It follows that a process begins execution with none of its pages in physical memory, and many page faults will occur until most of a process's working set of pages are located in physical memory. This is an example of a lazy loading technique.
Basic concept
Demand paging follows the principle that pages should only be brought into memory if the executing process demands them. This is often referred to as lazy evaluation, as only those pages demanded by the process are swapped from secondary storage to main memory. Contrast this with pure swapping, where all memory for a process is swapped from secondary storage to main memory during process startup. Commonly, a page table implementation is used to achieve this. The page table maps logical memory to physical memory and uses a valid–invalid bit to mark whether each page is valid or invalid. A valid page is one that currently resides in main memory. An invalid page is one that currently resides in secondary memory. When a process tries to access a page, the following steps are generally followed:
Attempt to access the page.
If the page is valid (in memory), continue processing the instruction as normal.
If the page is invalid, a page-fault trap occurs.
Check whether the memory reference is a valid reference to a location on secondary memory. If not, the process is terminated (illegal memory access). Otherwise, the required page must be paged in.
Schedule a disk operation to read the desired page into main memory.
Restart the instruction that was interrupted by the operating system trap.
Advantages
Demand paging, as opposed to loading all pages immediately:
Only loads pages that are demanded by the executing process.
As there is more space in main memory, more processes can be loaded, reducing context switching time, which utilizes large amounts of resources.
Less loading latency occurs at program startup, as less information is accessed from secondary storage and less information is brought into main memory.
As main memory is expensive compared to secondary memory, this technique helps significantly reduce the bill of materials (BOM) cost in smartphones, for example. Symbian OS had this feature.
Disadvantages
Individual programs face extra latency when they access a page for the first time.
Low-cost, low-power embedded systems may not have a memory management unit that supports page replacement.
Memory management with page replacement algorithms becomes slightly more complex.
Possible security risks, including vulnerability to timing attacks.
Thrashing, which may occur due to repeated page faults.
See also
Page cache
Memory management
Virtual memory
Lazy evaluation
References
Tanenbaum, Andrew S. Operating Systems: Design and Implementation (Second Edition). New Jersey: Prentice-Hall, 1997.
Virtual memory
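To make the page-fault handling sequence described in the basic concept section above concrete, the following is a minimal user-space sketch in C. It is an illustration only, not operating-system code: the backing store and the physical frames are ordinary arrays, the page size and counts are arbitrary, and the names (read_byte, handle_page_fault) are hypothetical.

```c
/* Minimal user-space sketch of the demand-paging lookup described above.
 * Illustration only: "disk" is simulated with an in-memory backing array,
 * and the fault handler simply copies a page into a free frame on first use. */
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE   64
#define NUM_PAGES   16
#define NUM_FRAMES  4          /* physical memory is smaller than the address space */

static char backing_store[NUM_PAGES][PAGE_SIZE];  /* stands in for secondary storage */
static char frames[NUM_FRAMES][PAGE_SIZE];        /* stands in for physical memory  */

struct pte { int valid; int frame; };              /* one page-table entry per page  */
static struct pte page_table[NUM_PAGES];
static int next_free_frame = 0;
static int page_faults = 0;

/* Called when the valid bit is clear: do the "disk read" and update the table. */
static int handle_page_fault(int page)
{
    if (next_free_frame == NUM_FRAMES) {
        /* A real OS would run a page-replacement algorithm here; the sketch just stops. */
        return -1;
    }
    memcpy(frames[next_free_frame], backing_store[page], PAGE_SIZE);
    page_table[page].frame = next_free_frame++;
    page_table[page].valid = 1;
    page_faults++;
    return 0;
}

/* Translate a virtual address, faulting the page in on first access. */
static char read_byte(unsigned vaddr)
{
    int page = vaddr / PAGE_SIZE, offset = vaddr % PAGE_SIZE;
    if (!page_table[page].valid && handle_page_fault(page) != 0)
        return 0;                                   /* out of frames in this toy model */
    return frames[page_table[page].frame][offset];
}

int main(void)
{
    for (int p = 0; p < NUM_PAGES; p++)
        memset(backing_store[p], 'A' + p, PAGE_SIZE);

    /* Four accesses touch three distinct pages (0, 1 and 5). */
    printf("%c %c %c %c\n", read_byte(0), read_byte(5), read_byte(70), read_byte(321));
    printf("page faults: %d\n", page_faults);      /* prints 3 */
    return 0;
}
```

In the example, only the first access to each page triggers the fault handler; later accesses to the same pages are served directly from the simulated frames, which is the essence of the lazy-loading behaviour described above.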
157180
https://en.wikipedia.org/wiki/Oresteia
Oresteia
The Oresteia () is a trilogy of Greek tragedies written by Aeschylus in the 5th century BC, concerning the murder of Agamemnon by Clytemnestra, the murder of Clytemnestra by Orestes, the trial of Orestes, the end of the curse on the House of Atreus and the pacification of the Erinyes. The trilogy—consisting of Agamemnon (), The Libation Bearers (), and The Eumenides ()—also shows how the Greek gods interacted with the characters and influenced their decisions pertaining to events and disputes. The only extant example of an ancient Greek theatre trilogy, the Oresteia won first prize at the Dionysia festival in 458 BC. The principal themes of the trilogy include the contrast between revenge and justice, as well as the transition from personal vendetta to organized litigation. Oresteia originally included a satyr play, Proteus (), following the tragic trilogy, but all except a single line of Proteus has been lost. Agamemnon Agamemnon (, Agamémnōn) is the first of the three plays within the Oresteia trilogy. It details the homecoming of Agamemnon, King of Mycenae, from the Trojan War. After ten years of warfare, Troy had fallen and all of Greece could lay claim to victory. Waiting at home for Agamemnon is his wife, Queen Clytemnestra, who has been planning his murder. She desires his death to avenge the sacrifice of her daughter Iphigenia, to exterminate the only thing hindering her from commandeering the crown, and to finally be able to publicly embrace her long-time lover Aegisthus. The play opens to a watchman looking down and over the sea, reporting that he has been lying restless "like a dog" for a year, waiting to see some sort of signal confirming a Greek victory in Troy. He laments the fortunes of the house, but promises to keep silent: "A huge ox has stepped onto my tongue." The watchman sees a light far off in the distance—a bonfire signaling Troy's fall—and is overjoyed at the victory and hopes for the hasty return of his King, as the house has "wallowed" in his absence. Clytemnestra is introduced to the audience and she declares that there will be celebrations and sacrifices throughout the city as Agamemnon and his army return. Upon the return of Agamemnon, his wife laments in full view of Argos how horrible the wait for her husband, and King, has been. After her soliloquy, Clytemnestra pleads with and persuades Agamemnon to walk on the robes laid out for him. This is a very ominous moment in the play as loyalties and motives are questioned. The King's new concubine, Cassandra, is now introduced and this immediately spawns hatred from the queen, Clytemnestra. Cassandra is ordered out of her chariot and to the altar where, once she is alone, she is heard crying out insane prophecies to Apollo about the death of Agamemnon and her own shared fate. Inside the house a cry is heard; Agamemnon has been stabbed in the bathtub. The chorus separate from one another and ramble to themselves, proving their cowardice, when another final cry is heard. When the doors are finally opened, Clytemnestra is seen standing over the dead bodies of Agamemnon and Cassandra. Clytemnestra describes the murder in detail to the chorus, showing no sign of remorse or regret. Suddenly the exiled lover of Clytemnestra, Aegisthus, bursts into the palace to take his place next to her. Aegisthus proudly states that he devised the plan to murder Agamemnon and claim revenge for his father (the father of Aegisthus, Thyestes, was tricked into eating two of his sons by his brother Atreus, the father of Agamemnon). 
Clytemnestra claims that she and Aegisthus now have all the power and they re-enter the palace with the doors closing behind them. The Libation Bearers In The Libation Bearers (, Choēphóroi)—the second play of Aeschylus' Oresteia trilogy—many years after the murder of Agamemnon, his son Orestes returns to Argos with his cousin Pylades to exact vengeance on Clytemnestra, as an order from Apollo, for killing Agamemnon. Upon arriving, Orestes reunites with his sister Electra at Agamemnon's grave, while she was there bringing libations to Agamemnon in an attempt to stop Clytemnestra's bad dreams. Shortly after the reunion, both Orestes and Electra, influenced by the Chorus, come up with a plan to kill both Clytemnestra and Aegisthus. Orestes then heads to the palace door where he is unexpectedly greeted by Clytemnestra. In his response to her he pretends he is a stranger and tells Clytemnestra that he (Orestes) is dead, causing her to send for Aegisthus. Unrecognized, Orestes is then able to enter the palace where he then kills Aegisthus, who was without a guard due to the intervention of the Chorus in relaying Clytemnestra's message. Clytemnestra then enters the room. Orestes hesitates to kill her, but Pylades reminds him of Apollo's orders, and he eventually follows through. Consequently, after committing the matricide, Orestes is now the target of the Furies' merciless wrath and has no choice but to flee from the palace. The Eumenides The final play of the Oresteia, called The Eumenides (, Eumenídes), illustrates how the sequence of events in the trilogy ends up in the development of social order or a proper judicial system in Athenian society. In this play, Orestes is hunted down and tormented by the Furies, a trio of goddesses known to be the instruments of justice, who are also referred to as the "Gracious Ones" (Eumenides). They relentlessly pursue Orestes for the killing of his mother. However, through the intervention of Apollo, Orestes is able to escape them for a brief moment while they are asleep and head to Athens under the protection of Hermes. Seeing the Furies asleep, Clytemnestra's ghost comes to wake them up to obtain justice on her son Orestes for killing her. After waking up, the Furies hunt down Orestes again and when they find him, Orestes pleads to the goddess Athena for help. She responds by setting up a trial for him in Athens on the Areopagus. This trial is made up of a group of twelve Athenian citizens and is supervised by none other than Athena herself. Here Orestes is used as a trial dummy by Athena to set-up the first courtroom trial. He is also the object of central focus between the Furies, Apollo, and Athena. After the trial comes to an end, the votes are tied. Athena casts the deciding vote and determines that Orestes will not be killed. This does not sit well with the Furies, but Athena eventually persuades them to accept the decision and, instead of violently retaliating against wrongdoers, become a constructive force of vigilance in Athens. She then changes their names from the Furies to "the Eumenides" which means "the Gracious Ones". Athena then ultimately rules that all trials must henceforth be settled in court rather than being carried out personally. Proteus Proteus (, Prōteus), the satyr play which originally followed the first three plays of The Oresteia, is lost except for a two-line fragment preserved by Athenaeus. 
However, it is widely believed to have been based on the story told in Book IV of Homer's Odyssey, where Menelaus, Agamemnon's brother, attempts to return home from Troy and finds himself on an island off Egypt, "whither he seems to have been carried by the storm described in Agam.674". The title character, "the deathless Egyptian Proteus", the Old Man of the Sea, is described in Homer as having been visited by Menelaus seeking to learn his future. In the process, Proteus tells Menelaus of the death of Agamemnon at the hands of Aegisthus as well as the fates of Ajax the Lesser and Odysseus at sea; and is compelled to tell Menelaus how to reach home from the island of Pharos. "The satyrs who may have found themselves on the island as a result of shipwreck . . . perhaps gave assistance to Menelaus and escaped with him, though he may have had difficulty in ensuring that they keep their hands off Helen" The only extant fragment that has been definitively attributed to Proteus was translated by Herbert Weir Smyth as "A wretched piteous dove, in quest of food, dashed amid the winnowing-fans, its breast broken in twain." In 2002, Theatre Kingston mounted a production of The Oresteia and included a new reconstruction of Proteus based on the episode in The Odyssey and loosely arranged according to the structure of extant satyr plays. Analysis of themes In this trilogy there are multiple themes carried through all three plays. Other themes can be found and in one, or two, of the three plays, but are not applicable to the Trilogy as a whole and thus are not considered themes of the trilogy. Justice through retaliation Retaliation is seen in the Oresteia in a slippery slope form, occurring subsequently after the actions of one character to another. In the first play Agamemnon, it is mentioned how in order to shift the wind for his voyage to Troy, Agamemnon had to sacrifice his innocent daughter Iphigenia. This then caused Clytemnestra pain and eventually anger which resulted in her plotting revenge on Agamemnon. Therefore, she found a new lover Aegisthus. And when Agamemnon returned to Argos from the Trojan War, Clytemnestra killed him by stabbing him in the bathtub and would eventually inherit his throne. The death of Agamemnon thus sparks anger in Orestes and Electra and this causes them to now plot the death of their mother Clytemnestra in the next play Libation Bearers, which would be considered matricide. Through much pressure from Electra and his cousin Pylades Orestes eventually kills his mother Clytemnestra and her lover Aegisthus in "The Libation Bearers". Now after committing the matricide, Orestes is being hunted down by the Furies in the third play "The Eumenides", who wish to exact vengeance on him for this crime. And even after he gets away from them Clytemnestra's spirit comes back to rally them again so that they can kill Orestes and obtain vengeance for her. However this cycle of non-stop retaliation comes to a stop near the end of The Eumenides when Athena decides to introduce a new legal system for dealing out justice. Justice through the law This part of the theme of 'justice' in The Oresteia is seen really only in The Eumenides, however its presence still marks the shift in themes. After Orestes begged Athena for deliverance from 'the Erinyes,' she granted him his request in the form of a trial. 
It is important that Athena did not just forgive Orestes and forbid the Furies from chasing him, she intended to put him to a trial and find a just answer to the question regarding his innocence. This is the first example of proper litigation in the trilogy and illuminates the change from emotional retaliation to civilized decisions regarding alleged crimes. Instead of allowing the Furies to torture Orestes, she decided that she would have both the Furies and Orestes plead their case before she decided on the verdict. In addition, Athena set up the ground rules for how the verdict would be decided so that everything would be dealt with fairly. By Athena creating this blueprint the future of revenge-killings and the merciless hunting of the Furies would be eliminated from Greece. Once the trial concluded, Athena proclaimed the innocence of Orestes and he was set free from the Furies. The cycle of murder and revenge had come to an end while the foundation for future litigation had been laid. Aeschylus, through his jury trial, was able to create and maintain a social commentary about the limitations of revenge crimes and reiterate the importance of trials. The Oresteia, as a whole, stands as a representation of the evolution of justice in Ancient Greece. Revenge The theme of revenge plays a large role in the Oresteia. It is easily seen as a principal motivator of the actions of almost all of the characters. It all starts in Agamemnon with Clytemnestra, who murders her husband, Agamemnon, in order to obtain vengeance for his sacrificing of their daughter, Iphigenia. The death of Cassandra, the princess of Troy, taken captive by Agamemnon in order to fill a place as a concubine, can also be seen as an act of revenge for taking another woman as well as the life of Iphigenia. Later on, in The Libation Bearers, Orestes and Electra, siblings as well as the other children of Agamemnon and Clytemnestra, plot to kill their mother and succeed in doing so due to their desire to avenge their father's death. The Eumenides is the last play in which the Furies, who are in fact the goddesses of vengeance, seek to take revenge on Orestes for the murder of his mother. It is also in this part of the trilogy that it is discovered that the god Apollo played a part in the act of vengeance toward Clytemnestra through Orestes. The cycle of revenge seems to be broken when Orestes is not killed by the Furies, but is instead allowed to be set free and deemed innocent by the goddess Athena. The entirety of the play's plot is dependent upon the theme of revenge, as it is the cause of almost all of the effects within the play. Relation to the Curse of the House of Atreus The House of Atreus began with Tantalus, son of Zeus, who murdered his son, Pelops, and attempted to feed him to the gods. The gods, however, were not tricked and banished Tantalus to the Underworld and brought his son back to life. Later in life Pelops and his family line were cursed by Myrtilus, a son of Hermes, catalyzing the curse of the House of Atreus. Pelops had two children, Atreus and Thyestes, who are said to have killed their half-brother Chrysippus, and were therefore banished. Thyestes and Aerope, Atreus’ wife, were found out to be having an affair, and in an act of vengeance, Atreus murdered his brother's sons, cooked them, and then fed them to Thyestes. Thyestes had a son with his daughter and named him Aegisthus, who went on to kill Atreus. Atreus’ children were Agamemnon, Menelaus, and Anaxibia. 
Leading up to here, we can see that the curse of the House of Atreus was one forged from murder, incest and deceit, and continued in this way for generations through the family line. To put it simply, the curse demands blood for blood, a never ending cycle of murder within the family. Those who join the family seem to play a part in the curse as well, as seen in Clytemnestra when she murders her husband Agamemnon, in revenge for sacrificing their daughter, Iphigenia. Orestes, goaded by his sister Electra, murders Clytemnestra in order to exact revenge for her killing his father. Orestes is said to be the end of the curse of the House of Atreus. The curse holds a major part in the Oresteia and is mentioned in it multiple times, showing that many of the characters are very aware of the curse's existence. Aeschylus was able to use the curse in his play as an ideal formulation of tragedy in his writing. Contemporary background Some scholars believe that the trilogy is influenced by contemporary political developments in Athens. A few years previously, legislation sponsored by the democratic reformer Ephialtes had stripped the court of the Areopagus, hitherto one of the most powerful vehicles of upper-class political power, of all of its functions except some minor religious duties and the authority to try homicide cases; by having his story being resolved by a judgement of the Areopagus, Aeschylus may be expressing his approval of this reform. It may also be significant that Aeschylus makes Agamemnon lord of Argos, where Homer puts his house, instead of his nearby capitol Mycenae, since about this time Athens had entered into an alliance with Argos. Adaptations Key British productions In 1981, Sir Peter Hall directed Tony Harrison's adaptation of the trilogy in masks in London's Royal National Theatre, with music by Harrison Birtwistle and stage design by Jocelyn Herbert. In 1999, Katie Mitchell followed him at the same venue (though in the Cottesloe Theatre, where Hall had directed in the Olivier Theatre) with a production which used Ted Hughes' translation. In 2015, Robert Icke's production of his own adaptation was a sold out hit at the Almeida Theatre and was transferred that same year to the West End's Trafalgar Studios. Two other productions happened in the UK that year, in Manchester and at Shakespeare's Globe. The following year, in 2016, playwright Zinnie Harris premiered her adaptation, This Restless House, at the Citizen's Theatre to five-star critical acclaim. Other adaptations 1895: Composer Sergei Taneyev adapted the trilogy into his own operatic trilogy of the same name, which was premiered in 1895. 1965-66: Composer Iannis Xenakis adapted vocal work for chorus and 12 instruments. 1967: Composer Felix Werder adapted Agamemnon as an opera. 1969: The Spaghetti Western The Forgotten Pistolero, is based on the myth and set in Mexico following the Second Mexican Empire. Ferdinando Baldi, who directed the film, was also a professor of classical literature who specialized in Greek tragedy. 1974: Rush Rehm's translation of the trilogy was staged at The Pram Factory in Melbourne. 2008: Theatre professor Ethan Sinnott directed an ASL adaptation of Agamemnon. 2008: Dominic Allen and James Wilkes, The Oresteia, for Belt Up Theatre Company. 2009: Anne Carson's An Oresteia, an adaptation featuring episodes from the Oresteia from three different playwrights: Aeschylus' Agamemnon, Sophocles' Electra, and Euripides' Orestes. 2009: Yael Farber's Molora, a South African adaptation of the Oresteia. 
2019: Playwright Ellen McLaughlin and director Michael Khan's The Oresteia, premiered on April 30, 2019 at the Shakespeare Theatre Company, Washington, DC. The adaptation was shown as a digital production by Theater for a New Audience in New York City during the COVID-19 Pandemic and was directed by Andrew Watkins. Translations Thomas Medwin and Percy Bysshe Shelley, 1832–1834 – verse (Pagan Press reprint 2011) Anna Swanwick, 1886 – verse: full text Robert Browning, 1889 – verse: Agamemnon Arthur S. Way, 1906 – verse John Stuart Blackie, 1906 – verse Edmund Doidge Anderson Morshead, 1909 – verse: full text Herbert Weir Smyth, Aeschylus, Loeb Classical Library, 2 vols. Greek text with facing translations, 1922 – prose Agamemnon Libation Bearers Eumenides Gilbert Murray, 1925 – verse Agamemnon, Libation Bearers Louis MacNeice, 1936 – verse Agamemnon Edith Hamilton, 1937, Three Greek Plays: Prometheus Bound, Agamemnon, The Trojan Women Richmond Lattimore, 1953 – "verse" F. L. Lucas, 1954 – verse Agamemnon Robert A. Johnston, 1955 – verse, an "acting version" Philip Vellacott, 1956 – verse Paul Roche, 1963 – verse Peter Arnott, 1964 – verse George Thomson, 1965 – verse Howard Rubenstein, 1965 – verse Agamemnon Hugh Lloyd-Jones, 1970 – verse Rush Rehm, 1978 – verse, for the stage Robert Fagles, 1975 – verse Robert Lowell, 1977 – verse Tony Harrison, 1981 – verse David Grene and Wendy Doniger O'Flaherty, 1989 – verse Peter Meineck, 1998 – verse Ted Hughes, 1999 – verse Ian C. Johnston, 2002 – verse: full text George Theodoridis, Agamemnon, Choephori, Eumenides 2005–2007 – prose Alan Sommerstein, Aeschylus, Loeb Classical Library, 3 vols. Greek text with facing translations, 2008 Peter Arcese, 2010 – Agamemnon, in syllabic verse Sarah Ruden , 2016 – verse David Mulroy, 2018 – verse Oliver Taplin, 2018 – verse Jeffrey Scott Bernstein and Tom Phillips (illustrator), 2020 – verse See also The Oresteia in the arts and popular culture Mourning Becomes Electra – a modernized version of the story by Eugene O'Neill, who shifts the action to the American Civil War The Flies – an adaptation of the Libation-Bearers by Jean-Paul Sartre, which focuses on human freedom Live by the sword, die by the sword – a line from the trilogy Citations General references MacLeod, C. W. (1982). "Politics and the Oresteia. The Journal of Hellenic Studies, vol. 102. . . pp. 124–144. Further reading Barbara Goward (2005). Aeschylus: Agamemnon. Duckworth Companions to Greek and Roman Tragedy. London: Duckworth. . External links See the triumphant ending of The Oresteia. MacMillan Films staging 2014. 5 minutes. BBC audio file. The Oresteia discussion on the BBC Radio 4 programme In Our Time. 45 minutes. La Tragedie d'Oreste et Electre: Album by British band Cranes which is a musical adaptation of Jean-Paul Sartre's The Flies. Oresteia (2011): an avant-garde work inspired by Aeschylus' trilogy, written and directed by Jonathan Vandenberg. Athens in fiction Libation Literary trilogies Mythology of Argolis Plays by Aeschylus Plays set in ancient Greece Trojan War literature Agamemnon Plays based on classical mythology
3747039
https://en.wikipedia.org/wiki/Roland%20MC-909
Roland MC-909
The discontinued Roland MC-909 Sampling Groovebox combines the features of a synthesizer, sequencer, and sampler, with extensive hands-on control of both the sound engine and the sequencing flow. It was intended primarily for live performance of pre-programmed patterns consisting of up to 16 tracks of MIDI data. It was released by Roland Corporation on October 8, 2002, and was announced at the AES Fall Convention that year. It is the direct successor to the Roland MC-505 and the predecessor to the Roland MC-808, the last model in Roland's original Groovebox line, which had begun in 1996 with the original Roland MC-303 and ended by around 2010. The Groovebox line saw a resurgence in 2019 with two new, redesigned models, the Roland MC-707 and MC-101. The MC-909 was developed from the blueprint of Roland's own Fantom-S and Fantom-X workstations and uses the same structure and operating system, with some differences regarding the Patterns section, which is not implemented in the Fantom S/X6/X7/X8 workstations. Sound generation The MC-909 has a ROM-based sound generator (sometimes referred to as a rompler). Its patches are built from up to four tones, which are based on waves stored in the machine; patches can also utilize user-created samples. Roland's literature states that the MC-909 has "new-generation XV synthesis"; its synth engine is very similar to that of the XV-5050 64-voice synthesizer module. The number of PCM waveforms is 693, ranging from vintage synths to strings, drums, guitars and pianos. The wave set can be expanded by adding one SRX expansion card from the 12 cards available; the SRX-05 "Supreme Dance" card additionally provides special patches that can only be accessed on the MC-909. The MC-909 is always in sequencer mode, as opposed to other workstations that also have a simple Voice or Combination mode for straight playing. Straight playing via an external keyboard is nevertheless possible directly from the sequencer mode by selecting one of the 16 tracks (parts) where a patch (voice, sound) is stored. In this case the MC-909 performs as a regular 16-part multitimbral sound module that happens to have a sequencer, and it can be used as a very capable sound module without ever using its sequencer. The MC-909 is the first Roland groovebox to feature a sampler. It can record audio from the external audio inputs or S/PDIF connectors, or import WAV and AIFF files from a computer over USB. Sample memory can be upgraded to a total of 272 MB of RAM (16 MB of user memory plus a 256 MB PC-100 or PC-133 168-pin DIMM module), and samples can also be stored on a 128 MB 3.3-volt SmartMedia card. The unit can store to two 128 MB SmartMedia cards if its user memory holds more than 256 MB of data. User forums have also documented ways to go beyond this limitation by using xD-Picture Cards as additional storage. Sequencer The MC-909's sequencer is based on pattern composition. Each pattern has 16 tracks (parts) and can have up to 999 measures (bars). The "pattern" in the groovebox concept as developed by Roland (and since adopted by other manufacturers) is intended to be a short musical phrase of 4 to 16 bars made up of 8 to 16 tracks. 
The chaining of several patterns together (with seamless passage between one another) will create a full song, or the patterns can be looped as desired and manipulated using the on-board real-time controls. While this description fits the older MC-303 and MC-505 more closely, the MC-909's sequencer has a much larger capacity, with patterns of almost 1,000 bars across 16 tracks, so a single pattern can hold a complete song with arrangements and drum styles. Each of the 16 parts (tracks) is set to a specific patch, with its own mixer settings (pan, volume, key, effect, routing, and so on). There are a variety of editing modes: the main modes allow real-time recording, step recording and TR-REC recording. In step recording, notes or chords can be added one at a time. In TR-REC mode, each of the 16 pads represents a point along a musical measure, which speeds up the entry of percussion tracks. Patterns can be strung together into "songs", although the label is somewhat misleading: a song is merely a chain of patterns played in a specific order, and Song mode offers no recording or sequencing capability beyond pattern chaining and some playback settings. The sequencer can load Standard MIDI Files (albeit with some workarounds to avoid loading bugs that were never fixed) and play them back. The sequencer can also include samples stored in its memory in the pattern tracks. Performance The MC-909 includes a number of features for real-time performance. These include: Muting/unmuting parts Adjusting the level of parts Turning effects on and off Adjusting the tempo Playing notes Adjusting various parameters such as filter cutoff Playing phrases on the fly (RPS mode) Using the arpeggiator on the fly (with editable ARP presets) Twin D-Beam with 4 modes (incl. assignable mode) Features Sound generator with 64-voice polyphony 16-track sequencer plus tempo/mute control track 16 MB sample memory (expandable to 272 MB max.) SmartMedia card handling (8–128 MB) to store audio and MIDI files and backups (16 MB) Effects generator (24-bit reverb, two multi-effects processors, compression/EQ and mastering effects) Large LCD screen Expandable with SRX-series wave expansion boards and SmartMedia cards (SMF, WAV) USB port for MIDI, full-duplex data transfer and remote editing (Groovemanager) S/PDIF input and output plus coaxial digital I/O Line in with selectable sources (line in, mic) for sampling and re-sampling Dual D-Beam controllers (solo synth, cutoff+resonance, turntable, assignable) Turntable emulation (pitch control, BPM control, hold, push) Velocity-sensitive pads V-LINK connecting audio and video in performance Comparison to the MC-505 The following features are not available on the MC-505: Sampling ability. A very large editing LCD screen. A second D-Beam controller. Two extra filter modes. Stereo waveforms for patches. Matrix Control, Random Modify and Fat controls. A mastering stage that features three-band compression and equalization. Turntable emulation. Morphing LFO waveforms. Sample "machine gun" feature. Velocity-sensitive pads. Fully editable arpeggios. Chord memory. SMF loading. The following features are available on the MC-505 but not on the MC-909: The MEGAMix function. The Portamento knob. The Groove knob (the groove/quantize functions are, however, preserved under an editing menu). The ability to control the individual amount of delay assigned to each part. The Ad-Lib function of the D-Beam. 
Most MC-909 users agree that both machines are different enough to justify keeping both and integrating them using a MIDI cable. Users The Roland MC-909 was used by the hip hop producer RZA while working on the movies Blade: Trinity and Kill Bill. RZA uses many Roland products, including the Roland Fantom, MV-8000 and MV-8800. Another artist who uses the Roland MC-909 is Switchfoot keyboardist Jerome Fontamillas, for live setups. Criticisms The Roland MC-909 received good reviews in technology magazines such as Future Music and Sound on Sound. However, it faced serious competition from the equally powerful Yamaha RS7000. Many MC-909 users complained about several operating system bugs at the Yahoo! Groups forum and at Roland Clan Forums. Although the machine was released in 2002, it took Roland Corporation five years until some of the more complex bugs (like the inability to store RPS patterns) were fixed in the operating system upgrade v1.23 in early 2007. Another common complaint concerns the unit's size, which makes it less portable than a laptop with a MIDI controller. The unit was designed with only a two-prong power inlet, without a ground connection; hence, there have been complaints of light electrical discharges from its metallic body when it is handled with less-than-dry hands. Further criticism pointed out the uneven volume ranges of its voices, waveforms and sounds. Additionally, critics noted that, for a machine aimed at the dance/techno/electronica market, the sound engine was excessively rich in sounds from ethnic, classical and band instruments. The sampler, although powerful, has a very complex access-and-editing route and lacks the ability to set keyboard ranges for different samples, making it difficult to create realistic sounds from a set of multisamples. There is, however, a work-around for this via an external editor for PC and Mac, MC-909 Editor Update v3.1, which is freely available for download. The inputs, used for either sampling or sound processing of an external sound source, are routed through the effects engine and heard at the outputs in real time. However, re-sampling is necessary in order for the sample itself to contain the effects. Following any re-sampling, the sample playback can be further re-sampled or processed by more effects at the outputs during playback. The Roland MC-909 is no longer in production by Roland Corporation and has consequently come to be regarded as a cult item, like the Mellotron or the TB-303. While no longer in production, the MC-909 can be bought second-hand at places like eBay, with a typical second-hand purchase price, as of 2021, of around US$1000. The original MSRP set by Roland in 2002 was US$1,795.99. Unsolved bugs Several operating system bugs were gradually solved over time; the last operating system upgrade (Version 1.23) was introduced on March 30, 2007. The remaining unsolved problems are: It is not possible to load a Standard MIDI File or other sequence file from a card to internal memory onto an existing pattern without initializing all the existing pattern's settings back to the basic format. In order to load an SMF onto a pattern and retain that pattern's settings, the SMF must first be loaded into an unused or empty location within internal memory, then moved to the desired location with the Copy function. Many users complain that there is no sustain in Song Mode, resulting in a noticeable audio gap between patterns. 
This is most noticeable at slower tempos. (Although it is a different machine, see also: Roland MC-808#Unresolved issues.) Even when a part is set to EXT, the internal sound engine will continue to produce sounds. This can be worked around by assigning a blank patch to that part; however, it is not the expected behaviour described in the manual. Entering the MENU (e.g. for a USB transfer) resets the MC-909 to Preset Pattern 1 on leaving the menu. Tracks 6 and 13 will not mute/unmute when pressed at the same time (a problem introduced in v1.22). There are also unresolved issues with the implementation of effects when switching patterns, although Roland considers the MC-909 to work as designed; Roland has indicated that end users' requests for effects that sustain past the point at which the pattern switches may be addressed in a future product. The last operating system upgrade (Version 1.23) and the releases leading up to it corrected several issues: RPS Setting and RPS Mixer edits not being saved by the WRITE operation; problems with the BMP import and Realtime Erase functions; strengthened song editing; improved sounds and effects; and stable tuning for various countries. (Translated from Roland Corporation Japan's MC-909 version history list.) Roland MC-909 SYSTEM PROGRAM OS (Ver.1.03 → Ver.1.23) - HISTORY LIST How to find the version: 1. Press the [MENU] button. 2. With "System" highlighted on the display, press the [ENTER] button. 3. Press the [F6 (SystemInfo)] button, then the [F4 (Version)] button (*); the display shows the installed version as "Ver.*.**". (*) In versions earlier than Ver.1.04, [F4 (Version)] does not appear; please upgrade. Changes: [→ Ver.1.23] Fixed a problem in which edits made in the RPS Setting and RPS Mixer screens were not saved by the WRITE operation. [→ Ver.1.22] Fixed the BMP import feature and the Realtime Erase function. Stabilized tuning for various countries. [→ Ver.1.20] New built-in preset patterns, patches and rhythm sets were added (patterns: 118, patches: 117, rhythm sets: 17, sample waves: 9). Special patches usable only on the MC-909 were added for the SRX series expansion boards: SRX-01: 7 rhythm sets; SRX-02: 10 patches; SRX-03: 43 patches, 11 rhythm sets; SRX-04: 30 patches; SRX-05: 72 patches, 10 rhythm sets (included since MC-909 Ver.1.0); SRX-06: 24 patches, 4 rhythm sets; SRX-07: 47 patches, 18 rhythm sets; SRX-08: 22 patches, 38 rhythm sets; SRX-09: 82 patches, 23 rhythm sets; SRX-10: 24 patches. The song editing function was strengthened, and a song mixer setup screen was added, making it simple to change the level, pan, key shift, mute and tempo of each step. A "machine gun" feature was added to the sampler: turning the matrix control knob plays short, high-speed sample loops that can be used for machine-gun-like effects. Fixed a problem in which a negative Loop Tune setting was loaded and saved as 0. [→ Ver.1.13] Improved a problem in which the MUTE/CONTROL PART buttons could not be switched on and off during playback. Fixed an incorrect value range when changing a patch's LFO DEPTH with the control knob during editing. Fixed instability that could occur when playing patterns while handling SysEx data, including during editing. [→ Ver.1.11] Fixed a timing problem that occurred when an empty song was played continuously after the selected song. Fixed a problem in which the Reverb Type list in the Effects Routing window was not updated after changes. Fixed a problem in which notes could be left ringing when entering a TIE on the Arpeggio Edit screen. 
Fixed a problem in which setup parameters that had been changed reverted when a pattern was selected with PATTERN CALL during playback. [→ Ver.1.10] Added a screen for checking the current version. The pattern that is selected when the power is first turned on can now be chosen. The output for the metronome sound can now be chosen from MIX/DIR1/DIR2. RPS settings can now be modified in real time. A [Replace] option was added to the real-time recording choices. A [Default] option was added to the Part Mute function. In pattern editing, the parts to be edited can now be selected. An [Extract a Rhythm Instrument] operation was added to pattern editing. The pattern edit [ERASE] and [COPY] operations now allow selection by CC number. Samples can now be selected for [LOAD] and [ERASE] from the sample list and the file utility. [Create Rhythm] can now be performed from samples. [Step REC] was added to arpeggio editing. A function parameter list was added to patch editing. The V-LINK feature now supports the Edirol V-4. The maximum number of notes per pattern was extended from about 12,000 to about 30,000, and the maximum number of notes in the entire memory was changed to 1,300,000. The measure of the pattern currently playing is now shown on the mixer screen. Samples split with the chop function can now be stored as a number of consecutive samples. When saving a rhythm set created in this way, the memory card area can be selected. A status display for the user area and memory card was added. The memory usage display on the file utility screen was changed. A master level meter was added to the screen. The [SIZE] column of the sample list now describes samples in KB (kilobytes) rather than by the previous convention. [→ Ver.1.04] Parameters for adjusting the D-Beam controller were added. Copying of rhythm tone sets was modified to work correctly. Fixed an area of TR-REC in which notes could not be entered at particular scale settings. [→ Ver.1.03 Bld. 0106] DATE 11/01/2002 16:46 - ORIGINAL EARLIER OS or PROTOTYPE OS! References Further reading External links MC-909 PDF Manual Links: Online Roland MC-909 PDF Quick Start Manual Location Online Roland MC-909 PDF Owners Manual Location Online Roland MC-909 PDF Addendum Manual Location Online Roland MC-909 PDF Midi Implementation Location Online Roland MC-909 PDF Patch/Performance List Location Online Roland MC-909 TurboStart Location Online Support Documents: Roland MC-909 "Getting Started Guide - July 29, 2003" Location Online Support Documents: Roland MC-909 new features added Version 1.10 Location Online Roland MC-909 Product Brochure Location - Sep. ’02 RAM-3619 C-4 ERK-UPR-SE Other links: Roland MC-909 OS/System Program - Version 1.23 (Download From Roland Corporation World HQ). (Translated) Roland JP Corporation, History list on MC-909 system program (Ver.1.23) site and files. Main Roland Corporation World HQ, MC-909 site and files. Roland UK Limited, MC-909 site and files. Roland US Corporation, MC-909 site and files. ROLAND MC-909 EDITOR WINDOWS OS VERSION 1.22, Support for the SRX-11 and the SRX-12 expansion boards. ROLAND MC-909 EDITOR MAC OS X Tiger VERSION 1.22, Support for the SRX-11 and the SRX-12 expansion boards. Roland MC-909 Product Demo Module - Version 1.1 Roland MC-909 Product Interactive Tour Demo Roland MC-909 Product Brochure - Sept. 2002 RAM-3619 French forum web site www.MC909.org translation in English. Roland Clan Forums - Groove Zone. Roland MC-909's discussion Group Forum, at Yahoo! Groups. FutureProducers.com - 'musicians learning from musicians' - Roland MC-909 Forum. 
Harmony Central - 'Leading Internet resource for musicians, supplying valuable information from news and product reviews' - Roland MC-909 KEYBOARD Magazine Articles - KEYBOARD Reports Roland MC-909 SAMPLING GROOVEBOX By Ken Hughes : April 2003 REMIX Magazine Articles - Roland MC-909 By JOE SILVA : June 1, 2003 Making Tracks Magazine Articles - MC-909 By Kent Carmical : July 2003 NZ Musician Magazine Articles - DJ Tools: Roland MC-909 Sampling Groovebox By Chris Macro : June/July 2003 Electronic Musician Magazine Articles - ROLAND MC-909 By Jim Aikin : Sept. 1, 2003 Roland MC-909 Tip/Trick: using a H256 MB xD card via a xD to SmartMedia adaptor - from this site. Site to download MC-909 Sample Editor/RPS Editor - Applications Tool. (PC Software and Utility). Roland Clan Forums - Groove Zone "Info on connecting both Roland MC-909 and Roland MC-505 with a (MIDI Solutions Event Processor)". MIDI Solutions Site - MIDI Solutions Event Processor (Hardware) used in connecting both Roland MC-909 and Roland MC-505 together. "Roland MC-909 DVD Owner's Manual" - by: ProAudioEXP.com - (Roland MC-909 DVD Video Tutorial Owner's Manual Help Instruction Training). "Roland MC-909 DVD Training Tutorial" - by: SampleKings.com - (Roland MC-909 DVD Part-1 & Part-2 Video Instruction Training Tutorial). YouTube.com - Lists of Random Videos on Roland MC-909. MC-909 D-Beam Grooveboxes Japanese inventions
7163043
https://en.wikipedia.org/wiki/Computer%20Programs%20Directive
Computer Programs Directive
The European Union Computer Programs Directive controls the legal protection of computer programs under the copyright law of the European Union. It was issued under the internal market provisions of the Treaty of Rome. The most recent version is Directive 2009/24/EC. History In Europe, the need to foster the computer software industry brought attention to the lack of adequate harmonisation among the copyright laws of the various EU nations with respect to such software. Economic pressure spurred the development of the first directive which had two goals (1) the harmonisation of the law and (2) dealing with the problems caused by the need for interoperability. The first EU Directive on the legal protection of computer programs was Council Directive 91/250/EEC of 14 May 1991. It required (Art. 1) that computer programs and any associated design material be protected under copyright as literary works within the sense of the Berne Convention for the Protection of Literary and Artistic Works. The Directive also defined the copyright protection to be applied to computer programs: the owner of the copyright has the exclusive right to authorise (Art 4): the temporary or permanent copying of the program, including any copying which may be necessary to load, view or run the program; the translation, adaptation or other alteration to the program; the distribution of the program to the public by any means, including rental, subject to the first-sale doctrine. However, these rights are subject to certain limitations (Art. 5). The legal owner of a program is assumed to have a licence to create any copies necessary to use the program and to alter the program within its intended purpose (e.g. for error correction). The legal owner may also make a back-up copy for his or her personal use. The program may also be decompiled if this is necessary to ensure it operates with another program or device (Art. 6), but the results of the decompilation may not be used for any other purpose without infringing the copyright in the program. The duration of the copyright was originally fixed at the life of the author plus fifty years (Art. 8), in accordance with the Berne Convention standard for literary works (Art. 7.1 Berne Convention). This has since been prolonged to the life of the author plus seventy years by the 1993 Copyright Duration Directive (superseded but confirmed by the 2006 Copyright Term Directive). Council Directive 91/250/EEC was formally replaced by Directive 2009/24/EC on 25 May 2009, which consolidated "the various minor amendments the original directive had received over the years". Implementation See also Copyright law of the European Union Software copyright References External links Text of the original directive on the legal protection of computer programs (no longer in force) Consolidated version of the directive (1993-11-19) no longer in force Report from the Commission to the Council, the European Parliament and the Economic and Social Committee on the implementation and effects of directive 91/250/EEC on the legal protection of computer programs, (2000-04-10) Directive 2009/24/EC of the European Parliament and of the Council of 23 April 2009 on the legal protection of computer programs current directive, in force Copyright law of the European Union European Union directives Copyright legislation 1991 in law 1991 in the European Economic Community
51708977
https://en.wikipedia.org/wiki/Nextcloud
Nextcloud
Nextcloud is a suite of client-server software for creating and using file hosting services. It is designed for enterprise use, with commercial support options available. Because it is free and open-source software, anyone may install and operate it on their own private server devices. Nextcloud is functionally similar to Dropbox, Office 365 or Google Drive when used with its integrated office suite solutions Collabora Online or OnlyOffice. It can be hosted in the cloud or on-premises. It is scalable from home office solutions based on the low-cost Raspberry Pi all the way through to full-sized data centre solutions that support millions of users. The original ownCloud developer Frank Karlitschek forked ownCloud and created Nextcloud, which continues to be actively developed by Karlitschek and other members of the original ownCloud team. Features Nextcloud files are stored in conventional directory structures, accessible via WebDAV if necessary. User files are encrypted in transit and optionally at rest. Nextcloud can synchronise with local clients running Windows (Windows 7, 8, and 10), macOS (10.6 or later), or various Linux distributions. Nextcloud permits user and group administration (via OpenID or LDAP). Content can be shared by defining granular read/write permissions between users and groups. Alternatively, Nextcloud users can create public URLs when sharing files. Logging of file-related actions, as well as disallowing access based on file-access rules, is also available. Planned new features include monitoring capabilities, full-text search and Kerberos authentication, as well as audio/video conferencing, expanded federation and smaller user interface improvements. Since the software is modular, it can be extended with plugins to implement extra functionality. Developers can offer their extensions to other users for installation via a manufacturer-operated platform. This platform communicates with the Nextcloud instances via an open protocol. The App Store contains over 200 extensions. With the help of these extensions, many functionalities can be added, including: Calendars (CalDAV) Contacts (CardDAV) Streaming media (Ampache) Browser-based text editor Bookmarking service URL shortening suite Gallery RSS feed reader Document viewer tools from within Nextcloud Connection to Dropbox, Google Drive and Amazon S3 Web analytics (using Matomo) Integration of content management systems, e.g. Pico CMS Viewer for weather forecasting Viewer for DICOM Viewer for maps Managing of cooking recipes On January 17, 2020, version 18 was presented in Berlin under the product name Nextcloud Hub. For the first time, an office package (OnlyOffice) was directly integrated, and Nextcloud announced its goal of competing directly with Microsoft Office 365 and Google Docs. A partnership with Ionos was also announced on the same date. Office functionality works on x86/x64 and ARM64-based servers with Collabora Online; OnlyOffice currently does not support ARM. In contrast to proprietary services, the open architecture enables users to have full control of their data. Architecture In order for desktop machines to synchronize files with their Nextcloud server, desktop clients are available for PCs running Windows, macOS, FreeBSD or Linux. Mobile clients exist for iOS and Android devices. Files and other data (such as calendars, contacts or bookmarks) can also be accessed, managed, and uploaded using a web browser without any additional software. 
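Because files are exposed over WebDAV (as noted under Features above), they can also be reached from scripts and third-party tools without the official clients. The following is a minimal sketch rather than an official example: the server URL, user name and app password are hypothetical placeholders, while /remote.php/dav/files/<user>/ is the WebDAV endpoint Nextcloud documents for a user's files.

    # Minimal sketch of WebDAV file access against a Nextcloud server.
    # The server URL, user name and app password below are hypothetical
    # placeholders; /remote.php/dav/files/<user>/ is Nextcloud's documented
    # WebDAV endpoint for a user's files.
    import requests

    BASE = "https://cloud.example.com/remote.php/dav/files/alice/"
    AUTH = ("alice", "app-password-generated-in-security-settings")

    # Upload (or overwrite) a file in the user's root folder.
    with open("notes.txt", "rb") as fh:
        requests.put(BASE + "notes.txt", data=fh, auth=AUTH).raise_for_status()

    # List the folder contents with a WebDAV PROPFIND request (Depth: 1).
    listing = requests.request("PROPFIND", BASE, auth=AUTH, headers={"Depth": "1"})
    listing.raise_for_status()   # 207 Multi-Status indicates success
    print(listing.text[:500])    # XML listing of files and folders

Any WebDAV-capable client, such as a desktop file manager or curl, can use the same endpoint, which is what allows platforms without an official sync client to interoperate with a Nextcloud server.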
Any updates to the file system are pushed to all computers and mobile devices connected to a user's account. The Nextcloud server is written in the PHP and JavaScript scripting languages. For remote access, it employs sabre/dav, an open-source WebDAV server. Nextcloud is designed to work with several database management systems, including SQLite, MariaDB, MySQL, Oracle Database, and PostgreSQL. With Nextcloud 12, a new architecture was developed with the name Global Scale, with the goal of scaling to hundreds of millions of users. It splits users over separate nodes and introduces components to manage the interaction between them. Nextcloud Box In September 2016, Nextcloud, in cooperation with Western Digital Labs and Canonical (the company behind Ubuntu), released the Nextcloud Box. The Nextcloud box was based on a Raspberry Pi, running Ubuntu Core with Snappy; it was intended to serve as a reference device for other vendors. In June 2017, Western Digital shut down Western Digital Labs, which caused the production of the box to end. History of the fork from ownCloud In April 2016 Karlitschek and most core contributors left ownCloud Inc. These included some of ownCloud's staff according to sources near to the ownCloud community. The fork was preceded by a blog post of Karlitschek, asking questions such as "Who owns the community? Who owns ownCloud itself? And what matters more, short term money or long term responsibility and growth?" There have been no official statements about the reason for the fork. However, Karlitschek mentioned the fork several times in a talk at the 2018 FOSDEM conference, emphasizing cultural mismatch between open source developers and business oriented people not used to the open source community. On June 2, within 12 hours of the announcement of the fork, the American entity "ownCloud Inc." announced that it is shutting down with immediate effect, stating that "[…] main lenders in the US have cancelled our credit. Following American law, we are forced to close the doors of ownCloud, Inc. with immediate effect and terminate the contracts of 8 employees.". ownCloud Inc. accused Karlitschek of poaching developers, while Nextcloud developers such as Arthur Schiwon stated that he "decided to quit because not everything in the ownCloud Inc. company world evolved as I imagined". ownCloud GmbH continued operations, secured financing from new investors and took over the business of the ownCloud Inc. Differences from ownCloud While Nextcloud was originally a fork of the ownCloud project, there are now many differences. For instance, ownCloud offers an open-source community edition, but also offers a proprietary Enterprise Edition with additional features and supports subscriptions—Nextcloud instead uses the same public code base for both free and paid users. Release history Maintenance and release schedule See also Seafile (FOSS client-server software for file storage and transfer) Comparison of file hosting services Comparison of file synchronization software Comparison of online backup services References External links Cloud computing Cloud storage File hosting Free software for cloud computing Free software programmed in JavaScript Free software programmed in PHP Software using the GNU AGPL license
52535079
https://en.wikipedia.org/wiki/OSX.Keydnap
OSX.Keydnap
OSX.Keydnap is a macOS-based Trojan horse that steals passwords from the iCloud Keychain of the infected machine. It uses a dropper to establish a permanent backdoor while exploiting macOS vulnerabilities and abusing features such as Gatekeeper, the iCloud Keychain and the file naming system. It was first detected in early July 2016 by ESET researchers, who also found it being distributed through a compromised version of the Transmission BitTorrent client. Technical details Download and installation OSX.Keydnap is initially downloaded as a Zip archive. This archive contains a single Mach-O file and a resource fork containing an icon for the executable file, typically that of a JPEG or text file. Additionally, the dropper takes advantage of how OS X handles file extensions by putting a space after the extension of the file name, for example "keydnap.jpg " instead of "keydnap.jpg". Commonly seen icons and file names are used to exploit users' willingness to click on benign-looking files. When the file is opened, the Mach-O executable runs by default in the Terminal instead of in an image viewer as the user would expect. This initial execution does three things. First, it downloads and executes the backdoor component. Second, it downloads and opens a decoy document to match what the dropper file is pretending to be. Finally, it quits the Terminal to conceal that it was ever open; the Terminal is visible only momentarily. Establishing the backdoor connection Since the downloader is not persistent, the downloaded backdoor component spawns a process named "icloudsyncd" that runs at all times. It also adds an entry to the user's LaunchAgents directory so that it survives reboots (see the inspection sketch below). The icloudsyncd process is used to communicate with a command-and-control server via an onion.to address, establishing the backdoor. It then attempts to capture passwords from the iCloud Keychain, using the proof-of-concept Keychaindump, and transmits them back to the server. Keychaindump reads securityd's memory and searches for the decryption key for the user's keychain, as described in "Keychain Analysis with Mac OS X Memory Forensics" by K. Lee and H. Koo. Gatekeeper signing workaround OS X uses Gatekeeper to verify whether an application is signed with a valid Apple Developer ID certificate, which prevents OSX.Keydnap from running. Further, even if the user has Gatekeeper turned off, they will see a warning that the file is an application downloaded from the Internet, giving them the option not to execute it. However, when OSX.Keydnap is packaged with a legitimate signing key, as in the case of the compromised Transmission app, it successfully bypasses Gatekeeper protection. Detection and removal Activating Gatekeeper is an easy way to prevent accidental installation of OSX.Keydnap. If the user's Mac has Gatekeeper activated, the malicious file will not be executed and a warning will be displayed to the user. This is because the malicious Mach-O file is unsigned, which automatically triggers a warning in Gatekeeper. References MacOS malware Trojan horses
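The LaunchAgents persistence described above relies on an ordinary launchd property list, so unexpected entries can be spotted by listing the per-user LaunchAgents folder. The following is a minimal inspection sketch, not a removal tool: it assumes Python 3 on macOS, and the ~/Library/LaunchAgents path and the Label/ProgramArguments keys are standard launchd conventions rather than anything specific to Keydnap.

    # Minimal sketch: list per-user launchd agents and the programs they start.
    # Assumes Python 3 on macOS; ~/Library/LaunchAgents and the Label /
    # ProgramArguments keys are standard launchd conventions, not Keydnap-specific.
    import plistlib
    from pathlib import Path

    agents_dir = Path.home() / "Library" / "LaunchAgents"
    if agents_dir.is_dir():
        for plist_path in sorted(agents_dir.glob("*.plist")):
            with open(plist_path, "rb") as fh:
                job = plistlib.load(fh)   # handles XML and binary plists
            label = job.get("Label", "<no label>")
            program = job.get("ProgramArguments") or [job.get("Program", "<none>")]
            print(f"{plist_path.name}: {label} -> {' '.join(map(str, program))}")

Entries that launch unfamiliar binaries, such as a process posing as "icloudsyncd" outside Apple's own directories, would warrant closer inspection; the same check applied to /Library/LaunchAgents and /Library/LaunchDaemons covers system-wide jobs.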
10198528
https://en.wikipedia.org/wiki/Institute%20for%20Information%20Infrastructure%20Protection
Institute for Information Infrastructure Protection
The Institute for Information Infrastructure Protection (I3P) is a consortium of national cyber security institutions, including academic research centers, U.S. federal government laboratories, and nonprofit organizations, all of which have long-standing, widely recognized expertise in cyber security research and development (R&D). The I3P is managed by The George Washington University, which is home to a small administrative staff that oversees and helps direct consortium activities. The I3P coordinates and funds cyber security research related to critical infrastructure protection and hosts high impact workshops that bring together leaders from both the public and private sectors. The I3P brings a multi-disciplinary and multi-institutional perspective to complex and difficult problems, and works collaboratively with stakeholders in seeking solutions. Since its founding in 2002, more than 100 researchers from a wide variety of disciplines and backgrounds have worked together to better understand and mitigate critical risks in the field of cyber security. History The I3P came into existence following several government assessments of the U.S. information infrastructure’s susceptibility to catastrophic failure. The first study, published in 1998 by the United States President's Council of Advisors on Science and Technology (PCAST), recommended that a nongovernmental organization be formed to address national cyber security issues. Subsequent studies–by the Institute for Defense Analyses, as well as a white paper jointly produced by the National Security Council and the Office of Science and Technology Policy, agreed with the PCAST assessment, affirming the need for an organization dedicated to protecting the nation’s critical infrastructures. In 2002, the I3P was founded at Dartmouth College through a grant from the federal government. Martin Wybourne chaired the I3P from 2003 to 2015. Since its inception, the I3P has: coordinated a national cyber security research and development program built informational and research bridges among academic, industrial and government stakeholders developed and delivered technologies to address an array of vulnerabilities Funding for the I3P has come from various sources, including the Department of Homeland Security (DHS), the National Institute of Standards and Technology (NIST) and the National Science Foundation (NSF). Member Institutions The I3P consortium consists of 18 academic research centers, 5 national laboratories, and 3 nonprofit research organizations. Binghamton University Carnegie Mellon University, H. John Heinz III College of Public Policy and Management Carnegie Mellon University, Software Engineering Institute Dartmouth College George Mason University George Washington University Georgia Institute of Technology Idaho National Laboratory Indiana University Johns Hopkins University Lawrence Berkeley National Laboratory MITRE Corporation New York University Oak Ridge National Laboratory Pacific Northwest National Laboratory Purdue University RAND Corporation Sandia National Laboratories SRI International University of California, Berkeley University of California, Davis University of Idaho University of Illinois University of Massachusetts, Amherst University of Tulsa University of Virginia Each member institution appoints a primary and secondary representative to attend regular consortium meetings. 
Research areas 2011-2014 Research Projects Advanced Technological Education The I3P has partnered with the Community College System of New Hampshire (CCSNH) on an educational project, "Cybersecurity in Healthcare Industry: Curriculum Adaptation and Implementation." Funded by the National Science Foundation's (NSF) Advanced Technological Education (ATE) program, the project aims to produce well-qualified technicians to serve the healthcare information technology needs of rural northern New England. Improving CSIRTs The I3P launched a project called "Improving CSIRT Skills, Dynamics, and Effectiveness." This effort, funded by the Department of Homeland Security's Science and Technology Directorate, aims to explore what makes and sustains a good CSIRT. The results should help organizations ensure that their CSIRTs fulfill their maximum potential and become an invaluable tool in securing cyber infrastructure. The interdisciplinary team working on the project includes cyber security and business researchers from Dartmouth College, organizational psychologists from George Mason University, and researchers and practitioners from Hewlett-Packard. Usable Security In April 2011, the I3P convened a NIST-sponsored workshop examining the challenge of integrating security and usability into the design and development of software. One of the several workshop recommendations was the development of case studies to show software developers how usable security has been integrated into an organization's software development process. Consequently, the I3P began a Usable Security Project. Using a uniform study methodology, the project documents usable security in three different organizations. The results are intended to show how key usable security problems were addressed, to teach developers about solutions, and to enable other researchers to perform comparable studies. Information Sharing A central response to the threat of cyber attack on the nation's critical infrastructure has been increased information sharing. Traditionally, agencies store data in databases that are not readily available to others who might benefit from the information, and the Obama administration argued that data must instead be made readily available for sharing, for example in a cloud in which numerous government agencies store information that is accessible to anyone holding the appropriate credentials. This model has substantial benefits but also introduces new risks, and researchers from RAND and the University of Virginia examined those risks in the I3P's Information Sharing Project. 2010-2011 Research Projects Privacy in the Digital Era Researchers from five I3P academic institutions are engaged in a sweeping effort to understand privacy in the digital era. Over the course of 18 months, this research project takes a multidisciplinary look at privacy, examining the roles of human behavior, data exposure, and policy expression in the way people understand and protect their privacy. Leveraging Human Behavior to Reduce Cyber Security Risk This project brings a behavioral-sciences lens to security, examining the interface between human beings and computers through a set of rigorous empirical studies. 
The multi-disciplinary project draws together social scientists and information security professionals to illuminate the intricacies of human perceptions, cognitions, and biases, and how these impact computer security. The project’s goal is to leverage these new insights in a way that produces more secure systems and processes. 2008-2009 Research Projects Better Security Through Risk Pricing I3P researchers on this project have examined ways to quantify cyber risk by exploring the potential for a multi-factor scoring system, analogous to risk scoring in the insurance sector. Overall, the work takes into account the two key determinants of cyber risk: technologies that reduce the likelihood of attack and internal capabilities to respond to successful or potential attacks. 2007-2009 Research Projects Survivability and Recovery of Process Control Systems Research This project builds on an earlier I3P project in control-systems security to develop strategies for enhancing control-system resilience and allowing for rapid recovery in the event of a successful cyber attack. Business Rationale for Cyber Security This project, an offshoot of an earlier study on the economics of security, addresses the challenge of corporate decision-making when it comes to investing in cyber security. It attempted to answer questions such as, “How much is needed?” “How much is enough?” “And how does one measure the return on investment?” The study includes an investigation of investment strategies, including risks and vulnerabilities, supply-chain interdependencies and technological fixes. Safeguarding Digital Identity Multidisciplinary in scope, this project addresses the security of digital identities, emphasizing the development of technical approaches for managing digital identities that also meet political, social and legal needs. The work has focused primarily on the two sectors for which privacy and identity protection are paramount: financial services and healthcare. Insider Threat This project addresses the need to detect, monitor and prevent insider attacks, which can inflict serious harm on an organization. The researchers have undertaken a systematic analysis of insider threat, one that addresses technical challenges but also takes into account ethical, legal and economic dimensions. U.S. Senate Cyber Security Report The I3P delivered a report titled National Cyber Security Research and Development Challenges: An Industry, Academic and Government Perspective, to U.S. Senators Joseph Lieberman and Susan Collins on February 18, 2009. The report reflects the finding of three forums hosted by the I3P in 2008 that brought together high-level experts from industry, government and academia to identify R&D opportunities that would advance cyber security research in the next five to 10 years. The report contains specific recommendations for technology and policy research that reflect the input of the participants and also the concerns of both the public and private sectors. Workshops The I3P connects with and engages with stakeholders through workshops and other outreach activities that are often held in partnership with other organizations. The workshops encompass a range of topics, some directly related to I3P research projects; others that are intended to bring the right people together to probe a particularly difficult foundational challenge, such as secure systems engineering or workforce development. 
Postdoctoral Fellowship Program The I3P sponsored a postdoctoral research fellowship program from 2004-2011 that provides funding for a year of research at an I3P member institution. These competitive awards were granted according to the merit of the proposed work, the extent to which the proposed work explored creative and original concepts, and the potential impact of the topic on the U.S. information infrastructure. Prospective applicants were expected to address a core area of cyber security research, including trustworthy computing, enterprise security management, secure systems engineering, network response and recovery, identity management and forensics, wireless computing and metrics, as well as the legal, policy and economic dimensions of security. References External links Official website Computer security organizations
61144485
https://en.wikipedia.org/wiki/Links%20Extreme
Links Extreme
Links Extreme is a 1999 golf video game developed by Access Software and published by Microsoft for Microsoft Windows. It is the first game in the Links series to be published by Microsoft, which purchased Access Software a month prior to the game's release. Links Extreme features unusual game modes and courses that are not common to the sport of golf. Critics felt that the game's concept was not handled well, and its small selection of two courses was particularly criticized. Gameplay Links Extreme features game modes and courses that are unusual to the sport of golf. Among the four unique game modes is Armadillo Al's Demolition Driving Range, in which the player uses exploding golf balls to hit targets such as armadillos, cows, and hot air balloons. The Extreme Golf mode features 17 pranks which affect a player's golf ball in different ways that can be harmful or beneficial to the player. In the Deathmatch and Poison game modes, the player is given a variety of exploding golf balls that are used to injure and ultimately kill the opponent golfer. The Poison mode differs in that the player can choose between playing the course or attacking the opponent golfer. The game also includes stroke play. The game includes two courses. Mojo Bay is an 18-hole course with a haunted island theme featuring zombies, giant skeletons, swamp monsters, crocodiles, and a pirate ship. Dimension X is a nine-hole course with the theme of a World War I battlefield, including biplanes, damaged buildings, and explosions. Golfers are dressed in Generation X clothing, including baggy jeans and cargo pants. The game features several golf swing methods, including traditional two-click and three-click options. Also featured is a multiplayer mode with options such as modem and LAN play, as well as compatibility with MSN Gaming Zone. Development and release Links Extreme was developed by Access Software, using the same game engine as other Links games at the time. The game was announced in mid-1998, and its release was initially scheduled for October 11 of that year. Chris Jones, executive vice president for Access Software, described the game as "Indiana Jones meets Happy Gilmore on the golf course. It's designed for the golfer who wants to bend the rules, demolish some clubs, but most importantly, win at all costs." Access Software acknowledged that the game was a risky idea, but believed that it would introduce golf to a broader audience, specifically younger gamers. The company stated that the game was not intended for the hardcore fans of the regular Links games. Microsoft purchased Access Software in April 1999, but had little involvement in the game, which was largely finished by that time. Links Extreme was the first Links game to be published by Microsoft. In the United States, the game was released for Microsoft Windows on May 27, 1999. In Australia, the game was published by Sega Ozisoft in mid-July 1999. Reception The game received mixed reviews according to the review aggregation website GameRankings. Critics felt that the game did not push its concept far enough, and that the concept was not handled well. Marc Saltzman of GamePro called it "a great idea gone horribly wrong," and believed that Access Software could have done better at making a creative and fun game out of the concept. Shawn Nicholls of AllGame called the game "a mix of a good idea with poor execution," but believed that it achieved its "off-the-wall" aspect. 
Gordon Goble of CNET Gamecenter considered it an "intriguing alternative golfing concept that didn't translate well," and called it a "dumbed-down" version of Links. Dan Egger of PC Gamer also considered it a good idea, but called the final product a "half-assed attempt to make golf seem like an Aerosmith video". Egger felt that the game went only "halfway toward extremeness," and that "few, if any, boundaries are ever pushed" in the game. He considered it a "hideously unsuccessful" spin-off of the main Links games. PC Accelerator called the game a "Moronic detraction" from the Links series. Jon Dickinson of GameZone wrote that there "could have been a lot more time and effort" put into the game. Some critics stated that the game quickly became boring due to its lack of variety. The limited course selection was particularly criticized. William Abner of Computer Games Strategy Plus praised the Mojo Bay course for being adequately difficult, while Edgar Dupree of IGN considered it superior to Dimension X, which Dupree called "a joke of a course" in comparison. Goble also praised Mojo Bay, and felt that the various background features on Dimension X were "so superimposed and pixelated, it's laughable." He also felt that its World War I theme was out of place, describing it as "more weird than 'extreme.'" Nicholls enjoyed Dimension X over Mojo Bay, writing that it "is so well done, it's a shame it isn't the 18-hole course instead." Sean Miller of The Electric Playground praised both courses and considered them interesting. Steven L. Kent stated that "the monsters and strange course designs detract from the action." The driving range mode was especially criticized, in part because of its graphics and sound. Egger called the driving range a "slapped-together" feature that would only hold minimal interest, and Stephen Poole of GameSpot also felt that it had limited appeal. Abner considered the driving range shallow and forgettable, while Goble called it "annoyingly awful", and stated that it quickly became "absurdly boring". Kent called the driving range "a fun diversion". Abner considered Extreme Golf to be the best game mode, praising its humor. Miller also praised Extreme Golf, although Nicholls stated that it "doesn't hold much interest". Egger called the Deathmatch mode "utterly forgettable," and Dickinson considered Poison to be the most entertaining game mode. The sound and music received some praise, including the Mojo Bay course music. Abner and Nicholls praised the game's graphics, although Dickinson was disappointed by them. Nicholls believed the golf swings and putting were too difficult. Some critics wondered who the game's target audience was; Dupree wrote that golf fans would dislike the "wacky game mechanics" while action gamers would find the game boring. Poole stated that the game "tries much too hard" to appeal to both golfers and action gamers, resulting in a poor product. Some critics noted a complete lack of online players to compete against on MSN Gaming Zone. Saltzman noted various game glitches, including long load times and crashes. Miller also noted long loading times, and Goble complained of sluggish artificial intelligence as well as crashing. Goble considered the game "disjointed and hurried"; in describing Armadillo Al's Demolition Driving Range, he wrote "when Al himself says his facility is located in west Texas and the manual says it's in Nevada, you know someone didn't have enough time to straighten things out before the game's release." 
Dickinson also believed that the game felt "sort of unfinished." Aaron Curtiss of Los Angeles Times described the game as "tasteless and boorish," and called it "my kind of golf game," stating that it would appeal to men who enjoy explosions. Doug Bedell of The Dallas Morning News considered it more entertaining than real golf or the traditional Links LS games. Notes References External links 1999 video games Golf video games Microsoft games Windows games Windows-only games Video games developed in the United States
2842461
https://en.wikipedia.org/wiki/Heroides
Heroides
The Heroides (The Heroines), or Epistulae Heroidum (Letters of Heroines), is a collection of fifteen epistolary poems composed by Ovid in Latin elegiac couplets and presented as though written by a selection of aggrieved heroines of Greek and Roman mythology in address to their heroic lovers who have in some way mistreated, neglected, or abandoned them. A further set of six poems, widely known as the Double Heroides and numbered 16 to 21 in modern scholarly editions, follows these individual letters and presents three separate exchanges of paired epistles: one each from a heroic lover to his absent beloved and from the heroine in return. The Heroides were long held in low esteem by literary scholars but, like other works by Ovid, were re-evaluated more positively in the late 20th century. Arguably some of Ovid's most influential works (see below), one point that has greatly contributed to their mystique—and to the reverberations they have produced within the writings of later generations—is directly attributable to Ovid himself. In the third book of his Ars Amatoria, Ovid argues that in writing these fictional epistolary poems in the personae of famous heroines, rather than from a first-person perspective, he created an entirely new literary genre. Recommending parts of his poetic output as suitable reading material to his assumed audience of Roman women, Ovid wrote of his Heroides: "vel tibi composita cantetur Epistola voce: | ignotum hoc aliis ille novavit opus" (Ars Amatoria 3.345–6: "Or let an Epistle be sung out by you in practiced voice: unknown to others, he [sc. Ovid] originated this sort of composition"). The full extent of Ovid's originality in this matter has been a point of scholarly contention: E. J. Kenney, for instance, notes that "novavit is ambiguous: either 'invented' or 'renewed', cunningly obscuring without explicitly disclaiming O[vid]'s debt to Propertius' Arethusa (4.3) for the original idea." In spite of various interpretations of Propertius 4.3, consensus nevertheless concedes to Ovid the lion's share of the credit in the thorough exploration of what was then a highly innovative poetic form. Dating and authenticity The exact dating of the Heroides, as with the overall chronology of the Ovidian corpus, remains a matter of debate. As Peter E. Knox notes, "[t]here is no consensus about the relative chronology of this [sc. early] phase of O[vid]'s career," a position which has not advanced significantly since that comment was made. Exact dating is hindered not only by a lack of evidence, but by the fact that much of what is known at all comes from Ovid's own poetry. One passage in the second book of Ovid's Amores (Am.) has been adduced especially often in this context: Knox notes that "[t]his passage ... provides the only external evidence for the date of composition of the Heroides listed here. The only collection of Heroides attested by O[vid] therefore antedates at least the second edition of the Amores (c. 2 BC), and probably the first (c. 16 BC) ..." On this view, the most probable date of composition for at least the majority of the collection of single Heroides ranges between c. 25 and 16 BC, if indeed their eventual publication predated that of the assumed first edition of the Amores in that latter year. Regardless of absolute dating, the evidence nonetheless suggests that the single Heroides represent some of Ovid's earliest poetic efforts. Questions of authenticity, however, have often inhibited the literary appreciation of these poems. 
Joseph Farrell identifies three distinct issues of importance to the collection in this regard: (1) individual interpolations within single poems, (2) the authorship of entire poems by a possible Ovidian impersonator, and (3) the relation of the Double Heroides to the singles, coupled with the authenticity of that secondary collection. Discussion of these issues has been a focus, even if tangentially, of many treatments of the Heroides in recent memory. As an example following these lines, for some time scholars debated over whether this passage from the Amores—corroborating, as it does, only the existence of Her. 1–2, 4–7, 10–11, and very possibly of 12, 13, and 15—could be cited fairly as evidence for the inauthenticity of at least the letters of Briseis (3), Hermione (8), Deianira (9), and Hypermnestra (14), if not also those of Medea (12), Laodamia (13), and Sappho (15). Stephen Hinds argues, however, that this list constitutes only a poetic catalogue, in which there was no need for Ovid to have enumerated every individual epistle. This assertion has been widely persuasive, and the tendency amongst scholarly readings of the later 1990s and following has been towards careful and insightful literary explication of individual letters, either proceeding under the assumption of, or with an eye towards proving, Ovidian authorship. Other studies, eschewing direct engagement with this issue in favour of highlighting the more ingenious elements—and thereby demonstrating the high value—of individual poems in the collection, have essentially subsumed the authenticity debate, implicating it through a tacit equation of high literary quality with Ovidian authorship. This trend is visible especially in the most recent monographs on the Heroides. On the other hand, some scholars have taken a completely different route, ascribing the whole collection to one or two Ovidian imitators (the catalogue in Am. 2.18, as well as Ars am. 3.345–6 and Epistulae ex Ponto 4.16.13–14, would then be interpolations introduced to establish the imitations as authentic Ovid). The collection The paired letters of the Double Heroides are not outlined here: see the relevant section of that article for the double epistles (16–21). The single Heroides are written from the viewpoints of the following heroines (and heroes). The quotations given are the opening couplets of each poem, by which each would have been identified in medieval manuscripts of the collection: I. Penelope writes to her famed husband, Odysseus, a hero of the Trojan War, towards the end of his long absence (the subject of Homer's Odyssey). Epistula I: "This your Penelope sends to you, too-slow Ulysses; / A letter in return does me no good; come yourself!" II. Phyllis, the daughter of Lycurgus, writes to her lover Demophoon, the son of Theseus, king of Athens, after he fails in his promised return from his homeland. Epistula II: "I, your hostess, Demophoon—I, your Phyllis of Rhodope— / Complain: you're gone far longer than you promised!" III. Briseis, the daughter of Briseus, writes to Achilles, the central hero of the Trojan War and focal character of Homer's Iliad, urging him to accept her as part of a package-deal from Agamemnon, leader of the Greek forces at Troy, and to return to battle against the Trojans. 
{| |valign=top width="80px"|Epistula III: |width="380px"| |What you're reading—this letter came from your ravished Briseis,  The Greek painstakingly copied out by her uncivilised hand. |} IV. Phaedra, wife of Theseus, writes to her stepson, Hippolytus, confessing her semi-incestuous and illicit love for him. {| |valign=top width="80px"|Epistula IV: |width="380px"| |What well-being she herself will lack unless you give it her  The Cretan maiden sends to the man born of an Amazon. |} V. The nymph Oenone, by Hellenistic tradition Paris' first wife, writes to Paris, son of Priam King of Troy, after he abandoned her to go on his famed journey to Sparta, and then returned with the abducted Helen of Sparta as a wife. {| |valign=top width="80px"|Epistula V: |width="380px"| |The Nymph sends words you ordered her to write,  From Mount Ida, to her Paris, though you refuse her as yours. |} VI. Hypsipyle, queen of Lemnos, to Jason, after he abandoned her for Medea {| |valign=top width="80px"|Epistula VI: |width="380px"| |Hypsipyle of Lemnos, born of the people of Bacchus,  Speaks to Jason: how much of your heart was truly in your words? |} VII. Dido to Aeneas, on his departure to Italy {| |valign=top width="80px"|Epistula VII: |width="380px"| |Dardanian, receive this song of dying Elissa:  What you read are the last words written by me. |} VIII. Hermione, daughter of Menelaus, to Orestes, son of Agamemnon and Clytemnestra, urging him to save her from marriage to Achilles' son, Pyrrhus {| |valign=top width="80px"|Epistula VIII: |width="380px"| |Hermione speaks to one lately her cousin and husband,  Now her cousin. The wife has changed her name. |} IX. Deianira, daughter of Oeneus, king of Aetolia, to her husband Hercules, after he laid down his weapons to be with Iole, the daughter of Eurytus, king of Oechalia {| |valign=top width="80px"|Epistula IX: |width="380px"| |A letter, that shares her feelings, sent to Alcides  By your wife, if Deianira is your wife. |} X. Ariadne to Theseus after he abandoned her on the island of Naxos on his way back to Athens. He does not marry Phaedra until later (see Epistle IV). {| |valign=top width="80px"|Epistula X: |width="380px"| |Even now, left to the wild beasts, she might live, cruel Theseus.  Do you expect her to have endured this too, patiently? |} XI. Canace, daughter of Aeolus, to her brother and lover, Macareus, before killing herself following the death of their baby at the hands of their father {| |valign=top width="80px"|Epistula XI: |width="380px"| |An Aeolid, who has no health herself, sends it to an Aeolid,  And, armed, these words are written by her hand. |} XII. Medea to Jason, after he abandoned her to marry Creusa (also known as Glauce) {| |valign=top width="80px"|Epistula XII: |width="380px"| |Scorned Medea, the helpless exile, speaks to her recent husband,  surely you can spare some time from your kingship? |} XIII. Laodamia, the daughter of Acastus, to her husband Protesilaus, urging him not to take too many risks in the Greeks' attack on Troy {| |valign=top width="80px"|Epistula XIII: |width="380px"| |She, who sends this, wishes loving greetings to go to whom it's sent:  From Thessaly to Thessaly's lord, Laodamia to her husband. |} XIV. Hypermnestra to her husband, Lynceus, calling for him to save her from death at the hands of her father, Danaus {| |valign=top width="80px"|Epistula XIV: |width="380px"| |Hypermestra sends this letter to her one cousin of many,  The rest lie dead because of their brides' crime. |} XV. 
Sappho to her ex-lover Phaon, after he left her {| |valign=top width="80px"|Epistula XV: |width="380px"| |When these letters, from my eager hand, are examined  Are any of them known to your eyes, straight away, as mine? |} Translations and influence The Heroides were popularized by the Loire valley poet Baudri of Bourgueil in the late eleventh century, and Héloïse used them as models in her famous letters to Peter Abelard. A translation, Les Vingt et Une Epistres d'Ovide, was made of this work at the end of the 15th century by the French poet Octavien de Saint-Gelais, who later became Bishop of Angoulême. While Saint-Gelais' translation does not do full justice to the original, it introduced many non-Latin readers to Ovid's fictional letters and inspired many of them to compose their own Heroidean-style epistles. Perhaps the most successful of these were the Quatre Epistres d'Ovide (c. 1500) by , a friend and colleague of Saint-Gelais. Later translations and creative responses to the Heroides include Jean Lemaire de Belges's Premiere Epître de l'Amant vert (1505), Fausto Andrelini's verse epistles (1509–1511; written in the name of Anne de Bretagne), Contrepistres d'Ovide (1546), and Juan Rodríguez de la Cámara's Bursario, a partial translation of the Heroides. Classics scholar W. M. Spackman argues the Heroides influenced the development of the European novel: of Helen's reply to Paris, Spackman writes, "its mere 268 lines contain in embryo everything that has, since, developed into the novel of dissected motivations that is one of our glories, from La Princesse de Clèves, Manon Lescaut and Les Liaisons Dangereuses to Stendhal and Proust". The Loeb Classical Library presents the Heroides with Amores in Ovid I. Penguin Books first published Harold Isbell's translation in 1990. Isbell's translation uses unrhymed couplets that generally alternate between eleven and nine syllables. A translation in rhymed couplets by Daryl Hine appeared in 1991. It was the inspiration for 15 monologues starring 15 separate actors, by 15 playwrights at the Jermyn Street Theatre in 2020. Notes All notes refer to works listed in the Bibliography, below. Selected bibliography For references specifically relating to that subject, please see the relevant bibliography of the Double Heroides. Editions Dörrie, H. (ed.) (1971) P. Ovidi Nasonis Epistulae Heroidum (Berlin and New York) Showerman, G. (ed. with an English translation) and Goold, G. P. (2nd edition revised) (1986) Ovid, Heroides and Amores (Cambridge, MA and London) Commentaries Kenney, E. J. (ed.) (1996) Ovid Heroides XVI–XXI (Cambridge). Knox, P. E. (ed.) (1995) Ovid: Heroides. Select Epistles (Cambridge). Roebuck, L. T. (ed.) (1998) Heroides I w/ Notes & Comm. (Classical Association of New England) Literary overviews and textual criticism Anderson, W. S. (1973) "The Heroides", in J. W. Binns (ed.) Ovid (London and Boston): 49–83. Arena, A. (1995) "Ovidio e l'ideologia augustea: I motivi delle Heroides ed il loro significato", Latomus 54.4: 822–41. Beck, M. (1996) Die Epistulae Heroidum XVIII und XIX des Corpus Ovidianum (Paderborn). Courtney, E. (1965) "Ovidian and Non-Ovidian Heroides", Bulletin of the Institute of Classical Studies of the University of London (BICS) 12: 63–6. ___. (1998) "Echtheitskritik: Ovidian and Non-Ovidian Heroides Again", CJ 93: 157–66. Fulkerson, L. (2005) The Ovidian Heroine as Author: Reading, Writing, and Community in the Heroides (Cambridge). Heinze, T. 
(1991–93) "The Authenticity of Ovid Heroides 12 Reconsidered", Bulletin of the Institute of Classical Studies of the University of London (BICS) 38: 94–8. Jacobson, H. (1974) Ovid's Heroides (Princeton). Kennedy, D. F. (2002) "Epistolarity: The Heroides", in P. R. Hardie (ed.) The Cambridge Companion to Ovid (Cambridge): 217–32. Knox, P. E. (1986) "Ovid's Medea and the Authenticity of Heroides 12", Harvard Studies in Classical Philology (HSCP) 90: 207–23. ___. (2002) "The Heroides: Elegiac Voices", in B. W. Boyd (ed.) Brill's Companion to Ovid (Leiden): 117–39. Lachmann, K. (1876) Kleinere Schriften zur classischen Philologie, Bd. 2 (Berlin). Lindheim, S. (2003) Mail and Female: Epistolary Narrative and Desire in Ovid's Heroides (Madison, WI). Lingenberg, W. (2003) Das erste Buch der Heroidenbriefe. Echtheitskritische Untersuchungen (Paderborn). Palmer, A. (ed.) [completed by L.C. Purser (ed.)] (1898) P. Ovidi Nasonis Heroides, with the Greek translation of Planudes (Oxford). Rahn, H. (1963) "Ovids elegische Epistel", Antike und Abendland (A&A) 7: 105–120. Reeve, M. D. (1973) "Notes on Ovid's Heroides", Classical Quarterly (CQ) 23: 324–338. Rosenmeyer, P. A. (1997) "Ovid's Heroides and Tristia: Voices from Exile", Ramus 26.1: 29–56. [Reprinted in Knox (ed.) (2006): 217–37.] Smith, R. A. (1994) "Fantasy, Myth, and Love Letters: Text and Tale in Ovid's Heroides", Arethusa 27: 247–73. Spentzou, E. (2003) Readers and Writers in Ovid's Heroides: Transgressions of Genre and Gender (Oxford). Steinmetz, P. (1987) "Die literarische Form der Epistulae Heroidum Ovids", Gymnasium 94: 128–45. Stroh, W. (1991) "Heroides Ovidianae cur epistolas scribant", in G. Papponetti (ed.) Ovidio poeta della memoria (Rome): 201–44. Tarrant, R. J. (1981) "The Authenticity of the Letter of Sappho to Phaon", Harvard Studies in Classical Philology (HSCP) 85: 133–53. Verducci, F. (1985) Ovid's Toyshop of the Heart (Princeton). Analyses of individual epistles Barchiesi, A. (1995) Review of Hintermeier (1993), Journal of Roman Studies (JRS) 85: 325–7. ___. (2001) Speaking Volumes: Narrative and Intertext in Ovid and Other Latin Poets, eds. and trans. M. Fox and S. Marchesi (London): "Continuities", 9–28. [Translated and reprinted from Materiali e discussioni per l'analisi dei testi classici (MD) 16 (1986).] "Narrativity and Convention in the Heroides", 29–48. [Translated and reprinted from Materiali e discussioni per l'analisi dei testi classici (MD) 19 (1987).] "Future Reflexive: Two Modes of Allusion and the Heroides", 105–28. [Reprinted from Harvard Studies in Classical Philology (HSCP) 95 (1993).] Casali, S. (1992) "Enone, Apollo pastore, e l'amore immedicabile: giochi ovidiani su di un topos elegiaco", Materiali e discussioni per l'analisi dei testi classici (MD) 28: 85–100. Fulkerson, L. (2002a) "Writing Yourself to Death: Strategies of (Mis)reading in Heroides 2", Materiali e discussioni per l'analisi dei testi classici (MD) 48: 145–65. ___. (2002b) "(Un)Sympathetic Magic: A Study of Heroides 13", American Journal of Philology (AJPh) 123: 61–87. ___. (2003) "Chain(ed) Mail: Hypermestra and the Dual Readership of Heroides 14", Transactions of the American Philological Association (TAPA) 133: 123–146. Hinds, S. (1993) "Medea in Ovid: Scenes from the Life of an Intertextual Heroine", Materiali e discussioni per l'analisi dei testi classici (MD) 30: 9–47. ___. (1999) "First Among Women: Ovid, and the Traditions of ‘Exemplary' Catalogue", in amor : roma, S. M. Braund and R. 
Mayer (eds.), Proceedings of the Cambridge Philological Society (PCPS) Supp. 22: 123–42. Hintermeier, C. M. (1993) Die Briefpaare in Ovids Heroides, Palingensia 41 (Stuttgart). Jolivet, J.-C. (2001) Allusion et fiction epistolaire dans Les Heroïdes: Recherches sur l'intertextualité ovidienne, Collection de l' École Française de Rome 289 (Rome). Kennedy, D. F. (1984) "The Epistolary Mode and the First of Ovid's Heroides", Classical Quarterly (CQ) n.s. 34: 413–22. [Reprinted in Knox (ed.) (2006): 69–85.] Lindheim, S. (2000) "Omnia Vincit Amor: Or, Why Oenone Should Have Known It Would Never Work Out (Eclogue 10 and Heroides 5)", Materiali e discussioni per l'analisi dei testi classici (MD) 44: 83–101. Rosati, G. (1991) "Protesilao, Paride, e l'amante elegiaco: un modello omerico in Ovidio", Maia 43.2: 103–14. ___. (1992) "L'elegia al femminile: le Heroides di Ovidio (e altre heroides)", Materiali e discussioni per l'analisi dei testi classici (MD) 29: 71–94. Vessey, D. W. T. (1976) "Humor and Humanity in Ovid's Heroides", Arethusa 9: 91–110. Viarre, S. (1987) "Des poèmes d'Homère aux Heroïdes d'Ovide: Le récit épique et son interpretation élégiaque", Bulletin de l'association Guillaume Budé Ser. 4: 3. Scholarship of tangential significance Armstrong, R. (2005) Ovid and His Love Poetry (London) [esp. chs. 2 and 4] Hardie, P. R. (2002) Ovid's Poetics of Illusion (Cambridge). Holzberg, N. (1997) "Playing with his Life: Ovid's 'Autobiographical' References", Lampas 30: 4–19. [Reprinted in Knox (ed.) (2006): 51–68.] ___. (2002) Ovid: The Poet and His Work, trans. G. M. Goshgarian (Ithaca, NY and London). James, S. L. (2003) Learned Girls and Male Persuasion: Gender and Reading in Roman Love Elegy (Berkeley). [esp. ch. 5] Kauffman, L. S. (1986) Discourses of Desire: Gender, Genre, and Epistolary Fictions (Ithaca, NY). Knox, P. E. (ed.) (2006) Oxford Readings in Ovid (Oxford and New York). Zwierlein, O. (1999) Die Ovid- und Vergil-Revision in tiberischer Zeit (Berlin and New York). External links Latin text at The Latin Library English translation, A. S. Kline Perseus/Tufts: Commentary on the Heroides of Ovid Poetry by Ovid 1st-century BC Latin books Cultural depictions of Sappho Roman mythology Trojan War literature
22717006
https://en.wikipedia.org/wiki/GestureTek
GestureTek
GestureTek is an American interactive technology company headquartered in Silicon Valley, California, with offices in Toronto and Ottawa, Ontario, and in Asia. Founding Founded in 1986 by Canadians Vincent John Vincent and Francis MacDougall, this privately held company develops and licenses gesture recognition software based on computer vision techniques. The partners invented video gesture control in 1986 and received their base patent in 1996 for the GestPoint video gesture control system. GestPoint technology is a camera-enabled video tracking software system that translates hand and body movement into computer control. The system enables users to navigate and control interactive multi-media and menu-based content, engage in virtual reality game play, experience immersion in an augmented reality environment or interact with a consumer device (such as a television, mobile phone or set-top box) without using touch-based peripherals. Similar companies include gesture recognition specialist LM3LABS, based in Tokyo, Japan. Technology GestureTek's gesture interface applications include multi-touch and 3D camera tracking. GestureTek's multi-touch technology powers the multi-touch table in Melbourne's Eureka Tower. A GestureTek multi-touch table with object recognition is found at the New York City Visitors Center. Telefónica has a multi-touch window with technology from GestureTek. GestureTek's 3D tracking technology is used in a 3D television prototype from Hitachi and in various digital signage and display solutions based on 3D interaction. Patents GestureTek currently holds eight awarded patents, including: 5,534,917 (Video Gesture Control Motion Detection); 7,058,204 (Multiple Camera Control System, Point to Control Base Patent); 7,421,093 (Multiple Camera Tracking System for Interfacing With an Application); 7,227,526 (Stereo Camera Control, 3D-Vision Image Control System); 7,379,563 (Two Handed Movement Tracker Tracking Bi-Manual Movements); 7,379,566 (Optical Flow-Based Tilt Sensor For Phone Tilt Control); 7,389,591 (Phone Tilt for Typing & Menus/Orientation-Sensitive Signal Output); 7,430,312 (Five Camera 3D Face Capture). GestureTek's software and patents have been licensed by Microsoft for the Xbox 360, Sony for the EyeToy, NTT DoCoMo for their mobile phones and Hasbro for the ION Educational Gaming System. In addition to software provision, GestureTek also fabricates interactive gesture control display systems with natural user interfaces for interactive advertising, games and presentations. In addition, GestureTek's natural user interface virtual reality system has been the subject of research by universities and hospitals for its application in both physical therapy and physical rehabilitation. In 2008, GestureTek received the Mobile Innovation Global Award from the GSMA for its software-based, gesture-controlled user interface for mobile games and applications. The technology is used by Java platform integration providers and mobile developers. Katamari Damacy is one example of a gesture-controlled mobile game powered by GestureTek software. Competitors Other companies in the industry of interactive projections for marketing and retail experiences include Po-motion Inc., Touchmagix and LM3LABS. References Artificial intelligence applications Technology companies of the United States Multimedia frameworks Gesture recognition Technology companies established in 1986
851378
https://en.wikipedia.org/wiki/11th%20Armored%20Cavalry%20Regiment
11th Armored Cavalry Regiment
The 11th Armored Cavalry Regiment ("Blackhorse Regiment") is a unit of the United States Army garrisoned at Fort Irwin, California. Although termed an armored cavalry regiment, it is being re-organized as a multi-component heavy brigade combat team. The regiment has served in the Philippine–American War, World War II, the Vietnam War, Cold War, Operation Desert Storm (scout platoons), and Operation Iraqi Freedom (Iraq War). The 11th ACR serves as the Opposing Force (OPFOR) for the Army and Marine task forces, and foreign military forces that train at the National Training Center. The OPFOR trained U.S. Army forces in mechanized desert warfare following a Soviet-era style threat until June 2002, when the OPFOR and the 11th ACR changed to portraying an urban/asymmetrical warfare style of combat U.S. soldiers are facing in operations abroad. From June to December 2003, members of the 11th ACR deployed to Afghanistan, where they helped to develop and train the armor and mechanized infantry battalions of the Afghan National Army. These specialized units would defend the Afghan capital during the country's constitutional convention. In January 2004, the 11th ACR deployed to Iraq. The 11th ACR was not reorganized under the U.S. Army Combat Arms Regimental System, but has been reorganized under the U.S. Army Regimental System. History 11th Cavalry Regiment The regiment was constituted on 2 February 1901 in the Regular Army as the 11th Cavalry Regiment, and was organized on 11 March 1901 at Fort Myer, Virginia. The regiment participated in the 1916 Pancho Villa Expedition under the command of William Jones Nicholson. For an operational history of the regiment, see the separate squadron histories below. At the start of World War II, the 11th Cavalry was stationed at the Presidio of Monterey in California. They moved to Fort Ord in stages from 16 to 27 January 1940 and again to Camp Clayton on 15 April to 15 May 1940 for temporary training. They participated in maneuvers at Fort Lewis in Washington from 4 to 29 August 1940, and returned to the Presidio of Monterey on 31 August 1940, where they were detached from the 2nd Cavalry Division, and resumed its status as a separate regiment. They next moved to Camp Seeley in California on 7 November 1941, and again to Live Oaks, California on 24 July 1941; they then returned to Camp Seeley on 17 September 1941, and to Camp Lockett on 10 December 1941. They were next assigned to the United States Army Armored Force on 12 June 1942, and relocated to Fort Benning in Georgia on 10 July 1942, where they prepared to be inactivated and reorganized. The 11th Cavalry Regiment was deactivated on 15 July 1942 at Fort Benning, Georgia; personnel and equipment concurrently transferred to the 11th Armored Regiment, with concurrent development of the 11th Cavalry Group, and the 11th Tank Group. The remainder of 11th Cavalry was disbanded on 26 October 1944. 11th Armored Regiment 11th Armored Regiment was constituted on 11 July 1942 in the national army, assigned to the 10th Armored Division, and organized at Fort Benning on 15 July 1942 from the personnel and equipment of the 11th Cavalry Regiment. The motto on the unit insignia is "Allons", which means "Let's Go" in French. The regiment moved to Murfreesboro, Tennessee on 22 June 1943, and then Fort Gordon on 5 September 1943. 
11th Armored Regiment was broken up on 20 September 1943, and its elements were distributed as follows: HHC-11th Armored Regiment, and 1st and 2nd Battalions were reorganized as the 11th Tank Battalion in the 10th AD. 3rd Battalion, 11th Armored Regiment was reorganized and redesignated as the 712th Tank Battalion, and relieved from assignment to the 10th AD. 712th Tank Battalion was inactivated at Camp Kilmer, New Jersey on 27 October 1945, and redesignated the 525th Medium Tank Battalion on 1 September 1948. It was activated on 10 September 1948 at Fort Lewis, Washington. 525th Medium Tank Battalion was redesignated as 95th Tank Battalion on 4 February 1950, assigned to 7th Armored Division, and activated at Camp Roberts, California on 24 November 1950, and inactivated there on 15 November 1953. Reconnaissance Company was reorganized and redesignated as Troop E, 90th Cavalry Reconnaissance Squadron, which maintained a separate history thereafter. Maintenance and Service Companies were disbanded. 11th Tank Battalion As part of the 10th Armored Division, 11th Tank Battalion shipped out from the New York Port of Embarkation on 13 September 1944, and landed in France on 23 September 1944. The battalion participated in the Rhineland, Ardennes-Alsace, and Central Europe Campaigns, and was located at Schongau, Bavaria, Germany on 14 August 1945. The battalion returned to the Hampton Roads Port of Embarkation on 13 October 1945, was inactivated at Camp Patrick Henry, Virginia on the same day, and was relieved from assignment to the 10th AD. 11th Cavalry Group (Mechanized) HHT, 11th Cavalry Regiment was redesignated on 19 April 1943 as HHT, 11th Cavalry Group, and was activated at Camp Anza, California on 5 May 1943. At that time, the 36th Cavalry Reconnaissance Squadron and 44th Cavalry Reconnaissance Squadron were attached. The group was then moved to Fort Bragg on 31 January 1944, and again to Atlantic Beach, Florida on 15 March 1944 for amphibious training. They then moved to Camp Gordon on 1 June 1944 and then departed the New York Port of Embarkation on 29 September 1944, and arrived in England on 10 October 1944, and landed in France on 26 November 1944. They moved to the Netherlands on 8 December 1944, went into the line in Germany on 12 December 1944, and protected the Roer River sector; they recrossed into the Netherlands on 3 February 1945, and re-entered Germany on 27 February 1945 on the left flank of the U.S. 84th Infantry Division. The group then held a defensive line along the Rhine River near Düsseldorf on 12 March 1945 under the XIII Corps, and crossed the Rhine at Wesel on 1 April 1945, screened XIII Corps' northern flank, and saw action during the Battle of Munster and the seizure of the Ricklingen Bridge over the Leine River. During the campaign in northwestern Europe, Troop B of the 44th Cavalry Reconnaissance Squadron served as a mechanized escort and security force for the headquarters of General Dwight D. Eisenhower, supreme commander of the Allied Expeditionary Forces. In August 1945, 11th Cavalry Group headquarters was located at Gross Ilsede, Germany. HHT, 11th Cavalry Group was converted and reorganized as HHT, 11th Constabulary Regiment on 1 May 1946. During this period, the regimental headquarters was located in Regensburg. As a constabulary unit, the 11th Constabulary Regiment patrolled occupied Germany and performed law enforcement and keeping of the public order missions. 
HHT 11th Constabulary Regiment was further reorganized and redesignated as HHC, 11th Armored Cavalry Regiment on 30 November 1948. 11th Tank Group HHT, 11th Tank Group was constituted on 19 July 1943 in the National Army. It was activated at Camp Campbell, Kentucky on 28 July 1943 as a separate group. It was reorganized and redesignated as HHC, 11th Armored Group on 5 December 1943. During the war, armored groups such as the 11th were used as administrative headquarters for the numerous independent tank battalions fielded in the European Theater of Operations. HHC, 11th Armored Group was converted and redesignated HHT, 1st Constabulary Regiment on 1 May 1946. HHT, 1st Constabulary Regiment was inactivated on 20 September 1947 in Germany. 11th Armored Cavalry Regiment Reassembly and organization of the 11th ACR was completed on 30 November 1948 through the reconstitution and reorganization of elements of the 11th Cavalry Regiment and HHT, 1st Constabulary Regiment. HHT, 1st Constabulary Regiment was converted, redesignated and consolidated into the 11th ACR as HHT, 3rd Battalion, 11th ACR on 30 November 1948. The 11th Tank Battalion was consolidated into the 11th ACR on 8 January 1951. The 95th Tank Battalion was consolidated into 3rd Battalion, 11th ACR on 1 October 1958. Air Troop was inactivated on 20 March 1972 in Vietnam and 2d Squadron on 6 April 1972 in Vietnam; both were activated again on 17 May 1972 in Germany. Around 1984, Air Troop was enlarged and became the 4th Squadron (Thunderhorse), also known as the Combat Aviation Squadron. The regiment was placed under the United States Army Regimental System on 17 June 1986; inactivated in Germany between 15 October 1993 and 15 March 1994; and activated on 16 October 1994 (less the 3d and 4th Squadrons, the Air Defense Artillery Battery, and the Howitzer Batteries of the 1st and 2d Squadrons) at Fort Irwin, California. West Germany (1957–64) As part of the Gyroscope unit rotations, the 11th ACR was sent to West Germany in March 1957 for another tour of border surveillance duty along the Iron Curtain, replacing the 6th Armored Cavalry Regiment. Regimental headquarters and 1st Squadron were located in Straubing, while the 2nd Squadron was stationed in Landshut and the 3rd Squadron in Regensburg. The regiment's border surveillance mission was along the German-Czech frontier. In 1964, the 11th ACR returned to the United States and would be bound for Vietnam within two years. South Vietnam (1966–72) The regiment's new home was Fort Meade, Maryland, where the "Blackhorse" engaged in operational training and support activities such as participation in the Presidential Inauguration and support for ROTC summer training. With the war in South Vietnam escalating, the Blackhorse Regiment was alerted for assignment to Southeast Asia on 11 March 1966. The regiment began specialized training for combat in a counterinsurgency environment. Modifications were made to the organization and equipment (MTOE) with emphasis on the use of modified M113 armored personnel carriers (APCs). Two M-60 machine guns with protective gun shields were mounted at the port and starboard rear of the vehicle, and a combination of circular and flat frontal gun shields was added around the .50 caliber machine gun located at the commander's hatch. This combination produced the M-113 Armored Cavalry Assault Vehicle, in Vietnam more simply referred to by GIs as an ACAV, a name coined by 11th Armored Cavalrymen.
The regiment's modifications emphasized the use of ACAVs instead of the Patton medium tank and completely replaced the M-114s found in the reconnaissance platoons of European and CONUS areas of operation. The M114 had been deployed to Vietnam in 1962, but withdrawn in 1964 due to its unsatisfactory, and often disastrous, performance. Throughout the war, the tank companies, with their M48 Patton tanks, remained the same in each squadron. In 1968, Colonel George S. Patton IV (son of World War II General Patton), commander of the 11th ACR in South Vietnam, recommended to General Creighton Abrams that one squadron from a division and one from theater command be issued the army's new aluminum tanks (Sheridans) for combat testing. General Abrams concurred, and in January 1969, M551 Sheridans were issued to the 3rd Squadron, 4th Armored Cavalry and the 1st Squadron, 11th Armored Cavalry. Due to differences between the organization of regimental cavalry squadrons and divisional cavalry squadrons, in 1st Squadron, 11th ACR the Sheridans were issued to the ACAV troops, replacing three M113 ACAVs in each platoon (the squadron's one tank company remained intact); in 3rd Squadron, 4th Cavalry, the Sheridans replaced M48A3 tanks throughout. Although the 3/4 Cavalry met near disaster with their Sheridans within a month of receiving them (one destroyed by a mine), the 1/11 Cavalry had just the opposite luck, killing nearly 80 enemy soldiers during an engagement on 23 February 1969. All things considered, the army was satisfied with the Sheridan tank, and by the end of 1970 well over 200 M551s were in South Vietnam. While nearly all US armored cavalry squadrons were equipped with the M551 by 1970, the 11th ACR tank companies, as well as the three US Army armor battalions (1/69th, 2/34th, and 1/77th Armor) in country, all retained their 90mm-gun M48A3 Patton tanks. Only the M48s, the Australian Centurions, and the ARVN M41 Walker Bulldog light tanks could effectively and safely conduct "thunder runs": the firing of all tank weapons while driving down a highway or road. ACAVs did not have a cannon, and the heavy recoil of the Sheridan's 152mm main gun made it impractical to fire repeated broadsides while moving down a road. Thus the task of clearing highways with daily "thunder runs" most often fell to the M48s of the 11th ACR and accompanying armor units.
Stanton's Vietnam Order of Battle lists the following locations for the 11th Armored Cavalry Regiment's headquarters in Vietnam: Bien Hoa, September 1966 – November 1966; Long Binh, December 1966 – February 1967; Blackhorse Base Camp/Xuan Loc, March 1967 – January 1969; Lai Khê, February 1969; Long Giao, March 1969 – September 1969; Bien Hoa, October 1969 – June 1970; and Di An, July 1970 – March 1971. Operation Cedar Falls From January until 18 May 1967, the regiment conducted three major search and destroy operations. These operations would later be known as reconnaissance in force (RIF) operations. The first of these operations commenced on 8 January 1967 and was known as "Operation Cedar Falls". It continued until 24 January 1967. The 1st and 2nd Squadrons operated in the infamous "Iron Triangle" region near Ben Cat, employing search and destroy tactics, screening and blocking, and security in attacks on successive objectives. Operation Junction City Operation Junction City I and II involved the 1st and 3rd Squadrons. It began on 18 February 1967 and ran through 15 April 1967. This operation took these squadrons to the headquarters of the Central Office for South Vietnam (COSVN), believed to be located in Bình Dương Province, with the objective of destroying this important headquarters. This joint mission conducted with the 1st Australian Task Force secured lines of communication and fire support bases (FSB). Extensive RIF operations were conducted as well. Operation Manhattan Commencing on 23 April 1967, the third operation, titled Operation Manhattan, was a thrust into the Long Nguyen Secret Zone by the 1st and 2nd Squadrons. This zone was a long-suspected regional headquarters of the Viet Cong. In a series of reconnaissance in force operations 60 tunnel complexes were uncovered, 1,884 fortifications were destroyed, and 621 tons of rice were evacuated. Operation Manhattan ended on 11 May 1967. Operation Kittyhawk Beginning in April 1967 and running through 21 March 1968, the regiment was tasked to secure and pacify Long Khánh District. This year-long mission was called Operation Kittyhawk. It achieved three objectives: Viet Cong (VC) were kept from interfering with travel on the main roads; Vietnamese were provided medical treatment in civic action programs like MEDCAP and DENTCAP; and finally, RIF operations were employed to keep the VC off balance, making it impossible for them to mount offensive operations. 1967 From the summer of 1967 until the winter the regiment was led by Col. Roy W. Farley. Operation Emporia I & II was a road-clearing operation with limited RIF missions by the 1st and 3rd Squadrons in Long Khánh District. Operation Valdosta I & II was a regimental-size operation. Its purpose was to provide security at polling places during elections and to maintain reaction forces to counter VC agitation. As a result of the operation 84.7% of eligible voters cast ballots in Long Khánh District in the first general election and 78% in the second. Operation Quicksilver Operation Quicksilver involved the 1st and 2nd Squadrons of the 11th Armored Cavalry. Its purpose was to secure routes that moved logistical personnel of the 101st Airborne Division between Binh Long and Tây Ninh Provinces. Cordon, search and RIF missions were also performed. Operation Fargo Operation Fargo ran from 21 December 1967 until 21 January 1968. This regimental-size operation conducted RIFs in Binh Long and Tây Ninh Provinces and opened Route 13 to military traffic for the first time.
The Tet Offensive The early part of 1968 was marked by the most ambitious and audacious offensive coordinated by the VC and NVA in the history of the war. The Tet Offensive was designed to coincide with the Vietnamese New Year. Operation Adairsville Operation Adairsville began on 31 January 1968. Word was received from II Field Force headquarters to immediately redeploy to the Long Binh/Bien Hoa area to relieve installations threatened by the Tet Offensive. At 1400 hours (2:00 pm) the 1st Squadron was called to move from its position south of the Michelin Rubber Plantation to the II Field Force headquarters. The 2nd Squadron moved from north of the plantation to the III Corps POW Compound, where enemy soldiers were sure to attempt to liberate the camp. The 3rd Squadron moved from An Lộc to III Corps Army, Republic of Vietnam (ARVN) headquarters. It took only 14 hours and 80 miles to arrive in position after first being alerted. Operation Alcorn Cove The security operation in the Long Binh/Bien Hoa area and the area around Blackhorse Base Camp by the 1st and 2nd Squadrons continued under Operation Alcorn Cove, which began on 22 March 1968. This joint mission with the ARVN 18th Division and 25th Division was a twofold operation of security and RIFs. Operation Toan Thang was an extension of "Alcorn Cove". That joint operation involved the 1st and 25th Infantry Divisions. From April 1968 to January 1969, the 11th Cavalry was commanded by Colonel (later Major General) George S. Patton IV, the son of General George S. Patton Jr. "Workhorse" The 3rd Squadron K Troop was part of the 3rd Squadron and was known as "Killing K Troop". The 3rd Squadron's nickname was "Workhorse". Shortly after its arrival in Vietnam, the 3rd Squadron engaged the Viet Cong for the first time. The squadron was awarded a Meritorious Unit Citation for this period. The Tet Offensive of 1968 gave the squadron a chance to fight the enemy's troop formations in open combat. In Bien Hoa the 3rd Squadron drove the enemy forces from the area near III Corps headquarters. Its action was crucial in smashing the enemy's offensive. On 20 October 2009 President Barack Obama presented a Presidential Unit Citation to troop commander Captain John B. Poindexter and all veterans of A Troop, 1st Squadron, 11th Cavalry for their heroism along the Cambodian border on 26 March 1970. Brigadier General John Bahnsen, a recipient of the Distinguished Service Cross, served with the 11th ACR in Vietnam, commanding first the regiment's Air Cavalry Troop, and later its 1st Squadron. Fulda Gap The 11th Cavalry Group Mechanized was redesignated as the 11th Constabulary Regiment on 3 May 1946 in order that the regiment could fulfill its occupation duties, and was restored as the 11th Armored Cavalry Regiment and inactivated in November 1948. Blackhorse was brought back into active status on 1 April 1951 at Camp Carson, Colorado. In 1954, the regiment transferred to Fort Knox, Kentucky to complete its training in armored tactics. The Blackhorse Regiment rotated to southern Germany in May 1957, relieving the 6th ACR, and assumed the mission of patrolling the German-Czechoslovak border until its return to the United States in 1964. The Blackhorse arrived in Vietnam on 7 September 1966. Second Squadron spearheaded Operation Fish Hook into Cambodia on 1 May 1970, surrounding a North Vietnamese logistics center. During the drawdown of U.S.
forces in Vietnam in early 1972, the 11th ACR was inactivated in stages (Air Troop inactivated 20 March 1972 in Vietnam; 2d Squadron inactivated 6 April 1972 in Vietnam) and subsequently reactivated in Germany (Air Troop and 2d Squadron activated 17 May 1972 in Germany) by reflagging the 14th Armored Cavalry Regiment. The unit, based at Downs Barracks, had the mission of patrolling the East-West German border. During the late 1980s the 11th's 4th Squadron (Air) operated the first air assault school in Europe, known as the Blackhorse Air Assault School, based in Fulda. After the Soviet Union dissolved in December 1991 the regiment ended its seventeen-year station along the Iron Curtain. The Blackhorse Regiment deployed an aviation task force on 10 April 1991 to Turkey for Operation Provide Comfort, an operation to support the Kurdish relief effort. One month later, the three maneuver squadrons (1st, 2d and 3d) along with the regiment's support squadron, deployed to Kuwait for Operation Positive Force, an operation to secure Kuwait so it could rebuild from the war. By October, the regiment had completed its missions in Turkey and Kuwait and returned to Fulda. As the need for US forces in Europe decreased, the Blackhorse Regiment was inactivated in a ceremony on 15 October 1993, and the remaining troops departed Germany in March 1994. Training the force Reactivated again on 26 October 1994, the 11th Armored Cavalry Regiment now serves as the Army's Opposing Force at the National Training Center. The regiment portrays a determined opposing force that trains US forces in the basic principles of army operations and challenges all the battlefield operating systems. As the 2nd Brigade Tactical Group, the squadron trains brigade and battalion task forces during ten rotations a year at the National Training Center. Current organization 1st Squadron First Squadron, 11th Armored Cavalry, "Ironhorse", was activated as a horse squadron at Fort Myer, Virginia in 1901. It has served in the Philippines, Mexico, Europe, and Vietnam. It is now organized as a combined arms battalion, and comprises one of the two maneuver elements of the 11th ACR. It is organized around a Headquarters and Headquarters Troop (HHT), and four line troops (two infantry, two armor), with a total authorized strength of 720 soldiers. It is equipped with the OPFOR Surrogate Vehicle, an M901 ITV highly modified with an M2/M3 Bradley Fighting Vehicle turret to represent the BMP-2 armored personnel carrier, and the OSTV (OPFOR Surrogate Tank Vehicle) a vehicle based on the OPFOR Surrogate Vehicle which can simulate a wide spectrum of threat tanks. Using this equipment and configuration, the squadron performs the first of its two primary missions, acting as a non-permissive opposing force (OPFOR) during ten FORSCOM combat training rotations each year. The squadron's second mission is to deploy and fight as a combined arms battalion for various contingency operations throughout the world. In order to support this mission, the squadron must also maintain, operate and remain proficient on the M1A1 Abrams Tank and M2A2 Bradley Fighting Vehicle. Commanded by Lieutenant Colonel Hennisse, the approximately 400 men of the squadron trained nine months before becoming the first squadron to leave for the regiment's inaugural deployment, to the Philippines. Arriving in January 1902, Troops A and D patrolled Samar, where they fought the regiment's first engagement. In 1905, the regiment relocated to Fort Des Moines, Iowa. 
In 1906, the 1st Squadron remained in Des Moines while the rest of the regiment deployed to Cuba as part of President Theodore Roosevelt's Army of Pacification. In 1909, the 1st Squadron rejoined the rest of the regiment in Fort Oglethorpe, Georgia. On 12 March 1916, the regiment received orders to join General John J. Pershing as part of the Mexican Punitive Expedition to pursue Pancho Villa. Nine days later, the 1st Squadron led the way, arriving in Mexico on 21 March. Later, the 1st Squadron rode 22 hours straight to the rescue of United States forces besieged in Parral. The 11th ACR was not deployed during World War I. During this period, 1st Squadron conducted port operations in Newport News, Virginia. After the Armistice, the regiment, with its predominantly black horses, was stationed at the Presidio of Monterey, in California. The Army reorganizations for World War II eliminated the horse cavalry in 1940 and 1st Squadron traded in "saddles and hooves" for "tracks and steel". The regiment was inactivated 15 July 1942. The personnel and equipment of the former 1st and 2nd Squadrons was combined to form the newly designated 11th Tank Battalion, which later fought at the Battle of the Bulge. On 1 April 1951, the regiment was reactivated as the 11th Armored Cavalry Regiment, as part of the build-up for the Korean War. The regiment served in Fort Carson, Colorado and Fort Knox, Kentucky until deploying to Germany to replace the 6th ACR along the Czechoslovakian border. In July 1964, 1st Squadron, along with the regiment, transferred to Fort Meade, Maryland. In 1966, the regiment deployed to Vietnam. The 1st Squadron earned the Valorous Unit Awards (twice), the Republic of Vietnam Cross of Gallantry (three times), and the Presidential Unit Citation. It was during the Vietnam War that the 11th ACR was granted authorization to wear its distinctive unit patch. President Barack Obama awarded Alpha Troop of the 1st Squadron the Presidential Unit Citation on 20 October 2009, in recognition of a rescue mission 26 March 1970. In February 1971, 1st Squadron was inactivated, then reactivated in May 1972, at Downs Barracks in Fulda, Germany. During the Southwest Asia Campaign, Ironhorse operated Camp Colt, a scout training camp for reservists reporting to active duty. Following Desert Storm, the regiment deployed to Kuwait in support of Operation Positive Force from June 1991 to September 1991. 1st Squadron, along with the rest of the regiment, was inactivated at Fulda, Germany in March 1994. The 1/63rd Armored Regiment, Fort Irwin, California was reflagged 1st Squadron, 11th Armored Cavalry Regiment in October 1994 with the mission of Opposing Forces for the National Training Center and continues to do so today. On 30 January 2005, 1st Squadron left Fort Irwin for Iraq. After spending about three weeks in Kuwait, the squadron moved to Camp Taji on the outskirts of Baghdad. The squadron was assigned the task of patrolling the Adhamiyah sector of Baghdad, a suburb of Baghdad just north of Sadr City. The squadron was also assigned the task of training Iraqi Army units to ultimately take over control of the sector. On 21 May 2005, the squadron left Camp Taji for Camp Liberty, one of the many camps that encircle Baghdad International Airport. Their new task was to patrol the Abu Ghraib sector just west of Baghdad and to provide perimeter security for Abu Ghraib prison. 
While in the Abu Ghraib sector, 1/11 ACR participated in Operation Thunder Cat along with the 256th Infantry Brigade of the Louisiana Army National Guard. The operation focused on disrupting IED cells in and around the Abu Ghraib sector, west of Baghdad. During this operation, 1/11 ACR uncovered five separate weapons caches, detained four suspected insurgents and uncovered $2,200 in US currency. The squadron redeployed to Fort Irwin on 22 January 2006, where it resumed its opposing forces mission for the National Training Center. During its deployment, the Nevada Army National Guard's 1st Squadron, 221st Cavalry, the 11th's former official roundout unit, took over the duty of OPFOR. 2nd Squadron The 2nd Squadron is part of the Army's Opposing Force at the National Training Center, conducting battle operations in accordance with published doctrine and combat instructions. While in its role as the 801st Brigade Tactical Group, the Eaglehorse Squadron portrays an opposing force (OPFOR) that trains US forces in the basic principles of combined arms maneuver (CAM) and wide area security (WAS). The regiment trains brigade and battalion task forces during ten rotations a year at the National Training Center, Ft. Irwin, California. Additionally, between rotations, the squadron conducts realistic, live-fire based training at the platoon and Bradley crew level. The 1st Battalion (Mechanized), 52d Infantry was inactivated on 26 October 1994 and the 2d Squadron was reactivated in its place by reflagging the existing unit. The 2d Squadron ("Eaglehorse") was activated on 2 February 1901 at Fort Myer, Virginia, and its campaigns have taken it to the Philippines, Mexico, Europe, and Vietnam, with support operations in Southwest Asia. The 2nd Squadron deployed with the regiment to the Philippines to suppress insurgent forces in November 1901, a deployment commemorated by the bolos that became part of the Blackhorse crest. The Blackhorse Regiment settled in Fort Des Moines, Iowa in 1905. The 2nd Squadron deployed to Cuba on 16 October 1906 as part of President Theodore Roosevelt's Army of Pacification; its mission was to patrol and serve as a show of force. On 12 March 1916, Eaglehorse joined General John J. Pershing's punitive expedition into Mexico, with orders to pursue Pancho Villa. Major Robert L. Howze, Commander, 2nd Squadron, led the "last mounted charge" on 5 May 1916, making the Eaglehorse Squadron's action a milestone in military history. The Blackhorse Regiment patrolled the U.S.-Mexican border from 1919 through 1942. The regiment received the name "Blackhorse" and a distinctive coat of arms while stationed at the Presidio of Monterey. World War II The regiment inactivated as a "horse regiment" on 15 July 1942 at Fort Benning, Georgia. The Headquarters and Headquarters Troop was redesignated on 19 April 1943 as the Headquarters and Headquarters Troop, 11th Cavalry Group Mechanized. The former squadrons of the 11th Cavalry were sent to fight with the 10th Armored Division and the 90th Infantry Division overseas. The new HHT, 11th Cavalry Group Mechanized drew new squadrons, the 36th and 44th, and also received an Assault Gun Troop (Howitzer Battery). After guarding the US southeastern coast from March 1944 until 1 June 1944, the group moved to Camp Gordon, Georgia to begin training for overseas deployment. The regiment arrived in the United Kingdom on 10 October 1944 and entered France on 23 November 1944.
Moving through France and Germany, the Blackhorse was assigned to the Ninth US Army and attached to XIII Corps, whose flank the Blackhorse screened during the corps' sweep from the Roer to the Rhine. 3rd Squadron Post-Vietnam, the 3rd Squadron ("Workhorse") was based at McPheeters Barracks in Bad Hersfeld, Germany, about 40 kilometers north of Fulda. The 3rd Squadron was organized as an armored cavalry squadron like the 1st and 2nd Squadrons. HHT; I, K, and L Troops; the Howitzer Battery; and M Company were organic to the squadron. Attached was the 58th Engineer Company. Bravo Battery, 2nd Battalion, 2nd Air Defense Artillery was also headquartered with the squadron. In the field, the attached units of the regiment like the 58th Engineer Company usually operated over a wide area, with smaller detachments dedicated to supporting the armored cavalry squadrons of the regiment. 4th Squadron The 11th Armored Cavalry Regiment (ACR) arrived in Viet Nam in September 1966; the Air Cavalry Troop (ACT), organic to the regiment, arrived in December of the same year with a complement of UH-1C gunships and UH-1D command and control "slicks". Early in January 1967, ACT was flying combat support for the regiment's missions. It was after this time that ACT earned its nickname Thunderhorse, because of the distinctive roaring sound of rotorwash over the rice paddies and the unit's distinctive Blackhorse insignia. In July 1968, Air Cavalry Troop was reorganized into Air Troop (AT), consisting of nine AH-1G Cobra gunships, designated "reds", and nine OH-6 light observation helicopters (LOH), designated "whites", which flew in pairs on target acquisition and destruction missions as "pinks", and an aerial rifle platoon (ARP), the "Blues", with infantry/cavalry scouts transported in the venerable UH-1 "Huey" (officially designated Iroquois). Air Troop served with distinction, earning the Republic of Vietnam Cross of Gallantry with Palm "VIETNAM 24 February – 19 May 1971" (DAGO 42, 1972), and the 1st Platoon of AT earned the Presidential Unit Citation "DUC HOA 12 Mar – 1 Apr 1969" (DAGO 69, 1969), in addition to the regiment's awards and streamers. In 1969, a trooper from Air Troop, SFC Rodney J. T. Yano, posthumously earned the Medal of Honor. The aviation assets of the regiment were deactivated on 20 March 1972 and left Viet Nam. The regiment was reactivated on 17 May 1972 to replace the inactivated 14th ACR, and on 18 September 1972 the newly formed Command and Control Squadron was organized at Sickles Army Airfield near Fulda. The regiment's new C&C Squadron was given the task of providing aerial surveillance of the 385-kilometer "iron curtain", which separated East and West Germany. C&C Squadron consisted of its headquarters elements; Air Troop (AT/AHT), with 3 UH-1H, 21 AH-1S(MOD) and 13 OH-58A; Combat Aviation/Support Troop (CAT/ST AIR), with 13 UH-1H, including an Aerial Mine Platoon (AMP), and 6 OH-58A; the 58th Combat Engineer Company (CEC); the 340th Army Security Agency (ASA); and the 84th Army Band. Detachments of two OH-58A each were assigned to the 2nd Squadron at Bad Kissingen and the 3rd Squadron at Bad Hersfeld. In 1981, Air Troop, under the command of Major Joseph W. Sutton, won the Draper Cavalry Award; it was the first time an aviation unit had won the award. Later, under the command of Major Michael K. Mehaffey, Air Troop was recognized as the Army Aviation Association of America's (AAAA) Unit of the Year.
On 1 June 1982, Command and Control Squadron was redesignated as the Regimental Combat Aviation Squadron (RCAS), officially the Combat Aviation Squadron (provisional), 11th Armored Cavalry Regiment, and nicknamed "Lighthorse". In the spring of 1984, Air Troop was once again named AAAA Aviation Unit of the Year. On 14 June of the same year, under the guidance of the Department of the Army's "Cavalry 86" and the new "J-series" Modified Table of Organization and Equipment (MTOE), elements of AT and CAT were combined to form the new 11th Combat Aviation Squadron (11th CAS), named "Thunderhorse" to honor the history of those Air Cavalry Troopers who had served before. The new squadron consisted of a Headquarters and Headquarters Troop, including an Aircraft Maintenance Platoon (AVUM), designated "Crazyhorse"; Alpha Troop, which was assigned a combat support aviation role and contained the last Aerial Mine Platoon in the Army, used a Yosemite Sam clad in a cavalry uniform as its mascot and was called the "Miners"; Bravo Troop, an attack helicopter troop, used the old cobra logo from AT and later a bulldog; Charlie Troop, an attack helicopter troop called the "Tankbusters", used the silhouette of a Soviet T-62 in an AH-1 turret gun sight; Delta Troop, an air cavalry troop, was known as the "Death Riders" and used a "Jolly Roger"-style skull on a red and white background; Echo Troop, an air cavalry troop, used a red and white logo which included a large letter E and AH-1 and OH-58 profiles; F Troop, an air cavalry troop, used a cartoon figure of an AH-1 punching a Soviet MI-24 (HIND) with the motto "Grab 'em by the nose – kick 'em in the ass"; and the 511th Military Intelligence Company (MI/CEWI), the "Trojan Horse", which had replaced the 340th ASA. In 1984, three EH-1H were assigned to the 511th. The squadron's S-4 section was known as "Hobbyhorse". On 17 June 1986 the squadron aligned itself in accordance with the US Army Regimental System (USARS). Now flying UH-60s, OH-58Cs and AH-1Fs, the squadron eventually ended up as HHT, N, O, P, Q, R, S and AVUM Troops. On 9 November 1989 the Berlin Wall fell, and by 1 March 1990 the squadron had ceased border operations. Early on 10 April 1991 elements of the squadron were issued no-notice deployment orders to self-deploy from Fulda to Diyarbakır, Turkey in support of Operation Provide Comfort. Task Force Thunderhorse deployed 15 UH-60 and five OH-58D along with crews and support personnel. During this period the 511th MI (CEWI) was recognized as the best company-sized military intelligence unit in the Army. With the fall of the wall and the collapse of the Warsaw Pact, the regiment began its deactivation, a process that ran from 15 October 1993 to 15 March 1994. The regiment, less the 3rd and 4th Squadrons, was reactivated in October 1994 at Fort Irwin, California. Support Squadron Support Squadron, 11th ACR provides combat support and combat service support to the 11th ACR and the NTC Opposing Force, and conducts deployment, survivability and MOS sustainment training in order to ensure the success of the regiment, the OPFOR, and the squadron. The "Packhorse" was activated in Germany under the command of LTC Ronald Kelly on 17 September 1985 to support the Blackhorse as it patrolled the East-West German border along the Fulda Gap. The squadron's official name at that time was Combat Support Squadron (CSS). The nickname "Packhorse" is derived from the early days of the U.S.
Cavalry, when soldiers went on campaigns accompanied by packhorses, additional horses and/or mules that carried all their essential supplies. Everything from food to gunpowder to horseshoes was transported in this manner. The initial organization included five units: Headquarters and Headquarters Troop, Maintenance Troop, Supply and Transportation Troop, Medical Troop, and the attached 54th Chemical Detachment. The squadron also operated the Regimental Material Management Center, which had responsibility for the overall logistics state of the regiment. Elements of the squadron were based at both Fulda and Wildflecken. The squadron was large for a battalion-sized unit, as the Maintenance Troop alone had some 400 soldiers assigned. The Packhorse provided logistical support during both the frequent regimental maneuvers of the Cold War and the gunnery exercises at Grafenwoehr, where the squadron operated for weeks at a time while the cavalry troops and tank companies rotated through the firing ranges. Squadron vehicles during the Cold War included 3/4-ton M1009 CUCVs, 1-1/4-ton M1008 and M1010 pickup trucks that often carried special-purpose shelters mounting communications, medical, or maintenance equipment, HEMTTs, M88s, tanker trucks, and trucks carrying chemical decontamination equipment. A pair of M934 5-ton expansible vans ("Expando-vans") housed the squadron headquarters in the field. The squadron was also capable of highly specialized functions such as the provision of potable water by filtering fresh water sources through purification units. On 3 October 1990, the two Germanys re-unified, and by December 1991 the Soviet Union had dissolved, ending the squadron's six-year presence along the Iron Curtain. In August 1990, Iraq invaded Kuwait, prompting the United States to respond. On 16 May 1991, the Packhorse received orders to deploy to Kuwait to support the regiment as it secured the country while it struggled to rebuild after the war. By October, the regiment had completed its mission and the Packhorse returned to Fulda. As the need for U.S. forces in Europe decreased, the Packhorse was inactivated on 15 February 1994, followed by the Blackhorse on 15 March 1994. The 177th Forward Support Battalion was inactivated on 26 October 1994, becoming the Regimental Support Squadron, with the "Packhorse" now carrying out its new role with the U.S. Army's Opposing Forces at the National Training Center. Since its reactivation, Support Squadron has been the Forces Command (FORSCOM) winner of the Philip A. Connelly award for garrison food service excellence in FY 1992, 1993 and 1994. The squadron also won this same competition at the Department of the Army (DA) level in FY 1993. The squadron played a part in the regiment's selection as the only Army unit in the Department of Defense to receive the Phoenix award for Maintenance Excellence in FY 1995, FY 1999 and FY 2000. Headquarters and Headquarters Troop was the winner of the DA Award for Maintenance Excellence for FY 1995 and FY 1996 in the Intermediate Equipment Density Category, the only unit to do so in the 16-year history of the program. Maintenance Troop was recognized as the Salute Magazine Unit of the Year for 1995; this competition included units from all branches of the service. The 58th Engineer Company placed 4th in FORSCOM in the DA Award for Maintenance Excellence in the Heavy Equipment Density Category in FY 1995, and was the 3rd-place winner in FY 1996 and FY 1997.
The 511th Military Intelligence Company placed 3rd in FORSCOM in the separate company category for the Army Supply Excellence Award and was the Starry Award winner for being the best company in the regiment during 2000. The squadron also won the FY 2002 FORSCOM competition for both the Supply Support Activity (SSA) and Squadron Supply Operations; the SSA subsequently was the runner-up at the DA level for FY 2002. The squadron won at the FORSCOM level in the Philip A. Connelly competition for the best field feeding crew of FY 2003. Supply/Trans and Maintenance Troops competed for and won the Army Award for Maintenance Excellence in FORSCOM for FY 2003; both are competing at the DA level. The squadron also placed first at the FORSCOM level of the Army Supply Excellence Award for FY 2004 in Squadron Supply Operations and the SSA; both are also competing at the DA level. Headquarters and Headquarters Troop Mission Provide personnel, administrative, and logistical support to the Regimental Support Squadron. Provide food service support to all NTC units in both the field and garrison. While providing this support, HHT will protect the force and provide a superb quality of life for its troopers and families. Headquarters Platoon Headquarters Platoon's mission is to support the troop's administration, logistics, and preparation for war. The platoon consists of the troop commander's staff. It comprises the orderly room and training room, communications section, motor pool, NBC room, unit supply, and arms room. The orderly room supports the troop in administration. The training room schedules training and maintains the troop's readiness status. The motor pool supports the troop with organizational-level maintenance. The NBC room supports the troop in nuclear, biological, and chemical training, and the unit supply supports the troop in organizational supply and the arms room. Also attached to the headquarters platoon are the cavalry scout and mortar platoons. The cavalry scouts use high-speed maneuvering and advanced optical equipment to identify targets. The mortar platoon uses the heavy 120 mm mortar system to provide long-range indirect fire. Field regimental dining facility The Field Regimental Dining Facility (FRDF) Platoon supports 10 rotations per year. The mission is to provide Class I in the field for the 11th Armored Cavalry Regiment Opposing Force (OPFOR) during all force-on-force rotations. Horse Detachment The 11th ACR Horse Detachment is a special ceremonial unit tasked with preserving the history and traditions of the Regiment's original mounted cavalry troopers, and is one of only six mounted units left in the active-duty U.S. Army. The Horse Detachment represents the 11th ACR, Fort Irwin, and the United States Army at official ceremonies and at on-post and regional community relations events, and supports Army recruiting and community outreach objectives. It participates in the U.S. Cavalry Association's annual National Cavalry Competition against other active-duty mounted units as well as civilian reenactors, and has won the Pulaski Trophy for Most Outstanding Military Unit five times: 2021, 2019, 2018, 2012, and 2009. Maintenance Troop Maintenance Troop's mission is to provide Class IX support and conduct direct support maintenance for the 11th Armored Cavalry Regiment to ensure the Regiment's success on the NTC battlefield. While providing this support, Maintenance Troop will protect the force and provide an improved quality of life for its troopers and families.
MT 1st Platoon Headquarters Platoon consists of the commander's staff, motor pool, shop office, NBC room, orderly room, technical supply and unit supply. This is the largest platoon in the troop. The main mission of this platoon is to keep the troop ready for war at all times. The shop office is the backbone of direct support maintenance. The shop officer and the repair control sergeant direct all the maintenance support for the regiment. They order repair parts and track each part from the time it is ordered to the time it is received. They also track all maintenance jobs from initial inspection to actual repair to final inspection and pick-up by the customer. Technical supply works 24 hours a day, 7 days a week, providing Class IX repair parts to the OPFOR. They provide serviceable assets, which include major assemblies, DLRs (Depot Level Reparables), Repairable Exchange items and ASL stockage. The mission of the NBC room is to provide nuclear, biological and chemical training to the troop. The training room is in charge of planning and executing training for the entire troop. The orderly room provides administrative support to the whole troop. The supply room provides organizational supply. The motor pool's mission is to provide organizational maintenance for all vehicles and commo equipment for the entire troop. MT 2nd Platoon Second Platoon is divided into three sections: Automotive, Armament, and Fuel & Electrical, which include the 41C, 44B, 44E, 45B, 45E, 45G, 63G, 63H, and 63W MOSs. The mission of these personnel is to provide quality direct support in the areas of repair parts (generators, alternators and starters), recovery assistance, welding and machine shop assistance, and automotive repair. The Automotive section provides direct support maintenance to the 11th Armored Cavalry Regiment and allied units in support of the OPFOR's daily mission. This entails repairing and replacing transmissions, steering gears, transfers, fuel injector pumps, differentials, engines, axles and the necessary gaskets and seals for various types of wheeled vehicles. These jobs are just a few of the tasks that the Automotive section performs to ensure that OPFOR equipment returns to the battlefield as quickly as possible. The Fuel & Electric section provides support in the areas of repairing and replacing wiring harnesses, generators, alternators, starters and brake shoe linings, and the resurfacing of brake drums. In addition, the F&E section repairs and replaces fan towers, gear assemblies and shocks for the M551 tank. The Armament section provides support for the main turret, ballistic computers, laser range finders, and other armament controls for the M1A1 Abrams main battle tank, as well as various small arms repair and aiming devices. MT 3rd Platoon Third Platoon consists of Ground Support Equipment repair, Service and Recovery, and the Communications/Electronics shop, which include the 35C, 35E, 35F, 35N, 52C, 52D, 63B and 63J MOSs. This platoon is usually referred to as '3rd shift, 3rd shop', because when the mission calls, they often work around the clock. The GSE section is tasked with the mission of repairing engineer equipment; GSE repairs the equipment and returns it to the NTC battlefield. The Communications/Electronics shop works around the clock to repair the regiment's radios. The special electronic devices section of 3rd Platoon tirelessly maintains NVGs for the regiment. They also ensure that all chemical agent monitors and navigational satellite systems are maintained.
The Service and Recovery section has a continuous mission of providing recovery for disabled vehicles across the post. They are trained to inspect a vehicle and, if possible, fix it on the spot so that it can continue its mission; if that is not possible, they are trained to recover the vehicle by any available means. There are no manuals written on how to recover a damaged vehicle; the manuals that exist only cover the principles of recovery and the capabilities of each recovery vehicle. It is only by experience on the job that the soldier decides how a vehicle will be recovered. The section is composed of 91E (Allied Trade Specialist) soldiers, who are both welders and machinists. They can manufacture a functional part from a piece of metal or fabricate anything within the limitations of the equipment they have. MT 4th Platoon Maintenance Support Team (MST) Platoon's mission is to provide dedicated direct support maintenance. The MST Platoon is made up of two different MOSs: 63W (wheel vehicle repairer) and 63Y (track vehicle mechanic). Supply and Transportation Troop The Supply & Transportation Troop, Regimental Support Squadron, 11th Armored Cavalry Regiment, Fort Irwin, California, provides support to the Opposing Force (OPFOR) soldiers of the 11th Armored Cavalry Regiment. While supporting the soldiers of the OPFOR with Class I (food), II (heaters, chemlights), III (fuel), and IV (construction material), as well as all the transportation requirements needed on the NTC battlefield, S&T Troop will also provide a better quality of life for its soldiers and their families. There are four platoons (Headquarters/Supply, Maintenance, Petroleum, and Transportation). The unit is responsible for the direct support of Class I (ration break point), Class III (bulk and aviation fuel), Class IV (lumber), Class VII (major items), field services, and direct transportation support with light, medium, and heavy capability assets. It is also responsible for maintaining and issuing civilian vehicles in support of the OPFOR to replicate the presence of civilians on the battlefield (COBs). Supply Platoon S&T's Supply Platoon mission includes the Class I breakdown for each rotation, the issuing of COB-Vs prior to rotations, and the issuing of Allied Fleet vehicles prior to rotation. The Supply Platoon operates four 6K forklifts, over 50 COB-Vs and over 25 Allied Fleet vehicles. In addition, the Supply Platoon is the housing and issuing point for all regimental CL IV. Transportation Platoon The S&T Transportation Platoon's missions consist of transporting Class I, II, IV, V, and IX supplies. In addition, the Transportation Platoon is often tasked to haul tracked vehicles with its eight Heavy Equipment Transport Systems. Along with the eight HET systems, the Transportation Platoon has four PLS systems, 14 M931 tractors, five XM1098 3,000-gallon water tankers and 22 M871 flatbed trailers. POL Platoon S&T's POL Platoon mission consists of providing CL III (B) support for the regiment. This includes forward area resupply point (FARP) and Refuel On the Move (ROM) capabilities in order to support rotational CL III requirements. The platoon consists of two M978 HEMTT 10-ton, 2,500-gallon fuel servicing trucks; eight M969 5,000-gallon semitrailer tankers; ten M931 5-ton truck tractors; and a 300K forward area refueling point system.
Maintenance Platoon S&T's Maintenance Platoon's mission is to ensure that all of S&T's vehicles are fully mission capableand able to be utilized to execute all missions tasked down to S&T. This means that the Maintenance Platoon must maintain the operational readiness of eight Heavy Equipment Transport systems (M1070/M1000 HET), 25 5-Ton truck tractors (M931s), 22 M871 trailers, five XM1098 3000 gallon water tankers, eight M969 5000 gallon semitrailer tankers, four Palletized Loading Systems (M1074/1075), four 6,000 pound forklifts, and other vehicles in the S&T fleet. Headquarters and Headquarters Detachment, Task Force Palehorse Headquarters and Headquarters Detachment, Task Force Palehorse, provides world class Observer-Controller/Trainer's (OC/T's) for the 11th Armored Cavalry Regiment. Task Force Palehorse works directly with Operations Group at the National Training Center to assist in meeting the training requirements for the Rotational Training Unit (RTU). All permanent party members of Task Force Palehorse spend countless hours in training and are required to be subject matter experts (SME) in combined arms maneuver (CAM), wide area security (WAS) operations, the respective combination of CAM/WAS, Unified Land Operations and global insurgency tactics, techniques, and procedures (TTP's). Task Force Palehorse provides invaluable feedback to units through professional After-Action Reviews and written reports in support of ten National Training Center rotations a year. On order, Task Force Palehorse deploys as military advisors to foreign nations to aid with military training, organization, combat operations and other various military tasks. Task Force Palehorse has earned an Army wide reputation for developing and producing highly lethal and top tier leaders. The name Palehorse has its genesis in Revelation 6:8 "And I looked, and behold a pale horse: and his name that sat on him was Death.." Leaders are hand picked from throughout the Army to fill the ranks of the Task Force all of which are combat proven, highly motivated and masters of the profession. Because of this, per capita, Task Force Palehorse has the highest concentration of Combat Infantry Badges (CIB), Combat Action Badges (CAB) and prestigious awards for service in combat than any other unit within the Regiment. Current order of battle 11th Armored Cavalry Regiment 1st Squadron, 11th Armored Cavalry Regiment (Combined Arms Squadron) 2nd Squadron, 11th Armored Cavalry Regiment (Combined Arms Squadron) 1st Battalion, 144th Field Artillery Regiment (CA ARNG) Support Squadron, 11th Armored Cavalry Regiment Campaign participation credit Gallery See also Observation Post Alpha Notes References Bonsteel, F. T. The Eleventh Cavalry, 1901 to 1923. Monterey, California: 1923 Cobb, William W. 11th U.S. Cavalry Report; Armor, LXXXVI, No. 2 (March–April 1967). Fitfield, Robert W., et al. 11th U.S. Cavalry, California-Mexican Border, 1941. Los Angeles, California: 1941. Haynes, George L. Jr. and James C. Williams. The Eleventh Cavalry From the Roer to the Elbe, 1944–1945. Nuremberg, Germany. Herr, John K., and Edward S. Wallace. The Story of the U.S. Cavalry. Boston, Massachusetts:1953. Tricoche, George Nestler. La vie militaire a l'etranger. Notes d'un engage voluntaire au 11th United States Cavalry. Paris, France: 1897. Unit Members. 11th Armored Cavalry. Fort Knox, Kentucky: 1956. . 11th Armored Cavalry, Germany, 1958. Germany: 1958. Blumenson, Martin. Breakout and Pursuit. 1961. 
https://web.archive.org/web/20081007045140/http://www.calguard.ca.gov/1-144fa/Pages/default.aspx John Albright, "Convoy Ambush on Highway 1, 21 November 1966", in John A. Cash et al. Seven Firefights in Vietnam. Reprint. New York: Dover, 2007, pages 49–70. Pelini, Marc E., '11th ACR Destroys Terror Cell, Find Weapons and Cash', 14 August 2005, DVIDS, http://www.dvidshub.net/news/2718/1-11th-acr-destroys-terror-cell-find-weapons-cash External links 11th Armored Cavalry Regiment Home Page GlobalSecurity.org: 11th Armored Cavalry Regiment 2nd Squadron, 11th Armored Cavalry Regiment in Cold War Germany 11th Cavalry in Vietnam 11th US Cavalry {Commemorative}, A Troop K Troop, 3rd Sqd., 11th Armored Cavalry Regiment Vietnam 1966–1972 Photo gallery of M Troop / Company Blackhorse Museum Fulda Military units and formations established in 1901 Armored cavalry regiments of the United States Army Military units and formations of the United States in the Philippine–American War 11th Armored Cavalry Regiment Cavalry regiments of the United States Army
47449116
https://en.wikipedia.org/wiki/Customer%20success
Customer success
Customer Success or Customer Success Management is a business method for ensuring that customers achieve their desired outcomes while using a product or service. Customer Success is relationship-focused client management that aligns client and vendor goals for mutually beneficial outcomes. An effective Customer Success strategy typically results in decreased customer churn and increased up-sell opportunities. The goal of Customer Success is to make the customer as successful as possible, which, in turn, improves customer lifetime value (CLTV) for the company. Key sub-functions Key functions of a Customer Success (CS) team include: Technical Enablement: While the scope and level of effort (LOE) can vary drastically from a few hours to many person-years, almost every software solution, and generally any innovation, requires some level of initial setup and enablement. This activity, which can also be referred to as Initial Implementation or Customer On-boarding and Initial Engagement, is normally the first activity that follows the initial sale of the solution and any add-on component of it, and is normally governed by a statement of work (SOW) that defines the deliverables the software provider commits to provide, the timeframe, and the commercial structure for the engagement. In many organizations, the team responsible for this function is referred to as professional services. Knowledge Enablement: Providing the customer with the knowledge needed to make the best use of the solution is a function that can sometimes be independent of the functions noted above or sometimes provided by one of them. Formal and informal training are part of this function, as are customer-to-customer relations (like community sites) and self-service knowledge systems. Identifying Growth Opportunity: During the lifecycle of the contract, the Customer Success Manager (CSM) will have direct access to conversations about the growth of the client's business. It is a key function of the role to stand in as a "trusted adviser" and help identify expansion opportunities, whether in contract scope or in feature/function expansion. Identifying Churn Risk: Utilizing a customer health score is pivotal in identifying churn (or lost revenue). As the main point of contact for accounts, the CSM has visibility into the possibility of churned or cancelled accounts. The CSM can intervene when needed to help decrease the likelihood of a lost account. General Account Management: While Customer Success is a more robust approach to account management, a good portion of the day-to-day work falls under the "account management" title. CS is the team that manages the business relations between the customer and the provider. This function operates in parallel with the technical teams and works with the customer to ensure they make the best use of the provider's capabilities and to expand and improve them. They will work to increase adoption of the solution, ensure renewal and expansion of contracts, and manage executive relations. This includes being an advocate for the customer to various groups within an organization. Metrics of Success NPS – “Net Promoter Score” is a management tool that can be used to gauge the loyalty of a firm's customer relationships. It serves as an alternative to traditional customer satisfaction research and is claimed to be correlated with revenue growth. CSAT – “Customer Satisfaction” is a score that indicates how satisfied a customer is with a specific product, transaction, or interaction with a company.
The term “CSAT” is most often used in the context of a “CSAT score,” which describes a numerical measure of customer satisfaction. CES – “Customer Effort Score” (or “Net Easy Score”) is a single-item metric that measures how much effort a customer has to exert to get an issue resolved, a request fulfilled, a product purchased/returned or a question answered. Churn – Churn rate, when applied to a customer base, refers to the proportion of contractual customers or subscribers who leave a supplier during a given time period. Health score – A consolidated score that summarizes the overall situation of each customer. Customer success managers (CSM) Presently, the customer success function within most organizations is embodied in the customer success manager (CSM), client relationship manager (CRM), or client strategy consultant (CSC) job titles. The CSM acts as the main point of contact and as a trusted advisor for the customer from the vendor side, as they are ultimately responsible and accountable for that customer's success. The role may share many of the same functions as traditional account managers, relationship managers, project managers, and technical account managers, but its mode of operation tends to be much more focused on long-term value generation for the customer. At its heart, it is about maximizing the value the customer generates from utilizing the vendor's solutions, while enabling the vendor to derive a high return from that customer value. To enable that, the CSM must monitor the customer's usage of, and satisfaction with, the vendor's solutions, identify opportunities and challenges in the way the customer engages with the solution, and take action to help resolve challenges and foster expansion of the usage as well as the value of the solutions (to both sides) over time. As a consequence, relentlessly monitoring and managing customer health is a key success factor for every CSM, as is the need to deeply understand the drivers of the value the customer gains from the solutions provided by the vendor. Without such a deep and timely understanding of these two aspects of the customer, the CSM will not be able to act effectively. In young organizations, where the total number of employees (and customers) is small, the CSM may be the first employee of the customer success team. As such, they will be responsible for most of the functions described above, which over time may be fulfilled by more specialized team members. Ownership of commercial responsibilities by CSMs varies among companies. While some believe a CSM's neutrality from sales or commercial conversations may make a customer more likely to respond to and engage with a CSM, others view ownership of the commercial relationship as natural to a long-term relationship between a vendor and a customer and as more empowering to the CSM. For CSMs to fulfill the responsibilities of their role, they must be empowered by an organization's executive team to navigate freely among all parts of the organization. This maintains the CSM's credibility with the customer as an effective resource. In organizations where CSMs are just another level of abstraction or a "screen" between the customer and the resources they need, the credibility of the CSM is compromised and the customer experience is eroded, which may result in a customer not renewing or expanding their business with the vendor.
Furthermore, lacking top-down support will deprive the CSM of the ability to garner the resources needed to do their job. Virtual customer success managers (VCSM) Virtual customer success managers are remote points of contact for customers; they monitor the success of customers and provide important feedback. Background/history Every company that sells its products and/or services to customers has functions responsible for managing customer fulfillment and relations. In traditional businesses, those functions are referred to most commonly as "Fulfillment", "Post Sales" or "Professional Services". In the world of technology, companies have been developing, selling and enabling software solutions for many years. At most of those companies the function responsible for managing relations with customers used to be called "Account Management", "Operations" or "Services". As the business world changes and evolves into new fields, the method by which software is delivered to customers changes as well. One of the most significant changes in recent years has been the emergence of software as a service (SaaS). SaaS is a method of delivering a software solution to customers in a subscription model; it departs from the "old" model of granting a perpetual license, which lets the customer own the solution and therefore use it as they see fit (but also be responsible for its operation). When providing a solution to customers as SaaS, companies instead offer their products as services rather than physical objects, moving to a subscription model. The customer "rents" the solution and is able to use it only for the period for which they have rented it. The vendor providing the solution supplies not only the solution itself, but also the infrastructure that supports it. The key implication of this new model is a fundamental shift in the engagement model between the software vendor and its customers. Whereas in the traditional "enterprise software" model a customer buys the license to the software and pays the vendor regardless of its actual usage, in a SaaS model the customer pays a (much smaller) rent for the software every month. The software vendor must therefore ensure that the customer is using the solution and seeing value from it if it wants to ensure that the customer continues to pay. This fundamental shift in the software industry's operating model revealed a need for a function at the company to own and ensure the success of its customers. The emergence of this function is what is now referred to as customer success (CS). While the trend towards SaaS has been going on since the beginning of the 21st century, the understanding of the need for a much stronger focus on customer success, and therefore the creation of the field of customer success, only began around 2010–2012. The CS function is responsible for retaining and growing the business that the sales team has secured. Case studies show that companies with strong CS teams outperform peers with weak or no CS teams on a multitude of financial criteria, including customer retention (also measured by "churn", which is the opposite of retention), revenue growth rates, gross margin, customer satisfaction, and referrals. In fact, customer experience is the greatest untapped source of both decreased costs and increased revenue in most industries, but only if companies take the time to understand what underpins it and how they can benefit financially from improving it.
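The metrics described above lend themselves to simple, automatable calculations. The following Python sketch is illustrative only: churn rate and NPS follow their standard definitions, while the composite health score (its inputs, weights, and caps) is a hypothetical example of how a CS team might combine usage, adoption, and support signals, not a standard formula.

def churn_rate(customers_at_start: int, customers_lost: int) -> float:
    """Churn rate for a period: the share of contractual customers who left."""
    return customers_lost / customers_at_start


def net_promoter_score(survey_responses: list[int]) -> int:
    """NPS from 0-10 survey answers: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for r in survey_responses if r >= 9)
    detractors = sum(1 for r in survey_responses if r <= 6)
    return round(100 * (promoters - detractors) / len(survey_responses))


def health_score(logins_last_30d: int, seats_used: int, seats_purchased: int,
                 open_tickets: int) -> float:
    """Hypothetical composite health score in [0, 100]; weights are illustrative."""
    usage = min(logins_last_30d / 20, 1.0)                 # cap the usage signal
    adoption = seats_used / seats_purchased if seats_purchased else 0.0
    friction = min(open_tickets / 5, 1.0)                  # more tickets = more risk
    return round(100 * (0.4 * usage + 0.4 * adoption + 0.2 * (1.0 - friction)), 1)


# Example: 500 customers at the start of the quarter, 25 lost -> 5% churn.
print(churn_rate(500, 25))                                 # 0.05
print(net_promoter_score([10, 9, 9, 7, 6, 3]))             # 17 (3 promoters, 2 detractors of 6)
print(health_score(logins_last_30d=12, seats_used=40,
                   seats_purchased=50, open_tickets=1))    # 72.0

A score of this kind typically feeds the churn-risk and growth-opportunity reviews that a CSM performs for each account.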
References Business process Customer experience
734158
https://en.wikipedia.org/wiki/Gaussian%20%28software%29
Gaussian (software)
Gaussian is a general purpose computational chemistry software package initially released in 1970 by John Pople and his research group at Carnegie Mellon University as Gaussian 70. It has been continuously updated since then. The name originates from Pople's use of Gaussian orbitals to speed up molecular electronic structure calculations as opposed to using Slater-type orbitals, a choice made to improve performance on the limited computing capacities of then-current computer hardware for Hartree–Fock calculations. The current version of the program is Gaussian 16. Originally available through the Quantum Chemistry Program Exchange, it was later licensed out of Carnegie Mellon University, and since 1987 has been developed and licensed by Gaussian, Inc. Standard abilities According to the most recent Gaussian manual, the package can do: Molecular mechanics AMBER Universal force field (UFF) DREIDING force field Semi-empirical quantum chemistry method calculations Austin Model 1 (AM1), PM3, CNDO, INDO, MINDO/3, MNDO Self-consistent field (SCF) methods Hartree–Fock method: restricted, unrestricted, and restricted open-shell Møller–Plesset perturbation theory (MP2, MP3, MP4, MP5) Built-in density functional theory (DFT) methods B3LYP and other hybrid functionals Exchange functionals: PBE, MPW, PW91, Slater, X-alpha, Gill96, TPSS. Correlation functionals: PBE, TPSS, VWN, PW91, LYP, PL, P86, B95 ONIOM (QM/MM method) up to three layers Complete active space (CAS) and multi-configurational self-consistent field calculations Coupled cluster calculations Quadratic configuration interaction (QCI) methods Quantum chemistry composite methods – CBS-QB3, CBS-4, CBS-Q, CBS-Q/APNO, G1, G2, G3, W1 high-accuracy methods Official release history Gaussian 70, Gaussian 76, Gaussian 80, Gaussian 82, Gaussian 86, Gaussian 88, Gaussian 90, Gaussian 92, Gaussian 92/DFT, Gaussian 94, Gaussian 98, Gaussian 03, Gaussian 09, and Gaussian 16. Other programs named 'Gaussian XX' were placed among the holdings of the Quantum Chemistry Program Exchange. These were unofficial, unverified ports of the program to other computer platforms. License controversy In the past, Gaussian, Inc. has attracted controversy for its licensing terms, which stipulate that researchers who develop competing software packages are not permitted to use the software. Some scientists consider these terms overly restrictive. The anonymous group bannedbygaussian.org has published a list of scientists whom it claims are not permitted to use GAUSSIAN software. These assertions were repeated by Jim Giles in 2004 in Nature. The controversy was also noted in 1999 by Chemical and Engineering News (repeated without additional content in 2004), and in 2000, the World Association of Theoretically Oriented Chemists Scientific Board held a referendum of its executive board members on this issue, with a majority (23 of 28) approving a resolution opposing the restrictive licenses. Gaussian, Inc. disputes the accuracy of these descriptions of its policy and actions, noting that all of the listed institutions do in fact have licenses for everyone but directly competing researchers. They also claim that not licensing competitors is standard practice in the software industry and that members of the Gaussian collaboration community have been refused licenses by competing institutions. See also List of quantum chemistry and solid-state physics software References External links Computational chemistry software
1622419
https://en.wikipedia.org/wiki/Internet%20Explorer%207
Internet Explorer 7
Windows Internet Explorer 7 (IE7) (codenamed Rincon) is a web browser for Windows. It was released by Microsoft on October 18, 2006, as the seventh version of Internet Explorer and the successor to Internet Explorer 6. Internet Explorer 7 is part of a long line of versions of Internet Explorer and was the first major update to the browser since 2001. It was the default browser in Windows Vista and Windows Server 2008 (the later default was Internet Explorer 9), as well as Windows Embedded POSReady 2009 (the later default was Internet Explorer 8), and can replace previous versions of Internet Explorer on Windows XP and Windows Server 2003; unlike version 6, however, this version does not support Windows Me, Windows 2000, or earlier versions of Windows. It also does not support Windows 7, Windows Server 2008 R2 or later Windows versions. Internet Explorer 7 was the last version of Internet Explorer to be supported on Windows Server 2003 SP1 and Windows XP x64 Edition below SP2; from Internet Explorer 8 onward, the software supported only Windows XP x64 Edition SP2 and Windows Server 2003 SP2. Some portions of the underlying architecture, including the rendering engine and security framework, have been improved. New features include tabbed browsing, page zooming, an integrated search box, a feed reader, better internationalization, and improved support for web standards, although it does not pass the Acid2 or Acid3 tests. Security enhancements include a phishing filter, stronger encryption on Windows Vista and Windows Server 2008 (256-bit, up from 128-bit in Windows XP and Windows Server 2003), and a "Delete browsing history" button to easily clear private data. It is also the first version of Internet Explorer branded and marketed under the name 'Windows' instead of 'Microsoft'. IE7 shipped as the default browser in Windows Vista and Windows Server 2008 and was offered as a replacement for Internet Explorer 6 for Windows Server 2003 and Windows XP. IE7 was superseded by Internet Explorer 8 in March 2009. Support for Internet Explorer 7 will end on October 10, 2023, alongside the end of support for Windows Embedded Compact 2013. Support for Internet Explorer 7 on other Windows versions ended on January 12, 2016, when Microsoft began requiring customers to use the latest version of Internet Explorer available for each Windows version. History In August 2001, Microsoft released Internet Explorer 6, an update from previous Internet Explorer versions such as Internet Explorer 5, for Windows NT 4.0 with Service Pack 6a, Windows 98, Windows 2000 and Windows ME, and included it by default in Windows XP and Windows Server 2003. With the release of IE6 Service Pack 1 in 2003, Microsoft announced that future upgrades to Internet Explorer would come only through future upgrades to Windows, stating that "further improvements to IE will require enhancements to the underlying OS." On February 15, 2005, at the RSA Conference in San Francisco, Microsoft Chairman Bill Gates announced that Microsoft was planning a new version of Internet Explorer that would run on Windows XP. Both he and Dean Hachamovitch, General Manager of the Internet Explorer team, cited needed security improvements as the primary reason for the new version. The first beta of IE7 was released on July 27, 2005, for technical testing, and a first public preview version of Internet Explorer 7 (Beta 2 preview: pre-Beta 2 version) was released on January 31, 2006. The final public version was released on October 18, 2006. On the same day, Yahoo!
provided a post-beta version of Internet Explorer 7 bundled with the Yahoo! Toolbar and other Yahoo!-specific customizations. In late 2007, Microsoft announced that IE7 would not be included as part of Windows XP SP3, with both Internet Explorer 6 and 7 receiving updates. Most PC manufacturers, however, have pre-installed Internet Explorer 7 (as well as 8) on new XP PCs, especially netbooks. On October 8, 2007, Microsoft removed the Windows Genuine Advantage component of IE7, allowing it to be downloaded and installed by those without a genuine copy of Windows. Within a year after IE7's release (end of 2006 to end of 2007), support calls to Microsoft had decreased by 10–20%. On December 16, 2008, a security flaw was found in Internet Explorer 7 which could be exploited so that crackers could steal users' passwords. The following day, a patch was issued to fix the flaw, which was estimated to have affected around 10,000 websites. Estimates of IE7's global market share were 1.5–5%. Release history On January 31, 2006, Microsoft released a public preview build (Beta 2 preview: pre-Beta 2 version) of Internet Explorer 7 for Windows XP Service Pack 2 (not for Windows Server 2003 SP1) on its web site. It stated that more public preview builds (possibly Beta 2 in April) of Internet Explorer 7 would be released in the first half of 2006, and that the final version would be released in the second half of 2006. The pre-beta build was refreshed on March 20, 2006, to build 7.0.5335.5. A real Beta 2 build was released on April 24, 2006, as build 7.0.5346.5. In addition, at the MIX'06 conference, Bill Gates said that Microsoft was already working on the next two versions of IE after version 7. On June 29, 2006, Microsoft released Beta 3 (Build 7.0.5450.4) of Internet Explorer 7 for Windows XP SP2, Windows XP x64 Edition and Windows Server 2003 SP1. It featured minor UI cleanups, re-ordering of tabs by drag and drop, as well as noticeable performance improvements. On August 24, 2006, Release Candidate 1 (RC1) of Internet Explorer 7 (Build 7.0.5700.6) was released for Windows XP SP2, Windows XP x64 Edition and Windows Server 2003 SP1. This was the last pre-release version of IE7 before the final release. On September 28, 2006, 3Sharp, a privately held technical services firm, published the results of a study, commissioned by Microsoft, evaluating eight anti-phishing solutions, in which Internet Explorer 7 (Beta 3) came out on top. The study evaluated the ability to block phishing sites, to warn about them, and to allow good sites. On October 18, 2006, the first finished version was released on microsoft.com, and was distributed as a high-priority update via Automatic Updates (AU) on November 1. AU notifies users when IE7 is ready to install and shows a welcome screen that presents key features and the choices "Install", "Don't Install", or "Ask Me Later". On November 8, 2006, a version of Internet Explorer 7 was released for Windows Vista only (7.0.6000.16386). On November 11, 2006, another version for Windows XP was made available (7.0.5730.11IC). On September 24, 2007, Windows Server 2008 RC0 was released with version 7.0.6001.16659. On October 4, 2007, the latest version for Windows XP SP2 and Windows Server 2003 SP1 (7.0.5730.13) was made available. On February 4, 2008, a version of Internet Explorer 7 was released for Windows Vista SP1 and Windows Server 2008 only (7.0.6001.18000). On May 26, 2009, the latest version for Windows Vista and Windows Server 2008 (7.0.6002.18005) was made available.
Features With this version, Internet Explorer was renamed from Microsoft Internet Explorer to Windows Internet Explorer as part of Microsoft's rebranding of components that are included with Windows. Internet Explorer 7 introduces the Windows RSS Platform, with which it is tightly integrated; it can subscribe to RSS and Atom feeds, synchronize and update them on a schedule, and display them with its built-in style sheet. Version 7 is intended to defend users from phishing as well as deceptive or malicious software, and it also features full user control of ActiveX and a better security framework, including not being integrated as much with Windows as previous versions, thereby increasing security. Unlike previous versions, the Internet Explorer ActiveX control is not hosted in the Windows Explorer process, but rather runs in its own process. It also includes bug fixes, enhancements to its support for web standards, tabbed browsing with tab preview and management, a multiple-engine search box, a web feed reader, Internationalized Domain Name (IDN) support, and an antiphishing filter. On October 5, 2007, Microsoft removed the 'genuine software' validation before install, which means that all versions of Windows, whether able to pass validation or not, are able to install the browser. The integrated search box supports OpenSearch. On Windows Vista, Internet Explorer operates in a special "Protected Mode" that runs the browser in a security sandbox with no write access to the rest of the operating system or file system. When running in Protected Mode, IE7 is a low-integrity process; it cannot gain write access to files and registry keys outside of the low-integrity portions of a user's profile. This feature aims to mitigate problems whereby newly discovered flaws in the browser (or in add-ons hosted inside it) allowed crackers to surreptitiously install software on the user's computer (typically spyware). Usability and accessibility The user can rearrange tabs by dragging and dropping them as desired. Since it is tightly integrated with the operating system, Internet Explorer makes full use of the accessibility framework provided in Windows. Internet Explorer is also a user interface for FTP, with operations similar to those of Windows Explorer. However, with version 7, FTP sites are rendered as a hyperlinked page, with the folder-like view available if the site is accessed from Windows Explorer. IE7 can itself launch Windows Explorer for FTP sites. Privacy and security Protected Mode (available in Windows Vista only), whereby the browser runs in a sandbox with even lower rights than a limited user account. As such, it can only write to the Temporary Internet Files folder and cannot install start-up programs or change any configuration of the operating system without communicating through a broker process. IE7 Protected Mode relies on the User Account Control technology. ActiveX Opt-In blocks ActiveX controls unless they are allowed to be installed. This feature improves security against unverifiable and vulnerable controls. ActiveX controls can be chosen for installation via the Information Bar, and users can turn ActiveX controls on and off using the Add-on Manager. The new Phishing Filter offers protection against phishing scams and other websites that may be considered dangerous for a user to enter their personal information into. When enabled, every website the user visits is checked against a master list of known phishing sites. If a site is listed, the user is informed.
In light of the privacy implications of this feature, it is not enabled automatically; the user is asked when they start Internet Explorer 7 whether they want it enabled. Microsoft works in conjunction with companies that specialize in identifying phishing schemes to ensure the list of known sites is accurate and quickly updated. The address bar and status bar appear in all windows, including popups, which helps to block malicious sites from disguising themselves as trusted sites. The address bar also features color coding to visually indicate the trustworthiness of the page. The address bar turns red when a page with an invalid security certificate is accessed. For sites not using any encryption, the address bar is white, and if the page uses a high-security certificate, the bar turns green. Modal windows such as dialog boxes are shown only when the tab that generated them is selected (in such situations, the tab color becomes orange). On the other hand, the save window is modal, and while saving the page shown in a tab, the user cannot browse other tabs. The address bar no longer allows JavaScript to be executed on blank pages (about:blank). This feature is still supported on other pages, though, which enables bookmarklets to work properly. A reason for the change has not been given. The status bar no longer allows custom text to be entered (e.g.: "Formatting C:\ 10% Complete |||||||") and will always show the URL of any link hovered over, for security. It now also shows the target URL of form buttons to help identify forms which submit their data to suspicious sites. The ability to limit scripting functions, such as those that modify the status bar or adjust the size or position of the browser window, was introduced with IE7. "Delete Browsing History" clears the complete browsing history in a single step. Previously this was a multistage process requiring users to delete the browser cache, history, cookies, saved form data and passwords in a series of different steps. This is useful for improving privacy and security in a multiuser environment, such as an Internet café. Fix My Settings checks at startup or when a setting is changed; if the current settings are unsafe, it notifies the user. The user can also press a button in order to fix the settings to a safe state. There is currently no way to disable these warnings. Old protocols and technologies were removed: Gopher, TELNET, Scriptlets, DirectAnimation, XBM, Channels (.CDF files, also known as 'Active Desktop Items'), etc. The DHTML Editing Control was removed from IE7 for Windows Vista to reduce the surface area for security attacks. No Add-ons mode allows IE7 to launch without any installed extensions. IE7 cipher strength: 256-bit (only for Vista; IE7 for XP and Server 2003 supports only 128-bit). The address bar turns red when the certificate presented by a secure site has problems. In that case navigation to the site is blocked by default, and the site can only be accessed after the user explicitly confirms the navigation. IE7 includes support for Extended Validation (EV) certificates. When sites present an EV certificate, the address bar is shown in green. New Group Policy Administrative Templates (.adm files) for IE7 are loaded automatically onto the domain controller when a Group Policy is opened from a workstation where IE7 has been installed. These new administrative templates allow for controlling the Anti-Phishing Filter state, for example.
Reset Internet Explorer settings deletes all temporary files, disables browser add-ons, and resets all changed settings to factory defaults. It can be used if the browser is in an unusable state. Microsoft has addressed security issues in two distinct ways within Windows Vista: User Account Control, which forces a user to confirm any action that could affect the stability or security of the system even when logged in as an administrator, and "Protected-mode IE", which runs the web browser process with much lower permissions than the user. The first vulnerability exclusive to Internet Explorer 7 was posted after 6 days. Internet Explorer 7 is a component of Windows Embedded Compact 7 and Windows Embedded Compact 2013 and follows the same lifecycle; thus it will continue to be supported until October 10, 2023. Phishing filter Some users have criticised the phishing filter for being too easy to circumvent. One successful method of bypassing Internet Explorer's Phishing Filter has been reported: redirecting a blacklisted web page to another, non-blacklisted page using a server-side redirect. Until the new page is blocked as well, the attack can remain active. This flaw means that phishers can keep links from previous emails functioning by simply moving to a new server when their original web page is blacklisted and adding a redirect. This has been criticised as doubly serious, as the presence of a phishing filter may lull users into a false sense of security when the filter can be bypassed. The phishing filter went on to be developed into, and renamed, the Safety Filter and then SmartScreen by Microsoft during the development of Internet Explorer 8. Standards support Internet Explorer 7 adds support for per-pixel alpha transparency in PNG, as well as minor improvements to HTML, CSS and DOM support. Microsoft's stated goal with version 7 was to fix the most significant bugs and the areas which caused the most trouble for developers; however, full compatibility with standards was postponed. Internet Explorer 7 additionally features an update to the WinInet API. The new version has better support for IPv6 and handles hexadecimal literals in IPv6 addresses. It also includes better support for Gzip and deflate compression, so that communication with a web server can be compressed and thus will require less data to be transferred. Internet Explorer Protected Mode support in WinInet is exclusive to Windows Vista. Although Internet Explorer 7 is more compliant than previous versions, according to all figures it remained the least standards-compliant of the major browsers of its period. It does not pass the Acid2 or Acid3 tests, two test cases designed by the Web Standards Project to verify web standards compliance. In a 2008 MSNBC article, Tim Berners-Lee said that the lack of support in Internet Explorer was responsible for holding back the widespread adoption by webmasters of several new open technology standards, specifically scalable vector graphics (SVG), supported elsewhere since 2001 but only available in Internet Explorer using a third-party plugin (until the release of Internet Explorer 9). System requirements IE7 requires at least: 233 MHz processor. Windows XP SP2. Super VGA (800 × 600) monitor with 256 colors. Mouse or compatible pointing device. RAM (for the browser alone): 64 MB for 32-bit Windows XP/Server 2003, 128 MB for 64-bit Windows XP/Server 2003.
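To make the redirect-based bypass described in the Phishing filter section above concrete, the following Python sketch models a URL-blacklist check in a deliberately simplified way. It is not Microsoft's Phishing Filter implementation; the URLs, the blacklist contents, and the assumption that only the finally rendered URL is checked against an exact-match list are all illustrative.

# Toy model of a blacklist-based phishing check and the server-side-redirect bypass.
# Assumption (illustration only): the filter warns based on the URL the browser
# ends up rendering, and the blacklist matches exact URLs.

BLACKLIST = {"http://phish.example.com/login"}   # hypothetical blacklisted page

SERVER_REDIRECTS = {
    # The attacker reconfigures the old, blacklisted page to issue an HTTP 3xx
    # redirect to a freshly registered, not-yet-blacklisted host.
    "http://phish.example.com/login": "http://phish-new.example.net/login",
}

def follow_redirects(url: str) -> str:
    """Follow server-side redirects until a final URL is reached."""
    seen = set()
    while url in SERVER_REDIRECTS and url not in seen:
        seen.add(url)
        url = SERVER_REDIRECTS[url]
    return url

def phishing_warning(url: str) -> bool:
    """Warn only if the final URL appears verbatim on the blacklist."""
    return follow_redirects(url) in BLACKLIST

# Links in old phishing emails still point at the blacklisted URL, but the victim
# is silently forwarded to the new host, which passes the check until it is
# blacklisted in turn.
print(phishing_warning("http://phish.example.com/login"))  # False - bypassed

In this toy model, a filter that also checked every URL in the redirect chain, or that blacklisted the new destination promptly, would close that particular gap, which is why the criticism centres on how quickly the list is updated.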
References External links Internet Explorer: Home Page IEBlog — The weblog of the Internet Explorer team Internet Explorer Community — The official Windows Internet Explorer Community Internet Explorer Tips Serious security flaw found in IE 2006 software Internet Explorer News aggregator software Windows components Windows Server 2008 Windows Vista Windows web browsers Windows XP
16820448
https://en.wikipedia.org/wiki/Pace%20University
Pace University
Pace University is a private university with its main campus in New York City and secondary campuses in Westchester County, New York. It was established in 1906 by the brothers Homer St. Clair Pace and Charles A. Pace as a business school. Pace enrolls about 13,000 students in bachelor's, master's and doctoral programs. Pace University offers about 100 majors at its six colleges and schools, including the College of Health Professions, the Dyson College of Arts and Sciences, the Elisabeth Haub School of Law, the Lubin School of Business, the School of Education, and the Seidenberg School of Computer Science and Information Systems. It also offers a Master of Fine Arts in acting through The Actors Studio Drama School and is home to the Inside the Actors Studio television show. The university runs a women's justice center in Yonkers and a business incubator, and is affiliated with the public school Pace High School. Pace University originally operated out of the New York Tribune Building in New York City, and spread as the Pace Institute, operating in several major U.S. cities. In the 1920s, the institution divested its facilities outside New York, maintaining its Lower Manhattan location. It purchased its first permanent home, at 41 Park Row in Manhattan, in 1951, and opened its first Westchester campus in 1963. Pace opened its largest building, 1 Pace Plaza, in 1969. Four years later, it became a university. History In 1906, brothers Homer St. Clair Pace and Charles Ashford Pace founded the firm of Pace & Pace to operate their schools of accountancy and business. Taking out a loan of $600, the Pace brothers rented a classroom on one of the floors of the New York Tribune Building, today the site of the One Pace Plaza complex. The Paces taught the first class of 13 men and women. The school grew rapidly and moved several times around Lower Manhattan. The Pace brothers' school was soon incorporated as Pace Institute and expanded nationwide, offering courses in accountancy and business law in several U.S. cities. Some 4,000 students were taking the Pace brothers' courses in YMCAs in the New York–New Jersey area. The Pace Standardized Course in Accounting was also offered in Boston, Baltimore, Washington, D.C., Buffalo, Cleveland, Detroit, Milwaukee, Grand Rapids, Kansas City, St. Louis, Denver, San Francisco, Los Angeles, Portland, and Seattle. In the 1920s, concerned about quality control at distant locations, the Pace brothers divested their private schools outside New York and subsequently devoted their attention to the original school in Lower Manhattan, which would eventually become one of the campuses of Pace University. Pace Institute in Washington, D.C. later became Benjamin Franklin University (now part of The George Washington University). In 1927 the school moved to the newly completed Transportation Building at 225 Broadway, and remained there until the 1950s. After Charles died in 1940 and Homer in 1942, Homer's son Robert S. Pace became the new president of Pace. In 1947, Pace Institute was approved for college status by the New York State Board of Regents. In 1951, the college purchased its first campus building: 41 Park Row in Lower Manhattan. The building, a New York City designated landmark, was the late-19th-century headquarters of The New York Times. In 1963, the Pleasantville Campus was established using land and buildings donated by Pace alumnus and trustee Wayne Marks, then president of General Foods, and his wife Helen. In 1966, U.S.
Vice President Hubert Humphrey and New York City Mayor John Lindsay broke ground for the One Pace Plaza Civic Center complex, with then Pace president Edward J. Mortola. The former New York Tribune Building at 154 Nassau Street, across from 41 Park Row, was demolished to make way for the new building complex. The New York State Board of Regents approved Pace College's petition for university status in 1973. Shortly thereafter, in 1975, the College of White Plains (formerly known as Good Counsel College) consolidated with Pace and became the White Plains campus, which at the time was used to house both undergraduate courses and Pace's new law school, created in that same year. In September 1976, Pace began offering courses in Midtown Manhattan in the Equitable Life Assurance Company building (now the AXA Equitable Life Insurance Company) on Avenue of the Americas, and moved once before settling at its current location in 1997. Briarcliff College was acquired in 1977 and became the Briarcliff campus. A graduate center was opened in 1982 in White Plains, New York, and in 1987 the Graduate Center moved to the newly built Westchester Financial Center complex in the downtown business district of White Plains; at the time of its opening, Pace's graduate computer science program was the first nationally accredited graduate program in the state of New York. In 1994, all undergraduate programs in White Plains were consolidated at the Pleasantville-Briarcliff campus, and the White Plains campus on North Broadway was given to the law school, placing the university's Westchester undergraduate programs in Pleasantville and its Westchester graduate programs in White Plains. Finally, in 1997, Pace purchased the World Trade Institute at 1 World Trade Center from the Port Authority of New York and New Jersey. On March 5, 2006, Pace students, alumni, faculty, and staff from all campuses convened on the Pleasantville Campus for a university-wide Centennial Kick-Off Celebration; a Pace Centennial train, provided free of charge by the Metropolitan Transportation Authority (MTA), took Pace's New York City students, alumni, faculty and staff to the Pleasantville campus. Former President Bill Clinton received an honorary doctorate of humane letters from Pace during the ceremony, which was held at the Goldstein Health, Fitness and Recreation Center. Following receipt of the honorary degree, he addressed the students, faculty, alumni and staff of Pace, officially kicking off the Centennial anniversary of the university. Dr. Maya Angelou, whose previous visit had been in celebration of Black History Month in 1989, again visited the Pace community on October 4, 2006, in celebration of Pace's Centennial. Two days later, on October 6, 2006 (Pace's Founders Day), Pace University rang the NASDAQ stock market opening bell in Midtown Manhattan to mark the end of the 14-month centennial celebration. On May 15, 2007, Pace University President David A. Caputo announced his early retirement from the presidency, effective June 3, 2007. The Board of Trustees of Pace University appointed Pace Law School dean Stephen J. Friedman to the position of interim president. Friedman had been dean and professor of law at Pace since 2004. He had also served as a commissioner of the Securities and Exchange Commission and as co-chairman of Debevoise & Plimpton. Friedman retired as President of Pace University in July 2017.
In 2015, in an effort to consolidate Pace University's Westchester campuses into a single location, Pace University sold the Briarcliff campus. The former president of Oberlin College, Marvin Krislov, was appointed president of Pace University in February 2017. In February 2017, Pace University embarked on a $190 million renovation process known as the 'Master Plan'. Phase 1, which included the One Pace Plaza and 41 Park Row buildings, was completed with a ribbon-cutting event on January 28, 2019. Additional future phases include a vertical expansion of One Pace Plaza to create additional academic space, relocating the Lubin School of Business, moving administrative offices from 41 Park Row, and modernizing the facade of One Pace Plaza. Academics Admissions Pace University's 2019 undergraduate admission acceptance rate was 75.9%, with admitted students having an average high school GPA of 3.4, an average SAT composite score of 1160 out of 1600 (570 Math, 590 Reading & Writing), and an average ACT composite score of 25 out of 36. Rankings The 2020 edition of U.S. News & World Report ranked Pace 202nd among universities in the United States. Schools and colleges The university consists of the following schools, each with a graduate and undergraduate division: The College of Health Professions (2011); its Lienhard School of Nursing (1966) is ranked by U.S. News & World Report 79th among graduate nursing schools. Dyson College of Arts and Sciences (1966) Pace School of Performing Arts (PPA) Lubin School of Business (1906), among the fewer than three percent of global business schools with dual accreditation from AACSB International. School of Education (1966) Seidenberg School of Computer Science and Information Systems (1983), named in 2005 for Verizon Chairman/CEO and Pace alumnus Ivan Seidenberg; Susan M. Merritt served as its founding dean from its inception in 1983 for 25 years, the longest tenure of any dean at Pace. Pforzheimer Honors College (2003) Adult and Continuing Education (formerly known as University College, 1979–1984, and the School of Continuing Education, 1968–1979) World Trade Institute of Pace University (purchased from the Port Authority of New York and New Jersey in 1997; originally located on the 55th floor of 1 World Trade Center until September 11, 2001; reopened in 2003; closed in 2005) The Actors Studio MFA program. The Michael Schimmel Center for the Arts is home to the television show Inside the Actors Studio, previously hosted by James Lipton, and once hosted Tony Randall's National Actors Theatre. Pace University was ranked tied for 202nd among national universities by U.S. News & World Report in 2020, and tied for 34th among "Top Performers on Social Mobility". In 2015, Pace University was ranked #19 in New York State by average professor salaries. Campuses Pace University campuses are located in New York City and Westchester County. The university's shuttle service provides transportation between the New York City and Pleasantville campuses. Furthermore, Pace University has a high school located just ten blocks away from the university's New York City campus (see Pace University High School). New York City The New York City campus is located in the Civic Center of Lower Manhattan, next to the Financial District and New York Downtown Hospital. The campus is within walking distance of well-known New York City sites including Wall Street, the World Trade Center, the World Financial Center, the South Street Seaport, Chinatown and Little Italy. Pace maintains space in a number of buildings in Lower Manhattan.
The main building, One Pace Plaza, is a two-square-block building bounded by Gold, Nassau, Spruce, and Frankfort Streets, directly adjacent to the Manhattan entrance ramp of the Brooklyn Bridge. Located directly across from City Hall, the One Pace Plaza complex houses most of the classrooms, administrative offices, a student union, a 750-seat community theater, and an 18-floor high-rise residence hall (known as "Maria's Tower"). 41 Park Row was the 19th-century headquarters of The New York Times, and carrying on that legacy the building today houses the campus' student newspaper The Pace Press, as well as student organization offices, the Pace University Press, faculty offices, the university's bookstore, and classrooms. 41 Park Row also houses the Haskins Laboratories of Dr. Seymour H. Hutner, where biomedical research is conducted, such as a green tea extract study covered in the international media. The buildings of 157 William Street, 161 William Street, and 163 William Street were acquired by Pace following the September 11 attacks to make up for the loss of the entire 55th floor of the North Tower of the World Trade Center, which used to house Pace University's World Trade Institute and World Trade Conference Center (see the section below entitled September 11, 2001). The William Street buildings house classrooms, offices of the Seidenberg School of Computer Science & Information Systems, the School of Education, the College of Health Professions, the university's business incubators, along with Pace's Downtown Conference Center where the e.MBA residency sessions are held (Pace also has leased office space in 156 William Street). Pace has residence halls at 182 Broadway and 33 Beekman Street. The 33 Beekman Street building is the tallest student residential building in the world. Pace also leases residence accommodations at the new state-of-the-art residence at 55 John Street, also in Lower Manhattan. Pace also offers classes in Midtown Manhattan in the Art Deco Fred F. French Building at 551 Fifth Avenue. In January 2019, Pace completed a $45 million renovation of One Pace Plaza and the adjoining 41 Park Row. Westchester Pleasantville Campus Classes began in Pleasantville in Westchester County, New York in 1963. The campus today consists of the former estate of Wayne Marks (Class of 1928), then Vice Chairman of General Foods Corporation, which previously belonged to the noted 19th-century physician Dr. George C. S. Choate (who gave his name to a pond and a house on the campus). Located on the campus is the Environmental Center, constructed around the remnants of a 1779 farmhouse. The center, which is dedicated to the environmental studies program, provides office and classroom space; it houses the university's animals such as chickens, goats, sheep, pigs, and raptors. As part of the Pleasantville Master Plan, the Environmental Center was expanded and relocated to the back of campus. Two new residence halls, Elm Hall and Alumni Hall, were constructed in its place in the center of campus, and the Kessel Student Center was remodeled. Elisabeth Haub School of Law Located within 30 minutes of New York City's Grand Central Station, north of Manhattan in White Plains, New York, in Westchester County, is the Elisabeth Haub School of Law at Pace University. Nestled between the Cross-Westchester Expressway (I-287) and NY Route 22 (North Broadway), the Law School is situated on a spacious landscaped suburban campus with a mix of historic and modern buildings. 
Founded in 1976, Pace Law School is the only law school located between New York City and the state capital of Albany, New York. In 2020, U.S. News & World Report ranked the law school's Advanced Certificate in Environmental Law program #3, and gave the law school a general rank of #136. On the Law School's campus is the Pace Environmental Litigation Clinic, where Robert F. Kennedy Jr., adjunct professor emeritus of environmental law and an alumnus of Pace, served as co-director before his retirement. Also on the campus is the New York State Judicial Institute, the United States' first statewide center for training and research for all judges and justices of the New York State Unified Court System. Frequent Pace shuttle service is provided between the Law School campus and the White Plains Station of the Metro-North Railroad for the many law students who commute from New York City and throughout the state. Stephen J. Friedman, former commissioner of the Securities and Exchange Commission and former co-chairman of Debevoise & Plimpton, is the immediate past dean of Pace Law School. Other properties Pace University High School Pace University established a public high school and opened its doors to its first class in September 2004. Pace High School is in New York City school district Region 9, and shares a building with Middle School 131 at 100 Hester Street in Lower Manhattan, 10 blocks away from the university's New York City campus. SCI² business incubators In the fall of 2004, Pace University opened two business incubators, in Lower Manhattan and in Yonkers, to help early-stage companies grow. SCI² (which stands for Second Century Innovation and Ideas, Corp.) maintains accelerator sites at 163 William Street in Lower Manhattan and in the NValley Technology Center complex at 470 Nepperhan Avenue in Yonkers. Women's Justice Center at the Westchester County Family Court-Yonkers In 2001, the Women's Justice Center of Pace Law School opened a second site at the Westchester County Family Court in Yonkers, New York (the first being on the law school campus at the 27 Crane Avenue house). The Westchester County Family Court in Yonkers is one of three family courts in Westchester County. The Yonkers office of the Women's Justice Center is located at the Westchester Family Court, 53 South Broadway in Yonkers. International Disarmament Institute The International Disarmament Institute is a center for teaching and studying worldwide disarmament, arms control and non-proliferation. Matthew Bolton, the director of the institute, worked on The International Campaign to Abolish Nuclear Weapons, which won the Nobel Peace Prize in 2017. Theater and the arts The Michael Schimmel Center for the Arts is the principal theatre of Pace University and is located at the university's New York City campus in Lower Manhattan. The 750-seat Michael Schimmel Center for the Arts is home to the television show Inside the Actors Studio hosted by James Lipton and was previously the home of the National Actors Theatre, a theatre company founded by actor Tony Randall that was in residence at Pace. The National Actors Theatre was the only professional theatre company housed in a university in New York City. Theater productions at Pace have included such stars as Tony Randall, Al Pacino, Steve Buscemi, Dominic Chianese, Billy Crudup, Charles Durning, Paul Giamatti, John Goodman, Chazz Palminteri, Linda Emond, Len Cariou, Roberta Maxwell, and Jeff Goldblum. 
Pace is also one of the venues for the Tribeca Film Festival, the Tribeca Theater Festival, the New York International Fringe Festival (FringeNYC), The River To River Festival (New York City's largest free-to-the-public summer festival), and Grammy Career Day of Grammy in the Schools. The 135-seat theater in Woodward Hall at the Briarcliff Manor campus in Westchester is home to the Hudson Stage Company. Athletics Pace's sports teams are called the Setters; the university's mascot is the Setter. Pace University sponsors fourteen intercollegiate varsity sports. Men's sports include baseball, basketball, cross country, football, lacrosse and swimming & diving; women's sports include basketball, cheerleading, cross country, dance, field hockey, soccer, softball, swimming & diving and volleyball. Its affiliations include the National Collegiate Athletic Association (NCAA) Division II and the Northeast-10 Conference (NE-10). The school's official colors are blue and gold. Facilities Pace's athletic facilities are highlighted by the Goldstein Health, Fitness and Recreation Center in Pleasantville, New York, which boasts a 2,400-seat arena, eight-lane swimming pool, weight/fitness room, aerobics/dance room, training room, locker rooms, equipment room, meeting rooms, and offices of the athletics department. September 11, 2001 On the day of the terrorist attacks of September 11, 2001, Pace University, four blocks from Ground Zero, lost 4 students and over 40 alumni. Students were made to leave classes and evacuate to other locations in One Pace Plaza at 10:00 a.m. New York City EMTs cleared out the admissions lobby and converted it into a triage center for victims of the attack. Many of the patients were New York City police officers, firefighters and other emergency workers. Debris and about three inches (7.5 cm) of dust and ashes lay over the Pace New York City campus area and local streets. None of Pace's buildings were damaged except its space in the World Trade Center; Pace lost the entire 55th floor of the North Tower, which used to house Pace University's World Trade Institute and the Pace University World Trade Conference Center (now the Downtown Conference Center). A memorial to students and alumni who lost their lives on September 11 stands on all three campuses of Pace University. A gift from the American Kennel Club, a statue of a German Shepherd dog stands in front of One Pace Plaza (as of Fall 2007) to commemorate Pace's role as a triage center on September 11. Notable alumni Notable graduates and former students at Pace include: Philip Abramo, American financial fraudster, white collar crime boss and DeCavalcante crime family Caporegime, known as "the King of Wall Street" Mike Adenuga, CEO Globacom Ailee, Korean-American singer currently promoting in South Korea Olivia Anakwe, fashion model; B.A. in psychology Stephanie Andujar, actress; Precious; Orange Is the New Black Nathaniel Barnes, former Liberian Ambassador to the United States Yancy Butler, actress Frank Calderoni, CEO, Anaplan Telfar Clemens, fashion designer James E. 
Davis, former Member of the New York City Council Stephanie Del Valle, musician, model, Miss World 2016 Dominique Fishback, actress, known for her role on HBO's The Deuce Richard Grasso, chairman and CEO (1995–2003) of the New York Stock Exchange Katie Henney, actress, starred in Felicity: An American Girl Adventure as Elizabeth Cole Kathleen Herles, voice actress, original voice of Dora on Dora the Explorer Joseph Ianniello, president and acting CEO, CBS Corporation Mel Karmazin, CEO (2004-2012), Sirius Satellite Radio; former president and CEO, CBS; former COO, Viacom Asher Levine, fashion designer Joy Mangano, inventor & entrepreneur Avi Mizrahi, an Israeli Defense Force, major general Lalit Modi, former commissioner of the Indian Premier League Tim Morehouse, fencer, Silver Medal winner in Men's Team Sabre at the 2008 Summer Olympics Olga Nolla, poet, writer, journalist, professor Fred Ohebshalom, New York real estate developer Rachael Ray, personality & TV cook, studied at Pace Pleasantville 1986–1987 Rossana Rosado, journalist & Secretary of State of New York Ken Rudin, radio journalist and political editor for National Public Radio (NPR) Felix Sater, convicted felon, real estate developer and entrepreneur, known for work on Trump SoHo, Midtown Miami, and the proposed Trump Tower Moscow Ivan G. Seidenberg '81, Former president & CEO, Verizon Sam Smith, former NBA writer at the Chicago Tribune and current writer for bulls.com. Edward W. Stack, chairman (1977–2000) and board member, National Baseball Hall of Fame, chairman & CEO Andrea Stewart-Cousins, New York State Senate Majority leader. Glenn Taranto, actor; known for his role as Gomez Addams in The New Addams Family Linn Thomas, Playboy Playmate of the Month, May 1997 & Penthouse Pet of the Month, October 2000 Barbara Farrell Vucanovich (R), US House of Representatives Nevada 2nd District Allen Weisselberg, CFO, The Trump Organization Suzanne Weyn, author of over forty novels, most notably, The Bar Code Tattoo and Bar Code Rebellion See also Drumgoole Plaza Qualtrics Willem C. Vis Moot References Further reading Weigold, Marilyn E. Opportunitas: The History of Pace University. New York, NY: Pace University Press, 1991. History of Pace University as told by Pace University Historian Marilyn E. Weigold. The Pace Story External links Pace University Athletics website 1906 establishments in New York (state) Educational institutions established in 1906 Civic Center, Manhattan Mount Pleasant, New York Universities and colleges in Manhattan Universities and colleges in Westchester County, New York Briarcliff Manor, New York Private universities and colleges in New York (state) Private universities and colleges in New York City
10673858
https://en.wikipedia.org/wiki/Video%20games%20in%20China
Video games in China
The video game industry in mainland China is currently one of the major markets for the global industry, with more than half a billion people playing video games. Revenues from China made up around 25% of the global video game industry as of 2018, and since 2015 have exceeded the contribution to the global market from the United States. Because of its market size, China has been described as the "Games Industry Capital of the World" and is home to some of the largest video game companies. China has also been a major factor in the growth of esports, both in player talent and in revenue. China had not always been a major factor in the industry, having been on the verge of economic recovery during the industry's formative years in the 1970s and 1980s. With the introduction of the third generation of home consoles in the mid-1980s, a new black market of illegally imported goods and video game clones arose to avoid the high costs of imports, driving away foreign companies. Notably, China imposed a near-complete ban on video game consoles in 2000, fearing the addictive impact of games on its youth; the ban was ultimately lifted in 2015. During that time, China's video game market greatly expanded in the area of computer games, including massively multiplayer online games (MMOs), social games, and mobile games, all of which could be offered as free-to-play titles with monetization to appeal to the average lower income of Chinese players. This massive growth from 2007 to 2013 led the games' publishers and operating companies like Tencent and NetEase to become large global companies. Despite the legitimate growth of the industry, China's video game market continues to be offset by illegal importing and intellectual property theft. As with other parts of its media, China's government has strong oversight of the video game industry; all new titles go through a governmental approval process to ensure that content aligns with the nation's values. In 2018, an approvals freeze caused by the reorganisation of China's content-vetting agencies held up numerous game releases, and the video game market slumped for a year. The government also fears the potential for its youth to become addicted to video games, and has required games to include anti-addiction measures. User verification is used to enforce playtime restrictions, which currently limit minors to three hours per week. History Broadly, the growth of the video game market in China is tied to the expansion of its technology and digital economy from the 1990s to the present day, which by 2016 represented over 30% of its gross domestic product. Initial growth (1980s-2000) At the time that the video game industry was being established in North America in the 1970s, China was in the midst of major political and economic reform following the death of Mao Zedong in 1976. The country was technologically behind much of the rest of the world in terms of its media. Part of the reform was modernization of its media systems, helping to boost economic prosperity for citizens. As such, China saw little of arcade games or the early generations of home consoles, like the Atari 2600. After the video game crash of 1983, which devastated the North American video game market, Japan became a dominant factor in the global market, leading off the third generation of consoles with systems such as Nintendo's Famicom. By this point, China's economy had significantly improved, and Japan started to make inroads into selling consoles in China. 
However, importing these into China was costly, with a 130% tariff on hardware and games along with value-added taxes. Console systems were in high demand, but because of the high costs of importing, only a few foreign companies did so. This created the video game clone grey market in China – reverse manufacturing of consoles and games at much lower costs than imported systems, even if this required dubious or outright illegal copyright infringement. Copyright theft ("piracy") was also rampant in China due to the country's poor intellectual property controls. The sales of cloned console hardware and games outpaced those of legitimate imports, and further drove many foreign companies away since they could not compete in this area, such that by the 1990s, most video game systems in China were manufactured there. Console games continued to grow in popularity through the 1990s, which created a broader concern in the media about video game addiction, with terms like "digital heroin" being used to describe video games. Even before the 1990s, there had been a broader stance in China that video games created negative effects on those that played them, which only grew during this decade. The impact on youth was of particular concern, as video games were seen to distract students from schoolwork, leaving them unprepared to enter China's college system. This situation was partially created by China's one-child policy, with sibling-less children having few others to interact with and little to do outside of school. The anti-addiction sentiment also discouraged foreign companies from trying to break into the Chinese market. Chinese console ban (2000–2015) The concerns about video game addiction and negative influence on the youth came to a head in June 2000. The State Council passed a bill crafted by seven ministries specifically aimed at video games. The bill established certain provisions on video game content and regulations on the operation of Internet cafés and arcades. The most significant facet of this bill was a ban on the production, import, and sale of consoles and arcade machines. This ban was not absolute, as it allowed for some consoles to be released in China, notably Sony's PlayStation 2 in 2004 and several of Nintendo's consoles rebranded under the iQue partnership. However, with the restriction on game imports and their content, these consoles did not catch on in China. The ban did not include games available on personal computers (PC), and as a result, the PC video game market in China flourished over the next fifteen years. Internet cafés flourished, growing from 40,000 in 2000 to over 110,000 by 2002, and have remained numerous since. The ban on arcade machines was dropped in 2009, but while arcades were permitted to operate, they had to adopt several safeguards to prevent excessive use by youth. However, since such arcades offered a low-cost way to play games without a PC, they still became a thriving industry comparable to PC gaming at Internet cafés. As a result, Chinese gamers frequently visit arcades to play action games, particularly fighting games, and occasionally unlicensed arcade ports of popular PC or mobile games such as Angry Birds or Plants vs. Zombies. Online gaming (2004–2007) Legitimate acquisition of games and the hardware to play them was still relatively expensive in China, which continued to fuel the video game clone market. A large number of PC gamers in China acquired software through illegal downloads and pirated software websites to avoid the cost. 
Developers of legitimate games in China recognized that, to compete with this black market, they had to develop games with a free or low upfront cost that offered a way to monetize over time. Many Chinese-developed games became online games offering numerous microtransactions to recoup costs; such games could be offered at Internet cafés, which became a popular option for Chinese players who could not afford computer hardware, even as the price of computing equipment dropped over the next decade. This created a boom in massively multiplayer online games (MMOs) in the Chinese market and helped to establish the market dominance of companies like Tencent, Perfect World, and NetEase. PC cafés proliferated in urban centers as China's population continued to grow. Western free-to-play and subscription-based games like Dota 2 and World of Warcraft, poised to take advantage of this model, also became successful. It also prompted Chinese developers to create numerous clones of popular Western games that they would offer at low cost, an issue that still persists today. Online gaming became of serious concern to the government around 2007, re-raising the issues of gaming addiction that had prompted the 2000 console ban. A government report claimed that 6% of the country's teenaged population, about 3.5% of the country's population, were playing online games more than 40 hours a week. In July 2007, the government required that online game publishers and operators incorporate anti-addiction software in their games, specifically by monitoring how long underaged persons played. If a minor played for more than three hours straight, the game was to wipe half of any in-game currency that had been earned that session, and all of it if the minor played for more than five hours. Additionally, these systems were required to have players log in using their national identification. However, at the time of implementation, not all publishers incorporated the required controls, and for those that did, players would find ways around the limitations, such as using family members' IDs, or would otherwise simply play past the time requirements as there was nothing else to do beyond the video game. Social and mobile gaming (2008–2014) By 2007, the Chinese video game market was estimated to include around 42 million players, having grown 60% from the previous year, driven mostly by online gaming. At this stage, China's impact on the larger global market was not considered significant, as much of it was still driven by the grey market for clones and pirated games. However, the rapid growth led to forecasts that China would become a major contributor to the global market within five years' time. Online gaming readily led to the rise of social network games in China around 2007–2008, given that players were accustomed to the free-to-play nature of online gaming. The Chinese game Happy Farm (2008) was included in Wired's list of "The 15 Most Influential Games of the Decade" at #14, for its major influence on global social network games, particularly for having "inspired a dozen Facebook clones," the largest being Zynga's FarmVille. A number of other games have since used similar game mechanics, such as Sunshine Farm, Happy Farmer, Happy Fishpond, Happy Pig Farm, Farm Town, Country Story, Barn Buddy, Sunshine Ranch, and Happy Harvest, as well as parodies such as Jungle Extreme and Farm Villain. 
This further prepared the Chinese market for mobile games around 2012, by which time there were about one billion mobile phone subscriptions in China according to a United Nations report, and after Apple secured deals to distribute its iPhones within China. Mobile devices in China are less expensive than computer or console hardware, provide Internet functionality, and are, for many, the only form of Internet connectivity they have, making them popular gaming devices. Mobile games in China grew rapidly over the next several years, growing from about 10% of the Chinese video game market in 2012 to 41% in 2016. This expanded to more than 50% by 2018. Furthering the growth of the social and mobile game markets was the fact that the anti-addiction measures applied to online games did not apply to these types of titles; it was not until 2017, when renewed concerns about mobile titles like Honour of Kings arose, that Tencent implemented a similar anti-addiction system for its portfolio. Social and mobile gaming significantly grew the Chinese video game market beyond earlier estimates. By 2013, the Chinese video game market had grown nearly ten-fold since 2007, with over 490 million players, counting only those on personal computers; since consoles were still banned, these numbers do not take console players into account. Lifting of the console ban (2014–2017) In 2014, China partially eased the restrictions on video game hardware by allowing game consoles to be manufactured in the Shanghai Free-Trade Zone (FTZ) and sold in the rest of China subject to cultural inspections. In July 2015, the ban on video game consoles within the country was completely lifted. According to a statement from the country's Ministry of Culture, companies like Sony, Nintendo, and Microsoft, among others, were now allowed to manufacture and sell video game consoles anywhere in the country. Microsoft and Sony quickly took advantage of the lifting of the ban, announcing sales of the Xbox One and PlayStation 4 platforms within the FTZ shortly after the 2014 announcement. Microsoft established a partnership with BesTV New Media Co, a subsidiary of the Shanghai Media Group, to sell Xbox One units in China, with units first shipping by September 2014. Sony worked with Shanghai Oriental Pearl Media in May 2014 to establish manufacturing in the FTZ, with the PlayStation 4 and PlayStation Vita shipping into China by March 2015. CEO of Sony Computer Entertainment Andrew House explained in September 2013 that the company intended to use the PlayStation Vita TV as a low-cost alternative for consumers in an attempt to penetrate the Chinese video game market. Both Microsoft and Sony have identified China as a key market for their next generation of consoles, the Xbox Series X and PlayStation 5 respectively. Nintendo did not initially seek to bring the Wii U into China; Nintendo of America president Reggie Fils-Aime stated that China was of interest to the company after the ban was lifted, but considered that there were similar difficulties with establishing sales there as the company had recently had with Brazil. Later, Nintendo teamed up with Tencent by April 2019 to help sell and distribute the Nintendo Switch as well as to aid its games through the Chinese government approval process led by the National Radio and Television Administration. 
The Nintendo Switch went on sale in China on December 10, 2019, though unlike the international version, this unit included several concessions to region-lock it to China. Even with the ban lifted, console sales were slow, as consoles require dedicated space in the home and offer little functionality beyond games, unlike personal computers, and sales were further slowed by the continued popularity of Internet cafés. The hardware grey market also persisted, drawing away legitimate sales of consoles. Only a small share of industry revenue in 2018 was attributed to console sales. It is expected that as interest in legitimate sales of consoles increases in the future, the grey market will wane. Despite official availability of the Switch, imported and grey-market sales of Switch consoles still dominated China; while Nintendo and Tencent reported that a million Switch consoles had been sold by January 2021, the total number of Switch consoles in use within the country was estimated to be at least twice as high due to imported, non-region-locked versions. Approvals freeze and further steps to restrict youth gaming (2018-ongoing) In March 2018, the organizational structure of SART was changed, creating a period of several months during which no new game licenses were issued. Further, MOC made the process of obtaining these licenses more stringent. This period significantly impacted Tencent, one of the largest publishers of video games for China. In August 2018, Tencent was forced to pull its version of Monster Hunter World from sale in China, as it had not obtained a license for the game and the government had received complaints about its content. Tencent was also blocked from publishing personal computer versions of PlayerUnknown's Battlegrounds and Fortnite Battle Royale. The license freeze was reported to have significant effects on those game publishers and developers that rely on Chinese sales. In late August 2018, the Chinese Ministry of Education called on the Chinese government and SART to also address the growing issue of myopia in children, which was attributed to long hours of gaming on small screens such as mobile devices. The Ministry of Education asked SART to consider placing restrictions on the number of hours each young player can play a game. On news of this, Tencent shares lost 5% of their value on the stock market the next day. A further approval route was closed by Chinese authorities in October 2018: this "green channel" route, in place since August 2018, had allowed a game a period of one month on the market for purposes of consumer testing without full government approval, and had been seen by game publishers as temporary relief from the ongoing freeze. Tencent had been planning to distribute and monetize Fortnite Battle Royale via this method before the route was closed. With China's effective ban on new games continuing into October 2018, Chinese players found other routes to new games, including Steam, which runs on overseas servers. Further, existing titles released before the freeze that continue to offer new content have seen a resurgence in players and spending as a result. To comply with the planned new rules, Tencent announced that all mobile games it manages in China will require users to use their Chinese ID to play. This will be used by Tencent to track the time that minors play and to implement time limitations on them, among other steps to meet new regulations. 
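The real-name verification and playtime-tracking requirement described above reduces to a simple gating rule: derive the player's age from their national ID, decide whether they are a minor, and stop play once a daily allowance is used up. The sketch below is a minimal illustration of such a rule, using the 2019-era allowances for minors (90 minutes on weekdays, three hours on weekend days) discussed below; the function names and ID handling are illustrative assumptions, not Tencent's actual implementation.

```python
from datetime import date, datetime

def get_age_from_id(national_id: str) -> int:
    # Chinese resident ID numbers encode the birth date in digits 7-14 (YYYYMMDD).
    # A real system would also verify the ID against a government real-name service.
    birth = datetime.strptime(national_id[6:14], "%Y%m%d").date()
    today = date.today()
    return today.year - birth.year - ((today.month, today.day) < (birth.month, birth.day))

def allowed_minutes_today(national_id: str, today: date) -> float:
    # Adults are unrestricted; minors get 180 minutes on weekends, 90 on weekdays
    # (the 2019-era allowance; later rules were stricter).
    if get_age_from_id(national_id) >= 18:
        return float("inf")
    return 180 if today.weekday() >= 5 else 90

def may_keep_playing(national_id: str, minutes_played_today: int) -> bool:
    return minutes_played_today < allowed_minutes_today(national_id, date.today())

if __name__ == "__main__":
    # Example: a player born in 2008 who has already played 80 minutes today.
    print(may_keep_playing("110101200801012330", 80))
```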
By December 2018, the Chinese government had formed the "Online Game Ethics Committee" under the National Radio and Television Administration, which reviews all games to be published in China for appropriate content as well as issues related to childhood myopia. The committee, by the end of the year, had restarted the approval process and began working through the backlog of submissions in an expedited manner to allow new games to be released. Initial approvals for 80 backlogged titles were granted within days, but notably lacked games published by Tencent and NetEase, the two largest publishers in China. After several more rounds, Tencent had two games approved near the end of January 2019, but these did not include either Fortnite Battle Royale or PlayerUnknown's Battlegrounds, two major titles that were financial drivers in other countries. A second freeze on approvals started in February 2019, as any further approvals of new games were suspended until the committee was able to clear the backlog of titles from the prior freeze. By this point, only about 350 games had been approved from the previous freeze. According to China's State Administration of Press and Publication, the freezes were put in place because the video game industry had grown too rapidly in China, at a rate that outpaced the capacity of regulators to keep up. The second freeze that started in February 2019 was put in place to give regulators a chance to tune the game approval process to the current market size. The freeze was expected to be lifted in April 2019, alongside a new set of regulations for game approvals. These new changes include limiting the number of games that can be approved each year to around 5,000, strictly banning video game clones and games with obscene content, and placing more anti-addiction controls on mobile titles aimed at younger players. The nearly year-long freeze has had ripple effects on the global video game industry. Whereas in 2017 around 9,600 new games were approved, only around 1,980 were approved in 2018. Tencent had been one of the top 10 most valuable companies in the world at the start of 2018, but by October its stock had dropped in value by 40%, knocking the company out of the top ten. Apple Inc. attributed revenue loss in the fourth quarter of 2018 to China's approval freeze, which had also affected mobile video game apps. The freeze was expected to impact total revenues of the video game industry in 2019, with one analysis projecting a decline in revenue from the previous year, the first such decline in a decade. The Chinese government continued to push restrictions on gaming after the approvals freeze was lifted, asserting its efforts were to restrict the influence of gaming on youth. The government has placed restrictions on the amount of time minors can play video games, first in 2019 to 90 minutes per day on weekdays and three hours on weekends, and then, by 2021, to only one hour per day and only on weekends. The government has also required all online games to implement strong authentication protocols developed by the government to track players' time in game. Additionally, the government banned minors under the age of 16 from registering for livestreaming services. 
Since March 2021, there has been new pressure on video games, kicked off by statements made by President Xi Jinping during the annual Two Sessions meetings, where he claimed that video games could have a bad influence on the minds of children who are psychologically immature. The government stopped approving games starting in August 2021 in an apparent new freeze related to game content. The continued pressure by the government on the Chinese game sector started to take an effect on the economic valuation of the largest companies. The state-owned newspaper Economic Information Daily published a report in August 2021 that initially stated that online video games were an "opium for the mind", that gaming addiction was on the rise, and that there should be stronger government regulations. While the article was pulled and later republished without the "opium" statement, its effects caused shares of Tencent to drop by 10% initially that day in trading, though they recovered some after the revised article was published. Similar drops were seen with NetEase and Bilibili. Later, in September 2021, when both Tencent and NetEase were notified by the government of an upcoming hearing and reminded that violations of the youth gaming restrictions would be seriously dealt with, both companies' stocks dropped by about 10% due to fears that the government might clamp down further on gaming in the future, including another potential approvals freeze. Over 200 Chinese game companies, including Tencent and NetEase, signed a statement that month pledging that they would work to regulate youth gaming under the government's new regulations, as well as to enforce new rules relating to games involving "effeminate" portrayals of men. As reported by the South China Morning Post, an internal memo sent by the state's gaming trade organization to game companies in September 2021 for training purposes further clarified that the government saw video games not as "pure entertainment" but as a form of art, and thus as works that must uphold "a correct set of values" related to China's heritage and culture, and that it would be more restrictive in what games it would approve within the country. The memo described games that have "blurred moral boundaries", where the player has an option of being good or bad within the game, and suggested that such games may need instead to restrict players to a specific moral path. Further, the memo indicated that games giving a revisionist form of history, or appearing more Japanese than Chinese, would likely fail to be approved. According to the South China Morning Post, the freeze on approvals for new games persisted through the end of 2021, and due to the lack of approvals, more than 14,000 game-related companies were deregistered in China during 2021. Additionally, players within China reported that the government had modified the country's network so that the international version of Steam was inaccessible from within the country, leaving only the Steam China client with its limited selection of games already approved by the government. Online gaming Online gaming in China represents one of the largest and fastest-growing Internet business sectors in the world. With 457 million Internet users currently active in China, the country now has the largest online user base in the world, of which two-thirds engage in online game play. 
The average online gamer in China is relatively young (18 to 30 years old), male, and has at least completed a secondary level of education. Demographically, the online gaming user base in China is very similar to the base of Chinese Internet users, most of whom live in larger cities. Online games in China fall into two primary categories: MMORPGs and MOCGs; the former have a predilection for persistent online worlds where hundreds to thousands of game players can interact simultaneously, while the latter is a generic term for games played competitively online without a persistent online realm (games as simple as online Ma Jiang and online competitive card games fall under this category). In 2011, there were over 100 million Chinese MMO gamers. Official Chinese statistics regarding online gaming state that as of the close of 2006, revenue from China's online gaming industry reached 8 billion RMB, or around 1.04 billion US dollars, with earnings reaching around 33 billion RMB, or 4.3 billion US dollars. Additionally, while Japanese, American, and South Korean companies have traditionally dominated the market, Chinese-developed software now holds a 65% market share on the mainland, with an additional 20 million in revenue generated by users outside of China. The online gaming market in China grew to $1.6 billion in 2007, and was expected to exceed $3 billion in 2010. According to another estimate, in 2007 China's online games market was worth about US$970 million, with over 36 million gamers. China is now the world's largest online gaming market, contributing one-third of the global revenue in this sector in 2009, or 56 percent of the Asia Pacific total. There are 368 million Internet users playing online games in the country, and the industry was worth US$13.5 billion in 2013. 73% of gamers are male, 27% are female. Games QQ Games is one such popular online client. Growth was driven in part by China's most popular online game, NetEase's Fantasy Westward Journey, which now has 1.66 million peak concurrent users. Another contributor is Giant's Zhengtu Online, which has 1.52 million peak concurrent users. The video game industry in China Publishers Today, the video game market is dominated by the Tencent Games division of Tencent Holdings, which is estimated to contribute 46% of the overall revenue in China and nearly 10% of the global video game market as of 2017, making it the largest video game company in the world. NetEase, which contributes around 15% of overall revenue in China, is the second largest video game company in China, as well as the seventh largest in the world as of 2017. Other major players include Perfect World, Shunrong, and Shanda. These companies are noted for having made aggressive investments in foreign video game developers, particularly from South Korea and the United States, and for making strategic agreements with other entities to serve as the China-based operating arm for foreign interests to meet Chinese government regulations. Notably, Tencent's acquisitions have included Riot Games in 2011, gaining rights to the online game League of Legends, and Supercell in 2016 for its mobile game Clash of Clans. Major investments include approximately 5% of Activision in 2013, a 40% interest in Epic Games in 2013, and a 5% interest in Ubisoft in 2018. The 10 largest online game companies by revenue in 2017 are: Tencent: Tencent Games is the Interactive Entertainment Division (aka IED) of Tencent. 
NetEase: a popular online portal in China that also branched out into MMORPGs with the release of Westward Journey. The game, based on ancient westward travels on the Silk Road (a popular theme in Chinese-developed MMORPGs), has gone through two iterations; it was re-released as Westward Journey II due to numerous problems with the initial release, and its game engine was used to develop Fantasy Westward Journey, which is currently the most popular MMORPG in China (based on PCU numbers). YY 37 Interactive Perfect World Elex IGG Alpha Group Century Huatong Group (owner of Shanda). Shanda produces and supports many popular MMORPGs. The company is significant because it introduced a new online payment system with the release of Legend of Mir 2 in 2001. Instead of charging users for the initial purchase of the game, Shanda gave the software away free of charge and charged users for time spent playing in the game. This payment system specifically counteracted piracy because the company could more easily control the time users spent in the game, rather than attempt to limit the game's distribution. Shanda maintains a large number of MMORPGs in China developed by Western, Korean and native Chinese companies; the latter two regions produce Shanda's most popular games. The company also maintains numerous casual games, with platforms supporting chess and other non-persistent-world games. Kunlun Tech The9 (第九城市) is similar to Shanda Entertainment; it specifically maintains and produces MMORPG content for the Chinese gamer base. The9 is notable because of its partnership with Blizzard Entertainment in bringing World of Warcraft (the most popular MMORPG outside of Asia) to China. World of Warcraft is the most popular Western MMORPG in Asia, and one of the most popular in China in general. Recent statistics place its peak concurrent users at around 688,000, easily among the top MMORPGs in the country. The9 also implemented a pay-for-time system for the game, which differs from the monthly subscription payment structure used by Blizzard in other territories. In April 2009, World of Warcraft owner Activision Blizzard announced it had selected The9 competitor NetEase to operate the game in China. The9's license expired on June 7, 2009. Popularity statistics To gauge the popularity of online games, both in China and internationally, three benchmarks are commonly used. The first is peak concurrent users (PCU), the maximum number of players online simultaneously at a given time. A high PCU number signifies that a game has a large base of constant user participation, which is essential for the survival of an online world. The second statistic used is the daily active player base; this number is essentially a count of the number of distinct users who sign on in a given 24-hour period. This statistic differs from PCU simply because of its longer time span, but the daily user base is still a good quantifier of popularity and usage. The third statistic is simply the total number of registered users for a specific game or service; this statistic is significantly more problematic because most, if not all, online games do not limit the user to a single account or user name. For example, some games claim millions of registered users, a misleading statistic given that the most popular MMORPGs in China usually garner only 800,000 to one million peak concurrent users. 
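To make the distinction between these three measures concrete, the following minimal sketch (with invented example data, not drawn from any of the figures cited here) computes registered users, daily active users, and peak concurrent users from the same log of play sessions.

```python
from datetime import datetime

# Each session is (user_id, login_time, logout_time); times are ISO strings here.
sessions = [
    ("alice", "2011-05-01T10:00", "2011-05-01T13:00"),
    ("bob",   "2011-05-01T11:00", "2011-05-01T12:00"),
    ("alice", "2011-05-02T20:00", "2011-05-02T22:00"),
    ("carol", "2011-05-02T21:00", "2011-05-02T23:00"),
]

def registered_users(sessions):
    # Total distinct accounts ever seen; easily inflated by multi-accounting.
    return len({user for user, _, _ in sessions})

def daily_active_users(sessions, day):
    # Distinct accounts that logged in during a given calendar day.
    return len({user for user, login, _ in sessions if login.startswith(day)})

def peak_concurrent_users(sessions):
    # Sweep over login/logout events and track the running number online.
    events = []
    for _, login, logout in sessions:
        events.append((datetime.fromisoformat(login), +1))
        events.append((datetime.fromisoformat(logout), -1))
    online, peak = 0, 0
    for _, delta in sorted(events):
        online += delta
        peak = max(peak, online)
    return peak

print(registered_users(sessions))                  # 3
print(daily_active_users(sessions, "2011-05-01"))  # 2
print(peak_concurrent_users(sessions))             # 2
```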
Thus, while registered user numbers can be quite impressive, they are not as accurate a gauge of popularity as the other aforementioned statistics. Investment In 2010, there were 25 investments made into Chinese online gaming companies. Of the 25 investments, 20 disclosed financial details; as a group, these 20 deals combined for a total of US$137 million in investment. Developers China has domestically produced a number of games, including Arena of Valor, Westward Journey, The Incorruptible Warrior, and Crazy Mouse. There are a large number of domestically made massively multiplayer online role-playing games (MMORPGs) in China, although many remain generally unheard of outside of the country. China does have a small but growing indie game scene. The growth of China's indie game scene is considered to have come out of hobbyist programming starting around the 1990s and into the 2000s, after the console ban made personal computer games more popular. An early well-known indie developer, Coconut Island, was founded in the mid-2000s, and through its success started a number of game jams around the country beginning in 2011, eventually establishing the China Indie Game Alliance, one of the country's largest developer communities. Further interest in indie game development came with the popularity of mobile games in the country. The 2014 title Monument Valley, developed by Ustwo in the United Kingdom, is considered to have been an influential title, as it was able to tell an emotional story through the game medium and drew more interest to the indie game scene. Indie game development is challenged by the governmental approval process, which requires resources that many indie developers do not have. As with mainstream commercial games, indie games must be approved and receive a license to be sold, or otherwise may be offered freely, which does not require a license. This has led to a black market around obtaining licenses, the use of non-China-controlled platforms like Steam to distribute games, and other questionable means of getting games into players' hands. As Valve has been working with Perfect World to create a China-specific client for Steam, which would be limited to games approved by the Chinese government, several indie developers fear this may harm the indie scene within China. Manufacturers Most of the major video game systems in the world since the 1990s have been manufactured in China; by 2019, 96% of all video game consoles were manufactured in China, generally taking advantage of the lower-cost labor available in the country. Some of the larger manufacturers based in China, or with factories within China, creating video game consoles include Foxconn, Hosiden, and Flex. Because of this, trade relations between China and other countries can have an impact on video game console pricing. Around 2019, as Sony, Microsoft, and Nintendo were preparing their next-generation systems, a trade war between the United States and China threatened to create a 25% import tariff on electronic goods shipped into the U.S. from China, which would have significantly affected the prices of these new consoles. There had already been an impact on personal computer components beforehand, leading to speculation about the impact on consoles. Sony, Microsoft and Nintendo, along with other electronics vendors in the U.S., jointly petitioned the U.S. government not to go through with this plan. By January 2020, the U.S. government affirmed it had backed off this planned tariff. 
Despite this, Sony, Microsoft and Nintendo have all expressed plans to divest some of their Chinese manufacturing to other countries in Southeast Asia such as Vietnam. Esports Esports in China has been significant since 1996, as the country gained access to the Internet and PC gaming cafés began appearing across the country, aided by the popularity of QQ, a Chinese instant messaging client that helped with long-distance communications. Players quickly flocked to existing Western games that supported competitive Internet play such as Command & Conquer: Red Alert and Quake II. However, it was the release of StarCraft in 1998 that led to the formation of organized competitive esports in China, including the formation of the China StarCraft Association to arrange unofficial tournaments for 1999 and onward. That same year also saw the first official esports tournament in China, based on Quake II. By 2000, the China E-Sports Association, formed from StarCraft players, was established, and Chinese players and teams participated and won medals starting with the World Cyber Games 2001. By 2003, the Chinese government recognized the success that Chinese players had in these games, and despite the stigma the government attached to the addictive qualities of video games, recognized esports as an official sport that year, encouraging youth to excel in this area and declaring that participating in esports was "training the body for China". China continued to expand its esports engagement alongside South Korea over the next several years, with its growth occurring alongside the growth of other online games in China. China became more involved with the planning of the World Cyber Games alongside South Korea, which had founded the event in 2000. The growth was further fueled by China's large Internet companies investing in esports teams and players, establishing esports tournaments of their own, and acquiring Korean developers of popular esports games. These companies have also invested in foreign companies that have produced popular esports titles in China. Notably, Tencent initially acquired a stake in Riot Games in 2008, which produced League of Legends, and by 2015 had fully acquired the studio. Tencent has also invested in Activision Blizzard, which, through Blizzard Entertainment, distributes StarCraft, World of Warcraft, Hearthstone, and Heroes of the Storm. The Alibaba Group and other Chinese e-commerce businesses have also invested heavily in the esports arena within China as early as 2006, but have made more inroads by establishing the World Electronic Sports Games in 2016 as a replacement for the World Cyber Games. Alibaba's efforts have centered on making the cities of Hangzhou and Changzhou esports centers in China. Due to both government encouragement and industry investment, the number of professional esports players in China grew from 50 in 2006 to over 1,000 in 2016. In early 2019, China's Ministry of Human Resources and Social Security included both "professional gamer" and "professional gaming operator" as officially recognized jobs on its Occupation Skill Testing Authority list; by July 2019, around 100,000 people had registered themselves as "professional gamers" under this, and were making an average of three times the average salary in China. The Ministry stated that it believes the professional esports sector in China could support over 2 million jobs within five years. 
This expansive growth has led several local governments to offer incentives for bringing esports to their cities. In esports, China has been the world leader in terms of tournament winnings, possessing some of the best talent in the world across multiple video games, as well as one of the largest pools of video gamers. As of 2017, half of the top 20 highest-earning esports players in the world are Chinese. In addition to talent, China is also one of the largest consumers of esports. The 2017 League of Legends World Championship, held in Beijing, drew an estimated 106 million viewers from online streaming services, with 98% of them from China, a number on par with the television audience of the Super Bowl. The event was seen as China establishing its place in the global esports marketplace, and demonstrated how China and South Korea's leadership in this area has helped to expand esports popularity to other countries. China is estimated to account for about 20% of global esports revenue, including sponsorships, merchandising and media rights, having surpassed Europe by 2019 and trailing only North America. Despite the popularity of esports in the country, it is still not exempt from the grasp of the government's censorship. This was most notable in the Blitzchung controversy in October 2019, in which the American video game developer Blizzard Entertainment punished Ng Wai Chung (吳偉聰) (known as Blitzchung), a Hong Kong esports player of the online video game Hearthstone, for voicing his support of the 2019–20 Hong Kong protests during an official streaming event. Despite the public's response, which included a boycott and a letter from United States Congress representatives, Blizzard did not remove the punishment but did slightly reduce it. Intellectual property protection As described above, China has had a history of a gray market of illegal imports and video game clones, both in hardware and software, as well as copyright theft/piracy as a result of poor intellectual property laws and enforcement in the latter part of the 20th century. Chinese developers have been known to copy video games from foreign developers, which has resulted in multiple clones of established video game franchises. Some developers take inspiration from existing games and incorporate their designs, gameplay and mechanics into their own IPs. There have been multiple lawsuits filed by major video game companies, such as the case filed by Riot Games against Moonton Technology for copying its characters featured in League of Legends. There have been reports where plagiarists are credited as the original creators. Analysts have attributed the rise in plagiarism to a lack of knowledge of the original IPs, due to games not being released in the Chinese market, delays, or outright bans by the Chinese government. More recently, with the tech industry boom in China, the government has implemented stricter copyright controls and processes, but enforcement is still considered to be weaker than intellectual property protections in Western nations, which poses a threat for foreign companies seeking to sell into China. Piracy Because of the high amount of software piracy in China, many foreign game companies have been reluctant to enter the country's market with single-player or console games. 
Instead, they have focused on selling online titles such as massively multiplayer online games, as income from these titles comes largely from subscription fees or in-game item purchases rather than the purchase price of the title itself. Nintendo claimed that, as of February 14, 2008, China remained the main source of manufacturing of pirated Nintendo DS and Wii games. Farming As of December 2005, there were an estimated 100,000 Chinese employed as "farmers", video game players who work to acquire virtual currency or items in online games so they can be sold to other players for real currency. Government oversight Responsible agencies Video games are regulated by the government, as with most mass media in China, but further, as video games are seen as a cultural benefit, additional agencies are involved in promoting the growth of video games. Ministry of Information Industry The Ministry of Information Industry (MII) of the People's Republic of China (中华人民共和国工业和信息化部) was formed in the late 1990s through the integration of the Ministry of Post and Telecommunications and the Ministry of Electronics Industry. The agency's primary goals include the regulation and promotion of Chinese telecommunications and software companies, which include online gaming. MII is also responsible for a number of initiatives aimed at increasing the number and prominence of natively produced online games. One example of such involvement is the inclusion of online gaming in the 2006–2010 plan for software and information service development. Listed here are the ministry's stated objectives involving online gaming: Study and formulate the state's information industry development strategies, general and specific policies, and overall plans, revitalize the electronics and information products manufacturing, telecommunications and software industries, promote the information economy and society. Draw up laws, rules and regulations on electronics and information products manufacturing, telecommunications and software industries, and publish administrative rules and regulations; and supervise the enforcement of laws and administrative rules. Work out technical policies, systems and criteria of the electronics and information products manufacturing, telecommunications and software industries, and technical systems and criteria of the radio and television transmission networks; certificate the entry of telecom networking equipment to networks and manage the entry of telecom terminal equipment to networks; direct the supervision and management of electronics and information products quality. Propel the research and development of electronics and information products manufacturing, telecommunications and software industries, organize research of major scientific and technological development projects, and digestion, absorption and creation of imported technologies, and promote the industrialization of scientific and technological research results; support the development of native industry. 
General Administration of Press and Publication
The General Administration of Press and Publication (GAPP) (中华人民共和国新闻出版总署) is responsible for monitoring and regulating the publication of print-based media, electronic media, and audio-visual products (including online games). GAPP has also been instrumental in combating the growing problem of Internet addiction and game addiction in China by teaming up with eight other government outlets concerned with the growing effect of game play on China's youth. To this end, GAPP works with other agencies, including the Central Civilization Office, the Ministry of Education, the Chinese Communist Youth League, the Ministry of Information Industry, the Ministry of Public Security, the All-China Women's Federation, and China's Care for the Next Generation Work Commission. GAPP also initiated the China National Online Game Publication Project in 2004. The intent of the project was to promote native game development through the use of government subsidies to game developers. The project, then in its third year, was set to run through at least 2008 and had provided an estimated 300 million RMB to 16 Chinese game development companies.
State Administration of Radio, Film, and Television
China's State Administration of Radio, Film, and Television (SARFT) (国家广播电影电视总局) affected the world of Chinese online games in 2004 by instituting a blanket ban on computer game-related commercials in state-run media. The only company to directly contravene this ban was the Chinese game provider The9, which teamed with Coca-Cola to jointly promote the release of the popular Western MMORPG World of Warcraft in 2005. Besides this instance, the online game market has thrived without much media promotion.
Crime
The Beijing Reformatory for Juvenile Delinquents claimed in 2007 that a third of its detainees were influenced by violent online games or erotic websites when committing crimes such as robbery and rape. In a high-profile case from October 2004, 41-year-old Qiu Chengwei was sentenced to death for murdering 26-year-old Zhu Caoyuan over a dispute regarding the sale of a virtual weapon the two had jointly won in the game Legend of Mir 3. In September 2007, a Chinese man in Guangzhou died after playing Internet video games for three consecutive days in an Internet cafe.
Content control and censorship
As with almost all mass media in the country, video games in China are subject to national censorship policies. Content in video games is overseen by SART/NRTA; publishers are required to obtain a license for a game in China from SART before publishing, which may be denied if the game contains elements deemed inappropriate. The process of submitting games for a license and putting them on sale afterwards is overseen by the Ministry of Culture (MOC). The State General Administration of Press and Publication and anti-porn and illegal publication offices have also played a role in screening games. Examples of banned games have included:
Hearts of Iron (for "distorting history and damaging China's sovereignty and territorial integrity")
I.G.I.-2: Covert Strike (for "intentionally blackening China and the Chinese army's image")
Command & Conquer: Generals (for "smearing the image of China and the Chinese army")
Battlefield 4 (for "smearing the image of China and endangering national security")
In addition to banning games completely, several games have had their content screened to remove imagery deemed offensive or unfavorable. Common examples include skeletons or skulls being either fleshed out or removed entirely, cases of which can be seen in the Chinese versions of popular video games such as Dota 2 and World of Warcraft. With the formation of the Online Game Ethics Committee in December 2018, nine titles were reportedly classified as prohibited or to be withdrawn, but this has yet to be confirmed by reliable sources. These included Fortnite, PlayerUnknown's Battlegrounds, H1Z1, Paladins, and Ring of Elysium. Eleven other titles were told that they needed to take corrective action to be sold within China, including Overwatch, World of Warcraft, Diablo 3, and League of Legends. Publishing a title without government approval can lead to a company being fined five to ten times the revenue it earned from the game.
In addition to content control, the Chinese government has pushed technology companies, including video game distributors like Tencent, into allowing the government partial ownership of the companies, which can be used to influence the content produced; in exchange, such companies may gain a competitive edge over others in interactions with the government. Along with guidelines issued in September 2021 to control and curb youth gaming, the Chinese government has also issued guidance that the presentation of LGBT themes and "effeminacy" in video games is to be avoided.
Anti-addiction measures
China was one of the first countries to recognize the potential for addiction to the Internet, video games, and other digital media, and was the first country to formally classify Internet addiction as a clinical disorder, recognizing Clinical Diagnostic Criteria for Internet Addiction in 2008. In 2015, the Chinese government also found that more than 500 million citizens over five years old, nearly half the population, suffered from some form of near-sightedness, and while video games were not solely responsible for this, the government felt it needed to reduce the amount of time youth spent playing video games. China has sought to deal with video game addiction among its youth by enacting regulations, to be implemented by video game publishers, aimed at limiting consecutive play time, particularly for children. As early as 2005, China's Ministry of Culture had enacted several public health efforts to address gaming- and internet-related disorders. One of the first systems required by the government was launched in 2005 to regulate adolescents' Internet use, including limiting daily gaming time to 3 hours and requiring users' identification in online video games. In 2007, an "Online Game Anti-Addiction System" was implemented for minors, restricting their use to 3 hours or less per day. The ministry also proposed a "Comprehensive Prevention Program Plan for Minors' Online Gaming Addiction" in 2013 to promote research, particularly on diagnostic methods and interventions. In 2018, China's Ministry of Education announced that new regulations would be introduced to further limit the amount of time spent by minors in online games. While these regulations were not immediately binding, most large Chinese publishers took steps to implement the required features. For example, Tencent restricted the amount of time that children could spend playing one of its online games, to one hour per day for children 12 and under and two hours per day for older minors. This is facilitated by tracking players via their state-issued identification numbers.
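As an illustration of the kind of publisher-side enforcement logic these measures imply, the sketch below shows how an age band derived from a verified identity record could select a daily playtime allowance, which is then checked against accumulated play. This is a minimal, hypothetical example: the data structures, function names, and exact cut-offs are assumptions for illustration only and do not reflect Tencent's or any other publisher's actual implementation.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative daily limits (minutes), loosely modeled on the Tencent-style
# rules described above: a stricter allowance for children 12 and under,
# a larger allowance for older minors, and no daily cap for adults.
DAILY_LIMIT_MINUTES = {
    "child": 60,    # 12 and under
    "minor": 120,   # older minors
    "adult": None,  # no daily cap
}

@dataclass
class Player:
    id_number: str          # state-issued identification number (verified elsewhere)
    birth_date: date
    minutes_played_today: int = 0

def age_band(player: Player, today: date) -> str:
    """Classify a player into an age band based on their verified birth date."""
    age = today.year - player.birth_date.year - (
        (today.month, today.day) < (player.birth_date.month, player.birth_date.day)
    )
    if age <= 12:
        return "child"
    if age < 18:
        return "minor"
    return "adult"

def may_keep_playing(player: Player, today: date) -> bool:
    """Return True if the player is still within their daily allowance."""
    limit = DAILY_LIMIT_MINUTES[age_band(player, today)]
    return limit is None or player.minutes_played_today < limit

# Example: a hypothetical 10-year-old who has already played 60 minutes today.
child = Player(id_number="XXXX", birth_date=date(2014, 5, 1), minutes_played_today=60)
print(may_keep_playing(child, date(2025, 1, 1)))  # False under the illustrative limits
```

A real system would also need to validate the identification number against an authentication service and enforce curfew windows, neither of which this sketch attempts.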
This has put some pressure on Western companies that publish via partners in China regarding how to apply these new anti-addiction requirements to their games, since outside of China, tracking younger players frequently raises privacy concerns. Specialized versions of games, developed by the Chinese partner, have been made to meet these requirements without affecting the rest of the world; for example, Riot Games let its China-based studio implement the requirements in League of Legends for a specialized release in China.
A new law enacted in November 2019 limits children under 18 to less than 90 minutes of video game play on weekdays and three hours on weekends, with no video game play allowed between 10 p.m. and 8 a.m. These limits are enforced by requiring game publishers to apply them based on user logins. Additionally, the law limits how much any player can spend on microtransactions, ranging from about $28 to $57 per month depending on the age of the player. In September 2020, the government implemented its own name-based authentication system, made available to all companies to uphold these laws. In August 2021, Chinese regulators further reduced the amount of time minors are allowed to play online games, to one hour each day, from 8 to 9 p.m., on Friday, Saturday, Sunday, and public holidays. The measures also capped how much minors could spend on such games, with those between 8 and 16 limited to 200 yuan per month, and those between 16 and 18 to 400 yuan per month. Implementation measures were not described as part of this regulation. In September 2021, GAPP launched a website that allowed any Chinese citizen to report games that appeared to be in violation of these anti-addiction measures, classified among those that failed to perform proper identity checks, those that failed to limit minors' hours, and those that failed to limit minors' spending within the game.
Data privacy
Most of the large publishers in China routinely collect data on players and how they play their games. One primary reason is that this information may be mandated by the government as part of its mass surveillance programs and for implementing systems such as the anti-addiction measures. Secondly, many of these large companies provide not only video games but a range of media including online video, music, and books, and they combine that data for better-targeted advertising to increase revenues. There are fears, though no reported cases, of these large companies sharing foreign users' data with the government. These fears have had impacts on companies that are fully or partially controlled by Chinese companies. For example, when Epic Games released its own digital storefront, the Epic Games Store, in 2018, it came under some criticism from players in the West, partly due to fears that Epic would share their data with Tencent and subsequently with the Chinese government; some players have called the store spyware.
Foreign ownership
With the rising success of online games from 2007 onwards, some foreign companies sought to acquire full or partial ownership of Chinese companies to help capture a portion of the growing market.
The Chinese government, concerned that these foreign companies would have influence over how the Chinese companies manage their video games, passed a law banning any foreign company from investing in or having any type of ownership of a Chinese company, with the General Administration of Press and Publication serving as the watchdog for violations. This still allows foreign companies to bring games into China, but only through operating agreements and partnerships with wholly Chinese-owned companies. For example, Blizzard Entertainment's World of Warcraft, an extremely popular MMO in China, was run initially through The9 and later by NetEase, with both companies making necessary changes to parts of the game to adhere to Chinese content regulations.
Content ratings
China introduced a pilot version of its first content rating system in December 2020, the "Online Game Age-Appropriateness Warning" system. It uses three color-based classifications: green for "8+" (games appropriate for ages 8 and up), blue for "12+", and yellow for "16+". Games with online components are required to display these labels on packaging, their websites, registration pages, and other relevant materials. The rating system was developed by the Audio-Video and Digital Publishing Association alongside Tencent, NetEase, and 52 other organizations.
See also
Gold farming in China
Software industry in China
China Software Industry Association
Video gaming in Indonesia
History of Eastern role-playing video games
Digital divide in China
Telecommunications industry in China
Notes
References
External links
Video game culture Science and technology in the People's Republic of China Chinese culture
242710
https://en.wikipedia.org/wiki/Outline%20of%20academic%20disciplines
Outline of academic disciplines
An academic discipline or field of study is a branch of knowledge, taught and researched as part of higher education. A scholar's discipline is commonly defined by the university faculties and learned societies to which they belong and the academic journals in which they publish research. Disciplines vary between well-established ones that exist in almost all universities and have well-defined rosters of journals and conferences, and nascent ones supported by only a few universities and publications. A discipline may have branches, and these are often called sub-disciplines. The following outline is provided as an overview of and topical guide to academic disciplines. In each case an entry at the highest level of the hierarchy (e.g., Humanities) is a group of broadly similar disciplines; an entry at the next highest level (e.g., Music) is a discipline having some degree of autonomy and being the basic identity felt by its scholars; and lower levels of the hierarchy are sub-disciplines not normally having any role in the structure of the university's governance. Humanities Performing arts Music (outline) Accompanying Chamber music Church music Conducting Choral conducting Orchestral conducting Wind ensemble conducting Early music Jazz studies (outline) Musical composition Music education Music history Musicology Historical musicology Systematic musicology Ethnomusicology Music theory Orchestral studies Organology Organ and historical keyboards Piano Strings, harp, oud, and guitar (outline) Singing Woodwinds, brass, and percussion Recording Dance (outline) Choreography Dance notation Ethnochoreology History of dance Television (outline) Television studies Theatre (outline) Acting Directing Dramaturgy History Musical theatre Playwrighting Puppetry Scenography Stage design Ventriloquism Film (outline) Animation Film criticism Filmmaking Film theory Live action Visual arts Applied arts Animation Calligraphy Decorative arts Mixed media Printmaking Studio art Architecture (Outline of architecture) Interior architecture Landscape architecture Landscape design Landscape planning Architectural analytics Historic preservation Interior design (interior architecture) Technical drawing Fashion Fine arts Graphic arts Drawing (outline) Painting (outline) Photography (outline) Sculpture (outline) History African history American history Ancient history Ancient Egypt Carthage Ancient Greek history (outline) Ancient Roman history (outline) Assyrian Civilization Bronze Age Civilizations Biblical history History of the Indus Valley Civilization Preclassic Maya History of Mesopotamia The Stone Age History of the Yangtze civilization History of the Yellow River civilization Asian history Chinese history Indian history (outline) Indonesian history Iranian history Australian history Cultural history Ecclesiastical history of the Catholic Church Economic history Environmental history European history Intellectual history Jewish history Latin American history Modern history Philosophical history Ancient philosophy Contemporary philosophy Medieval philosophy Humanism (outline) Scholasticism Modern philosophy Political history History of political thought Pre-Columbian era history Prehistory Public history Russian history Scientific history Technological history World history Languages and literature Linguistics (Outline of linguistics) Applied linguistics Composition studies Computational linguistics Discourse analysis English studies Etymology Grammar Grammatology Historical linguistics History of linguistics 
Interlinguistics Lexicology Linguistic typology Morphology (linguistics) Natural language processing Philology Phonetics Phonology Pragmatics Psycholinguistics Rhetoric Semantics Semiotics (outline) Sociolinguistics Syntax Usage Word usage Comics studies Comparative literature Creative writing Fiction (outline) Non-fiction English literature History of literature Ancient literature Medieval literature Post-colonial literature Post-modern literature Literary theory Critical theory (outline) Literary criticism Poetics Poetry World literature African-American literature American literature British literature Law Administrative law Canon law Civil law Admiralty law Animal law/Animal rights Civil procedure Common law Contract law Corporations Environmental law Family law Federal law International law Public international law Supranational law Labor law Property law Tax law Tort law (outline) Comparative law Competition law Constitutional law Criminal law Criminal justice (outline) Criminal procedure Forensic science (outline) Police science Islamic law Jewish law (outline) Jurisprudence (Philosophy of Law) Legal management Commercial law Corporate law Procedural law Substantive law Philosophy Aesthetics (outline) Applied philosophy Philosophy of economics Philosophy of education Philosophy of engineering Philosophy of history Philosophy of language Philosophy of law Philosophy of mathematics Philosophy of music Philosophy of psychology Philosophy of religion Philosophy of physical sciences Philosophy of biology Philosophy of chemistry Philosophy of physics Philosophy of social science Philosophy of technology Systems philosophy Epistemology (outline) Justification Reasoning errors Ethics (outline) Applied ethics Animal rights Bioethics Environmental ethics Meta-ethics Moral psychology, Descriptive ethics, Value theory Normative ethics Virtue ethics Logic (outline) Mathematical logic Philosophical logic Meta-philosophy Metaphysics (outline) Philosophy of Action Determinism and Free will Ontology Philosophy of mind Philosophy of pain Philosophy of artificial intelligence Philosophy of perception Philosophy of space and time Teleology Theism and Atheism Philosophical traditions and schools African philosophy Analytic philosophy Aristotelianism Continental philosophy Eastern philosophy Feminist philosophy Platonism Social philosophy and political philosophy Anarchism (outline) Feminist philosophy Libertarianism (outline) Marxism Theology Biblical studies Biblical Hebrew, Koine Greek, Aramaic Religious studies Buddhist theology Pali Studies Christian theology Anglican theology Baptist theology Catholic theology Eastern Orthodox theology Protestant theology Hindu theology Sanskrit Studies Dravidian Studies Jewish theology Muslim theology Arabic Studies Social science Anthropology Biological anthropology Linguistic anthropology Cultural anthropology Social anthropology Archaeology Biocultural anthropology Evolutionary anthropology Feminist archaeology Forensic anthropology Maritime archaeology Palaeoanthropology Economics Agricultural economics Anarchist economics Applied economics Behavioural economics Bioeconomics Complexity economics Computational economics Consumer economics Development economics Ecological economics Econometrics Economic geography Economic sociology Economic systems Education economics Energy economics Entrepreneurial economics Environmental economics Evolutionary economics Experimental economics Feminist economics Financial econometrics Financial economics Green economics Growth 
economics Human development theory Industrial organization Information economics Institutional economics International economics Islamic economics Labor economics Law and economics Macroeconomics Managerial economics Marxian economics Mathematical economics Microeconomics Monetary economics Neuroeconomics Participatory economics Political economy Public economics Public finance Real estate economics Resource economics Social choice theory Socialist economics Socioeconomics Transport economics Welfare economics Geography Physical geography Atmology Biogeography Climatology Coastal geography Emergency management Environmental geography Geobiology Geochemistry Geology Geomatics Geomorphology Geophysics Glaciology Hydrology Landscape ecology Lithology Meteorology Mineralogy Oceanography Palaeogeography Palaeontology Petrology Quaternary science Soil geography Human geography Behavioural geography Cognitive geography Cultural geography Development geography Economic geography Health geography Historical geography Language geography Mathematical geography Marketing geography Military geography Political geography Population geography Religion geography Social geography Strategic geography Time geography Tourism geography Transport geography Urban geography Integrated geography Cartography Celestial cartography Planetary cartography Topography Political science American politics Canadian politics Civics Comparative politics European studies Geopolitics (Political geography) International relations International organizations Nationalism studies Peace and conflict studies Policy studies Political behavior Political culture Political economy Political history Political philosophy Public administration Public law Psephology Social choice theory Singapore politics Psychology Abnormal psychology Applied psychology Biological psychology Clinical neuropsychology Clinical psychology Cognitive psychology Community psychology Comparative psychology Conservation psychology Consumer psychology Counseling psychology Criminal psychology Cultural psychology Asian psychology Black psychology Developmental psychology Differential psychology Ecological psychology Educational psychology Environmental psychology Evolutionary psychology Experimental psychology Group psychology Family psychology Feminine psychology Forensic developmental psychology Forensic psychology Health psychology Humanistic psychology Indigenous psychology Legal psychology Mathematical psychology Media psychology Medical psychology Military psychology Moral psychology and Descriptive ethics Music psychology Neuropsychology Occupational health psychology Occupational psychology Organizational psychology (a.k.a., Industrial Psychology) Parapsychology (outline) Pediatric psychology Pedology (children study) Personality psychology Phenomenology Political psychology Positive psychology Psychoanalysis Psychobiology Psychology of religion Psychometrics Psychopathology Child psychopathology Psychophysics Quantitative psychology Rehabilitation psychology School psychology Social psychology Sport psychology Traffic psychology Transpersonal psychology Sociology Analytical sociology Applied sociology Leisure studies Political sociology Public sociology Social engineering Architectural sociology Area studies African studies American studies Appalachian studies Canadian studies Latin American studies Asian studies Central Asian studies East Asian studies Indology Iranian studies Japanese studies Korean studies Pakistan studies Sindhology Sinology (outline) 
Southeast Asian studies Thai studies Australian studies European studies Celtic studies German studies Sociology in Poland Scandinavian studies Slavic studies Middle Eastern studies Arab studies Assyriology Egyptology Jewish studies Behavioral sociology Collective behavior Social movements Community informatics Social network analysis Comparative sociology Conflict theory Criminology/Criminal justice (outline) Critical management studies Critical sociology Cultural sociology Cultural studies/ethnic studies Africana studies Cross-cultural studies Culturology Deaf studies Ethnology Utopian studies Whiteness studies Demography/Population Digital sociology Dramaturgical sociology Economic sociology Educational sociology Empirical sociology Environmental sociology Evolutionary sociology Feminist sociology Figurational sociology Futures studies (outline) Gender studies Men's studies Women's studies Historical sociology Human ecology Humanistic sociology Industrial sociology Interactionism Interpretive sociology Ethnomethodology Phenomenology Social constructionism Symbolic interactionism Jealousy sociology Macrosociology Marxist sociology Mathematical sociology Medical sociology Mesosociology Microsociology Military sociology Natural resource sociology Organizational studies Phenomenological sociology Policy sociology Psychoanalytic sociology Science studies/Science and technology studies Sexology Heterosexism Human sexual behavior Human sexuality (outline) Queer studies/Queer theory Sex education Social capital Social change Social conflict theory Social control Pure sociology Social economy Social philosophy Social policy Social psychology Social stratification Social theory Social transformation Computational sociology Economic sociology/Socioeconomics Economic development Social development Sociobiology Sociocybernetics Sociolinguistics Sociology of aging Sociology of agriculture Sociology of art Sociology of autism Sociology of childhood Sociology of conflict Sociology of culture Sociology of cyberspace Sociology of development Sociology of deviance Sociology of disaster Sociology of education Sociology of emotions Sociology of fatherhood Sociology of finance Sociology of food Sociology of gender Sociology of generations Sociology of globalization Sociology of government Sociology of health and illness Sociology of human consciousness Sociology of immigration Sociology of knowledge Sociology of language Sociology of law Sociology of leisure Sociology of literature Sociology of markets Sociology of marriage Sociology of motherhood Sociology of music Sociology of natural resources Sociology of organizations Sociology of peace, war, and social conflict Sociology of punishment Sociology of race and ethnic relations Sociology of religion Sociology of risk Sociology of science Sociology of scientific knowledge Sociology of social change Sociology of social movements Sociology of space Sociology of sport Sociology of technology Sociology of terrorism Sociology of the body Sociology of the family Sociology of the history of science Sociology of the Internet Sociology of work Sociomusicology Structural sociology Theoretical sociology Urban studies or Urban sociology/Rural sociology Victimology Visual sociology Social work Clinical social work Community practice Mental health Psychosocial rehabilitation Person-centered therapy Family therapy Financial social work Natural science Biology Aerobiology Anatomy Comparative anatomy Human anatomy (outline) Biochemistry (outline) Bioinformatics Biophysics 
(outline) Biotechnology (outline) Botany (outline) Ethnobotany Phycology Cell biology (outline) Chronobiology Computational biology Cryobiology Developmental biology Embryology Teratology Ecology (outline) Agroecology Ethnoecology Human ecology Landscape ecology Endocrinology Epigenetics Ethnobiology Anthrozoology Evolutionary biology Genetics (outline) Behavioural genetics Molecular genetics Population genetics Histology Human biology Immunology (outline) Limnology Linnaean taxonomy Marine biology Mathematical biology Microbiology Bacteriology Protistology Molecular biology Mycology Neuroscience (outline) Behavioral neuroscience Nutrition (outline) Paleobiology Paleontology Parasitology Pathology Anatomical pathology Clinical pathology Dermatopathology Forensic pathology Hematopathology Histopathology Molecular pathology Surgical pathology Physiology Human physiology Exercise physiology Structural Biology Systematics (Taxonomy) Systems biology Virology Molecular virology Xenobiology Zoology (outline) Animal communications Apiology Arachnology Arthropodology Batrachology Bryozoology Carcinology Cetology Cnidariology Entomology Forensic entomology Ethnozoology Ethology Helminthology Herpetology Ichthyology (outline) Invertebrate zoology Mammalogy Cynology Felinology Malacology Conchology Limacology Teuthology Myriapodology Myrmecology (outline) Nematology Neuroethology Oology Ornithology (outline) Planktology Primatology Zootomy Zoosemiotics Chemistry Agrochemistry Analytical chemistry Astrochemistry Atmospheric chemistry Biochemistry (outline) Chemical biology Chemical engineering (outline) Cheminformatics Computational chemistry Cosmochemistry Electrochemistry Environmental chemistry Femtochemistry Flavor Flow chemistry Geochemistry Green chemistry Histochemistry Hydrogenation Immunochemistry Inorganic chemistry Marine chemistry Mathematical chemistry Mechanochemistry Medicinal chemistry Molecular biology Molecular mechanics Nanotechnology Natural product chemistry Neurochemistry Oenology Organic chemistry (outline) Organometallic chemistry Petrochemistry Pharmacology Photochemistry Physical chemistry Physical organic chemistry Phytochemistry Polymer chemistry Quantum chemistry Radiochemistry Solid-state chemistry Sonochemistry Supramolecular chemistry Surface chemistry Synthetic chemistry Theoretical chemistry Thermochemistry Earth science Edaphology Environmental chemistry Environmental science Gemology Geochemistry Geodesy Physical geography (outline) Atmospheric science / Meteorology (outline) Biogeography / Phytogeography Climatology / Paleoclimatology / Palaeogeography Coastal geography / Oceanography Edaphology / Pedology or Soil science Geobiology Geology (outline) (Geomorphology, Mineralogy, Petrology, Sedimentology, Speleology, Tectonics, Volcanology) Geostatistics Glaciology Hydrology (outline)/ Limnology / Hydrogeology Landscape ecology Quaternary science Geophysics (outline) Paleontology Paleobiology Paleoecology Space science Astrobiology Astronomy (outline) Observational astronomy Gamma ray astronomy Infrared astronomy Microwave astronomy Optical astronomy Radio astronomy UV astronomy X-ray astronomy Astrophysics Gravitational astronomy Black holes Cosmology Physical cosmology Interstellar medium Numerical simulations Astrophysical plasma Galaxy formation and evolution High-energy astrophysics Hydrodynamics Magnetohydrodynamics Star formation Stellar astrophysics Helioseismology Stellar evolution Stellar nucleosynthesis Planetary science Physics Acoustics Aerodynamics 
Applied physics Astrophysics Atomic, molecular, and optical physics Biophysics (outline) Computational physics Condensed matter physics Cryogenics Electricity Electromagnetism Elementary particle physics Experimental physics Fluid dynamics Geophysics (outline) Mathematical physics Mechanics Medical physics Molecular physics Newtonian dynamics Nuclear physics Optics Plasma physics Quantum physics Solid mechanics Solid state physics Statistical mechanics Theoretical physics Thermal physics Thermodynamics Formal science Computer science Also a branch of electrical engineering Logic in computer science Formal methods (Formal verification) Logic programming Multi-valued logic Fuzzy logic Programming language semantics Type theory Algorithms Computational geometry Distributed algorithms Parallel algorithms Randomized algorithms Artificial intelligence (outline) Cognitive science Automated reasoning Computer vision (outline) Machine learning Artificial neural networks Natural language processing (Computational linguistics) Expert systems Robotics (outline) Data structures Computer architecture Computer graphics Image processing Scientific visualization Computer communications (networks) Cloud computing Information theory Internet, World Wide Web Ubiquitous computing Wireless computing (Mobile computing) Computer security and reliability Cryptography Fault-tolerant computing Computing in mathematics, natural sciences, engineering, and medicine Algebraic (symbolic) computation Computational biology (bioinformatics) Computational chemistry Computational mathematics Computational neuroscience Computational number theory Computational physics Computer-aided engineering Computational fluid dynamics Finite element analysis Numerical analysis Scientific computing (Computational science) Computing in social sciences, arts, humanities, and professions Community informatics Computational economics Computational finance Computational sociology Digital humanities (Humanities computing) History of computer hardware History of computer science (outline) Humanistic informatics Databases (outline) Distributed databases Object databases Relational databases Data management Data mining Information architecture Information management Information retrieval Knowledge management Multimedia, hypermedia Sound and music computing Distributed computing Grid computing Human-computer interaction Operating systems Parallel computing High-performance computing Programming languages Compilers Programming paradigms Concurrent programming Functional programming Imperative programming Logic programming Object-oriented programming Program semantics Type theory Quantum computing Software engineering Formal methods (Formal verification) Theory of computation Automata theory (Formal languages) Computability theory Computational complexity theory Concurrency theory VLSI design Mathematics Pure mathematics Mathematical logic and Foundations of mathematics Intuitionistic logic Modal logic Model theory Proof theory Recursion theory Set theory Algebra (outline) Associative algebra Category theory Topos theory Differential algebra Field theory Group theory Group representation Homological algebra K-theory Lattice theory (Order theory) Lie algebra Linear algebra (Vector space) Multilinear algebra Non-associative algebra Representation theory Ring theory Commutative algebra Noncommutative algebra Universal algebra Analysis Complex analysis Functional analysis Operator theory Harmonic analysis Fourier analysis Non-standard analysis Ordinary 
differential equations p-adic analysis Partial differential equations Real analysis Calculus (outline) Probability theory Ergodic theory Measure theory Integral geometry Stochastic process Geometry (outline) and Topology Affine geometry Algebraic geometry Algebraic topology Convex geometry Differential topology Discrete geometry Finite geometry Galois geometry General topology Geometric topology Integral geometry Noncommutative geometry Non-Euclidean geometry Projective geometry Number theory Algebraic number theory Analytic number theory Arithmetic combinatorics Geometric number theory Applied mathematics Approximation theory Combinatorics (outline) Coding theory Cryptography Dynamical systems Chaos theory Fractal geometry Game theory Graph theory Information theory Mathematical physics Quantum field theory Quantum gravity String theory Quantum mechanics Statistical mechanics Numerical analysis Operations research Assignment problem Decision analysis Dynamic programming Inventory theory Linear programming Mathematical optimization Optimal maintenance Real options analysis Scheduling Stochastic processes Systems analysis Statistics (outline) Actuarial science Demography Econometrics Mathematical statistics Data visualization Theory of computation Computational complexity theory Applied science Agriculture Aeroponics Agroecology Agrology Agronomy Animal husbandry (Animal science) Beekeeping (Apiculture) Anthroponics Agricultural economics Agricultural engineering Biological systems engineering Food engineering Aquaculture Aquaponics Enology Entomology Fogponics Food science Culinary arts Forestry Horticulture Hydrology (outline) Hydroponics Pedology Plant science (outline) Pomology Pest control Purification Viticulture Architecture and design Architecture (outline) Interior architecture Landscape architecture Architectural analytics Historic preservation Interior design (interior architecture) Landscape architecture (landscape planning) Landscape design Urban planning (urban design) Visual communication Graphic design Type design Technical drawing Industrial design (product design) Ergonomics (outline) Toy and amusement design User experience design Interaction design Information architecture User interface design User experience evaluation Decorative arts Fashion design Textile design Business Accounting Accounting research Accounting scholarship Business administration Business analysis Business ethics Business law Business management E-Business Entrepreneurship Finance (outline) Industrial and labor relations Collective bargaining Human resources Organizational studies Labor economics Labor history Information systems (Business informatics) Management information systems Health informatics Information technology (outline) International trade Management (outline) Marketing (outline) Operations management Purchasing Risk management and insurance Systems science Divinity Canon law Church history Field ministry Pastoral counseling Pastoral theology Religious education techniques Homiletics Liturgy Sacred music Missiology Hermeneutics Scriptural study and languages Biblical Hebrew Biblical studies/Sacred scripture Vedic Study New Testament Greek Latin Old Church Slavonic Theology (outline) Dogmatic theology Ecclesiology Sacramental theology Systematic theology Christian ethics Hindu ethics Moral theology Historical theology Education Comparative education Critical pedagogy Curriculum and instruction Alternative education Early childhood education Elementary education Secondary education 
Higher education Mastery learning Cooperative learning Agricultural education Art education Bilingual education Chemistry education Counselor education Language education Legal education Mathematics education Medical education Military education and training Music education Nursing education Outdoor education Peace education Physical education/Sports coaching Physics education Reading education Religious education Science education Special education Sex education Sociology of education Technology education Vocational education Educational leadership Educational philosophy Educational psychology Educational technology Distance education Engineering and technology Chemical Engineering Bioengineering Biochemical engineering Biomolecular engineering Catalysis Materials engineering Molecular engineering Nanotechnology Polymer engineering Process design Petroleum engineering Nuclear engineering Food engineering Process engineering Reaction engineering Thermodynamics Transport phenomena Civil Engineering Coastal engineering Earthquake engineering Ecological engineering Environmental engineering Geotechnical engineering Engineering geology Hydraulic engineering Mining engineering Transportation engineering Highway engineering Structural engineering Architectural engineering Structural mechanics Surveying Educational Technology Instructional design Distance education Instructional simulation Human performance technology Knowledge management Electrical Engineering Applied physics Computer engineering (outline) Computer science Control systems engineering Control theory Electronic engineering Instrumentation engineering Engineering physics Photonics Information theory Mechatronics Power engineering Quantum computing Robotics (outline) Semiconductors Telecommunications engineering Materials Science and Engineering Biomaterials Ceramic engineering Crystallography Nanomaterials Photonics Physical Metallurgy Polymer engineering Polymer science Semiconductors Mechanical Engineering Aerospace engineering Aeronautics Astronautics Acoustical engineering Automotive engineering Biomedical engineering Biomechanical engineering Neural engineering Continuum mechanics Fluid mechanics Heat transfer Industrial engineering Manufacturing engineering Marine engineering Mass transfer Mechatronics Nanoengineering Ocean engineering Optical engineering Robotics Thermodynamics Systems science Chaos theory Complex systems Conceptual systems Control theory Affect control theory Control engineering Control systems Dynamical systems Perceptual control theory Cybernetics Biocybernetics Engineering cybernetics Management cybernetics Medical cybernetics New Cybernetics Second-order cybernetics Sociocybernetics Network science Operations research Systems biology Computational systems biology Synthetic biology Systems immunology Systems neuroscience System dynamics Social dynamics Systems ecology Ecosystem ecology Systems engineering Biological systems engineering Earth systems engineering and management Enterprise systems engineering Systems analysis Systems psychology Ergonomics Family systems theory Systemic therapy Systems theory Biochemical systems theory Ecological systems theory Developmental systems theory General systems theory Living systems theory LTI system theory Mathematical system theory Sociotechnical systems theory World-systems theory Systems theory in anthropology Environmental studies and forestry Environmental management Coastal management Fisheries management Land management Natural resource management Waste 
management Wildlife management Environmental policy Wildlife observation Recreation ecology Silviculture Sustainability studies Sustainable development Toxicology Ecology Family and consumer science Consumer education Housing Interior design Nutrition (outline) Foodservice management Textiles Human physical performance and recreation Biomechanics / Sports biomechanics Sports coaching Escapology Ergonomics Physical fitness Aerobics Personal trainer / Personal fitness training Game design Exercise physiology Kinesiology / Exercise physiology / Performance science Leisure studies Navigation Outdoor activity Physical activity Physical education / Pedagogy Sociology of sport Sexology Sports / exercise Sports journalism / sportscasting Sport management Athletic director Sport psychology Sports medicine Athletic training Survival skills Batoning Bushcraft Scoutcraft Woodcraft Toy and amusement design Journalism, media studies and communication Journalism (outline) Broadcast journalism Digital journalism Literary journalism New media journalism Print journalism Sports journalism / sportscasting Media studies (Mass media) Newspaper Magazine Radio (outline) Television (outline) Television studies Film (outline) Film studies Game studies Fan studies Narratology Internet (outline) Communication studies Advertising Animal communication Communication design Conspiracy theory Digital media Electronic media Environmental communication Hoax Information theory Intercultural communication Marketing (outline) Mass communication Nonverbal communication Organizational communication Popular culture studies Propaganda Public relations (outline) Speech communication Technical writing Translation Law Legal management Corporate law Mercantile law Business law Administrative law Canon law Comparative law Constitutional law Competition law Criminal law Criminal procedure Criminal justice (outline) Police science Forensic science (outline) Islamic law Jewish law (outline) Jurisprudence (Philosophy of Law) Civil law Admiralty law Animal law/Animal rights Common law Corporations Civil procedure Contract law Environmental law Family law Federal law International law Public international law Supranational law Labor law Paralegal studies Property law Tax law Tort law (outline) Law enforcement (outline) Procedural law Substantive law Library and museum studies Archival science Archivist Bibliographic databases Bibliometrics Bookmobile Cataloging Citation analysis Categorization Classification Library classification Taxonomic classification Scientific classification Statistical classification Security classification Film classification Collections care Collection management Collection Management Policy Conservation science Conservation and restoration of cultural heritage Curator Data storage Database management Data modeling Digital preservation Dissemination Film preservation Five laws of library science Historic preservation History of library science Human-computer interaction Indexer Informatics Information architecture Information broker Information literacy Information retrieval Information science (outline) Information systems and technology Integrated library system Interlibrary loan Knowledge engineering Knowledge management Library Library binding Library circulation Library instruction Library portal Library technical services Management Mass deacidification Museology Museum education Museum administration Object conservation Preservation Prospect research Readers' advisory Records management Reference Reference 
desk Reference management software Registrar Research methods Slow fire Special library Statistics Medicine and health Alternative medicine Audiology Clinical laboratory sciences/Clinical pathology/Laboratory medicine Clinical biochemistry Cytogenetics Cytohematology Cytology (outline) Haemostasiology Histology Clinical immunology Clinical microbiology Molecular genetics Parasitology Clinical physiology Dentistry (outline) Dental hygiene and epidemiology Dental surgery Endodontics Implantology Oral and maxillofacial surgery Orthodontics Periodontics Prosthodontics Dermatology Emergency medicine (outline) Epidemiology Geriatrics Gynaecology Health informatics/Clinical informatics Hematology Holistic medicine Infectious disease Intensive care medicine Internal medicine Cardiology Cardiac electrophysiology Endocrinology Gastroenterology Hepatology Nephrology Neurology Oncology Pulmonology Rheumatology Medical toxicology Music therapy Nursing Nutrition (outline) and dietetics Obstetrics (outline) Occupational hygiene Occupational therapy Occupational toxicology Ophthalmology Neuro-ophthalmology Optometry Otolaryngology Pathology Pediatrics Pharmaceutical sciences Pharmaceutical chemistry Pharmaceutical toxicology Pharmaceutics Pharmacocybernetics Pharmacodynamics Pharmacogenomics Pharmacognosy Pharmacokinetics Pharmacology Pharmacy Physical fitness Group Fitness / aerobics Kinesiology / Exercise science / Human performance Personal fitness training Physical therapy Physiotherapy Podiatry Preventive medicine Primary care General practice Psychiatry (outline) Forensic psychiatry Psychology (outline) Public health Radiology Recreational therapy Rehabilitation medicine Respiratory therapy Sleep medicine Speech–language pathology Sports medicine Surgery Bariatric surgery Cardiothoracic surgery Neurosurgery Orthoptics Orthopedic surgery Plastic surgery Trauma surgery Traumatology Traditional medicine Urology Andrology Veterinary medicine Military sciences Amphibious warfare Artillery Battlespace Air Information Land Sea Space Campaigning Military engineering Doctrine Espionage Game theory Grand strategy Containment Limited war Military science (outline) Philosophy of war Strategic studies Total war War (outline) Leadership Logistics Materiel Supply chain management Military operation Military history Prehistoric Ancient Medieval Early modern Industrial Modern Fourth-generation warfare Military intelligence Military law Military medicine Naval science Naval engineering Naval tactics Naval architecture Organization Command and control Doctrine Education and training Engineers Intelligence Ranks Staff Technology and equipment Military exercises Military simulation Military sports Strategy Attrition Deception Defensive Offensive Counter-offensive Maneuver Goal Naval Tactics Aerial Battle Cavalry Charge Counter-attack Counter-insurgency Counter-intelligence Counter-terrorism Foxhole Endemic warfare Guerrilla warfare Infiltration Irregular warfare Morale Naval tactics Siege Surgical strike Tactical objective Trench warfare Military weapons Armor Artillery Biological Cavalry Conventional Chemical Cyber Economic Electronic Infantry Nuclear Psychological Unconventional Other Military Arms control Arms race Assassination Asymmetric warfare Civil defense Clandestine operation Collateral damage Cold war (general term) Combat Covert operation Cyberwarfare Defense industry Disarmament Intelligence agency Laws of war Mercenary Military campaign Military operation Mock combat Network-centric warfare Paramilitary 
Principles of war Private defense agency Private military company Proxy war Religious war Security Special forces Special operations Theater (warfare) Theft Undercover War crimes Warrior Public administration Civil service Corrections Conservation biology Criminal justice (outline) Disaster research Disaster response Emergency management Emergency services Fire safety (Structural fire protection) Fire ecology (Wildland fire management) Governmental affairs International affairs Law enforcement Peace and conflict studies Police science Policy studies Policy analysis Public administration Nonprofit administration Non-governmental organization (NGO) administration Public policy doctrine Public policy school Regulation Public safety Public service Public policy Agricultural policy Commercial policy Cultural policy Domestic policy Drug policy Drug policy reform Economic policy Fiscal policy Incomes policy Industrial policy Investment policy Monetary policy Tax policy Education policy Energy policy Nuclear energy policy Renewable energy policy Environmental policy Food policy Foreign policy Health policy Pharmaceutical policy Vaccination policy Housing policy Immigration policy Knowledge policy Language policy Military policy Science policy Climate change policy Stem cell research policy Space policy Technology policy Security policy Social policy Public policy by country Social work Child welfare Community practice Community organizing Social policy Human Services Corrections Gerontology Medical social work Mental health School social work Transportation Highway safety Infographics Intermodal transportation studies Logistics Marine transportation Port management Seafaring Operations research Mass transit Travel Vehicles See also Academia (outline) Academic genealogy Curriculum Interdisciplinarity Transdisciplinarity Professions Classification of Instructional Programs Joint Academic Coding System List of fields of doctoral studies in the United States List of academic fields International Academic Association for the Enhancement of Learning in Higher Education References US Department of Education Institute of Education Sciences. Classification of Instructional Programs (CIP). National Center for Education Statistics. External links Classification of Instructional Programs (CIP 2000): Developed by the U.S. Department of Education's National Center for Education Statistics to provide a taxonomic scheme that will support the accurate tracking, assessment, and reporting of fields of study and program completions activity. Complete JACS (Joint Academic Classification of Subjects) from Higher Education Statistics Agency (HESA) in the United Kingdom Australian and New Zealand Standard Research Classification (ANZSRC 2008) (web-page ) Chapter 3 and Appendix 1: Fields of research classification. Fields of Knowledge, a zoomable map allowing the academic disciplines and sub-disciplines in this article be visualised. Sandoz, R. (ed.), Interactive Historical Atlas of the Disciplines, University of Geneva academic disciplines academic disciplines Education-related lists Science-related lists Higher education-related lists
1167763
https://en.wikipedia.org/wiki/Disney%20Interactive%20Studios
Disney Interactive Studios
Disney Interactive Studios, Inc. (originally established as Walt Disney Computer Software; also known as Disney Software, Buena Vista Software, Disney Interactive, Buena Vista Interactive and Buena Vista Games) was an American video game developer and publisher owned by The Walt Disney Company through Disney Interactive. Prior to its closure in 2016, it developed and distributed multi-platform video games and interactive entertainment worldwide. Most of the games released by Disney Interactive Studios were tie-in products to existing character franchises.
On May 10, 2016, as a result of the discontinuation of its Disney Infinity series, Disney shut down Disney Interactive Studios and exited the first-party home console game development business in order to focus on third-party development of home console video games through other developers such as Electronic Arts (Star Wars games), WB Games (owned by rival company Warner Bros., which handles the publishing of Disney-related Lego video games and Cars 3: Driven to Win), Bandai Namco Entertainment (Disney Tsum Tsum Festival), Square Enix (Kingdom Hearts), and Capcom (several Disney games, Willow games and Marvel vs. Capcom). However, Disney continues to release games for iOS and Android devices under its own label, Disney Mobile.
History
Walt Disney Computer Software
Disney established its own in-house gaming unit, Walt Disney Computer Software, Inc. (WDCS), which was incorporated on September 15, 1988. WDCS generally used third-party development studios to design spin-off games using its existing portfolio of characters. Senior Disney executives attributed WDCS's limited success to low product quality and a lack of understanding of the differences between film and games. The few market successes were third-party-published games based on major Disney animated features, such as Aladdin and The Lion King in 1993 and 1994 respectively. This led to a move away from self-developed and self-published titles toward funding and development management of games, with third parties publishing them.
Disney Interactive
Using the film studio style formula, WDCS was reorganized into Disney Interactive, Inc. (DI) on December 5, 1994 with the merging of WDCS and Walt Disney Television and Telecommunications. On April 15, 1997, DI reduced its staff by 20%, ending in-house video game production. This increased requests for licensing from third-party game companies. Under this plan, development and production cost risks were transferred to the game companies, but the per-unit revenue generated for Disney was reduced, effectively yielding a near 100 percent margin on licensed game sales. A thirteen-game agreement was made between Nintendo of America and Disney Interactive in 1999 for both the Nintendo 64 and Game Boy Color. In May 2001, the company signed a deal with Sony Computer Entertainment Europe to allow the latter to publish titles based on Atlantis: The Lost Empire, Monsters, Inc., Treasure Planet, Lilo and Stitch, and Peter Pan: Return to Never Land on the PlayStation and PlayStation 2. In European territories, Infogrames formerly distributed several of Disney Interactive's PC titles; however, this agreement was later replaced with several separate distribution deals, including one with JoWooD Productions in Germany.
Buena Vista Games (2003–2007)
Buena Vista Games, Inc. (BVG) was spun out of Disney Interactive in 2003 after a 2002 strategic review that chose to return to being a dedicated games publisher. With DI focused on children's games, BVG took on all other game content, including mobile and online mediums. Buena Vista Games is probably best known for the Kingdom Hearts series, made with Japanese developer Square Enix. In April 2005, BVG purchased Avalanche Software in Salt Lake City, Utah and started a Vancouver, British Columbia-based game development studio, Propaganda Games. In September 2006, Buena Vista acquired Climax Racing. BVG formed a new game studio, Fall Line Studio, in November 2006 to create Disney and new game titles for the Nintendo DS and the Wii console.
Disney Interactive Studios
On February 8, 2007, The Walt Disney Company renamed Buena Vista Games to Disney Interactive Studios as part of a larger company initiative to phase out the Buena Vista brand that year. The studio published both Disney and non-Disney branded video games for all platforms worldwide, with titles that featured its consumer brands including Disney, ABC, ESPN, and Touchstone (which was used as a label for Disney). In July 2007, the studio acquired Junction Point Studios. On June 5, 2008, Disney Interactive Studios and the Walt Disney Internet Group merged into a single business unit known as the Disney Interactive Media Group, which merged its subsidiary Fall Line Studios with its sister studio, Avalanche Software, in January 2009. In February 2009, Disney Interactive acquired GameStar, a Chinese game development company. On September 8, 2009, Disney Interactive announced that it had acquired Wideload Games. In November 2010 the executive Graham Hopper left the company. He announced his departure via an internal e-mail, saying "the time has come for me to move on from the company and set my sights on new horizons."
In October 2012, DIS announced "Toy Box", a cross-platform gaming initiative in which Pixar and Disney characters would interact across a console game and multiple mobile and online applications. The first Toy Box cross-platform game was Disney Infinity, based on the Toy Story 3 game's Toy Box mode crossed with a toy line. After the purchase of Lucasfilm by The Walt Disney Company in 2012, Disney Interactive assumed the role of developing Star Wars games for the casual gaming market, while Electronic Arts would develop Star Wars games for the core gaming market through an exclusive license (although LucasArts did retain the ability to license Star Wars games to other developers for the casual gaming market). At E3 2013, Disney and Square Enix released a teaser trailer for Kingdom Hearts III, after seven years without a new console Kingdom Hearts game being announced since Kingdom Hearts II. The game would release nearly six years later, in January 2019.
Disney Interactive Studios lost more than $200 million per year from 2008 to 2012, a period in which it shut down Propaganda Games, Black Rock Studio and Junction Point Studios, and its co-president John Pleasants stepped down in November 2013 after the launch of Disney Infinity. On March 6, 2014, 700 employees were laid off. After the cancellation of Disney Infinity, Disney Interactive Studios closed in 2016.
List of games
The company also published games from Q Entertainment worldwide except in Asia: Lumines II, the sequel to the puzzle game for the PSP system; Lumines Plus, a new version of Lumines for the PlayStation 2; Every Extend Extra, a puzzle shooter; and Meteos: Disney Edition, a version of the popular Meteos game for the Nintendo DS with Disney characters.
The company revealed a lineup of games at E3 2006, which included Turok, a re-imagining of the video game series of the same name, and Desperate Housewives: The Game, based on the hit television show. Disney Interactive Studios is credited in all entries in the Kingdom Hearts franchise, although the original-release box art of each entry carries a different logo and company name, since the company happened to be re-branded between releases. Notably, however, the company is not credited with actually developing the games.
Divisions
Moved to Disney Interactive
Disney Mobile
Disney Online
Playdom (later defunct)
Former/defunct
Avalanche Software, based in Salt Lake City, Utah. Acquired April 2005. Shut down May 2016. Later re-opened and sold to Warner Bros. Interactive Entertainment in January 2017.
Black Rock Studio, acquired as Climax Racing in September 2006 and closed in July 2011.
Creature Feep, 2009–2015.
Fall Line Studios, 2006–2009, merged into Avalanche Software.
Junction Point Studios, based in Austin, Texas. Acquired July 2007. Shut down in January 2013.
Propaganda Games, 2005–2011.
Wideload Games, based in Chicago, Illinois. Acquired September 8, 2009. Shut down March 6, 2014.
Rocket Pack, 2010–2015.
Gamestar, based in China. Acquired February 2009; defunct.
References
1988 establishments in California 2016 disestablishments in California Companies based in Glendale, California Defunct companies based in Greater Los Angeles Defunct software companies of the United States Defunct video game companies of the United States Disney Interactive Disney video games Software companies based in California Software companies disestablished in 2016 Software companies established in 1988 Technology companies based in Greater Los Angeles Video game companies based in California Video game companies disestablished in 2016 Video game companies established in 1988 Video game development companies Video game publishers
24594899
https://en.wikipedia.org/wiki/Prey%20%28software%29
Prey (software)
Prey is software and an online platform for mobile device tracking, management, and protection, available for laptops, tablets, and mobile phones. The software and service are developed by the Chilean company Prey Inc., successor to Fork Ltd., the company that originally developed it. Prey was originally created by the developer Tomás Pollak, who together with Carlos Yaconi, the current CEO of the company, founded Fork Ltd. and released the first version of Prey for Linux.
Functioning
Prey started as anti-theft software for recovering lost mobile devices and evolved into a remote management service with device and data protection. The service works through a client, or application, installed on the devices that are to be protected.
Company
Prey Inc. is a private software development company. It currently operates offices in San Francisco and in Santiago de Chile.
Features
The main feature of the platform is location tracking for any of the supported devices, with active monitoring of device movement. The service also reacts automatically to location triggers, such as unwanted movements, performing security actions as soon as the event is detected. It provides a set of features that give the user remote security capabilities in case of theft or loss: information can be remotely recovered or wiped from a lost or stolen device; management tools streamline processes and maintain a device inventory for global monitoring; and an evidence-generation tool assists device recovery.
Source code
The source code of Prey's agents, or applications, is open source under the GNU General Public License (GPLv3). The online platform's code and infrastructure are private and the property of Prey Inc.
Versions
The first version of the agent was originally released for Linux and Mac OS X in March 2009, and for Microsoft Windows in April 2009. Currently, Prey is available for laptops, tablets and mobile devices on the macOS, Windows, Linux, Android, and iOS operating systems.
References
External links
Interview with Tomás Pollak
Cross-platform free software Laptops Free security software Free and open-source Android software Location-based software Theft Security software
25315238
https://en.wikipedia.org/wiki/David%20Mount
David Mount
David Mount is a professor in the Department of Computer Science at the University of Maryland, College Park, whose research is in computational geometry.
Biography
Mount received a B.S. in Computer Science from Purdue University in 1977 and his Ph.D. in Computer Science, also from Purdue, in 1983 under the advisement of Christoph Hoffmann. He began teaching at the University of Maryland in 1984 and is a professor in the Department of Computer Science there. As a teacher, he won the University of Maryland College of Computer, Mathematical and Physical Sciences Dean's Award for Excellence in Teaching in 2005 and 1997, as well as other teaching awards, including the Hong Kong University of Science and Technology School of Engineering Award for Teaching Excellence Appreciation in 2001.
Research
Mount's main area of research is computational geometry, the branch of algorithms devoted to solving problems of a geometric nature. This field includes problems from classic geometry, like the closest pair of points problem, as well as more recent applied problems, such as the computer representation and modeling of curves and surfaces. In particular, Mount has worked on the k-means clustering problem, nearest neighbor search, and point location.
Mount has worked on developing practical algorithms for k-means clustering, a problem known to be NP-hard. The most common algorithm used is Lloyd's algorithm, which is heuristic in nature but performs well in practice. He and others later showed how k-d trees could be used to speed up Lloyd's algorithm. They have implemented this algorithm, along with some additional improvements, in the software library KMeans.
Mount has worked on the nearest neighbor and approximate nearest neighbor search problems. By allowing the algorithm to return an approximate solution to the nearest neighbor query, a significant speedup in space and time complexity can be obtained. One class of approximate algorithms takes as input an error parameter ε and builds a data structure that can be stored efficiently (low space complexity) and that returns the ε-approximate nearest neighbor quickly (low time complexity). In co-authored work with Arya, Netanyahu, R. Silverman and A. Wu, Mount showed that the approximate nearest neighbor problem could be solved efficiently in spaces of low dimension. The data structure described in that paper formed the basis of the ANN open-source library for approximate nearest neighbor searching. In subsequent work, he investigated the computational complexity of approximate nearest neighbor searching. Together with co-authors Arya and Malamatos, he provided efficient space–time tradeoffs for approximate nearest neighbor searching, based on a data structure called the AVD (or approximate Voronoi diagram).
Mount has also worked on point location, which involves preprocessing a planar polygonal subdivision S of size n so that the cell of the subdivision containing a query point can be determined quickly. The paper shows how to construct a compact data structure whose expected query time is close to the entropy H of the probability distribution of the cells in which the query points lie.
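To make the ε-approximate guarantee described above concrete, the following is a minimal sketch using SciPy's k-d tree rather than Mount's ANN library itself; the eps parameter plays the role of the error parameter, so the reported neighbor is guaranteed to be at most a factor (1 + eps) farther away than the true nearest neighbor. The data set, dimension and parameter values are illustrative assumptions only.

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = rng.random((10000, 3))        # a low-dimensional point set
query = rng.random(3)

tree = cKDTree(points)                                      # preprocessing step
exact_dist, exact_idx = tree.query(query, k=1)              # exact nearest neighbor
approx_dist, approx_idx = tree.query(query, k=1, eps=0.5)   # (1 + eps)-approximate query

# The approximate answer may differ from the exact one, but its distance is bounded:
assert approx_dist <= (1 + 0.5) * exact_dist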
In addition to the design and analysis of algorithms in computational geometry, Mount has worked on the implementation of efficient algorithms in software libraries such as:
ANN – approximate nearest neighbor searching
ISODATA – efficient implementation of a popular clustering algorithm
KMeans – k-means clustering
Most cited works
As of December 8, 2009, here is a list of his most cited works (according to Google Scholar) and their main contributions, listed in decreasing order of citations:
An Optimal Algorithm for Approximate Nearest Neighbor Searching in Fixed Dimensions – This paper gives an algorithm that answers a query in time logarithmic in the number of points, with a constant factor that depends on both the number of dimensions and the approximation error ε, and finds a neighbor that is at most a factor (1 + ε) farther away than the true nearest neighbor.
An Efficient k-Means Clustering Algorithm: Analysis and Implementation – This paper provides a simpler and more efficient implementation of Lloyd's algorithm, which is used in k-means clustering. The algorithm is called the filtering algorithm.
The Discrete Geodesic Problem – This paper computes the shortest path between a source and a destination constrained to travel on the surface of a given (possibly nonconvex) polyhedron. The algorithm takes O(n² log n) time to find the shortest path to the first destination, and the shortest path to any additional destination (from the same source) can then be reported in O(log n) time, where n is the number of vertices.
References
External links
Data Structures and Algorithms in C++
Year of birth missing (living people) Living people American computer scientists Researchers in geometric algorithms Purdue University alumni University of Maryland, College Park faculty
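As a rough illustration of the Lloyd's-algorithm iteration that the filtering algorithm listed under "Most cited works" accelerates, here is plain Lloyd's algorithm in NumPy; this is not Mount's filtering implementation, and the data set and parameters are assumptions made for the example.

import numpy as np

def lloyd_kmeans(points, k, iterations=20, seed=0):
    """Plain Lloyd's algorithm: alternate point assignment and centroid update."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iterations):
        # Assign each point to its nearest center.
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of the points assigned to it.
        for j in range(k):
            members = points[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers, labels

points = np.random.default_rng(1).random((500, 2))
centers, labels = lloyd_kmeans(points, k=3)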
40103226
https://en.wikipedia.org/wiki/Comparison%20of%20embroidery%20software
Comparison of embroidery software
Embroidery software is software that helps users create embroidery designs. While a large majority of embroidery software is specific to machine embroidery, there is also software available for use with hand embroidery, such as cross-stitch.
Comparison of embroidery software
References
Embroidery software
1470123
https://en.wikipedia.org/wiki/EAGLE%20%28program%29
EAGLE (program)
EAGLE is a scriptable electronic design automation (EDA) application with schematic capture, printed circuit board (PCB) layout, auto-router and computer-aided manufacturing (CAM) features. EAGLE stands for Easily Applicable Graphical Layout Editor and was developed by CadSoft Computer GmbH. The company was acquired by Autodesk Inc. in 2016.
Features
EAGLE contains a schematic editor for designing circuit diagrams. Schematics are stored in files with the .SCH extension, and parts are defined in device libraries with the .LBR extension. Parts can be placed on many sheets and connected together through ports.
The PCB layout editor stores board files with the extension .BRD. It allows back-annotation to the schematic and auto-routing to automatically connect traces based on the connections defined in the schematic.
EAGLE saves Gerber and PostScript layout files as well as Excellon and Sieb & Meyer drill files. These are standard file formats accepted by PCB fabrication companies, but given EAGLE's typical user base of small design firms and hobbyists, many PCB fabricators and assembly shops also accept EAGLE board files (with extension .BRD) directly and export optimized production files and pick-and-place data themselves.
EAGLE provides a multi-window graphical user interface and menu system for editing, project management and customizing the interface and design parameters. The system can be controlled via mouse, keyboard hotkeys or by entering specific commands at an embedded command line. Keyboard hotkeys can be user-defined. Multiple repeating commands can be combined into script files (with the file extension .SCR). It is also possible to explore design files using an EAGLE-specific object-oriented programming language (with the extension .ULP).
History
The German CadSoft Computer GmbH was founded by Rudolf Hofer and Klaus-Peter Schmidinger in 1988 to develop EAGLE, a 16-bit PCB design application for DOS. Originally, the software consisted of a layout editor with part libraries only. An auto-router module became available as an optional component later on. With EAGLE 2.0, a schematics editor was added in 1991. The software used BGI video drivers, and XPLOT to print. In 1992, version 2.6 changed the definition of layers, but designs created under older versions (up to 2.05) could be converted into the new format using the provided UPDATE26.EXE utility. EAGLE 3.0 was changed to be a 32-bit extended DOS application in 1994. Support for OS/2 Presentation Manager was added with version 3.5 in April 1996. This version also introduced multi-window support with forward-/backward-annotation, user-definable copper areas, and a built-in programming language with ULPs. It was also the first to no longer require a dongle. In 2000, EAGLE version 4.0 officially dropped support for DOS and OS/2 but, now being based on Qt 3, added native support for Windows and was among the first professional electronic CAD tools available for Linux. A 32-bit DPMI version of EAGLE 4.0 running under DOS was still available on special request in order to help support existing customers, but it was not released commercially. Much later, in 2015, a special version of EAGLE 4.09r2 was made available by CadSoft to ease installation under Windows 7. Starting with version 4.13, EAGLE became available for Mac OS X, with versions before 5.0.0 still requiring X11. Version 5.0.0 officially dropped support for Windows 9x and Windows NT 3.x/4.x in 2008. This version was based on Qt 4 and introduced user-definable attributes.
On 24 September 2009, Premier Farnell announced the acquisition of CadSoft Computer GmbH. Version 5.91.0 introduced an XML-based file format in 2011 but continued to read the older binary format. It could not, however, write files in the former format, thereby not allowing collaboration with EAGLE 5.12.0 and earlier. EAGLE 6.0.0 no longer supported Mac OS X on the Power PC platform (only on Intel Macs), and the minimum requirements were changed to Mac OS X 10.6, Linux 2.6 and Windows XP. This version also introduced support for assembly variants and differential pair routing with length matching and automatic meandering. Version 7.0.0 brought hierarchical designs, a new gridless topological pre-router called "TopRouter" for the conventional ripup-and-retry auto-router as well as multi-core support. Version 7.3.0 introduced native 64-bit versions for all three platforms in 2015. Version 7.6.0 dropped support for the 32-bit Mac OS X version in 2016. EAGLE 6.x.x continues to read EAGLE 7.x.x design files for as long as the hierarchical design feature isn't used. On 27 June 2016, Autodesk announced the acquisition of CadSoft Computer GmbH from Premier Farnell, with Premier Farnell continuing to distribute CadSoft products for Autodesk. Autodesk changed the license to a subscription-only model starting with version 8.0.0 in 2017. Only 64-bit versions remain available. The file format used by EAGLE 8.0.0 and higher is not backward compatible with earlier EAGLE versions, however it does provide an export facility for saving an EAGLE 7.x compatible version of the design. On 7 January 2020, EAGLE 9.5.2 was discontinued as a standalone product and only licensed to users as a bundled item (Fusion Electronics) with an Autodesk Fusion 360 subscription license. The last standalone version of EAGLE is 9.6.2 as of 27 May 2020. Fusion Electronics design files carry a version 9.7.0 designation. License model Since EAGLE version 8.0.0, there are Premium, Standard, Free, and Student & educator editions, with the Standard and Premium versions sold on a monthly or annual subscription basis, requiring online reactivation at least every 14 days (30 days since version 9.0). In January 2020, EAGLE 9.5.2 was discontinued as a standalone product and is only licensed to users as a bundled item with an Autodesk Fusion 360 subscription. Comparison of features for the various available editions: For comparison, the former (no longer obtainable) perpetual licensing scheme for EAGLE 7.x.x with costs referring to the 2016 prices for a single-user license: Community A large group of textual and video tutorials exists for beginners to design their own PCBs. The DIY electronics site SparkFun uses EAGLE and releases the EAGLE files for boards designed in-house. SparkFun Electronics is a company that has grown due to the hobbyist market exemplified by Make magazine and others. Many of these companies offer EAGLE part libraries which define schematic shapes, pinouts, and part sizes to allow for correct layout in the PCB layout editor. Other popular libraries include Adafruit, Arduino, SnapEDA, and Dangerous Prototypes, element14 (a subsidiary of Farnell, former owners of CadSoft) also have some libraries available from their site. Using ULPs to convert EAGLE .BRD files into Specctra-compatible design files (with file extension .DSN) it is possible to export designs for usage in conjunction with advanced external autorouters such as KONEKT ELECTRA, Eremex TopoR or Alfons Wirtz's FreeRouting. 
For further touching-up, the finished designs in session format can be imported back into EAGLE via .SES-to-.SCR script file converters.
Controversies
In spring 1991, the dongle protection scheme of EAGLE 2.0 was cracked, causing a 30% decline in sales, while sales of a reduced demo version with a printed manual saw a significant increase. As a consequence, in 1992 CadSoft sent thousands of floppy disks containing a new demo of EAGLE 2.6 to potential users, in particular those who had ordered the former demo but had not subsequently bought the full product. The new demo, however, also contained spy code that scanned the user's hard disk for illegal copies of EAGLE. If the program found traces of such copies, it displayed a message indicating that the user was entitled to order a free printed manual using the displayed special order code, which was actually a number encoding the evidence found on the user's machine. Users who sent in the filled-out form received a reply from CadSoft's attorneys. The act of spying, however, was itself illegal under German law.
In 2014, EAGLE 7.0.0 introduced a new Flexera FLEXlm-based licensing model, which was not well received by the user community, so CadSoft returned to the former model of independent perpetual licenses with EAGLE 7.1.0. Despite announcements to the contrary in 2016, Autodesk switched to a subscription-only licensing model with EAGLE 8.0.0 in January 2017. Without an online connection to a licensing server to verify the licensing status every two weeks (four weeks since version 9.0.0), the software would fall back to the functionality of the freeware version. This caused an uproar in the user community, in particular among those who work in secure or remote environments without direct Internet access, and among users for whom it is mandatory to retain full access to their designs even after extended periods of time (several years up to decades) without depending on third parties such as Autodesk to allow reactivation (who may no longer be around or support the product by then). Many users have indicated that they would refuse to upgrade under a subscription model and would rather migrate to other electronic design applications such as KiCad.
See also
Comparison of EDA software
List of free electronics circuit simulators
Video Disk Recorder (VDR) – another piece of software written by Klaus-Peter Schmidinger
Notes
References
Further reading
(NB. Includes a copy of EAGLE 4.09r2.)
External links
(Autodesk's EAGLE web support forums)
news://news.cadsoft.de (CadSoft's EAGLE support newsgroups via NNTP)
ftp://ftp.cadsoft.de/eagle/ (CadSoft's archive of old EAGLE versions)
Autodesk acquisitions Electronic design automation software Electronic design automation software for Linux Engineering software that uses Qt Proprietary software that uses Qt Software that uses Qt 1988 software
32325016
https://en.wikipedia.org/wiki/Survey%20data%20collection
Survey data collection
With the application of probability sampling in the 1930s, surveys became a standard tool for empirical research in social sciences, marketing, and official statistics. The methods involved in survey data collection are any of a number of ways in which data can be collected for a statistical survey. These are methods that are used to collect information from a sample of individuals in a systematic way. First there was the change from traditional paper-and-pencil interviewing (PAPI) to computer-assisted interviewing (CAI). Now, face-to-face surveys (CAPI), telephone surveys (CATI), and mail surveys (CASI, CSAQ) are increasingly replaced by web surveys. Modes of data collection There are several ways of administering a survey. Within a survey, different methods can be used for different parts. For example, interviewer administration can be used for general topics but self-administration for sensitive topics. The choice between administration modes is influenced by several factors, including 1) costs, 2) coverage of the target population (including group-specific preferences for certain modes), 3) flexibility of asking questions, 4) respondents’ willingness to participate and 5) response accuracy. Different methods create mode effects that change how respondents answer. The most common modes of administration are listed under the following headings. Mobile surveys Mobile data collection or mobile surveys is an increasingly popular method of data collection. Over 50% of surveys today are opened on mobile devices. The survey, form, app or collection tool is on a mobile device such as a smart phone or a tablet. These devices offer innovative ways to gather data, and eliminate the laborious "data entry" (of paper form data into a computer), which delays data analysis and understanding. By eliminating paper, mobile data collection can also dramatically reduce costs: one World Bank study in Guatemala found a 71% decrease in cost while using mobile data collection, compared to the previous paper-based approach. SMS surveys can reach any handset, in any language and in any country. As they are not dependent on internet access and the answers can be sent when its convenient, they are a suitable mobile survey data collection channel for many situations that require fast, high volume responses. As a result, SMS surveys can deliver 80% of responses in less than 2 hours and often at much lower cost compared to face-to-face surveys, due to the elimination of travel/personnel costs. Apart from the high mobile phone penetration, further advantages are quicker response times and the possibility to reach previously hard-to-reach target groups. In this way, mobile technology allows marketers, researchers and employers to create real and meaningful mobile engagement in environments different from the traditional one in front of a desktop computer. However, even when using mobile devices to answer the web surveys, most respondents still answer from home. Online surveys Online (Internet) surveys are becoming an essential research tool for a variety of research fields, including marketing, social and official statistics research. According to ESOMAR online survey research accounted for 20% of global data-collection expenditure in 2006. They offer capabilities beyond those available for any other type of self-administered questionnaire. 
Online consumer panels are also used extensively for carrying out surveys but the quality is considered inferior because the panelists are regular contributors and tend to be fatigued. However, when estimating the measurement quality (defined as product of reliability and validity) using a multitrait-mutlimethod approach (MTMM), some studies found a quite reasonable quality and even that the quality of a series of questions in an online opt-in panel (Netquest) was very similar to the measurement quality for the same questions asked in the European Social Survey (ESS), which is a face-to-face survey. Some studies have compared the quality of face-to-face surveys and/or telephone surveys with that of online surveys, for single questions, but also for more complex concepts measured with more than one question (also called Composite Scores or Index). Focusing only on probability-based surveys (also for the online ones), they found overall that the face-to-face (using show-cards) and web surveys have quite similar levels of measurement quality, whereas the telephone surveys were performing worse. Other studies comparing paper-and-pencil questionnaires with web-based questionnaires showed that employees preferred online survey approaches to the paper-and-pencil format. There are also concerns about what has been called "ballot stuffing" in which employees make repeated responses to the same survey. Some employees are also concerned about privacy. Even if they do not provide their names when responding to a company survey, can they be certain that their anonymity is protected? Such fears prevent some employees from expressing an opinion. Advantages of online surveys Web surveys are faster, simpler, and cheaper. However, lower costs are not so straightforward in practice, as they are strongly interconnected to errors. Because response rate comparisons to other survey modes are usually not favourable for online surveys, efforts to achieve a higher response rate (e.g., with traditional solicitation methods) may substantially increase costs. The entire data collection period is significantly shortened, as all data can be collected and processed in little more than a month. Interaction between the respondent and the questionnaire is more dynamic compared to e-mail or paper surveys. Online surveys are also less intrusive, and they suffer less from social desirability effects. Complex skip patterns can be implemented in ways that are mostly invisible to the respondent. Pop-up instructions can be provided for individual questions to provide help with questions exactly where assistance is required. Questions with long lists of answer choices can be used to provide immediate coding of answers to certain questions that are usually asked in an open-ended fashion in paper questionnaires. Online surveys can be tailored to the situation (e.g., respondents may be allowed save a partially completed form, the questionnaire may be preloaded with already available information, etc.). Online questionnaires may be improved by applying usability testing, where usability is measured with reference to the speed with which a task can be performed, the frequency of errors and user satisfaction with the interface. Key methodological issues of online surveys Sampling. 
The difference between probability samples (where the inclusion probabilities for all units of the target population is known in advance) and non-probability samples (which often require less time and effort but generally do not support statistical inference) is crucial. Probability samples are highly affected by problems of non-coverage (not all members of the general population have Internet access) and frame problems (online survey invitations are most conveniently distributed using e-mail, but there are no e-mail directories of the general population that might be used as a sampling frame). Because coverage and frame problems can significantly impact data quality, they should be adequately reported when disseminating the research results. Invitations to online surveys. Due to the lack of sampling frames many online survey invitations are published in the form of an URL link on web sites or in other media, which leads to sample selection bias that is out of research control and to non-probability samples. Traditional solicitation modes, such as telephone or mail invitations to web surveys, can help overcoming probability sampling issues in online surveys. However, such approaches are faced with problems of dramatically higher costs and questionable effectiveness. Non-response. Online survey response rates are generally low and also vary extremely – from less than 1% in enterprise surveys with e-mail invitations to almost 100% in specific membership surveys. In addition to refusing participation, terminating surveying during the process or not answering certain questions, several other non-response patterns can be observed in online surveys, such as lurking respondents and a combination of partial and item non-response. Response rates can be increased by offering monetary or some other type of incentive to the respondents, by contacting respondents several times (follow-up), and by keeping the questionnaire difficulty as low as possible. There are draw-backs to using an incentive to garner a response. Non-bias responses could be questioned in this type of situation. The most concrete way to gain feedback is to publicize what is done with the results. To take concrete actions based on feedback and to show that to the customer base is extremely motivating to customers to continue to let their voice be heard. Acquiescence bias. Due to a phenomenon inherently present in human nature, many people have acquiescent personalities and are more likely to agree with statements than disagree - regardless of the content. Often, those people see the question-asker as an expert in their field which causes them to be more likely to react positively to the question asked. That being said, acquiescence bias (also known as the friendliness bias or “yea-saying”) manifests itself when a respondent shows a tendency to agree with whatever it is that you are asking or stating, even though they might not actually agree. Platform Issues. Lack of familiarity with the platform used can cause participants and clients confusion. Questionnaire design. While modern web questionnaires offer a range of design features (different question types, images, multimedia), the use of such elements should be limited to the extent necessary for respondents to understand questions or to stimulate the response. It should not affect their responses, because that would mean lower validity and reliability of data. 
Appropriate questionnaire design can help lower the measurement error that can also arise from the respondents or the survey mode itself (respondent's motivation, computer literacy, abilities, privacy concerns, etc.).
Post-survey adjustments. Various robust procedures have been developed for situations where sampling deviates from probability selection, or where non-coverage and non-response problems arise. The standard statistical inference procedures (e.g. confidence interval calculations and hypothesis testing) still require a probability sample. Actual survey practice, particularly in marketing research and in public opinion polling, largely neglects the principles of probability sampling and increasingly requires the statistical profession to specify the conditions under which non-probability samples may work. These issues, and potential remedies, are discussed in a number of sources.
Telephone
Use of interviewers encourages sample persons to respond, leading to higher response rates.
Interviewers can increase comprehension of questions by answering respondents' questions.
Fairly cost-efficient, depending on the local call charge structure.
Good for large national (or international) sampling frames.
Some potential for interviewer bias (e.g., some people may be more willing to discuss a sensitive issue with a female interviewer than with a male one).
Cannot be used for non-audio information (graphics, demonstrations, taste/smell samples).
Three types:
Traditional telephone interviews
Computer-assisted telephone dialing
Computer-assisted telephone interviewing (CATI)
Mail
The questionnaire may be handed to the respondents or mailed to them, but in all cases it is returned to the researcher via mail.
An advantage is that the cost is very low, since bulk postage is cheap in most countries.
Long delays, often several months, before the surveys are returned and statistical analysis can begin.
Not suitable for issues that may require clarification.
Respondents can answer at their own convenience (allowing them to break up long surveys; also useful if they need to check records to answer a question).
No interviewer bias.
Non-response bias can be corrected by extrapolation across waves.
A large amount of information can be obtained: some mail surveys are as long as 50 pages.
Response rates can be improved by using mail panels.
Response rates can be improved by using prepaid monetary incentives.
Response rates are affected by the class of mail through which the survey was sent.
Members of the panel have agreed to participate.
Panels can be used in longitudinal designs where the same respondents are surveyed several times.
Face-to-face
Suitable for locations where telephone or mail networks are not developed.
Potential for interviewer bias.
Easy to manipulate by completing multiple times to skew results.
Mixed-mode surveys
Researchers can combine several of the above methods for data collection. For example, researchers can invite shoppers at malls and send willing participants questionnaires by email. With the introduction of computers to the survey process, survey mode now includes combinations of different approaches, or mixed-mode designs. Some of the most common methods are:
Computer-assisted personal interviewing (CAPI): The computer displays the questions on screen, the interviewer reads them to the respondent, and then enters the respondent's answers.
Audio computer-assisted self-interviewing (audio CASI): The respondent operates the computer; the computer displays the question on the screen and plays recordings of the questions, and the respondent then enters their answers.
Computer-assisted telephone interviewing (CATI)
Interactive voice response (IVR): The computer plays recordings of the questions to respondents over the telephone, who then respond by using the telephone keypad or by speaking their answers aloud.
Web surveys: The computer administers the questions online.
See also
Assessment
Comparison of survey software
Data collection system
References
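To illustrate the post-survey adjustment idea discussed above under "Key methodological issues of online surveys", here is a minimal post-stratification weighting sketch in Python; the population shares and sample counts are invented for the example, and the method shown is only one of the adjustment procedures described in the literature.

# Weight respondents so that the sample's distribution over known groups (here, age bands)
# matches the assumed population distribution.
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}   # assumed census figures
sample_counts    = {"18-34": 120,  "35-54": 150,  "55+": 230}     # respondents per group
n_sample = sum(sample_counts.values())

weights = {
    group: population_share[group] / (sample_counts[group] / n_sample)
    for group in population_share
}
# Every respondent in a group receives that group's weight when estimating means or totals.
print(weights)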
67447051
https://en.wikipedia.org/wiki/Carlos%20Sainz%3A%20World%20Rally%20Championship
Carlos Sainz: World Rally Championship
Carlos Sainz: World Rally Championship is a 1990 racing video game co-developed by the Spanish companies Zigurat Software (previously known as Made in Spain) and Arcadia Software, and published by Zigurat for Amstrad CPC, MS-DOS, MSX and ZX Spectrum. Featuring Spanish rally driver Carlos Sainz and themed around rallying, the game has players race across various locations to qualify for the next course of the World Rally Championship, letting them modify the characteristics of the Toyota Celica to suit each course.
Carlos Sainz: World Rally Championship was created in conjunction with Sito Pons 500cc Grand Prix by most of the same team at Zigurat who had worked on licensed sports titles such as Paris-Dakar (1988) and Emilio Sanchez Vicario Grand Slam, together with co-developer Arcadia, for which it was the final release before the studio abandoned the video game industry. The game originated during a meeting between Zigurat and Arcadia to discuss future projects, where various ideas were pitched. The idea of creating an accurate rally simulator came from motorsports being a hobby among Zigurat staff, with the programmers finding rally a spectacular discipline and a good fit because of its graphic and dynamic possibilities. Zigurat hired Sainz, who had yet to become a world champion at the time, and development of the project started afterwards. Conversions for Amiga and Atari ST were planned but never released.
Carlos Sainz: World Rally Championship proved to be a success for Zigurat and has garnered positive reviews from critics across all platforms since its release; praise went to the addictive gameplay, sense of speed, controls and sound, but some reviewers were mixed on the graphics and difficulty, and the limited technical complexity was criticized. After its launch, Zigurat was contacted by Gaelco to work on an arcade game based on the World Rally Championship featuring Sainz; because he changed teams from Toyota to Lancia near the end of development, the game was ultimately reworked and released as World Rally (1993).
Gameplay
Carlos Sainz: World Rally Championship is a top-down rally racing game reminiscent of Paris-Dakar (1988), in which players observe from above and race as Spanish rally driver Carlos Sainz, driving the Toyota Celica Turbo 4WD across locations that make up the World Rally Championship, such as Portugal, Acropolis, 1000 Lakes, San Remo, Cataluña and the RAC, participating in increasingly difficult two-lap races to set the best time record possible and qualify for the next course in order to win the championship. The game employs an isometric perspective, as in Sito Pons 500cc Grand Prix, to portray a television-broadcast-style viewpoint. Every location, composed of six courses each, has its own weather conditions and hazards that change how the vehicle is controlled on the track. Unlike other racing simulators released at the time, there are no AI-controlled opponents on-screen during races, but their best times are shown when the race is finished. Players have the option to modify the car's characteristics to better suit each course, and a training option is also available prior to starting a race. Players can resume their progress via password as well.
Development
Carlos Sainz: World Rally Championship was created in conjunction with Sito Pons 500cc Grand Prix by Zigurat Software (previously Made in Spain), whose staff had worked on licensed sports titles such as Paris-Dakar (1988) and Emilio Sanchez Vicario Grand Slam (featuring former Spanish tennis player Emilio Sánchez), and co-developer Arcadia Software. Fernando Rada and Jorge Granados of Zigurat acted as co-producers. José Miguel Saiz and Manuel Rosas of Arcadia served as co-programmers, and José Antonio Carrera Merino as artist. The team recounted the project's development process through interviews.
Carlos Sainz: World Rally originated during a meeting between Zigurat and Arcadia to discuss future projects, where various ideas were pitched, such as a golf game and a car game. The idea of creating an accurate rally simulator arose because motorsports was a hobby among Zigurat staff, and the programmers found rally a spectacular discipline and a good fit because of its graphic and dynamic possibilities. Zigurat hired Spanish rally driver Carlos Sainz, who had yet to become a world champion at the time, and the project started development afterwards. Because it was occupied with the development of Sito Pons 500cc, Zigurat assigned the license to Arcadia, supervising production and acting as advisors. According to Miguel Saiz, Arcadia received support from Sainz and his team early in production, as well as advice from Sainz's co-driver Luis Moya, passed on through Zigurat, to create the different terrain effects. Sainz also advised the Zigurat team and collaborated on development, designing the vehicle and race courses.
Carlos Sainz shares the same isometric perspective as Sito Pons, which allowed the team to adequately simulate car movement, eased the composition of curves and gave a greater sense of speed, unlike several racing simulation titles that used different perspectives. However, Antonio Carrera Merino stated that the graphics proved time-consuming and difficult because the vehicle had 360-degree movement, requiring extra help to create the graphics and leading the team to opt for digitization. Saiz also stated that working with terrain effects like skids and jumps proved complicated, and that designing courses was a complex issue as well, due to memory constraints and the number of maps that needed to be implemented. Saiz claimed that the game made use of variables to make the driving as realistically accurate as possible, with Zigurat providing Arcadia with documentation of car behavior in order to program the driving physics. Saiz also claimed that the title used streaming code that modified the driving physics depending on the conditions. Both Zigurat and Arcadia spent many hours testing prior to launch, with Rada polishing the driving physics at Arcadia.
Release
Carlos Sainz: World Rally Championship was published exclusively in Spain by Zigurat Software for Amstrad CPC, MS-DOS, MSX and ZX Spectrum on December 3, 1990. Both the CPC and MSX versions were also distributed by Erbe Software. The CPC version makes use of the computer's Mode 0 display resolution. It was Arcadia's final release before the studio abandoned the video game industry due to external factors, according to José Miguel Saiz and José Antonio Carrera Merino. Prior to launch, Carlos Sainz and the development team presented the game to the press at the Toyota headquarters in Madrid. Versions for the Amiga and Atari ST were planned but never released. In 1991, the title was included as part of the Pack Powersports compilation for all platforms.
Reception
Carlos Sainz: World Rally Championship was met with positive reviews from critics across all platforms and proved to be a local success for Zigurat. The Spanish magazine MicroHobby reviewed the ZX Spectrum version, praising the fast movement, visuals, sound, sense of realism and addictive quality. Similarly, Micromanía's José Emilio Barbero also reviewed the Spectrum version, commending the addictive and original gameplay, graphics, difficulty and sense of speed, stating that "Carlos Sainz is a new proof of the absolute dominance of Zigurat in the 8-bit field." Reviewing the MSX version, MSX Club's Jesús Manuel Montané drew comparisons with Konami titles like Hyper Rally and Road Fighter because of its premise; he was mixed on the graphical presentation and criticized the limited technical complexity, but gave positive remarks to the playability and controls, stating that "Carlos Sainz outperforms Sito Pons in many ways, despite being made by the usually mediocre Arcadia programming team." Mega Ocio's César Valencia P. reviewed the Amstrad CPC version and praised the detailed visuals, controls and sound but criticized the excessive difficulty of some sections.
Legacy
After the release of Carlos Sainz: World Rally, Zigurat was contacted by Gaelco to work on an arcade game based on the World Rally Championship featuring Carlos Sainz; because he changed teams from Toyota to Lancia near the end of development, the game was ultimately reworked and released as World Rally (1993). World Rally proved to be a breakthrough title for Gaelco across Europe and attained "mass success", selling around 23,000 units in the region and achieving a similar response worldwide. World Rally also gained a cult following, with Santa Ragione designer Pietro Riva stating that the game served as an inspiration for Wheels of Aurelia (2016).
Notes
References
External links
Carlos Sainz: World Rally Championship at GameFAQs
Carlos Sainz: World Rally Championship at MobyGames
1990 video games Amstrad CPC games Cancelled Amiga games Cancelled Atari ST games DOS games MSX games Off-road racing video games Racing video games Single-player video games Top-down racing video games Video games based on real people Video games developed in Spain Video games set in Finland Video games set in Italy Video games set in Greece Video games set in Portugal Video games set in Spain Video games set in the United Kingdom ZX Spectrum games
52044964
https://en.wikipedia.org/wiki/David%20Venable
David Venable
David "Dave" Venable (born January 11, 1978) is a former intelligence officer with the United States National Security Agency, and current cyber security professional and businessman. He is an author and speaker on the topics of cyber security, cyberwarfare, and international security; has developed security-related internet protocols; is a US patent holder; and has been named as one of the most influential people in security. Early life and education Venable was born in and grew up in Little Rock, Arkansas, and later attended the University of Arkansas, majoring in mathematics. After college, he joined the United States Air Force and studied Korean at the Defense Language Institute in Monterey, California, a Department of Defense educational and research institution which provides linguistic and cultural instruction to the DoD and other Federal Agencies. Venable has also pursued graduate education in mathematics at the University of Texas, and international relations at Harvard University. Career Until 2005 Venable served in several intelligence roles with the National Security Agency, including Computer Network Exploitation, Cyberwarfare, Information Operations, and Digital Network Intelligence in support of global anti-terrorism operations. He has also taught about these subjects while serving as adjunct faculty at the National Cryptologic School, a school within the National Security Agency that provides training to members of the United States Intelligence Community. After leaving federal service Venable founded and served as CEO of Vanda Security, a Dallas-based security consultancy, which ultimately became the security professional services practice of Masergy Communications where Venable currently serves as Vice President of Cyber Security. Venable regularly speaks at industry and government conferences including Black Hat Briefings and the Warsaw Security Forum, serves as a cyber security expert with think tanks and policy research institutes, serves on The Colony, Texas technology board, and is a cybersecurity expert and speaker with the United States Department of State. Bibliography Venable frequently contributes to and appears in Forbes, BBC, Harvard Business Review, Bloomberg Businessweek, InformationWeek, IDG Connect, and other media outlets in matters pertaining to cyber security, cyberwarfare, and international security. References 1978 births Living people American technology writers People associated with computer security Writers from Little Rock, Arkansas Businesspeople from Little Rock, Arkansas Military personnel from Little Rock, Arkansas National Security Agency people United States Air Force airmen Defense Language Institute alumni University of Texas alumni Harvard Graduate School of Arts and Sciences alumni American technology chief executives
40312837
https://en.wikipedia.org/wiki/Devicetree
Devicetree
In computing, a devicetree (also written device tree) is a data structure describing the hardware components of a particular computer so that the operating system's kernel can use and manage those components, including the CPU or CPUs, the memory, the buses and the integrated peripherals. The device tree was derived from SPARC-based computers via the Open Firmware project. The current Devicetree specification is targeted at smaller systems, but is still used with some server-class systems (for instance, those described by the Power Architecture Platform Reference). Personal computers with the x86 architecture generally do not use device trees, relying instead on various auto configuration protocols (e.g. ACPI) to discover hardware. Systems which use device trees usually pass a static device tree (perhaps stored in ROM) to the operating system, but can also generate a device tree in the early stages of booting. As an example, Das U-Boot and kexec can pass a device tree when launching a new operating system. On systems with a boot loader that does not support device trees, a static device tree may be installed along with the operating system; the Linux kernel supports this approach. The Devicetree specification is currently managed by a community named devicetree.org, which is associated with, among others, Linaro and Arm. Device Tree formats A device tree can hold any kind of data as internally it is a tree of named nodes and properties. Nodes contain properties and child nodes, while properties are name–value pairs. Device trees have both a binary format for operating systems to use and a textual format for convenient editing and management. Usage in Linux Given the correct device tree, the same compiled kernel can support different hardware configurations within a wider architecture family. The Linux kernel for the ARC, ARM, C6x, H8/300, MicroBlaze, MIPS, NDS32, Nios II, OpenRISC, PowerPC, RISC-V, SuperH, and Xtensa architectures reads device tree information; on ARM, device trees have been mandatory for all new SoCs since 2012. This can be seen as a remedy to the vast number of forks (of Linux and Das U-Boot) that have historically been created to support (marginally) different ARM boards. The purpose is to move a significant part of the hardware description out of the kernel binary, and into the compiled device tree blob, which is handed to the kernel by the boot loader, replacing a range of board-specific C source files and compile-time options in the kernel. It is specified in a Devicetree Source file (.dts) and is compiled into a Devicetree Blob or device tree binary (.dtb) file through the Devicetree compiler (DTC). Device tree source files can include other files, referred to as device tree source includes. It has been customary for ARM-based Linux distributions to include a boot loader, that necessarily was customized for specific boards, for example Raspberry Pi or Hackberry A10. This has created problems for the creators of Linux distributions as some part of the operating system must be compiled specifically for every board variant, or updated to support new boards. However, some modern SoCs (for example, Freescale i.MX6) have a vendor-provided boot loader with device tree on a separate chip from the operating system. A proprietary configuration file format used for similar purposes, the FEX file format, is a de facto standard among Allwinner SoCs. 
Usage in Windows
In Windows, an ACPI device tree is maintained by the Plug-and-Play manager to evaluate things like whether devices can be safely ejected.
Example
Example of Devicetree Source (DTS) format:

/dts-v1/;

/ {
        soc {
                flash_controller: flash-controller@4001e000 {
                        reg = <0x4001e000 0x1000>;

                        flash0: flash@0 {
                                label = "SOC_FLASH";
                                erase-block = <4096>;
                        };
                };
        };
};

In the example above, the line /dts-v1/; signifies version 1 of the DTS syntax. The tree has four nodes: / (root node), soc (stands for "system on a chip"), flash-controller@4001e000 and flash@0 (instance of flash which uses the flash controller). Besides these node names, the latter two nodes have labels flash_controller and flash0 respectively. The latter two nodes have properties, which represent name/value pairs. Property label has string type, property erase-block has integer type and property reg is an array of integers (32-bit unsigned values). Property values can refer to other nodes in the devicetree by their phandles. Phandle for a node with label flash0 would be written as &flash0. Phandles are also 32-bit values. Parts of the node names after the "at" sign (@) are unit addresses. Unit addresses specify a node's address in the address space of its parent node. The above tree could be compiled by the standard DTC compiler to binary DTB format or assembly. In Zephyr RTOS, however, DTS files are compiled into C header files (.h), which are then used by the build system to compile code for a specific board.
See also
PCI configuration space
Hardware abstraction
Open Firmware
References
External links
devicetree.org website
Device Tree Reference eLinux.org
Embedded Power Architecture Platform Requirements (ePAPR)
About The Device Tree
Data structures by computing platform Operating system technology Firmware ARM architecture
2421243
https://en.wikipedia.org/wiki/Path-vector%20routing%20protocol
Path-vector routing protocol
A path-vector routing protocol is a network routing protocol that maintains dynamically updated path information. Updates that have looped through the network and returned to the same node are easily detected and discarded. This approach is sometimes used in Bellman–Ford routing algorithms to avoid "count to infinity" problems. It differs from both distance-vector routing and link-state routing. Each entry in the routing table contains the destination network, the next router and the path to reach the destination.
Border Gateway Protocol (BGP) is an example of a path-vector protocol. In BGP, the autonomous system boundary routers (ASBR) send path-vector messages to advertise the reachability of networks. Each router that receives a path-vector message must verify the advertised path according to its policy. If the message complies with its policy, the router modifies its routing table and the message before sending the message to the next neighbor. It modifies the routing table to record the autonomous systems that are traversed in order to reach the destination system. It modifies the message to add its own AS number and to replace the next-router entry with its identification.
Exterior Gateway Protocol (EGP) does not use path vectors. It has three phases:
Initiation
Sharing
Updating
Of note, BGP is commonly classified as an exterior gateway protocol (EGP) given its role in connecting autonomous systems (AS). Routing protocols used within an AS are, by contrast, referred to as interior gateway protocols (IGPs), which include OSPF and IS-IS, among others. That said, BGP can also be used within an AS, which typically occurs in very large organizations such as Facebook or Microsoft.
See also
Link-state routing protocol
Routing protocols
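A minimal sketch of the loop-detection and path-prepending behavior described above; this is a simplified model rather than an implementation of BGP itself, and the AS numbers, table layout and message format are invented for the example.

def process_advertisement(my_as, routing_table, destination, as_path):
    """Apply the path-vector rules: discard looped updates, otherwise record the
    path and prepend our own AS number before propagating the advertisement."""
    if my_as in as_path:
        return None                      # the update has looped back to us: discard it
    routing_table[destination] = {"next_hop": as_path[0], "path": as_path}
    return [my_as] + as_path             # the path we advertise to our neighbors

table = {}
outgoing = process_advertisement(65001, table, "203.0.113.0/24", [65002, 65003])
print(table)      # path through 65002 and 65003 recorded
print(outgoing)   # [65001, 65002, 65003]: our AS number prepended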
60961296
https://en.wikipedia.org/wiki/Pingdom
Pingdom
Pingdom AB is a Swedish website monitoring software as a service company launched in Stockholm and later acquired by the Austin, Texas-based SolarWinds. The company releases annual reports on global internet use, which are frequently cited in academic publications and by media organizations as a source of Internet-related statistics. History Pingdom was launched in 2005 in Västerås, Sweden, but only became popular in 2007. The website monitoring company was founded by the Swedish entrepreneur Sam Nurmi, who had previously founded Loopia Web Hosting and who would go on to found Dooer. As of 2012, the company reported sales of 22.5 million Swedish kronor. By 2014, the company, then owned by the private equity firm Nurmi Drive controlled by its CEO, reported 500,000 customers in 211 countries and employed 30 people. In June 2014, the software company was acquired by the Austin, Texas-based software developer SolarWinds for $103 million. In May 2017, SolarWinds acquired the San Francisco-based company Scout Server Monitoring and merged the software with Pingdom. Product and reports Pingdom has servers located in several countries used to measure the latency of the websites it monitors. It can report whether a website is down due to network splits or failure in DNS servers. Pingdom functions by regularly accessing websites to check whether the site is accessible to users. The software will continuously monitor the website at higher rates until it determines that it is again operational. Pingdom also generates a report detailing how long the site was down. The user receives an email notifying them of any downtime as soon as it occurs and again when it ends. The monitoring tool can also determine how long it takes a website to load fully, how many files it constitutes, and the number of scripts and images required to load. Pingdom publishes reports on global Internet use and country-specific data on visitors to popular websites like Facebook. The report also includes data on the location of the hosts for many of the most visited websites in the world as determined by Alexa Internet. In 2012, Pingdom was able to determine that about 43.1% of the top 1 million websites were hosted in the United States, compared to the 31.3% hosted in all of Europe. The company also publishes Royal Pingdom, a blog on a variety of Internet-related topics. Royal Pingdom is frequently cited in academic publications and by media organizations as a source of statistics on a variety of websites. Reception In its September 2017 review of the service, PC Magazine praised the software as "fast and comprehensive", pointing out that "the only downside is that all this goodness is wrapped in a difficult interface that requires a steep learning curve to leverage." References External links 2005 establishments in Sweden Website monitoring software
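As a rough sketch of the kind of periodic availability check described above, here is a generic example using only Python's standard library; it is not Pingdom's actual implementation, and the URL and check interval are placeholders.

import time
import urllib.request
from urllib.error import HTTPError, URLError

def check_once(url, timeout=10):
    """Fetch the URL once; return (HTTP status or None if unreachable, seconds taken)."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return response.status, time.monotonic() - start
    except HTTPError as err:          # reachable, but the server returned an error code
        return err.code, time.monotonic() - start
    except URLError:                  # DNS failure, refused connection or timeout
        return None, time.monotonic() - start

url = "https://example.com/"          # placeholder site to monitor
for _ in range(3):                    # a real monitor would loop indefinitely
    status, elapsed = check_once(url)
    print(f"{url} status={status} time={elapsed:.2f}s")
    time.sleep(60)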
10504791
https://en.wikipedia.org/wiki/Lincoln%20biscuit
Lincoln biscuit
A Lincoln biscuit is a type of circular short dough biscuit of the shortcake variety, usually decorated on one side with a series of raised dots. The McVitie's version had the word 'Lincoln' embossed on to the biscuit at the centre. Recently it has been difficult to obtain in the United Kingdom. Lincoln biscuits are still available in Irish supermarkets, manufactured by Jacob's. Despite losing popularity in recent years, the basic recipe has come under academic scrutiny and commercial analysis. In 2004, the Campden and Chorleywood Food Research Association set up a research project to "understand the textural properties which influence consumer acceptance of short dough biscuits (e.g. Lincoln type): ingredient functionality will enable the hardness, crunchiness and breakdown properties to be varied and their acceptability measured." In Argentina, Kraft Foods produces Galletitas Lincoln, rectangular Lincoln biscuits with the familiar dot pattern, under the Terrabusi brand name.
Bibliography
Baker, J.S.; Boobier, W.J.; Davies, B.: Development of a healthy biscuit: an alternative approach to biscuit manufacture. Nutrition Journal, March 2006, 5:7.
Fearn, T.; Miller, A.R.; Thacker, D.: Rotary moulded short dough biscuits, Part 3: The effects of flour characteristics and recipe water level on the properties of Lincoln biscuits. Flour Milling and Baking Research Association Report (FMBRA) 1983, 102:8-12.
Lawson, R.; Miller, A.R.; Thacker, D.: Rotary moulded short dough biscuits, Part 2: The effects of the level of ingredients on the properties of Lincoln biscuits. Flour Milling and Baking Research Association Report (FMBRA) 1981, 93:15-20.
Miller, A.R.; Thacker, D.; Turrell, S.G.: Performance of single wheat flours in a small-scale baking test for semi-sweet biscuits. Flour Milling and Baking Research Association Report (FMBRA) 1986, 123:17-24.
Lawson, R.; Miller, A.R.; Thacker, D.: Rotary moulded short-dough biscuits, Part 4: The effects of rotary moulder control settings on the properties of Lincoln biscuits. Flour Milling and Baking Research Association Report (FMBRA) 1983, 106:9-17.
References
External links
Biscuit enthusiasts web-site
Enhanced recipe
Biscuits Culture in Lincolnshire
5803876
https://en.wikipedia.org/wiki/Frantic%20Films
Frantic Films
Frantic Films Corporation is a Canadian branded content and live action production company based in Winnipeg, Manitoba. Frantic Films is known for producing live action reality shows and documentaries and for its work in feature film visual effects. History Frantic Films was founded in 1997 by Ken Zorniak and Chris Bond. The company initially produced work for commercial clients such as Procter & Gamble and created computer-generated imagery (CGI) for Stephen King's Storm of the Century. In 2001, Frantic Films attracted international attention after creating visual effects sequences for the blockbuster film Swordfish. Also in 2001, Frantic Films founded a research and development division focused on creating software and tools for future visual effects projects. The division produced a large suite of in-house and commercial software. Early efforts were focused on Deadline (a commercial render farm management tool) and Flood (an in-house fluid simulation tool). In 2004, Flood was said to be one of the top three fluid simulation tools in the world. In 2002, Jamie Brown joined Frantic Films as a partner. In 2007, the visual effects and software R&D divisions were acquired by the Prime Focus Group. As a result, Frantic Films stopped all visual effects and previsualization projects and refocused on live action and branded content. In 2010, the Frantic Films co-founder Chris Bond started a new company called Thinkbox Software and reacquired the software R&D division from Prime Focus Group. Thinkbox Software has since been acquired by Amazon Web Services, and most of the commercial software launched by Frantic Films remains in active development. In 2009, Frantic Films acquired Red Apple Entertainment, gaining rights to Red Apple's syndication catalog and mode of production. In 2017, the special-purpose acquisition company Kew Media Group spent $108 million buying Canadian production companies, among them Bristow Global Media, Our House Media and Frantic Films. In 2020, Brown financed a deal for Frantic Films to buy back its shares from Kew Media Group. Branded content The branded content division, Frantic Branded Content, specializes in creating original integrated entertainment for traditional and emerging media platforms. In 2009, Frantic Branded Content partnered with the advertising agency TAXI and the commercial production company Soft Citizen to produce the branded entertainment television property Commercial Break. Live action (past and current) High Maintenance Stand!
Killer in Plain Sight Into Invisible Light The Writers' Block The Stats of Life Indictment: The Crimes of Shelly Cartier Backyard Builds Baroness von Sketch Show Still Standing Meet the Family Winnipeg Comedy Festival Buy It, Fix It, Sell It Pitch'n In Todd and the Book of Pure Evil A Pitch'n In Christmas Verdict Til Debt Do Us Part Keep Your Head Up Kid: The Don Cherry Story Breakbound Devil's Brigade Guinea Pig Music Rising Retail Princess Of All Places Visual effects and previsualization (past) Alien: Resurrection (1997) – visual effects contributed to DVD release Swordfish (2001) – visual effects The Italian Job (2003) – previsualization, visual effects X2 (2003) – previsualization, visual effects Paycheck (2003) – visual effects The Core (2003) – visual effects Resident Evil: Apocalypse (2004) – visual effects Catwoman (2004) – previsualization, visual effects Scooby-Doo 2: Monsters Unleashed (2004) – visual effects Cursed (2005) – visual effects Poseidon (2006) – previsualization X-Men: The Last Stand (2006) – previsualization Superman Returns (2006) – visual effects, software development Mr. Magorium's Wonder Emporium (2007) – visual effects Journey to the Center of the Earth (2008) – visual effects Runaway (2009) – visual effects References External links Frantic Films official website Thinkbox Software Commercial Break news release Companies based in Winnipeg Special effects companies Visual effects companies Film production companies of Canada Television production companies of Canada Cinema of Manitoba
9536773
https://en.wikipedia.org/wiki/TeleMation%20Inc.
TeleMation Inc.
TeleMation Inc. was a company specializing in products for the television industry, post-production and film industry, located in Salt Lake City, Utah. TeleMation started with a line of black-and-white video equipment, and later manufactured color video products. Lyle Keys was the founder and president of TeleMation, Inc., started in the late 1960s. Early equipment was for the B&W broadcast, cable television, and CCTV market. History In 1954, Lyle Oscar Keys was an itinerant equipment salesman from Wibaux, Montana. John F. Fitzpatrick was president of The Salt Lake Tribune at the time. Fitzpatrick's assistant John W. Gallivan hired Keys as an engineer for KUTV Channel 2, of which the Tribune was part owner. In a time when the electronics industry was burgeoning, Keys knew how to get essential parts fast in a time when these parts were unavailable or slow to get. By 1962, the Tribunes owner, Kearns-Tribune Corporation, and their partners in KUTV organized Electronic Sales Corporation (ELCO) to help meet these needs. Keys was installed as president with an office in the Kearns Building in Salt Lake City. Within eight years, the company, which had been incorporated as Telemation, had 420 employees, producing and marketing 156 products for the television industry with annual sales of $10 million. It became the nation's largest supplier of closed circuit TV systems and developed scores of proprietary items for cable television, industrial, educational and commercial TV. Keys personally conceptualized many of the firm's products, helped engineer them, produced millions of dollars in sales, and even wrote Telemation's news releases and advertising copy. He also laid out the blueprint for the company's development of of space in southwest Salt Lake County's technological park. The Kearns-Tribune Corporation's interest in this publicly owned enterprise as of early 1971 was twenty-four and one-half percent. In 1977, TeleMation inc. became a division of Bell and Howell. In October 1979, Bell and Howell entered a joint venture with Robert Bosch GmbH, Bosch's Fernseh Division, called Fernseh Inc. Bosch Fernseh Division was located in Darmstadt, Germany and for many years manufactured a full line of video and film equipment, professional video camera, VTR and Telecine, under Robert Bosch Fernsehanlagen GmbH. In April 1982, Bosch fully acquired Fernseh Inc., renaming the company Robert Bosch Corporation, Fernseh Division. In 1986, Bosch entered into a new joint venture with Philips Broadcast in Breda, Netherlands. This new company was called Broadcast Television Systems Inc. (BTS). Philips had been in the broadcast market for many years with a line of Norelco professional video cameras and other products. In 1995, Philips Electronics North America Corp. fully acquired BTS Inc., renaming it Philips Broadcast - Philips Digital Video Systems. In March 2001, this division was sold to Thomson SA, the current owner; the division was called Thomson Multimedia. In 2002, the French electronics giant Thomson SA also acquired the Grass Valley Group from Tektronix in Beaverton, Oregon, US. Grass Valley was sold to Belden on February 6, 2014. Belden also owns Miranda. 
Products Various Telemation B&W video products TSE-200 Special Effects Generator TPC-100 Porta-Studio TMV-529 Waveform Sampler TMV-708 Camera Control Unit TMC 2100 Camera TVM-650 Multicaster Switcher - vision mixer TMM-203 Film Chain-Multiplexer - Film Island TMU-100 Uniplexers TVM-550 video distribution amplifier TPA-550 Pulse distribution amplifier AP C.A.T.V. Character Generators (using Teletype machine & video camera) (1965) A line of Character Generators Automation equipment, like the BCS 2000 Digital Noise Reducer, also called Digital Noise Filter, 1984 TVU-175 Ventilation Unit Some color products (made in Salt Lake City under various brands) TVS-1000 TAS-1000 routers and line of party line control panels Phone remote router control interface MCS 2000 Master Control Switch - vision mixer MC Machine Control TSG-550 Sync Generator Tmt-101 Stairstep Generator Tmt-102 Multiburst Generator Tmt-103 Sin Pulse/Window Generator Compositor character generator TCF-3000 Color film chain-Multiplexer - Film Island Digital Encoder Pal and NTSC Mach One Editor (acquired) - a Non-linear editing system Alamar Automation (acquired) TVS-2000 router and line of party line Control panels with and w/o mnemonic displays CE 2200 party line controller, CE 2500 Status Display TVS-3000 router Venus router BCS 3000 HP UNIX Based controller - VG 3000 VGS card Jupiter Controller - Windows-based router control system VM 3000 VGA Status Display, V board SC 3000 Serial Control Interface S board CE 3000 Matrix Controller, M board - can support 3 level switching and other brands ES 3000 ESnet Interface PL 3000 party line Controller SI 3000 Control Processor Jupiter Control panels: CP 3200, CP 3300, CP 3310, CP 3320 Jupiter XPress, CM4000 Trinix router, DM–33100 Saturn Master Control Switch Weather Channel 97 Mars SDR–400 GS–400 FGS 4000 3D Character – Graphic Generator - Computer-generated imagery Vidifont Character generator (acquired from Thomson SA) The Media Pool - disk recorder Trivia Fernseh is German for "television". In German the words fern and seh literally mean "far" and "see", respectively. Because of all the mergers, customers sometimes fondly called these company(ies): Tele-bella-bosch-a-mation. Thomson still operates offices in the cities of all these acquisitions: Cergy, France (Thomson World Headquarters) Salt Lake City, Utah, US - from TeleMation Inc Beaverton, Oregon, US - from Tektronix Nevada City, California, US - from Grassvalley Group Breda, the Netherlands - from Philips-Norelco Weiterstadt - Darmstadt, Germany - from Bosch Fernseh Awards: Outstanding Achievement in Technical/Engineering Development Awards from National Academy of Televisio Arts and Sciences 1966-1967: Plumbicon Tube - N.V. Philips 1987-1888: FGS 4000 computer animation system - BTS - SLC, UT 1992-1993: Prism Technology for Color Television Cameras - N.V. Philips 1993-1994: Controlled Edge Enhancement Utilizing Skin Hue KeyingBTS and Ikegami (joint award) 1997-1998: Development of a High Resolution Digital Film Scanner Eastman Kodak and Philips Germany 2000-2001: Pioneering developments in shared video-data storage systems for use in television video servers - Thomson/Philips - SLC, UT 2002-2003: Technology to simultaneously encode multiple video qualities and the corresponding metadata to enable real-time conformance and / or playout of the higher quality video (nominally broadcast) based on the decisions made using the lower quality proxiesMontage. Philips and Thomson. 
Telemation Productions Telemation Productions was a post-production house in Seattle, Washington; Chicago, Illinois; Phoenix, Arizona; and Denver, Colorado in the 1970s and early 80s. Offices were sold or closed in the late 1980s. Telemation Productions was started as a marketing tool by Telemation Inc. in the early 1970s. It started as a single office located in Glenview, Illinois, a suburb of Chicago. In 1978, a second office was opened in Denver. Also in 1978, the television equipment manufacturing operation was sold to Bell & Howell. At that time, Telemation Inc. owned only the two production facilities and the manufacturing building in Salt Lake City, which was leased to Bell & Howell. In 1979 Telemation acquired a production facility in Seattle and renamed it Telemation Productions. In the early 1980s, Telemation acquired a facility in Phoenix, also renaming it Telemation Productions. In the early 1980s, Telemation Productions added a distribution division located in Chicago which provided duplication and shipping services to advertising agencies and a mobile division equipped with a television remote truck. Telemation Productions ownership changed in 1987 and again in 1990, with the Home Shopping Network buying the company. The Phoenix office and distribution division were sold in 1989 prior to this acquisition. The remote truck was sold in 1990. The Seattle office was closed in 1991, the Chicago office was closed in 1993, and the Denver office was closed the following year. External links Remembering TeleMation, Inc. Thomson takeover Robert Bosch Fernseh Philips Thomson Grassvalley Times, A.P. at Home, Aug. 6, 1965. References TeleMation to B&H BTS: Philips and Bosch Noise filter Robert Bosch Fernseh Division BTS Bosch BTS PDF, Page 15 Telecine TM camera Electronics companies of the United States Film and video technology Technicolor SA Manufacturing companies based in Salt Lake City 1962 establishments in Utah
42315456
https://en.wikipedia.org/wiki/SMART%20Process%20Acceleration%20Development%20Environment
SMART Process Acceleration Development Environment
SPADE (SMART Process Acceleration Development Environment) is a software development productivity and quality tool used to create professional software in a short time and with little effort. As seen in the accompanying diagram, SPADE automates many manual activities of the software development process, so the full software development cycle takes less time and effort. With SPADE the remaining manual steps are: Reqs: gathering the wishes of the customer and documenting them in clear requirements, user stories or similar. Test cases: creating integration test cases that will be run automatically during Test. Test: usability testing and testing integration with external (non-SPADE) systems. Accept: accepting the created solution. Automation of the other steps is possible because the (business) requirements are specified clearly using the method SMART Requirements 2.0. SPADE then uses algorithms that incorporate dependency analysis and a set of design patterns to transform these requirements into an optimized business process design, including the flow of user-interactive and fully automated steps that are part of this business process. SPADE is a domain-specific, model-based software development tool that is well suited for creating both complex and simple information processing systems. It is currently less suitable for creating software that controls hardware in real time or advanced graphical user interfaces. One can, however, add plug-ins for accessing functionality that is not created by SPADE. Details This section explains how clear requirements are created using SMART notation, the specification language that is part of the method SMART Requirements 2.0, and then describes how and what SPADE automatically creates from these requirements. It also covers creating and running test cases and the typical architecture of the software solution that SPADE creates. Creating clear requirements The input to SPADE is a set of end-result-oriented business requirement specifications. This information is placed in a document, usually a file of some sort, and is written down using a formal specification language. Below is an example with explanations. Start by naming the process and its most important piece of information as its subject: Process 'Order products' with subject #('Order': ORDER) Sum up the high-level results. Double quotes are used to define requirements and help to create a result-oriented breakdown structure. The following applies: "Customer has ordered products" and "Customer has an invoice if approved" and "The order needs to be approved if needed" Define the requirements clearly. Use if-then-else to define when results should apply or should be produced. Where the information comes from is defined using references. For instance, ORDER.approved is a piece of information that is either produced during the process or already available. Some requirements (results) can be specified visually; in the accompanying illustration, "Customer has an invoice" is specified as an e-mail. "Customer has an invoice if approved" = if ORDER.approved then "Customer has an invoice" "The order needs to be approved if needed" = if "too expensive" then "The order needs to be approved" else ORDER.approved = true "too expensive" = ORDER.total > 500 A person can also be a source of information by stating 'input from' followed by a name that identifies the role of this person or the actual user.
In the example below, the source is a person with the role CONTROLLER. If this person in turn needs information to be able to give this input, you need to state that this input can be given 'based on' certain other information; in this case the date, the BUYER and the LINES of the ORDER. "The order needs to be approved" = ORDER.approved = input from CONTROLLER based on #(ORDER.date, ORDER.BUYER, ORDER.LINES) The actual person giving the input at the time the system is used (the current user) can also be used as a piece of information. The example below defines the ORDER and its attributes. One of the attributes is called BUYER, and it is filled with the actual CUSTOMER that is playing that role (in other words, giving the input) at the time the process runs. "Customer has ordered products" = One ORDER exists in ORDERS with: date = currentDate() BUYER = CUSTOMER LINES = "order lines" "order lines" = Several LINE exist in ORDER_LINES with: PRODUCT = input from CUSTOMER number = input from CUSTOMER The requirements also need a business or logical data model. Most of the logical data model can be derived from the requirements. For instance, SPADE knows which entities are needed (ORDERS, ORDER_LINES and PRODUCTS), and in some cases it can also derive the type of an attribute: approved can only be true or false because it is used as a condition, and LINES should be a relation to ORDER_LINES. Some types, however, cannot be derived and need to be defined explicitly in this data model. Below is an example of this data model. ORDERS = date : date BUYER : USERS(1) LINES : ORDER_LINES(*) opposite of ORDER approved : boolean total : decimal(10,2) = sum(LINES.total) summary : text displayed = '{total} euro by {BUYER.firstName} {BUYER.lastName} on {date}' ORDER_LINES = PRODUCT : PRODUCTS(1) number : integer ORDER : ORDERS(1) opposite of LINES total : decimal(10,2) = number * PRODUCT.price PRODUCTS = name : text price : decimal(10,2) summary : text displayed = '{name} ({price} euro)' Most of this data model is fairly straightforward and resembles other data modelling techniques. Some things stand out: Relational attributes: relations are specified using relational attributes. For instance, BUYER contains one instance of the standard entity USERS, and LINES contains multiple (*) instances of the entity ORDER_LINES and is the opposite of the relation ORDER (which is a relational attribute of the entity ORDER_LINES). Calculated attributes: attributes can be calculated, which means they are not stored but calculated when needed. For instance, the total of one instance of ORDERS is the sum of the total of its LINES. The summary is a textual value built from a template with placeholders for the total, the first and last name of the BUYER, and the date. Displayed: if the system needs to render instances of ORDERS and does not know how to do that, it will use the attribute marked displayed. SPADE automates design and the creation of code SPADE performs the following steps: Parse: read the business requirements. Analyse dependencies: the dependencies between the different parts of the business requirements are analysed. Create process designs: an algorithm transforms the dependencies into process designs. It uses a set of design patterns and several optimization techniques to create an optimized process design that has no waste in it. The design is both a high-level design (e.g.
chains of business processes) as well as a low-level design (e.g. at statement level). Generate sources: for the workflow and all the screens and steps in the process design. The accompanying diagram shows an example process design created by SPADE. The whole process is the business process, the yellow steps are the user-interactive steps or the steps in which the system interacts with an external actor (for instance an external system), and the blue steps are the fully automated steps. Example screen captures of the forms are added below the process diagram. Creating and running test cases When you are using the created solution, you are also recording test cases at the same time. Those test cases are then expanded with asserts that verify the outcome of the process. Below is an example with explanations. Each test scenario starts by stating which process is started by which user; in this case process 'Order products' for user 'edwinh'. START_PROCESS = Order products, edwinh The next part describes which roles and users will claim and then enter data in which task. In this case a customer with user name marcusk will enter 2 LINEs, and each line will have a selected product and a number of products. The second task is for the manager with user name edwinh, who will fill approved with true. # -------- FIRST CLAIM AND ENTER THE 1ST TASK ---------- task01.CLAIM_NEXT_GROUP_TASK = customer, marcusk task01.LINEs = 2 task01.LINEs[0]-c-product = 1 task01.LINEs[0]-c-number = 10 task01.LINEs[1]-c-product = 2 task01.LINEs[1]-c-number = 20 # -------- FIRST CLAIM AND ENTER THE 2ND TASK ---------- task02.CLAIM_NEXT_GROUP_TASK = manager, edwinh task02.approved = true The next part contains the asserts that check whether the process achieved the predicted end result. These are not recorded and need to be added manually. In this example two asserts have been added: the first checks that there is one more (+1) instance of ORDERS with the attribute approved set to TRUE, and the second checks that two (+2) new instances of ORDER_LINES have been added. (A conceptual sketch of how such delta-count asserts can be evaluated is given at the end of this section.) ASSERT_PROCESS_VALUE_COUNT_01 = ORDERS.approved = TRUE, +1 ASSERT_PROCESS_ROW_COUNT_02 = ORDER_LINES, +2 Deploying the solution SPADE can run on its own, but it often runs as an Apache Maven plugin and is therefore part of a Maven build cycle. This build cycle also includes running the test scenarios, which in turn deploys the generated functionality as a .jar file, loads test data, executes the test scenarios and verifies the results. The Maven build cycle can be used in anything from daily builds all the way up to continuous delivery or deployment. For demo purposes, the steps mentioned can also be executed in the standard front-end of the resulting software solution. With the standard front end it is also possible to automate the following: analyze the existing database to check whether it already complies with the generated functionality; if there is no database present, create a compliant database automatically; if the database does not yet comply, create or update tables and relations automatically. Migrating data from an old database, or from the old release to the new release, is also automated; however, the migration software (e.g. using SQL or ETL) is created manually. Note that the automation SPADE provides during deployment is typically used for smaller systems and for sprint demos. For deploying bigger projects, other more advanced deployment tools are more commonly used.
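The delta-count asserts used in the test scenarios above (ASSERT_PROCESS_VALUE_COUNT and ASSERT_PROCESS_ROW_COUNT) amount to counting matching rows before and after the process runs and comparing the difference with the expected delta. The sketch below is a conceptual illustration only; the table names come from the example, but the code is not part of SPADE:

def count_rows(db, table, predicate=lambda row: True):
    # Count the rows of a table that satisfy an optional predicate.
    return sum(1 for row in db[table] if predicate(row))


def run_with_delta_asserts(db, run_process, asserts):
    # Record counts, run the process, then verify each expected delta.
    before = {name: count_rows(db, table, pred) for name, (table, pred, _) in asserts.items()}
    run_process(db)
    for name, (table, pred, expected_delta) in asserts.items():
        delta = count_rows(db, table, pred) - before[name]
        assert delta == expected_delta, f"{name}: expected {expected_delta:+d}, got {delta:+d}"


# A toy in-memory "database" and process standing in for the generated solution.
db = {"ORDERS": [], "ORDER_LINES": []}

def order_products(db):
    db["ORDERS"].append({"approved": True})
    db["ORDER_LINES"].extend([{"number": 10}, {"number": 20}])

run_with_delta_asserts(
    db,
    order_products,
    {
        "ASSERT_PROCESS_VALUE_COUNT_01": ("ORDERS", lambda r: r["approved"] is True, +1),
        "ASSERT_PROCESS_ROW_COUNT_02": ("ORDER_LINES", lambda r: True, +2),
    },
)
print("all asserts passed")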
The resulting software solution The accompanying diagram shows how SPADE relates to the created solution, as well as the global architecture of this solution. The different elements of the diagram are explained below: SMART business requirements: are (manually) gathered and documented using the requirements specification language SMART notation. This is a domain-specific language that can be used to define information-based end results that businesses or organizations want to produce. Automatically creates designs, docs, source code: from the requirements SPADE automatically creates designs, documentation and source code that can be compiled into the software solution. Users and GUI: the solution can interact with role-based authorized users through different GUIs. The solution already has standard GUIs for all functionality but can be expanded with custom GUIs; GUIs of both types can be mixed if needed. REST/SOAP: all functionality always has a matching REST or SOAP service that is used by the different GUIs but can also be used by authorized external systems. DBAL: the server also has a Hibernate or similar database abstraction layer to communicate with the database. Plug-ins: can be used or added to the server to communicate with external systems or external devices. This also enables the solution to use devices from the Internet of Things domain. All plug-ins can be called upon from the business requirements, but always in a non-technical way. For instance, if you define a DOCUMENT as a result, SPADE will know to call the plug-in associated with the entity DOCUMENTS; the plug-in will actually create and store a physical document. Specific functionality: this is the functionality created based upon the business requirements. With it you can create a wide variety of functionality. SPADE users can use a library of off-the-shelf requirements, for example CRM, HR, profile matching and financial functionality, which can be inserted and adjusted to fit the specific needs of the client. The specific functionality can use all plug-ins as well as all generic functionality to extend the domain of available functionality. Generic functionality: by default, the solution is already equipped with a lot of standard generic functionality, for instance DMS, document generation, automatic e-mails, SMS, messaging, advanced search, browsing through data and export. Which software development activities are automated and which are manual? The next table shows which software development activities are automated by SPADE and which exceptions apply. History 2000: in 2000 Edwin Hendriks of the company CMG (now part of CGI Group) developed a process improvement method called Process Acceleration. At the core of this method was a way to define the desired end result of a business process completely unambiguously, as well as a structured approach to deduce the most optimal business process that would achieve this end result. This was the birth of the first version of SMART notation (at that time called PA notation), a formal language that can be used to specify the end results of entire process chains (versus specifying the process chain itself). CMG used this method and SMART notation for several of its projects and clients. 2007: although successful, CMG at that time was not known for delivering process improvement consultancy.
That was the reason for CMG (by then merged with Logica) to focus on the core of Process Acceleration, resulting in 2007 in a software development improvement method called PA SMART Requirements (now called SMART Requirements 2.0). Since that time SMART Requirements 2.0 has been used by CGI Group and its customers as well as other companies and organizations. 2008: having an unambiguous definition of the end result of a process chain, and a structured approach to deduce the most optimal process from this end result, gave rise to the idea of a tool that could read the end result, deduce the most optimal process from it, and generate the software for each step in the process. Edwin Hendriks, Marcus Klimstra and Niels Bergsma developed a first working prototype of SPADE (at that time called the PA Generator) using .NET, producing systems with a .NET architecture. 2010: Logica decided to start the development of a commercially usable version of SPADE. 2011: version 2.12 of SPADE was used to create the first two operational systems: a cross-departmental time tracking system and an anonymous voting system, both used by Logica itself. 2012: version 3 of SPADE was created, the first commercially usable version. From that time SPADE was used to create solutions for clients. It was often used to recreate existing legacy systems because of the short time and low cost of creating solutions with SPADE. Despite the increased development speed and low costs, SPADE still had teething problems, which made it difficult to estimate the actual time needed to (re)create solutions and hence hard to plan projects. This was the same year that Logica was acquired by CGI Group. 2015: version 4 of SPADE was used for the first time by elementary school children to create an exam system. It showed that creating SMART requirements and then asking SPADE to create a professional system from them was relatively easy compared to other ways of creating software. In the same year a small rocket was launched that interacted with ground station software created by SPADE, showing that SPADE could interact with external devices quickly (though still not fast enough to build real-time systems). 2016: in version 4.4 of SPADE most teething problems were solved, making it possible to (re)create large and complex systems in a short time. SPADE is currently being expanded to provide an easier way to create and change requirements, as well as an easier way to customize the standard GUI. This will make it possible for more non-developers to use SPADE to create solutions. Advantages, disadvantages and considerations On the upside, SPADE shows remarkable development speeds. International benchmarks show that the complete development cycle is completed on average 20 times faster than with conventional techniques, and in many cases it is even faster to completely recreate the functionality of existing software solutions than to buy and configure them. This development speed makes it easier for clients to see and try out the newly created solution. By automating design and coding there are almost no design and coding errors. The fact that the resulting solutions have no vendor lock-in and are completely based on free-to-use open source components is also a big plus. SPADE is also easy to learn.
On the downside, SPADE remains a domain-specific tool and is therefore not suitable for every type of functionality; such functionality will require conventional development or other tools. Besides this, real-time performance and the ability to change the GUI more easily are areas that need extra development. SPADE is also rather new and is not yet considered a mainstream development tool. Creating SMART requirements takes more effort and skill than just describing them in a couple of sentences. One should always consider that in normal software development the requirements define a fixed "contract" for the functionality that should be created. For instance, the user story in a Scrum development team should be fixed before the user story can be developed during a sprint. This is the same for SPADE projects. However, when the requirements or the user stories are ready to be developed, the sprint is performed by SPADE and takes only a couple of minutes. This has resulted in a tendency to move the requirements phase (the creation of the user stories) into the sprint, which is considered bad practice in both normal Agile development and Agile development using SPADE. Another consideration is that it becomes very easy to create large and complex functionality. Although this poses no problem for SPADE, it can make it hard for people to handle the sheer size and complexity of the system's functionality. It is therefore advisable to still tackle size and complexity in the same way as in normal system development: by chopping up and structuring functionality into comprehensible pieces. See also Disciplined Agile Delivery References Agile software development
390257
https://en.wikipedia.org/wiki/Ratfor
Ratfor
Ratfor (short for Rational Fortran) is a programming language implemented as a preprocessor for Fortran 66. It provides modern control structures, unavailable in Fortran 66, to replace GOTOs and statement numbers. Features Ratfor provides the following kinds of flow-control statements, described by Kernighan and Plauger as "shamelessly stolen from the language C, developed for the UNIX operating system by D.M. Ritchie" ("Software Tools", p. 318): statement grouping with braces if-else, while, for, do, repeat-until, break, next "free-form" statements, i.e., not constrained by Fortran format rules <, >, >=, ... in place of .LT., .GT., .GE., ... include # comments For example, the following code if (a > b) { max = a } else { max = b } might be translated as IF(.NOT.(A.GT.B))GOTO 1 MAX = A GOTO 2 1 CONTINUE MAX = B 2 CONTINUE The version of Ratfor in Software Tools is written in Ratfor, as are the sample programs, and inasmuch as its own translation to Fortran is available, it can be ported to any Fortran system. Ratfor source code file names end in .r or .rat. History Ratfor was designed and implemented by Brian Kernighan at Bell Telephone Laboratories in 1974, and described in Software—Practice & Experience in 1975. It was used in the book "Software Tools" (Kernighan and Plauger, 1976). In 1977, at Purdue University, an improved version of the ratfor preprocessor was written. It was called Mouse4, as it was smaller and faster than ratfor. A published document by Dr. Douglas Comer, professor at Purdue, concluded "contrary to the evidence exhibited by the designer of Ratfor, sequential search is often inadequate for production software. Furthermore, in the case of lexical analysis, well-known techniques do seem to offer efficiency while retaining the simplicity, ease of coding and modularity of ad hoc methods." (CSD-TR236). For comparison, on a program of 3,000 source lines running on a CDC 6500 system, the ratfor preprocessor took 185.470 CPU seconds. That was cut by 50% when binary search was used in the ratfor code, and rewriting the ad hoc lexical scanner using a standard method based on finite automata reduced the run time to 12.723 seconds. With the availability of Fortran 77, a successor named ratfiv (ratfor = rat4 => rat5 = ratfiv) could, with an option /f77, output more readable Fortran 77 code: IF (A .GT. B) THEN MAX = A ELSE MAX = B ENDIF The original Ratfor source code was ported to C in 1985 and improved to produce Fortran 77 code as well. A git tree was set up in 2010 in order to revive ratfor. Although the GNU C compiler was once able to directly compile a Ratfor file (.r) without keeping an intermediate Fortran file (.f) (gcc foo.r), this functionality was lost in version 4 during the move in 2005 from f77 to GNU Fortran. Source packages (.deb or src.rpm) are still available for users who need to compile old Ratfor software on any operating system. Ratfiv Ratfiv is an enhanced version of the Ratfor programming language, a preprocessor for Fortran designed to give it C-like capabilities. Fortran was widely used for scientific programming but had very basic control-flow primitives ("do" and "goto") and no "macro" facility, which limited its expressiveness. The name of the language is a pun (Ratfor (RATional FORtran) -> "Rat Four" -> "Rat Five" -> RatFiv). Ratfiv was developed by Bill Wood at the Institute for Cancer Research, Philadelphia, PA in the early 1980s and released on several DECUS (Digital Equipment Users Group) SIG (Special Interest Group) tapes.
It is based on the original Ratfor by B. Kernighan and P. J. Plauger, with rewrites and enhancements by David Hanson and friends (U. of Arizona), Joe Sventek and Debbie Scherrer (Lawrence Berkeley National Laboratory). Ratfiv V2.1 was distributed on the DECUS RSX82a SIG tape. See also Ratfiv Fortran References External links Ratfor Ratfor90 History of Programming Languages: Ratfor Purdue summary Ratfor90 Fortran programming language family Programming languages created in 1976
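The if/else translation shown in the Ratfor example above can be imitated in a few lines of code. The following Python sketch is purely illustrative (it handles a single if/else construct only and is not how the actual Ratfor or Ratfiv preprocessors are implemented), but it emits Fortran 66-style output in the same form as the example:

def translate_if_else(condition, then_body, else_body, first_label=1):
    # Emit Fortran 66-style code for one Ratfor if/else construct.
    else_label, end_label = first_label, first_label + 1
    lines = [f"      IF(.NOT.({condition}))GOTO {else_label}"]
    lines += [f"      {stmt}" for stmt in then_body]
    lines.append(f"      GOTO {end_label}")
    lines.append(f"{else_label:5d} CONTINUE")
    lines += [f"      {stmt}" for stmt in else_body]
    lines.append(f"{end_label:5d} CONTINUE")
    return "\n".join(lines)


# Reproduces the IF(.NOT.(A.GT.B))GOTO pattern with numbered CONTINUE labels shown earlier.
print(translate_if_else("A.GT.B", ["MAX = A"], ["MAX = B"]))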
1236150
https://en.wikipedia.org/wiki/Apple%20Pascal
Apple Pascal
Apple Pascal is an implementation of Pascal for the Apple II and Apple III computer series. It is based on UCSD Pascal. Just like other UCSD Pascal implementations, it ran on its own operating system (Apple Pascal Operating System, a derivative of UCSD p-System with graphical extensions). Originally released for the Apple II in August 1979, just after Apple DOS 3.2, Apple Pascal pioneered a number of features that would later be incorporated into DOS 3.3, as well as others that would not be seen again until the introduction of ProDOS. The Apple Pascal software package also included disk maintenance utilities, and an assembler meant to complement Apple's built-in "monitor" assembler. A FORTRAN compiler (written by Silicon Valley Software, Sunnyvale California) compiling to the same p-code as Pascal was also available. Comparison of Pascal OS with DOS 3.2 Apple Pascal Operating System introduced a new disk format. Instead of dividing the disk into 256-byte sectors as in DOS 3.2, Apple Pascal divides it into "blocks" of 512 bytes each. The p-System also introduced a different method for saving and retrieving files. Under Apple DOS, files were saved to any available sector that the OS could find, regardless of location. Over time, this could lead to file system fragmentation, slowing access to the disk. Apple Pascal attempted to rectify this by saving only to consecutive blocks on the disk. Other innovations introduced in the file system included the introduction of a timestamp feature. Previously only a file's name, basic type, and size would be shown. Disks could also be named for the first time. Limitations of the p-System included new restrictions on the naming of files. Writing files only on consecutive blocks also created problems, because over time free space tended to become too fragmented to store new files. A utility called Krunch was included in the package to consolidate free space. The biggest problem with the Apple Pascal system was that it was too big to fit on one floppy disk. This meant that on a system with only one floppy disk drive, frequent disk swapping was needed. A system needed at least two disk drives in order to use the operating system properly. Release history Sources Notes External links The History of Apple's Pascal "Syntax" Poster, 1979-80. Pascal Syntax Poster Pascal Pascal Disk operating systems Pascal programming language family Discontinued operating systems
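The consequence of Apple Pascal's consecutive-block allocation described above (free space that is plentiful in total but too fragmented to hold a new file until a Krunch-style compaction runs) can be shown with a small sketch. The 16-block disk below is a made-up illustration; only the rule that files must occupy consecutive blocks comes from the article:

def largest_free_run(used):
    # Length of the longest run of consecutive free blocks on the "disk".
    best = run = 0
    for block_in_use in used:
        run = 0 if block_in_use else run + 1
        best = max(best, run)
    return best


def krunch(used):
    # Compact the disk: move all used blocks to the front, free space to the end.
    in_use = sum(used)
    return [True] * in_use + [False] * (len(used) - in_use)


# A 16-block disk where deletions have left free space scattered in small gaps.
disk = [True, False, True, True, False, False, True, False,
        True, True, False, True, False, False, True, False]

print(f"free blocks: {disk.count(False)}, largest consecutive run: {largest_free_run(disk)}")
# A file needing 3 consecutive blocks cannot be stored even though 8 blocks are free in total.
disk = krunch(disk)
print(f"after compaction, largest consecutive run: {largest_free_run(disk)}")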
855623
https://en.wikipedia.org/wiki/NetApp
NetApp
NetApp, Inc. is an American hybrid cloud data services and data management company headquartered in Sunnyvale, California. It has ranked in the Fortune 500 since 2012. Founded in 1992 with an IPO in 1995, NetApp offers cloud data services for management of applications and data both online and physically. History NetApp was founded in 1992 by David Hitz, James Lau, and Michael Malcolm as Network Appliance, Inc. At the time, its major competitor was Auspex Systems. In 1994, NetApp received venture capital funding from Sequoia Capital. It had its initial public offering in 1995. NetApp thrived in the internet bubble years of the mid 1990s to 2001, during which the company grew to $1 billion in annual revenue. After the bubble burst, NetApp's revenues quickly declined to $800 million in its fiscal year 2002. Since then, the company's revenue has steadily climbed. In 2006, NetApp sold the NetCache product line to Blue Coat Systems. In 2008, Network Appliance officially changed its legal name to NetApp, Inc., reflecting the nickname by which it was already well-known. On June 1, 2015, Tom Georgens stepped down as CEO and was replaced by George Kurian. In May 2018 NetApp announced its first End to End NVMe array called All Flash FAS A800 with release of ONTAP 9.4 software. NetApp claims over 1.3 million IOPS at 500 microseconds per high-availability pair. In January 2019 Dave Hitz announced retirement from NetApp. Acquisitions 1997 - Internet Middleware (IMC) acquired for $10.5 million. IMC's web proxy caching software became the NetCache product line (which was resold in 2006). 2004 - Spinnaker Networks acquired for $300 million. Technologies from Spinnaker integrated into Data ONTAP GX and first released in 2006, later Data ONTAP GX become Clustered Data ONTAP 2005 - Alacritus acquired for $11 million. The tape virtualization technology Alacritus brought to NetApp was integrated into the NetApp NearStore Virtual Tape Library (VTL) product line, introduced in 2006. 2005 - Decru: Storage security systems and key management. 2006 - Topio acquired for $160 million. Software that helped replicate, recover, and protect data over any distance regardless of the underlying server or storage infrastructure. This technology became known as ReplicatorX (Open System SnapVault), and has since been abandoned. 2008 - Onaro acquired for $120 million. Storage service management software which helps customers manage storage more efficiently with guaranteed service levels for availability and performance. Onaro's SANscreen technology launched as such and probably later influencing NetApp OnCommand Insight. 2010 - Bycast acquired for $50 million. Technologies from Bycast gave birth to the StorageGRID object storage product. 2011 - Akorri acquired for $60 million, allowing for cross-domain analysis and advanced analytics across data center infrastructures. 2011 - Engenio (LSI) acquired for $480 million. Engenio was the external storage systems business unit of the LSI Corporation. 
Launched as NetApp E-Series product line 2012 - Cache IQ: Development of NAS cache systems 2013 - IonGrid: A technology developer that allows iOS devices to access users and internal business applications through a secure connection 2014 - SteelStore: NetApp acquired Riverbed Technology's SteelStore line of data backup and protection products, which it later renamed as AltaVault and then to Cloud Backup 2015 - SolidFire: In December 2015 (closing in January 2016), NetApp acquired founded in 2009 flash storage vendor SolidFire for $870 million. 2017 - Plexistor: NetApp first announced the acquisition of a company and technology called Plexistor in May 2017. Technologies from Plexistor gave start for MAX Data product 2017 - Greenqloud was acquired with its Qstack product. Greenqloud was a private startup company that created cloud services, orchestration and management platform for hybrid cloud and multi-cloud environments. 2017 - Immersive Partner Solutions, a Littleton, Colorado-based developer of software to validate multiple converged infrastructures through their lifecycles 2018 - StackPointCloud: NetApp acquired StackPointCloud, a project for multi-cloud Kubernetes as-a-service and a contributor to the Kubernetes which started the Kubernetes Service product 2019 - Cognigo: Israeli AI-driven data compliance and security supplier 2020 - Talon: Cloud Data Storage company enabling data consolidation and security for enterprises. 2020 - CloudJumper: Cloud software in VDI and remote desktop services 2020 - Spot: handled compute management and cost optimization in the public clouds 2021 - CloudHawk.io: AWS Cloud Security Posture. 2021 - CloudCheckr: Cloud Optimization Platform. Competition NetApp competes in the computer data storage hardware industry. In 2009, NetApp ranked second in market capitalization in its industry behind EMC Corporation, now Dell EMC, and ahead of Seagate Technology, Western Digital, Brocade, Imation, and Quantum. In total revenue of 2009, NetApp ranked behind EMC, Seagate, Western Digital, and ahead of Imation, Brocade, Xyratex, and Hutchinson Technology. According to a 2014 IDC report, NetApp ranked second in the network storage industry "Big 5's list", behind EMC (DELL), and ahead of IBM, HP and Hitachi. According to Gartner's 2018 Magic Quadrant for Solid-State Arrays, NetApp was named a leader, behind Pure Storage Systems. In 2019, Gartner named NetApp as #1 in Primary Storage. Products NetApp's OnCommand management software controls and automates data-storage. ActiveIQ comes to NetApp with the acquisition of SolidFire. ActiveIQ is SaaS portal with built-in monitoring, prediction, recommendations for optimizing configurations and performance for NetApp storage systems based on machine-learning capabilities and artificial intelligence. Later ONTAP Analytics and Telemetry Service (OATS) product, which can be installed in AWS cloud and on-premise, was renamed to Active IQ Performance Analytics Services (ActiveIQ PAS). NetApp ONTAP-based Hardware Appliances NetApp's FAS (Fabric-Attached Storage), AFF (All-Flash FAS), and ASA (All SAN Array) storage systems are the company's flagship products. Such products are made up of storage controllers, and one or more enclosures of hard disks, known as shelves. In entry-level systems, the drives may be physically located in the storage controller itself. 
In the early 1990s, NetApp's storage systems initially offered NFS and SMB protocols based on standard local area networks (LANs), whereas block storage consolidation required storage area networks (SANs) implemented with the Fibre Channel (FC) protocol. In 2002, in an attempt to increase market share, NetApp added block-storage access as well, supporting the Fiber Channel and iSCSI protocols. NetApp systems support Fibre Channel, iSCSI, Fibre Channel over Ethernet (FCoE) and the FC-NVMe protocol. ONTAP Many of NetApp's products use the company's proprietary ONTAP data management operating system, under continuous development since 1992 which includes code from Berkeley Net/2 BSD Unix, Spinnaker Networks technology and other operating systems. There are three ONTAP platforms: FAS/AFF systems, software on commodity servers (ONTAP Select) as virtual machine or in the cloud (Cloud Volumes ONTAP). All ONTAP systems are using WAFL file systems which provide basis for snapshots and other snapshot-based and data protection technologies. Key IP from ONTAP is also used in NetApp Astra, a newer data management-as-a-service system built for Kubernetes. Cloud Backup Previously known as Riverbed SteelStor before its acquisition by NetApp, this product was later renamed to AltaVault and then to Cloud Backup. Cloud Backup was initially available in three forms: as a hardware appliance, virtual appliance, and cloud appliance. Later NetApp announced the end of sale for hardware and virtual appliances. Data placed on NAS share on Cloud Backup deduplicated, compressed, encrypted and transferred with Object Protocols to object storage systems like Amazon S3, Azure Blob Storage or StorageGRID; thus Cloud Backup appears as a transparent gateway for archiving data to a private or public cloud. NetApp HCI NetApp Hyper-converged infrastructure (HCI) or sometimes referred by NetApp as Hybrid Cloud Infrastructure, NetApp HCI is based on commodity blade and rack servers, NetApp Element software and VMware vSphere. NetApp HCI includes a web-based GUI with installation wizard called NetApp Deployment Engine (NDE) for configuring vCenter, IP addresses, login and password, and storage nodes. NetApp HCI is different from conventional HCI designs because it has dedicated storage nodes, while other HCI systems like Dell EMC VxRail or vSAN do not have dedicated storage nodes and utilize disk drives installed in each server. Dedicated storage nodes allow the cluster to grow or decrease storage capacity and performance separately from compute nodes. Minimum NetApp HCI configuration requires two compute blade server nodes and additionally, Element software requires a minimum of 4 storage nodes but is available to customers as four physical storage nodes or two physical storage nodes and two nodes as a virtual machine playing witness role on compute nodes. 2U HCI Chassis with four half-width blade servers Each storage node drive set consists of 6 SSD drives directly connected to a dedicated storage node and installed in front of the blade chassis. Each storage and compute blade nodes have 25 Gigabit Ethernet ports which could be used as 10Gbit/s ports as well as dedicated 1Gb ports for management purposes. Network switches were not included, and in NetApp HCI with Element software release 11 NetApp announced H-Series Switch as part of HCI, so all hardware components must be bought from NetApp. ONTAP Select available as SDS on NetApp HCI for customers interested in NAS protocols. 
The self-service portal allows common provisioning and management tasks to be automated without involving the IT team. With the acquisition of StackPointCloud, the NetApp Kubernetes Service will support NetApp HCI. NetApp SolidFire storage and NetApp HCI can be expanded and mixed in a single cluster. At the NetApp Insight 2018 conference in Las Vegas NetApp presented two new compute nodes, H410C and H610C, where the H610C includes additional GPU cards which can be used in VDI environments. Starting with Element OS version 11 (automatically detected and enabled by default with the upgrade), HCI has Protection Domains functionality to provide resiliency at the chassis level: because the Helix algorithm spans data blocks across chassis, workloads automatically fail over to another operational chassis in the case of maintenance or chassis failure. SolidFire The SolidFire storage system uses an operating system called NetApp Element Software (formerly SolidFire Element OS), based on Linux and designed for SSDs and a scale-out architecture that can expand up to 100 nodes, providing access to data through the SAN protocols iSCSI (natively) and Fibre Channel (with two gateway nodes). Element OS provides a REST-based API for storage automation, configuration, management, and consumption. The SF node H610S has 12 2.5" NVMe SSD drives and can only install Element version 10.4, while previous models have 10 SSD drives. Element software version 11 will not support FC. SolidFire uses iSCSI login redirection to distribute reads and writes across the cluster using the Helix algorithm. This architecture does not have disk shelves like traditional storage systems and expands by adding nodes to the cluster. Each node has pre-installed SSD drives; each node can have only one type of SSD drive, all of the same capacity. Each SolidFire cluster can have a mix of different node models and generations. Element X uses a replication factor of 2, in which blocks of data are spread across the cluster; this has no performance impact but requires more space than erasure coding. Such an architecture allows users to expand performance and capacity separately as needed. SolidFire can also set three types of QoS for its LUNs: minimum, maximum and burst. Burst draws on credits that accumulate while the LUN stays below its maximum (see the conceptual sketch below). Element X is available as software-only on commodity servers. SolidFire systems can back up data using the S3 protocol to object storage systems like StorageGRID. SolidFire can replicate data with the SnapMirror protocol to ONTAP systems and, starting with Element OS 11, to Cloud Volumes ONTAP. Veeam Backup & Replication 9.5 Update 4 implements integration with NetApp HCI and SolidFire, providing application-consistent storage snapshot capabilities, Instant VM Recovery, and Single Item Restore for some applications. CommVault Simpana also provides application-consistent storage snapshot capability for NetApp HCI and SolidFire. All HCI configurations required at least four 10/25 Gbit/s ports for connections until Element OS 11, where two ports are enough. StorageGRID StorageGRID is a software-defined storage system which provides access to data via IP-based object protocols like S3 and OpenStack Swift. It is available in the form of hardware or as software. A node in a StorageGRID cluster is an appliance, a virtual machine or a Docker container.
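Returning to the SolidFire QoS settings described above, the burst level can be thought of as spending credits that accumulate while a LUN stays below its maximum. NetApp's actual accounting is not detailed here, so the sketch below is only a conceptual model with made-up IOPS figures:

def allowed_iops(requested, minimum, maximum, burst, credits, max_credits):
    # Return (granted IOPS, updated credits) for one one-second interval.
    if requested < maximum:
        # Unused headroom below the maximum is banked as burst credits.
        credits = min(max_credits, credits + (maximum - requested))
        return max(requested, minimum), credits
    # Above the maximum, spend credits to burst, but never beyond the burst limit.
    extra = min(requested - maximum, burst - maximum, credits)
    return maximum + extra, credits - extra


credits = 0.0
for second, demand in enumerate([2000, 2000, 9000, 9000, 9000]):
    granted, credits = allowed_iops(demand, minimum=1000, maximum=5000,
                                    burst=8000, credits=credits, max_credits=12000)
    print(f"t={second}s demand={demand} granted={granted} credits left={credits}")

In this toy run the LUN is briefly allowed up to the burst level while credits last, then falls back to its maximum, which is the general behaviour the minimum/maximum/burst settings describe.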
StorageGRID is a geo-dispersed namespace clustered storage system, also known as "the grid", with an ability to make and store multiple copies (replicas) of objects (also known as Replication Factor) or in Erasure Coding (EC) manner among cluster storage nodes with object granularity based on configured policies for data availability and durability purposes. StorageGRID stores metadata separately from the objects and allows users to configure Information Lifecycle Management (ILM) policies on a per-object level to automatically satisfy and confirm changes in the cluster once changes introduced to the cluster like the cost of network usage, storage media usage changes a node was added or removed, etc. ONTAP, Cloud Backup, SANtricity, and Element X can replicate data to StorageGRID systems. SG6060 is optimized for high transactional throughput, MA, AI, and FabricPool. StorageGRID on NetApp HCI Solution Deployment of StorageGRID on NetApp HCI can be deployed in three forms: Fully contained; High performance and scale; NetApp HCI and StorageGRID appliance. E-Series Previously known as LSI Engenio RDAC after NetApp acquisition the product renamed to NetApp E-Series. It is a general-purpose enterprise storage system with two controllers for SAN protocols such as Fibre Channel, iSCSI, SAS and InfiniBand (includes SRP, iSER, and NVMe over Fabrics protocol). NetApp E-Series platform uses proprietary OS SANtricity and proprietary RAID called Dynamic Disk Pool (DDP) alongside traditional RAIDs like RAID 10, RAID 6, RAID 5, etc. In DDP pool each D-Stripe works similar to traditional RAID-4 and RAID-6 but on block level instead of entire disk level, therefore, have no dedicated parity drives. DDP compare to traditional RAID groups restores data from lost disk drive to multiple drives which provide a few times faster reconstruction time while traditional RAIDs restores lost disk drive to a dedicated parity drive. Starting with SANtricity 11.50 E-Series systems EF570 and E5700 support NVMe over Ethernet (RoCEv2) with 100Gbit/s Ethernet ports and NVMe over InfiniBand. Starting with EF600 systems are end-to-end NVMe and capable of NVMe/FC in addition to NVMe/RoCE & NVMe/InfiniBand. Sync and async mirroring are supported with SANtricity 11.50. SANtricity Unified Manager is a web-based manager that supports up to 500 EF/E-Series arrays and supports LDAP, RBAC, CA & SSL for authorization & authentication. In August 2019 NetApp announced E600 with support for NVMe/IB, NVMe/RoCE, NVMe/FC protocols, up to 44GBps of bandwidth and full-function embedded REST API. Converged Infrastructure FlexPod, nFlex and ONTAP AI are commercial names for Converged Infrastructure (CI). Converged Infrastructures are joint products of a few vendors and consists from 3 main hardware components: computing servers, switches (in some cases switches are not necessary) and NetApp storage systems: FlexPod based on Cisco Servers and Cisco Nexus switches nFlex based on Fujitsu Servers with Extreme Networks switching ONTAP AI using NVIDIA supercomputers with Melanox or Cisco Nexus switches. Converged Infrastructures have tested and validated design configurations from vendors available to end users and typically include popular infrastructure software like Docker Enterprise Edition (EE), Red Hat OpenStack Platform, VMware vSphere, Microsoft Servers and Hyper-V, SQL, Exchange, Oracle VM and Oracle DB, Citrix Xen, KVM, OpenStack, SAP HANA etc. and might include self-service portals PaaS or IaaS like Cisco UCS Director (UCSD) or others. 
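The StorageGRID ILM choice mentioned above, between storing full replicas and using erasure coding, is largely a trade-off in raw-capacity overhead. The arithmetic below is generic; the 4+2 scheme is an illustrative choice, not a statement about StorageGRID defaults:

def replication_overhead(copies):
    # Raw capacity consumed per byte of user data with full replicas.
    return float(copies)


def erasure_coding_overhead(data_fragments, parity_fragments):
    # Raw capacity consumed per byte of user data with an erasure coding scheme.
    return (data_fragments + parity_fragments) / data_fragments


user_data_tb = 100
for label, factor in [
    ("2 replicas", replication_overhead(2)),
    ("3 replicas", replication_overhead(3)),
    ("EC 4+2", erasure_coding_overhead(4, 2)),
]:
    print(f"{label:<10} -> {user_data_tb * factor:.0f} TB raw for {user_data_tb} TB of data")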
FlexPod, nFlex and ONTAP AI allows an end user to modify validated design and add or remove some of the components of the Converged Infrastructure while not all of the other Converged Infrastructures from competitors allows modification. FlexPod There are few FlexPod types: FlexPod Datacenter, FlexPod Select, FlexPod Express (Small, Medium, Large and UCS-managed), and FlexPod SF. FlexPod Datacenter usually uses Nexus switches like 5000, 7000 & 9000; Cisco UCS Blade Servers; Mid-Range or High-End NetApp FAS or AFF systems. FlexPod Select often used with BigData framework software like Hortonworks, Cloudera, or more recently, Confluent. FlexPod SF has in its architecture Nexus 9000 switches, Cisco UCS Blade servers and NetApp SolidFire storage based on Cisco UCS rack servers. Cisco UCS Director used as the orchestrator for FlexPod for a self-service portal, workflow automation and billing platform to build PaaS & IaaS. FlexPod systems supported under the cooperative center of competence. NetApp Converged Systems Advisor (CSA) is a software-as-a-service (SaaS) platform that consists of an on-premises agent and a cloud-based portal. Multi-Pod is a FlexPod Datacenter solution with a FAS or AFF system leveraging MetroCluster technology for stretching storage system between two sites. NetApp and Cisco looking to incorporate NetApp MAX Data product into FlexPod solutions once persistent memory technology will be available in UCS servers. FlexPod Datacenter has the biggest variety of designed and validated by Cisco and NetApp architectures and applications including: Microsoft: SQL, Exchange, SharePoint Hypervisors: Microsoft Hyper-V, VMware vSphere, Citrix XenServer Red Hat Enterprise Linux OpenStack, Citrix XenDesktop/XenApp, Docker Datacenter for Container Management IBM Cloud Private, Cisco Hybrid Cloud with Cisco CloudCenter, Microsoft Private Cloud, Citrix CloudPlatform, Apprenda PaaS SAP, Oracle Database, Oracle RAC on Oracle Linux, Oracle RAC on Oracle VM 3D Graphics Visualization with Citrix and NVIDIA GPU. FlexPod Datacenter for AI leveraging UCS servers with NVIDIA GPU. Epic EHR, MEDITECH EHR FlexPod types: FlexPod Express (Small, Medium, Large and UCS-managed) FlexPod Datacenter FlexPod SF FlexPod Select ONTAP AI Converged infrastructure solution based on Cisco Nexus 3000 or Mellanox Spectrum switches with 100Gbit/s ports, NetApp AFF storage systems, Nvidia DGX supercomputer servers. DGX servers interconnected with each other over RDMA over RoCE, and developed for Deep Learning based on Docker containers with NetApp Docker Plugin Trident. DGX servers connected to the storage with Ethernet connection and consume space over NFS protocol. With SnapMirror ONTAP AI solution can deliver data between edge computing, on-prem & the cloud as part of Data Fabric vision. ONTAP AI tested & validated for use with NFS & FlexGroup technologies. Combined technical support provided to the customers to all the architecture components. OnCommand Insight OnCommand Insight (OCI) is data center management software, capacity management, infrastructure analytics, centralized view into historical trends to forecast performance and capacity requirements and workload placement. OCI works with all NetApp storage systems and with competitor storage systems and in public cloud. Licensed server-based software. Memory Accelerated Data NetApp MAX Data for short, MAX Data is a proprietary Linux file system with auto-tiering from PMEM to SSD and data protection features for businesses. 
NetApp officially announced MAX Data's availability at NetApp Insight 2018 in October, supported by a number of server brands. MAX Data came from the acquisition of Plexistor in May 2017. MAX Data consists of two tiers, Tier 1 and Tier 2: cold data is destaged from Tier 1 to Tier 2 and promoted from Tier 2 back to Tier 1 when accessed, by the MAX Data tiering algorithm, transparently to applications. NetApp currently recommends a capacity ratio of 1 to 25 for Tier 1 to Tier 2. According to NetApp, MAX Data will have two modes: use as a POSIX-compatible file system (the internal name is M1FS) or as an API memory extension. Using MAX Data as a POSIX file system does not require application modifications, while the API memory extension requires applications to be modified in order to use it. MAX Data is installed on Linux hosts to provide ultra-low latency, using persistent memory such as Intel Optane DC persistent memory (Optane DCPMM), NVDIMM or DRAM (when persistence is not needed, for example for testing purposes) for Tier 1 and a NetApp AFF storage system for Tier 2. Optane DCPMM is the Intel brand name for products that use 3D XPoint technology and is supported starting with MAX Data version 1.3. MAX FS is a persistent-memory-based file system (PM-based FS) that does not require application modification, but it can also act as a Direct Access enabled file system (DAX-enabled FS) for applications optimized for persistent memory using SPDK. DAX is the mechanism that enables direct access to files stored in persistent memory without copying the data through the page cache. MAX Data has a per-server license that does not depend on CPU, memory or storage capacity, and comes in two licensing tiers: Basic and Advanced.
Cloud Business
Cloud Central is a web-based GUI that provides a multi-cloud interface, based on Qstack, for NetApp's cloud products such as Cloud Volumes Service, Cloud Sync, Cloud Insights, Cloud Volumes ONTAP and SaaS Backup across multiple public cloud providers. Cloud Manager is a service for high-level management of ONTAP-based systems on-premises and in the cloud: CVO, CVS, ONTAP Select, FAS and AFF. Cloud Manager allows SnapMirror data protection replication to be set up between systems through the GUI with drag-and-drop.
Cloud Volumes On-Prem
A storage system installed on-premises in a customer's data center and made available to the customer as a service. All updates and technical support are provided by NetApp, while the customer consumes space from the storage using a web-based GUI or API and performs data backup and replication if needed.
Cloud Volumes ONTAP
Formerly ONTAP Cloud. Cloud Volumes ONTAP (CVO) is a software-defined (SDS) version of ONTAP available in public cloud providers such as AWS, Azure, Google Cloud and IBM Cloud. Cloud Volumes ONTAP is a virtual machine running ONTAP software as a service on commodity equipment.
Cloud Volumes Service
Cloud Volumes Service is a service in Amazon AWS and Google Cloud, hosted by the public cloud provider on NetApp All-Flash FAS systems and ONTAP software, allowing data to be synchronized between the cloud and on-premises NetApp systems.
NetApp Private Storage
NetApp Private Storage (NPS) is based on a colocation service provided by the partner Equinix, which hosts NetApp storage systems in its data centers with 10 Gbit/s direct connections to public cloud providers such as Azure and AWS.
NPS storage can be connected to several cloud providers or to on-premises infrastructure at the same time, so switching between clouds does not require data migration between them.
Astra
Astra is NetApp's Kubernetes cloud service for application-consistent backups, data cloning, and data mobility across clouds and on-premises. Astra can deploy and maintain data-rich applications across Amazon Web Services, Microsoft Azure, Google Cloud and on-premises datacenters, making it easy to back up and restore data or migrate applications from one Kubernetes cluster to another in a multi-cloud environment.
SaaS Backup
NetApp SaaS Backup (previously Cloud Control) is a backup and recovery service for the SaaS offerings Microsoft Office 365 and Salesforce, providing extended, granular and custom retention capabilities compared to native cloud backup. NetApp plans to extend the service to Google Workspace (formerly G Suite and Google Apps for Work), Slack and ServiceNow.
Cloud Sync
Cloud Sync is a service for synchronizing any NAS storage system with another NAS system or with an object store such as Amazon S3 or NetApp StorageGRID using an object protocol.
Cloud Insights
Cloud Insights is a SaaS application for monitoring the infrastructure and application stack of customers consuming cloud resources; it is also built for the dynamic nature of microservices and web-scale infrastructures. Cloud Insights uses a front-end API similar to OnCommand Insight's but different technology on the back end. Cloud Insights is available as a preview and will have three editions: Free, Standard and Premium.
Cloud Secure
Cloud Secure is a SaaS security tool that identifies malicious data access and compromised users, in other words user behavior analytics. Cloud Secure uses machine learning algorithms to identify unusual patterns, can identify users that have been infected with ransomware, and can prevent them from encrypting files. Currently supported data repositories include NetApp Cloud Volumes, NetApp ONTAP, NetApp StorageGRID, OneDrive, AWS, Google Suite, HPE, Dell EMC Isilon, Dropbox, Box, @workspace and Office 365.
NDAS
NetApp Data Availability Services (NDAS) provides data protection in the cloud through a GUI. The cloud service is located only in AWS, but data can be copied to other clouds. NDAS is used for backup, data protection and disaster recovery from ONTAP storage. Starting with ONTAP 9.5, ONTAP systems have a built-in proxy application that converts NetApp snapshots with WAFL data and metadata into S3 format, unlike the FabricPool technology, which stores only data in the object storage. NDAS is one of the manifestations of Data Fabric.
Data Fabric
Often referred to as the "Data Fabric story", the variety of integrations between NetApp's products and their data mobility is considered by NetApp to be its Data Fabric vision.
Data Fabric defines the NetApp technology architecture for hybrid cloud and includes:
SnapMirror replication from SolidFire to ONTAP
SnapMirror replication from ONTAP to Cloud Backup
the FabricPool tiering feature for destaging cold data from ONTAP to StorageGRID, Amazon S3 or Azure Blob storage
Volume Encryption with FabricPool, providing secure data storage and secure over-the-wire transfer of enterprise data to a cloud provider
SnapMirror between FAS, AFF, ONTAP Select and Cloud Volumes ONTAP
archiving and DR to the public cloud
the CloudMirror feature in StorageGRID, which replicates from on-premises object storage to Amazon S3 and can trigger actions in the AWS cloud
SolidFire backup to StorageGRID or Amazon S3
Cloud Backup archiving to a variety of object storage systems (including StorageGRID) or many cloud providers
Cloud Sync replication of NAS data to object format and back, and replication to Cloud Volumes Service
data backup to on-premises storage from SaaS Backup
SANtricity Cloud Connector for block-based backup, copy, and restore of E-Series volumes to S3
NetApp Data Availability Services for data protection from ONTAP to cloud S3 storage with backup, DR and data mining capabilities, etc.
Software Integrations
NetApp products can be integrated with a variety of software products, mostly for ONTAP systems.
Automation
NetApp provides a variety of automation interfaces for its products, either directly over HTTP or through middleware software.
Docker
NetApp Trident software provides a persistent volume plugin for Docker containers with both the Kubernetes and Swarm orchestrators and supports ONTAP, SolidFire, E-Series, Azure NetApp Files (ANF), Cloud Volumes and NetApp Kubernetes Service in the cloud. NetApp and Cisco also sell CI architectures which incorporate the Trident plugin: FlexPod Datacenter with Docker Enterprise Edition and ONTAP AI.
CI/CD
The NetApp Jenkins Framework provides integration with ONTAP storage for DevOps, accelerating development with automated operations such as provisioning and data-set cloning for test and development, and leveraging ONTAP for version control and for creating and deleting checkpoints. Jenkins also integrates with the NetApp Service Level Manager software, which provides a RESTful API for guaranteeing a level of storage performance. Apprenda and CloudBees integrate with and accelerate DevOps through the Docker persistent volume plugin and the Jenkins Framework integration. Apprenda can be integrated with OpenStack running on top of FlexPod.
Backup and Recovery
CommVault, Veeam and Veritas have integrations with ONTAP, SolidFire, Cloud Backup and E-Series, leveraging storage capabilities such as snapshots and cloning for testing backup copies and SnapMirror for Backup and Recovery (B&R), Disaster Recovery (DR) and data archiving, improving restore times and the number of recovery points (see RPO/RTO). Cloud Backup integrates with nearly all B&R products for archiving, since it is presented as an ordinary NAS share to the B&R software. Backup and recovery software from other vendors, such as IBM Spectrum Protect, EMC NetWorker, HP Data Protector, Dell vRanger and Acronis Backup, also has some level of integration with NetApp storage systems.
Enterprise Applications
NetApp systems can integrate with enterprise applications for backup, cloning, provisioning, and other self-service storage features.
Oracle Database can be connected using the Direct NFS (dNFS) client built into the database software, which provides network performance, resiliency and load balancing for the NFS protocol with ONTAP systems. Oracle Database, Microsoft SQL Server, IBM DB2, MySQL, MongoDB, SAP HANA, Microsoft Exchange, VMware vSphere, Citrix Xen and KVM integrate with NetApp systems for provisioning, cloning and additional backup and recovery capabilities such as SnapShots, SnapVault and SnapMirror, using a variety of software including NetApp's SnapCenter and SnapCreator.
OpenStack
NetApp systems integrate with open source projects such as OpenStack Cinder for block storage (SolidFire, ONTAP, E-Series, OnCommand Insight, Cloud Backup), OpenStack Manila for shared file systems (ONTAP, OnCommand Insight), Docker persistent volumes through the Trident plugin (SolidFire, ONTAP, E-Series) and others.
OEM
IBM used to OEM NetApp FAS systems under its own brand as the IBM N-series; this partnership ended May 29, 2014. Dell OEMs NetApp E-Series under its own PowerVault MD name. On September 13, 2018, Lenovo and NetApp announced a technology partnership under which Lenovo OEMs NetApp products under its own names: Lenovo ThinkSystem DE (using NetApp's EF and E-Series array technology) and ThinkSystem DM, which uses ONTAP software with Lenovo servers and supports FC-NVMe (analogous to NetApp FAS and AFF systems). Vector Data builds rugged and carrier-grade versions of NetApp FAS, AFF, E-Series and SolidFire products with -48 V DC power and other customizations under its Vault product line.
Reception
Controversy
Syrian surveillance
In November 2011, during the 2011 Syrian uprising, NetApp was named as one of several companies whose products were being used in the Syrian government crackdown. The equipment was allegedly sold to the Syrians by an authorized NetApp reseller. On April 7, 2014, NetApp was notified by the US Department of Commerce "that it had completed its review of this matter and determined that NetApp had not violated the U.S. export laws", and that the file on the matter had been closed.
Legal dispute with Sun Microsystems
In September 2007, NetApp started proceedings against Sun Microsystems, claiming that the ZFS file system developed by Sun infringed its patents. The following month, Sun announced plans to countersue based on alleged misuse by NetApp of Sun's own patented technology. Several of NetApp's patent claims were rejected on the basis of prior art after re-examination by the United States Patent and Trademark Office. On September 9, 2010, NetApp announced an agreement with Oracle Corporation (the new owner of Sun Microsystems) to dismiss the suits.
Accolades
NetApp was listed amongst the Silicon Valley Top 25 Corporate Philanthropists in 2013. NetApp was named Brand of the Year by the Think Global Awards in 2019.
See also
NetApp FAS
NetApp ONTAP Operating System, used in NetApp storage systems
Write Anywhere File Layout (WAFL), used in ONTAP storage systems
Team NetApp
Kaleidescape
References
Further reading
External links
NetApp
Companies based in Sunnyvale, California
Computer companies established in 1992
American companies established in 1992
1995 initial public offerings
Computer companies of the United States
Computer storage companies
Storage Area Network companies
Technology companies based in the San Francisco Bay Area
Companies listed on the Nasdaq
281069
https://en.wikipedia.org/wiki/Sinbad%3A%20Legend%20of%20the%20Seven%20Seas
Sinbad: Legend of the Seven Seas
Sinbad: Legend of the Seven Seas (also known as Sinbad) is a 2003 American animated adventure film. It is produced by DreamWorks Animation and distributed by DreamWorks Pictures. The film, which combines traditional animation with some computer animation, was directed by Tim Johnson and Patrick Gilmore (in the latter's directorial debut) and written by John Logan, and stars the voices of Brad Pitt, Catherine Zeta-Jones, Michelle Pfeiffer, and Joseph Fiennes. It covers the story of Sinbad (voiced by Pitt), a pirate who travels the sea with his dog and his loyal crew, alongside Marina (voiced by Zeta-Jones), the fiancée of his childhood friend Prince Proteus (voiced by Fiennes), to recover the stolen Book of Peace from Eris (voiced by Pfeiffer) to save Proteus from accepting Sinbad's death sentence. The film blends elements from the One Thousand and One Nights and classical mythology. The film was released on July 2, 2003, and received mixed reviews from critics, who praised the animation, action scenes, and voice performances but criticized the storyline and polarizing CGI. Grossing $80.8 million on a $60 million budget, Sinbad was a box-office disappointment. DreamWorks suffered a $125 million loss on a string of films, which nearly bankrupted the company. It is, to date, the final DreamWorks Animation film to use traditional animation, as the studio abandoned it in favor of computer animation. DreamWorks did, however, bring 2D animation back for the 5-minute short film Bird Karma in 2018. Plot Sinbad and his pirate crew attempt to steal the magical "Book of Peace" and hold it for ransom as one last job before retiring to Fiji. Sinbad is surprised to see it is being protected while on-board to Syracuse, Sicily by Prince Proteus of Syracuse. Proteus was Sinbad's best friend as a child, and he tells him if it ever meant anything he can prove it. Sinbad tries to steal the book anyway but is prevented when Cetus attacks the ship. The two work together to fight off Cetus and for a moment reaffirm their bond. Just when it seems the beast is defeated, Sinbad is dragged off the ship. Proteus goes to save Sinbad, but he is stopped by his crew. Drawn underwater by Cetus, Sinbad is saved by the beautiful Goddess of Discord, Eris, who offers him any boon he desires in exchange for the Book of Peace. Sinbad and his crew go to Syracuse to steal the Book, but after seeing Proteus with his fiancé Lady Marina, Sinbad abandons the mission without giving a motive. Anticipating this, Eris impersonates Sinbad and steals the Book herself. Sinbad is sentenced to death, whereupon Proteus sends Sinbad to retrieve the Book instead, placing himself as a hostage, and Marina goes to make sure that Sinbad succeeds. To prevent them from succeeding, Eris sends a group of mythical siren s, who entrance and seduce the men aboard Sinbad's ship with their hypnotic singing voices, but do not affect Marina, who pilots the ship to safety and wins the favor of the crew. However, as she and Sinbad continue to argue with each other, Eris notices their disharmony and sends in a roc. The Roc captures Marina, but she is rescued by Sinbad, and they successfully defeat the creature, causing a reconciliation between the two. After these and other incidents, Sinbad and Marina talk in a brief moment of peace - Marina reveals that she's always dreamed of a life on the sea, and Sinbad reveals that he distanced himself from Proteus 10 years earlier because he loved Marina. 
They then suddenly reach and enter Eris' realm, where she reveals that her plan was to maneuver Proteus into Sinbad's place, leaving Syracuse without an heir and causing it to collapse into chaos. Through Marina, Eris agrees to surrender the Book of Peace only if Sinbad truthfully tells her whether he will return to Syracuse to accept blame and be executed if he does not get the Book. She gives him her word that she will honor the deal, making it unbreakable even for a god. When he answers that he will return, Eris calls him a liar and returns him and Marina to the mortal world. Ashamed, Sinbad admits the Goddess of Discord is right, truly believing deep down that he is a selfish, black-hearted liar, only for Marina to tell him that Eris is wrong, giving Sinbad a change of heart. In Syracuse, the time allotted to Sinbad has elapsed. Proteus readies himself to be beheaded, but at the last minute Sinbad appears and takes his place. An enraged Eris appears suddenly and saves Sinbad by shattering the executioner's sword to pieces. Sinbad, shocked, realizes that this was still part of her test and that he has beaten her by proving his answer to be true after all. Eris is furious but cannot go back on her word as a goddess, and begrudgingly gives the Book to Sinbad. With the true culprit revealed, Sinbad is pardoned for the crime of stealing the Book and is now well respected. With the Book restored to Syracuse, Proteus and Sinbad part, still the best of friends. Sinbad and his crew prepare to leave on another voyage, leaving Marina in Syracuse. Unbeknownst to Sinbad, Proteus sees that Marina has fallen deeply in love with him and with life on the sea, and releases her from their engagement, sending her to join Sinbad's ship. Marina surprises Sinbad by revealing her presence on the ship just as it begins to sail, and the two share a kiss. Now together, they and the crew set out on another long voyage as the ship sails into the sunset.
Voice cast
Brad Pitt as Sinbad, an adventurous pirate and sailor who plans on retiring to Fiji.
Catherine Zeta-Jones as Lady Marina, a princess and Thracian ambassador to Syracuse.
Michelle Pfeiffer as Eris, the beautiful and manipulative Goddess of Discord and Chaos who wants to create destruction throughout the world.
Joseph Fiennes as Prince Proteus of Syracuse, Sinbad's noble childhood friend and Marina's fiancé.
Dennis Haysbert as Kale, Sinbad's first mate.
Adriano Giannini as "Rat", an Italian lookout of Sinbad's crew.
Timothy West as King Dymas of Syracuse, Proteus' father.
Jim Cummings as Luca, an elderly member of Sinbad's crew.
Conrad Vernon as Jed, a comically heavily-armed member of Sinbad's crew.
Raman Hui as Jin, an Asian member of Sinbad's crew who frequently makes bets with Li.
Chung Chan as Li, an Asian member of Sinbad's crew who frequently makes bets with Jin.
Andrew Birch as Grum and Chum, members of Sinbad's crew.
Frank Welker (uncredited) as Spike, Sinbad's pet mastiff, for whom Marina grows a soft spot.
Chris Miller as Tower Guard.
Production
Development
Shortly after co-writing Aladdin (1992) for Disney, screenwriters Ted Elliott and Terry Rossio came up with the idea of adapting the story of Sinbad the Sailor in the vein of the story of Damon and Pythias before settling on a love triangle. They wrote a treatment inspired by screwball romantic comedy films, with Sinbad as a reserved apprentice cartographer who joins Peri, a free-spirited female smuggler, on an adventure and falls in love.
The story was based largely on the 'Simbad' comic book written and illustrated by Elena Poirier (1949-1956). In July 1992, Disney had announced they were adapting the story into a potential animated feature. The project was cancelled in 1993. In 1994, after Jeffrey Katzenberg founded DreamWorks and started Prince of Egypt 's works, he decided to re-start some ideas that Disney cancelled, like The Road to El Dorado or Sinbad; he involved Antz 's director Tim Johnson in the making of an animated feature about Sinbad the Sailor. Shortly after writing Gladiator (2000), John Logan was approached by Jeffrey Katzenberg to write the script for an animated film. When he was offered the story of Sinbad, Logan researched the multiple tales of the character before settling on depicting the Greek and Roman versions. He described his first draft script as "very complex, the relationships were very adult. It was too intense in terms of the drama for the audience that this movie was aimed at." Casting Russell Crowe was originally set to voice Sinbad, but he dropped out due to scheduling conflicts. He was replaced by Brad Pitt, who wanted to make a film that his nieces and nephews could see. He explained, "They can't get into my movies. People's heads getting cut off, and all that." Pitt had already tried to narrate DreamWorks' previous animated film Spirit: Stallion of the Cimarron, but "it didn't work", with Matt Damon taking over the role. Pitt's purist intentions worried him that his Missourian accent would not be suitable for the Middle Eastern character, but was persuaded by the filmmakers that his accent would lighten the mood. Michelle Pfeiffer, who voices Eris, the Goddess of Discord, had struggles with pinning down the character's personality, initially finding her "too sexual," and then too dull. After the third rewrite, Pfeiffer called Jeffrey Katzenberg and told him, "You know, you really can fire me," but he assured her that this was just part of the process. Animation In January 2001, it was reported that DreamWorks Animation would completely transfer their animation workflow into using the Linux operating system. Previously, their animation and rendering software had used Silicon Graphics Image servers and workstations, but as their hardware began to show slowness, DreamWorks began looking for an alternate platform for superior optimal performance in order to save hardware costs. In 2002, they decided to partner with Hewlett-Packard for a three-year deal for which they used their dual-processor HP workstations and ProLiant servers running Red Hat Linux software. Starting with Spirit: Stallion of the Cimarron (2002), they had replaced its entire render farm with x86-based Linux servers. Sinbad: Legend of the Seven Seas was the first DreamWorks Animation production to completely utilize Linux software, with more than 250 workstations used. Starting with storyboards, the artists first drew sketches on paper to visualize the scene, which were later edited into animatics. For the character animation, rough character sketches were passed through the ToonShooter software, which digitized the sketches. From that point, the animators were able to easily integrate the animation into existing scenes. Production software lead Derek Chan explained, "ToonShooter is an internal tool we wrote for Linux. It captures low resolution 640 x 480 line art that the artists use to time the film." The animated characters were then digitally colored using the Linux software application, InkAndPaint. 
For the visual effects, DreamWorks Animation had used Autodesk Maya to create water effects. However, the rendering was found to be too photorealistic, and senior software engineer for advanced R&D future films Galen Gornowicz sought to modify the effects so as to closely match the movie's visual development renderings. Craig Ring, who served as digital supervisor on the film, described four major approaches to water used in the film: compositing ripple distortion over the painted backgrounds; creating fluid simulation; developing a rapid slashing technique to create a surface and then send ripples through the surface; and better integrating the 3D visual effects with stylized, hand drawn splashes. Release Marketing A PC game based on the film was released by Atari, who worked closely with one of the film's directors, Patrick Gilmore. It was released before the VHS and DVD release of the film. Burger King released six promotional toys at the time of the film's release, and each toy came with a "Constellation Card." Hasbro produced a series of Sinbad figures as part of its G.I. JOE action figure brand. The figures were 12" tall and came with a mythical monster. Home media Sinbad: Legend of the Seven Seas was released on DVD and VHS on November 18, 2003, by DreamWorks Home Entertainment. The DVD included a six-minute interactive short animated film Cyclops Island, featuring an encounter with the eponymous Cyclopes. In July 2014, the film's distribution rights were purchased by DreamWorks Animation from Paramount Pictures (owners of the pre-2011 DreamWorks Pictures library) and transferred to 20th Century Fox before reverting to Universal Studios in 2018; Universal Pictures Home Entertainment subsequently released the film on Blu-ray Disc on June 4, 2019, with the Cyclops Island short removed. Cyclops Island Cyclops Island (also known as Sinbad and the Cyclops Island) is a traditionally animated interactive short film that acts as a sequel to Sinbad: Legend of the Seven Seas, taking place shortly after the events of the previous film. Instead of travelling to Fiji, Sinbad and his crew decide to spend their vacation on the tropical island of Krakatoa. While attempting to find a source of fresh water on the island, Marina and Spike run into a tribe of Cyclopes who they have to defeat with the help of Sinbad, Kale, and Rat. When Sinbad dislodges a large boulder during the fight, a volcano erupts and the island goes down in flames. Marina then suggests looking for a nicer destination for their next holiday, such as Pompeii. While watching the short film on DVD, the viewer can choose to follow different characters to see different angles of the same story. The viewer can follow Sinbad, the duo of Kale and Rat, Marina, or Spike. Brad Pitt, Catherine Zeta-Jones, Dennis Haysbert, and Adriano Giannini all reprised their roles from the original film. On the film's VHS release, the short film takes place after the movie ends but before the credits roll, and is shown in its entirety. Reception Critical response On the review aggregator website Rotten Tomatoes, Sinbad: Legend of the Seven Seas has an approval rating of 45% based on 128 reviews with an average rating of 5.63/10. The site's consensus reads: "Competent, but not magical." Metacritic, which assigns a normalized rating, has a score of 48 based on 33 reviews, indicating "mixed or average reviews." 
Kirk Honeycutt of The Hollywood Reporter praised the film, writing that "Sinbad is a cartoon that does what matinée [afternoon showings] moviemakers of old never had the resources to do: allow their imagination to run amok in an ancient world that never existed―but should have." He praised the animation and backgrounds as "lushly rendered by the animation artists, displaying details not only from the world according to Ray Harryhausen; but from the Greco-Roman world and Middle East. As with all good animation, these serve as backdrops to the comedy and adventure the characters encounter every second." Roger Ebert of the Chicago Sun-Times gave the film three-and-a-half stars, concluding that "Sinbad: Legend of the Seven Seas is another worthy entry in the recent renaissance of animation, and in the summer that has already given us Finding Nemo, it's a reminder that animation is the most liberating of movie genres, freed of gravity, plausibility, and even the matters of lighting and focus. There is no way that Syracuse could exist outside animation, and as we watch it, we are sailing over the edge of the human imagination." Claudia Puig, reviewing for USA Today, summarized that "Sinbad is a swashbuckling adventure saga that probably will appeal more to older kids. But it's not a wondrous tale. The effects are competent, the action has exciting moments, and the story is interesting enough, but the parts don't add up to a compelling sum." Todd McCarthy of Variety wrote, "A passably entertaining animated entry from DreamWorks that's closer to The Road to El Dorado than to Shrek, Sinbad: Legend of the Seven Seas tries too strenuously to contemporize ancient settings and characters for the sake of connecting with modern kids." Elvis Mitchell of The New York Times panned the film, suggesting the film featured a "boatload of celebrities slumming through another not-quite-thawed adventure story." Additionally, he claimed "more thought and care were lavished on the design of the monsters than on the hand-drawn lead characters, who have the same kind of sketchy features as the stars of those animated Bible story cartoons sold on late-night infomercials." There was additional criticism for the film's departure from its Arabic origin. Jack Shaheen, a critic of Hollywood's portrayal of Arabs, believed that "the studio feared financial and possibly political hardships if they made the film's hero Arab," and claimed that "if no attempt is made to challenge negative stereotypes about Arabs, the misperceptions continue. It's regrettable that the opportunity wasn't taken to change them, especially in the minds of young people." At one point, Shaheen asked Katzenberg to include some references to Arabic culture in the film. According to Shaheen, "he didn't seem surprised that I mentioned it, which presumably means that it was discussed early on in the development of the film." Box office On the film's opening weekend, the film earned $6.9 million and $10 million since its Wednesday start. It reached sixth place at the box office and faced early competition from Terminator 3: Rise of the Machines, Legally Blonde 2: Red, White & Blonde, Charlie's Angels: Full Throttle, Finding Nemo, and Hulk. The week after its release, the similarly themed film Pirates of the Caribbean: The Curse of the Black Pearl premiered, in which Sinbad grossed $4.3 million finishing seventh. 
The film closed on October 9, 2003, after earning $26.5 million in the United States and Canada and $54.3 million overseas, for a worldwide total of $80.7 million. The box office run of Sinbad flopped causing a loss of $125 million for DreamWorks Animation. When speaking of the disappointment, Katzenberg commented, "I think the idea of a traditional story being told using traditional animation is likely a thing of the past." Video game A video game based on the film developed by Small Rockets and published by Atari was released on October 21, 2003, for Microsoft Windows. References External links 2003 films American films 2003 animated films 2003 computer-animated films 2000s American animated films 2000s fantasy adventure films American children's animated adventure films American children's animated fantasy films American fantasy adventure films American fantasy-comedy films Animated adventure films Animated comedy films Animated films based on literature 2000s children's fantasy films DreamWorks Animation animated films DreamWorks Pictures films Films scored by Harry Gregson-Williams Films directed by Tim Johnson Films produced by Jeffrey Katzenberg Films set in Sicily Pirate films Films set in the Mediterranean Sea Films with screenplays by John Logan Films based on Sinbad the Sailor Animated films based on classical mythology 2000s children's animated films Roc (mythology)
20894
https://en.wikipedia.org/wiki/Maximum%20transmission%20unit
Maximum transmission unit
In computer networking, the maximum transmission unit (MTU) is the size of the largest protocol data unit (PDU) that can be communicated in a single network layer transaction. The MTU relates to, but is not identical to the maximum frame size that can be transported on the data link layer, e.g. Ethernet frame. Larger MTU is associated with reduced overhead. Smaller MTU values can reduce network delay. In many cases, MTU is dependent on underlying network capabilities and must be adjusted manually or automatically so as to not exceed these capabilities. MTU parameters may appear in association with a communications interface or standard. Some systems may decide MTU at connect time, e.g. using Path MTU Discovery. Applicability MTUs apply to communications protocols and network layers. The MTU is specified in terms of bytes or octets of the largest PDU that the layer can pass onwards. MTU parameters usually appear in association with a communications interface (NIC, serial port, etc.). Standards (Ethernet, for example) can fix the size of an MTU; or systems (such as point-to-point serial links) may decide MTU at connect time. Underlying data link and physical layers usually add overhead to the network layer data to be transported, so for a given maximum frame size of a medium, one needs to subtract the amount of overhead to calculate that medium's MTU. For example, with Ethernet, the maximum frame size is 1518 bytes, 18 bytes of which are overhead (header and frame check sequence), resulting in an MTU of 1500 bytes. Tradeoffs A larger MTU brings greater efficiency because each network packet carries more user data while protocol overheads, such as headers or underlying per-packet delays, remain fixed; the resulting higher efficiency means an improvement in bulk protocol throughput. A larger MTU also requires processing of fewer packets for the same amount of data. In some systems, per-packet-processing can be a critical performance limitation. However, this gain is not without a downside. Large packets occupy a slow link for more time than a smaller packet, causing greater delays to subsequent packets, and increasing network delay and delay variation. For example, a 1500-byte packet, the largest allowed by Ethernet at the network layer, ties up a 14.4k modem for about one second. Large packets are also problematic in the presence of communications errors. If no forward error correction is used, corruption of a single bit in a packet requires that the entire packet be retransmitted, which can be costly. At a given bit error rate, larger packets are more susceptible to corruption. Their greater payload makes retransmissions of larger packets take longer. Despite the negative effects on retransmission duration, large packets can still have a net positive effect on end-to-end TCP performance. Internet protocol The Internet protocol suite was designed to work over many different networking technologies, each of which may use packets of different sizes. While a host will know the MTU of its own interface and possibly that of its peers (from initial handshakes), it will not initially know the lowest MTU in a chain of links to other peers. Another potential problem is that higher-level protocols may create packets larger than even the local link supports. IPv4 allows fragmentation which divides the datagram into pieces, each small enough to accommodate a specified MTU limitation. This fragmentation process takes place at the internet layer. 
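As a back-of-the-envelope illustration of the overhead and delay tradeoffs described above, the following sketch derives the Ethernet MTU from the maximum frame size and compares the serialization delay of a full-size packet on a slow and a fast link; the two link speeds are arbitrary examples.

```python
ETHERNET_MAX_FRAME = 1518  # bytes, untagged frame
ETHERNET_OVERHEAD = 18     # header plus frame check sequence

mtu = ETHERNET_MAX_FRAME - ETHERNET_OVERHEAD
print(mtu)  # 1500

def serialization_delay(packet_bytes, link_bits_per_second):
    """Time the link is occupied transmitting one packet, in seconds."""
    return packet_bytes * 8 / link_bits_per_second

# A 1500-byte packet ties up a 14.4 kbit/s modem for roughly 0.83 s,
# but only about 1.2 microseconds on a 10 Gbit/s link.
print(serialization_delay(1500, 14_400))
print(serialization_delay(1500, 10_000_000_000))
```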
The fragmented packets are marked so that the IP layer of the destination host knows it should reassemble the packets into the original datagram. All fragments of a packet must arrive for the packet to be considered received. If the network drops any fragment, the entire packet is lost. When the number of packets that must be fragmented or the number of fragments is great, fragmentation can cause unreasonable or unnecessary overhead. For example, various tunneling situations may exceed the MTU by very little as they add just a header's worth of data. The addition is small, but each packet now has to be sent in two fragments, the second of which carries very little payload. The same amount of payload is being moved, but every intermediate router has to forward twice as many packets. The Internet Protocol requires that hosts must be able to process IP datagrams of at least 576 bytes (for IPv4) or 1280 bytes (for IPv6). However, this does not preclude link layers with an MTU smaller than this minimum MTU from conveying IP data. For example, according to IPv6's specification, if a particular link layer cannot deliver an IP datagram of 1280 bytes in a single frame, then the link layer must provide its own fragmentation and reassembly mechanism, separate from the IP fragmentation mechanism, to ensure that a 1280-byte IP datagram can be delivered, intact, to the IP layer. MTUs for common media In the context of Internet Protocol, MTU refers to the maximum size of an IP packet that can be transmitted without fragmentation over a given medium. The size of an IP packet includes IP headers but excludes headers from the link layer. In the case of an Ethernet frame this adds an overhead of 18 bytes, or 22 bytes with an IEEE 802.1Q tag for VLAN tagging or class of service. The MTU should not be confused with the minimum datagram size that all hosts must be prepared to accept. This is 576 bytes for IPv4 and of 1280 bytes for IPv6. Ethernet maximum frame size The IP MTU and Ethernet maximum frame size are configured separately. In Ethernet switch configuration, MTU may refer to Ethernet maximum frame size. In Ethernet-based routers, MTU normally refers to the IP MTU. If jumbo frames are allowed in a network, the IP MTU should also be adjusted upwards to take advantage of this. Since the IP packet is carried by an Ethernet frame, the Ethernet frame has to be larger than the IP packet. With the normal untagged Ethernet frame overhead of 18 bytes, the Ethernet maximum frame size is 1518 bytes. If a 1500 byte IP packet is to be carried over a tagged Ethernet connection, the Ethernet frame maximum size needs to be 1522 due to the larger size of an 802.1Q tagged frame. 802.3ac increases the standard Ethernet maximum frame size to accommodate this. Path MTU Discovery The Internet Protocol defines the path MTU of an Internet transmission path as the smallest MTU supported by any of the hops on the path between a source and destination. Put another way, the path MTU is the largest packet size that can traverse this path without suffering fragmentation. Path MTU Discovery is a technique for determining the path MTU between two IP hosts, defined for both IPv4 and IPv6. It works by sending packets with the DF (don't fragment) option in the IP header set. Any device along the path whose MTU is smaller than the packet will drop such packets and send back an ICMP Destination Unreachable (Datagram Too Big) message which indicates its MTU. 
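The discovery loop can be sketched abstractly without touching raw sockets or ICMP: model the path as a list of link MTUs, "send" a probe with don't-fragment semantics, and shrink the probe whenever a hop reports that it is too big. This is only a simulation of the mechanism described here, not a network implementation.

```python
def probe(path_mtus, size):
    """Simulate sending a DF-marked packet of `size` bytes along a path.

    Returns None on success, or the MTU of the first hop that would have to
    fragment the packet (the value an ICMP 'Datagram Too Big' reply carries).
    """
    for link_mtu in path_mtus:
        if size > link_mtu:
            return link_mtu
    return None

def discover_path_mtu(path_mtus, initial=1500):
    size = initial
    while (reported := probe(path_mtus, size)) is not None:
        size = reported  # reduce to the advertised MTU and try again
    return size

# A path whose middle hop is a tunnel with a smaller MTU.
print(discover_path_mtu([1500, 1400, 1500]))  # 1400
```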
The MTU reported in such a message allows the source host to reduce its assumed path MTU appropriately. The process repeats until the MTU becomes small enough to traverse the entire path without fragmentation. Standard Ethernet supports an MTU of 1500 bytes, and Ethernet implementations supporting jumbo frames allow for an MTU of up to 9000 bytes. However, border protocols like PPPoE will reduce this. Path MTU Discovery exposes the difference between the MTU seen by Ethernet end-nodes and the path MTU. Unfortunately, increasing numbers of networks drop ICMP traffic (for example, to prevent denial-of-service attacks), which prevents path MTU discovery from working. Packetization Layer Path MTU Discovery is a Path MTU Discovery technique which responds more robustly to ICMP filtering. In an IP network, the path from the source address to the destination address may change in response to various events (load-balancing, congestion, outages, etc.) and this could result in the path MTU changing (sometimes repeatedly) during a transmission, which may introduce further packet drops before the host finds a new reliable MTU.
A failure of Path MTU Discovery can make some sites behind badly configured firewalls unreachable. A connection with mismatched MTU may work for low-volume data but fail as soon as a host sends a large block of data. For example, with Internet Relay Chat a connecting client might see the initial messages up to and including the initial ping (sent by the server as an anti-spoofing measure), but get no response after that. This is because the large set of welcome messages sent at that point are packets that exceed the path MTU. One can possibly work around this, depending on which part of the network one controls; for example, one can change the MSS (maximum segment size) in the initial packet that sets up the TCP connection at one's firewall.
In other contexts MTU is sometimes used to describe the maximum PDU size in communication layers other than the network layer. Cisco Systems use L2 MTU for the maximum frame size. Dell/Force10 use MTU for the maximum frame size. Hewlett Packard used just MTU for the maximum frame size, including the optional IEEE 802.1Q tag. Juniper Networks use several MTU terms: Physical Interface MTU (L3 MTU plus some unspecified protocol overhead), Logical Interface MTU (consistent with the IETF MTU) and Maximum MTU (maximum configurable frame size for jumbo frames).
The transmission of a packet on a physical network segment that is larger than the segment's MTU is known as jabber. This is almost always caused by faulty devices. Network switches and some repeater hubs have a built-in capability to detect when a device is jabbering.
References
External links
Tweaking your MTU / RWin for Orange Broadband Users
How to set the TCP MSS value using iptables
mturoute – a console utility for debugging mtu problems
Packets (information technology)
24148652
https://en.wikipedia.org/wiki/Informatics%20Europe
Informatics Europe
Informatics Europe is the European association of university departments and research laboratories in the field of informatics (also known as computer science).
Overview
Founded in 2006, Informatics Europe is a non-profit organization with its head office in Zurich, Switzerland, and has grown to represent over 140 members from 32 European countries. Members are institutes rather than individuals and include university departments of Informatics, Computer Science, Computing, IT and ICT as well as public and industrial IT research institutes and national associations in Europe. In addition, Informatics Europe liaises with scientific organisations such as the European Research Consortium for Informatics and Mathematics (ERCIM), the Association for Computing Machinery (ACM) and the Computing Research Association (CRA).
History
On 21 October 2005, the "1st European Computer Science Summit" brought together, for the first time, heads of Informatics and Computer Science departments throughout Europe. This landmark event was a joint undertaking of the Computer Science departments of the two branches of the Swiss Federal Institute of Technology: EPFL (Lausanne) and ETH (Zurich). Besides the keynotes, talks, panels and workshops, the result of the Summit was the unanimous view that European computer scientists needed an organisation with aims and scope similar to those of the CRA in the US, extended, in light of the situation in Europe, to cover education as well as research. As a result, Informatics Europe was created with the aim of becoming the recognized voice of the European computer science community, including both universities and research centres. Bertrand Meyer from ETH Zurich, one of the founding members of the organisation, served as its first President from 2006 to 2011. Carlo Ghezzi, Politecnico di Milano, was the second President from 2012 to 2015. Lynda Hardman, CWI / Utrecht University, was the third President, serving from 2016 to 2017. The current President, Enrico Nardelli from Università di Roma 'Tor Vergata', took office in 2018.
Mission and Activities
Informatics Europe is involved in a number of activities with the mission to foster quality research, education, and knowledge transfer in Informatics in Europe.
ECSS - European Computer Science Summit
The European Computer Science Summit takes place once a year and offers a platform where leaders and decision makers in Informatics research and education in Europe gather to debate strategic themes and trends related to research, education and policies in Informatics. Past Summits since 2005:
ECSS 2021, hybrid event (Madrid, online)
ECSS 2020, online event
ECSS 2019, Rome
ECSS 2018, Gothenburg
ECSS 2017, Lisbon
ECSS 2016, Budapest
ECSS 2015, Vienna
ECSS 2014, Wroclaw
ECSS 2013, Amsterdam
ECSS 2012, Barcelona
ECSS 2011, Milan
ECSS 2010, Prague
ECSS 2009, Paris
ECSS 2008, Zurich
ECSS 2007, Berlin
ECSS 2006, Zurich
ECSS 2005, Zurich
Working Groups
Informatics Europe fosters various working groups to shape strategic priorities within the European Informatics community. Each working group focuses on a specific topic or goal that is agreed at the beginning of each year.
The current groups are as follows: Data Collection and Reporting - aims at bringing forward solid, accurate facts and figures about Informatics research and education in Europe Ethics - Produces a summary report, outlining the possible approaches, the state of the art, and suggestions and guidelines for inclusion of topics related to ethics, responsibility and social impacts in Informatics university degree programs Informatics Education - aims at getting academia, industry, government and society together to influence education policy in Europe towards the full recognition and establishment of Informatics as a foundational discipline in schools Research Evaluation - examines all current changes and brings forward an updated set of recommendations for research evaluation in Informatics and closely related areas The Wide Role of Informatics at Universities - investigates what universities are doing to ensure that non-informatics teaching and research is informed by best practice in Informatics Women in Informatics Research and Education - promotes actions that help improve gender balance at all stages of the career path in Informatics. Awards Each year, Informatics Europe presents two awards recognizing outstanding initiatives that advance the quality of research and education in Informatics in Europe. The Best Practices in Education Award recognizes educational initiatives across Europe that improve the quality of Informatics teaching and the attractiveness of the discipline. The Minerva Informatics Equality Award recognizes best practices in Departments or Faculties of European Universities and Research Labs that encourage and support the careers of women in Informatics research and education. Services Informatics Higher Education Data Portal The Higher Education Data Portal is a project of Informatics Europe created with the goal of providing members, the Informatics academic community, policymakers, industry and other stakeholders a complete and reliable picture of the state of Informatics (Computer Science, Computing, IT, ICT) higher education in Europe. The full dataset, consisting of 8 years of data from over 20 countries (annually updated in October), is the only one of its kind in Europe. The focus on Informatics, and the central role played by data curation makes it unique, reliable and relevant. Leaders Workshop Every year Informatics Europe organises a special Workshop for Leaders of Informatics Research and Education, as part of the ECSS global program, to address specific challenges they encounter in their role. The workshop is a unique networking forum for leaders of Informatics institutions and research groups, and focuses on concrete issues and practical solutions. Informatics Job Platform The Informatics Europe Job Platform lists open scientific positions in Informatics (Computer Science, Computing, Computer Engineering, IT, ICT) and closely related fields and includes positions requiring a PhD degree or higher (e.g.: scientific researcher, post-doc, professor, etc.). Department Evaluation The Department Evaluation is a service offered by Informatics Europe to assess research quality in Informatics, Computer Science and IT in university departments/faculties. Informatics Research & Education Directory The Research & Education Directory includes institutions (faculties, departments, institutes, etc.) doing research and offering education in Informatics (Computer Science, Computing, Computer Engineering, IT, ICT) in Europe. 
The directory provides quick access to these institutions and to more detailed information presented by them. Publications Ethical/Social Impact of Informatics as a Study Subject in Informatics University Degree Programs (2019, Paola Mello, Enrico Nardelli) The Wide Role of Informatics at Universities (2019, Elisabetta Di Nitto, Susan Eisenbach, Inmaculada García Fernández, Eduard Gröller) Industry Funding for Academic Research in Informatics in Europe. Pilot Study (2018, Data Collection and Reporting Working Group of Informatics Europe) Informatics Education in Europe: Institutions, Degrees, Students, Positions, Salaries. Key Data 2012-2017 (2018, Svetlana Tikhonenko, Cristina Pereira) Informatics Research Evaluation (2018, Research Evaluation Working Group of Informatics Europe) Informatics for All: The strategy (2018, Informatics Europe & ACM Europe) When Computers Decide: Recommendations on Machine-Learned Automated Decision Making (2018, Informatics Europe & EUACM, joint report with ACM Europe) Informatics Education in Europe: Institutions, Degrees, Students, Positions, Salaries. Key Data 2011-2016 (2017, Cristina Pereira, Svetlana Tikhonenko) Informatics Education in Europe: Are We All In The Same Boat? (2017, The Committee on European Computing Education. Joint report with ACM Europe) Informatics in the Future: Proceedings of the 11th European Computer Science Summit (ECSS 2015), Vienna, October 2015 (2017, eds. Hannes Werthner and Frank van Harmelen, Springer Open) Informatics Education in Europe: Institutions, Degrees, Students, Positions, Salaries. Key Data 2010-2015 (2016, Cristina Pereira) Informatics Education in Europe: Institutions, Degrees, Students, Positions, Salaries. Key Data 2009-2014 (2015, Cristina Pereira) Informatics Education in Europe: Institutions, Degrees, Students, Positions, Salaries. Key Data 2008-2013 (2014, Cristina Pereira, Bertrand Meyer, Enrico Nardelli, Hannes Werthner) Informatics Education in Europe: Institutions, Degrees, Students, Positions, Salaries. Key Data 2008-2012 (2013, Cristina Pereira and Bertrand Meyer) Informatics Education: Europe cannot Afford to Miss the Boat (2013, ed. Walter Gander, Joint report with ACM Europe) Informatics Doctorates in Europe - Some Facts and Figures (2013, ed. Manfred Nagl) Future of the European Scientific Societies in Informatics – Blueprint, 2011 Future of the European Scientific Societies in Informatics - Extended panel report, 2011 Future of the European Scientific Societies in Informatics - ECSS panel report, 2010 Research Evaluation for Computer Science, 2008 Student Enrollment and Image of the Informatics Discipline (2008, ed. Jan van Leeuwen and Letitia Tanca) European Computer Science Takes its Fate in its Own Hands (2005, Bertrand Meyer and Willy Zwaenepoel) External links Informatics Europe Website Informatics Europe Members Informatics Europe Board Informatics Europe Office ECSS Publications Informatics Europe Higher Education Data Portal References Computer science organizations International organisations based in Switzerland Information technology organizations based in Europe International research institutes Pan-European learned societies
1606195
https://en.wikipedia.org/wiki/Smith%E2%80%93Waterman%20algorithm
Smith–Waterman algorithm
The Smith–Waterman algorithm performs local sequence alignment; that is, it determines similar regions between two strings of nucleic acid sequences or protein sequences. Instead of looking at the entire sequence, the Smith–Waterman algorithm compares segments of all possible lengths and optimizes the similarity measure. The algorithm was first proposed by Temple F. Smith and Michael S. Waterman in 1981. Like the Needleman–Wunsch algorithm, of which it is a variation, Smith–Waterman is a dynamic programming algorithm. As such, it has the desirable property that it is guaranteed to find the optimal local alignment with respect to the scoring system being used (which includes the substitution matrix and the gap-scoring scheme). The main difference from the Needleman–Wunsch algorithm is that negative scoring matrix cells are set to zero, which renders the (thus positively scoring) local alignments visible. The traceback procedure starts at the highest-scoring matrix cell and proceeds until a cell with score zero is encountered, yielding the highest-scoring local alignment. Because of its quadratic complexity in time and space, it often cannot be practically applied to large-scale problems and is often replaced by less general but computationally more efficient alternatives such as (Gotoh, 1982), (Altschul and Erickson, 1986), and (Myers and Miller, 1988).
History
In 1970, Saul B. Needleman and Christian D. Wunsch proposed a heuristic homology algorithm for sequence alignment, also referred to as the Needleman–Wunsch algorithm. It is a global alignment algorithm that requires O(mn) calculation steps (m and n are the lengths of the two sequences being aligned). It uses the iterative calculation of a matrix for the purpose of showing global alignment. In the following decade, Sankoff, Reichert, Beyer and others formulated alternative heuristic algorithms for analyzing gene sequences. Sellers introduced a system for measuring sequence distances. In 1976, Waterman et al. added the concept of gaps into the original measurement system. In 1981, Smith and Waterman published their Smith–Waterman algorithm for calculating local alignment. The Smith–Waterman algorithm is fairly demanding of time: to align two sequences of lengths m and n, O(m²n) time is required. Gotoh and Altschul optimized the algorithm to O(mn) steps. The space complexity was optimized by Myers and Miller from O(mn) to O(n) (linear), where n is the length of the shorter sequence, for the case where only one of the many possible optimal alignments is desired.
Motivation
In recent years, genome projects conducted on a variety of organisms generated massive amounts of sequence data for genes and proteins, which requires computational analysis. Sequence alignment shows the relations between genes or between proteins, leading to a better understanding of their homology and functionality. Sequence alignment can also reveal conserved domains and motifs. One motivation for local alignment is the difficulty of obtaining correct alignments in regions of low similarity between distantly related biological sequences, because mutations have added too much 'noise' over evolutionary time to allow for a meaningful comparison of those regions. Local alignment avoids such regions altogether and focuses on those with a positive score, i.e. those with an evolutionarily conserved signal of similarity. A prerequisite for local alignment is a negative expectation score.
The expectation score is defined as the average score that the scoring system (substitution matrix and gap penalties) would yield for a random sequence. Another motivation for using local alignments is that there is a reliable statistical model (developed by Karlin and Altschul) for optimal local alignments. The alignment of unrelated sequences tends to produce optimal local alignment scores which follow an extreme value distribution. This property allows programs to produce an expectation value for the optimal local alignment of two sequences, which is a measure of how often two unrelated sequences would produce an optimal local alignment whose score is greater than or equal to the observed score. Very low expectation values indicate that the two sequences in question might be homologous, meaning they might share a common ancestor.
Algorithm
Let A = a1 a2 ... an and B = b1 b2 ... bm be the sequences to be aligned, where n and m are the lengths of A and B respectively.
Determine the substitution matrix and the gap penalty scheme:
s(a, b) - similarity score of the elements that constitute the two sequences
W_k - the penalty of a gap that has length k
Construct a scoring matrix H and initialize its first row and first column to zero. The size of the scoring matrix is (n + 1) * (m + 1). The matrix uses 0-based indexing.
Fill the scoring matrix using the equation below:
H(i, j) = max{ H(i-1, j-1) + s(a_i, b_j), max over k ≥ 1 of [ H(i-k, j) - W_k ], max over l ≥ 1 of [ H(i, j-l) - W_l ], 0 }, for 1 ≤ i ≤ n and 1 ≤ j ≤ m,
where H(i-1, j-1) + s(a_i, b_j) is the score of aligning a_i and b_j, H(i-k, j) - W_k is the score if a_i is at the end of a gap of length k, H(i, j-l) - W_l is the score if b_j is at the end of a gap of length l, and 0 means there is no similarity up to a_i and b_j.
Traceback. Starting at the highest score in the scoring matrix H and ending at a matrix cell that has a score of 0, trace back based on the source of each score recursively to generate the best local alignment.
Explanation
The Smith–Waterman algorithm aligns two sequences by matches/mismatches (also known as substitutions), insertions, and deletions. Both insertions and deletions are the operations that introduce gaps, which are represented by dashes. The Smith–Waterman algorithm has several steps:
Determine the substitution matrix and the gap penalty scheme. A substitution matrix assigns each pair of bases or amino acids a score for match or mismatch. Usually matches get positive scores, whereas mismatches get relatively lower scores. A gap penalty function determines the score cost for opening or extending gaps. It is suggested that users choose the appropriate scoring system based on their goals. In addition, it is also good practice to try different combinations of substitution matrices and gap penalties.
Initialize the scoring matrix. The dimensions of the scoring matrix are 1 + the length of each sequence respectively. All the elements of the first row and the first column are set to 0. The extra first row and first column make it possible to align one sequence to another at any position, and setting them to 0 makes the terminal gap free from penalty.
Scoring. Score each element from left to right, top to bottom in the matrix, considering the outcomes of substitutions (diagonal scores) or adding gaps (horizontal and vertical scores). If none of the scores are positive, this element gets a 0. Otherwise the highest score is used and the source of that score is recorded.
Traceback. Starting at the element with the highest score, trace back based on the source of each score recursively, until 0 is encountered. The segments that have the highest similarity score based on the given scoring system are generated in this process.
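The scoring and traceback steps above translate almost directly into code. The following minimal Python sketch uses a simple match/mismatch substitution score and a linear gap penalty; the particular values (+3, -3, gap 2) are chosen for illustration and are not part of the algorithm itself.

```python
def smith_waterman(a, b, match=3, mismatch=-3, gap=2):
    """Minimal Smith-Waterman local alignment with a linear gap penalty."""
    n, m = len(a), len(b)
    # Scoring matrix H with an extra first row and column of zeros.
    H = [[0] * (m + 1) for _ in range(n + 1)]
    best_score, best_pos = 0, (0, 0)

    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(
                H[i - 1][j - 1] + s,  # align a[i-1] with b[j-1]
                H[i - 1][j] - gap,    # gap in b
                H[i][j - 1] - gap,    # gap in a
                0,                    # start a new local alignment
            )
            if H[i][j] > best_score:
                best_score, best_pos = H[i][j], (i, j)

    # Traceback from the highest-scoring cell until a zero is reached.
    aligned_a, aligned_b = [], []
    i, j = best_pos
    while i > 0 and j > 0 and H[i][j] > 0:
        s = match if a[i - 1] == b[j - 1] else mismatch
        if H[i][j] == H[i - 1][j - 1] + s:
            aligned_a.append(a[i - 1]); aligned_b.append(b[j - 1])
            i, j = i - 1, j - 1
        elif H[i][j] == H[i - 1][j] - gap:
            aligned_a.append(a[i - 1]); aligned_b.append('-')
            i -= 1
        else:
            aligned_a.append('-'); aligned_b.append(b[j - 1])
            j -= 1

    return best_score, ''.join(reversed(aligned_a)), ''.join(reversed(aligned_b))


print(smith_waterman("TGTTACGG", "GGTTGACTA"))
# (13, 'GTT-AC', 'GTTGAC')
```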
To obtain the second best local alignment, apply the traceback process starting at the second highest score outside the trace of the best alignment.

Comparison with the Needleman–Wunsch algorithm
The Smith–Waterman algorithm finds the segments in two sequences that have similarities while the Needleman–Wunsch algorithm aligns two complete sequences. Therefore, they serve different purposes. Both algorithms use the concepts of a substitution matrix, a gap penalty function, a scoring matrix, and a traceback process. The three main differences lie in initialization (the first row and first column of the Smith–Waterman scoring matrix are set to zero, whereas in Needleman–Wunsch they accrue gap penalties), in scoring (Smith–Waterman does not allow negative cell scores), and in traceback (Smith–Waterman traceback begins at the highest-scoring cell and ends at a cell with score zero, whereas Needleman–Wunsch traceback runs from the bottom-right to the top-left corner of the matrix). One of the most important distinctions is that no negative score is assigned in the scoring system of the Smith–Waterman algorithm, which enables local alignment. When any element has a score lower than zero, it means that the sequences up to this position have no similarities; this element will then be set to zero to eliminate influence from previous alignment. In this way, calculation can continue to find alignment in any position afterwards. The initial scoring matrix of the Smith–Waterman algorithm enables the alignment of any segment of one sequence to an arbitrary position in the other sequence. In the Needleman–Wunsch algorithm, however, the end gap penalty also needs to be considered in order to align the full sequences.

Substitution matrix
Each base substitution or amino acid substitution is assigned a score. In general, matches are assigned positive scores, and mismatches are assigned relatively lower scores. Take DNA sequences as an example. If matches get +1 and mismatches get -1, then the substitution matrix is:
     A   G   C   T
A   +1  -1  -1  -1
G   -1  +1  -1  -1
C   -1  -1  +1  -1
T   -1  -1  -1  +1
This substitution matrix can be described as:
s(a_i, b_j) = +1 if a_i = b_j, and s(a_i, b_j) = -1 if a_i != b_j.
Different base substitutions or amino acid substitutions can have different scores. The substitution matrix of amino acids is usually more complicated than that of the bases. See PAM, BLOSUM.

Gap penalty
Gap penalty designates scores for insertion or deletion. A simple gap penalty strategy is to use a fixed score for each gap. In biology, however, the score needs to be counted differently for practical reasons. On one hand, partial similarity between two sequences is a common phenomenon; on the other hand, a single gene mutation event can result in insertion of a single long gap. Therefore, connected gaps forming a long gap are usually favored over multiple scattered, short gaps. In order to take this difference into consideration, the concepts of gap opening and gap extension have been added to the scoring system. The gap opening score is usually higher than the gap extension score. For instance, the default parameters in EMBOSS Water are: gap opening = 10, gap extension = 0.5. Here we discuss two common strategies for gap penalty. See Gap penalty for more strategies. Let W_k be the gap penalty function for a gap of length k:

Linear
A linear gap penalty has the same scores for opening and extending a gap: W_k = k * W_1, where W_1 is the cost of a single gap. The gap penalty is directly proportional to the gap length. When a linear gap penalty is used, the Smith–Waterman algorithm can be simplified to:
H(i, j) = max{ H(i-1, j-1) + s(a_i, b_j), H(i-1, j) - W_1, H(i, j-1) - W_1, 0 }.
The simplified algorithm uses O(mn) steps. When an element is being scored, only the gap penalties from the elements that are directly adjacent to this element need to be considered.

Affine
An affine gap penalty considers gap opening and extension separately: W_k = u(k - 1) + v (u > 0, v > 0), where v is the gap opening penalty and u is the gap extension penalty. For example, the penalty for a gap of length 2 is u + v. An arbitrary gap penalty was used in the original Smith–Waterman algorithm paper. It uses O(m^2 n) steps, and is therefore quite demanding of time.
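As a small numerical illustration of the two gap penalty schemes above, the following Python sketch shows why an affine penalty favors one long gap over several scattered short ones. The function names are made up for this example; the linear cost W_1 is an arbitrary value, and the affine opening/extension pair simply mirrors the EMBOSS Water defaults quoted above.

def linear_gap_penalty(k, w1=2.0):
    # W_k = k * W_1: every gap position costs the same.
    return k * w1

def affine_gap_penalty(k, opening=10.0, extension=0.5):
    # W_k = v + u * (k - 1): opening a gap costs v, each additional position costs u.
    return 0.0 if k == 0 else opening + extension * (k - 1)

# One connected gap of length 4 versus four scattered gaps of length 1:
print(affine_gap_penalty(4))      # 11.5
print(4 * affine_gap_penalty(1))  # 40.0 -> scattered gaps are penalized far more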
Gotoh optimized the steps for an affine gap penalty to O(mn), but the optimized algorithm only attempts to find one optimal alignment, and the optimal alignment is not guaranteed to be found. Altschul modified Gotoh's algorithm to find all optimal alignments while maintaining the computational complexity. Later, Myers and Miller pointed out that Gotoh and Altschul's algorithm can be further modified based on the method that was published by Hirschberg in 1975, and applied this method. Myers and Miller's algorithm can align two sequences using O(n) space, with n being the length of the shorter sequence.

Gap penalty example
Take the alignment of two sample sequences as an example. When the linear gap penalty function is used, the result is (alignments performed by EMBOSS Water; the substitution matrix is DNAfull; gap opening and extension are both 1.0): When the affine gap penalty is used, the result is (gap opening and extension are 5.0 and 1.0 respectively): This example shows that an affine gap penalty can help avoid scattered small gaps.

Scoring matrix
The function of the scoring matrix is to conduct one-to-one comparisons between all components in two sequences and record the optimal alignment results. The scoring process reflects the concept of dynamic programming. The final optimal alignment is found by iteratively expanding the growing optimal alignment. In other words, the current optimal alignment is generated by deciding which path (match/mismatch or inserting a gap) gives the highest score from the previous optimal alignment. The size of the matrix is the length of one sequence plus 1 by the length of the other sequence plus 1. The additional first row and first column serve the purpose of aligning one sequence to any position in the other sequence. Both the first row and the first column are set to 0 so that the end gap is not penalized. The initial scoring matrix is:

Example
Take the alignment of two short DNA sequences as an example. Use the following scheme: Substitution matrix: Gap penalty: (a linear gap penalty) Initialize and fill the scoring matrix, shown as below. This figure shows the scoring process of the first three elements. The yellow color indicates the bases that are being considered. The red color indicates the highest possible score for the cell being scored. The finished scoring matrix is shown below on the left. The blue color shows the highest score. An element can receive its score from more than one element; each will form a different path if this element is traced back. In case of multiple highest scores, traceback should be done starting with each highest score. The traceback process is shown below on the right. The best local alignment is generated in the reverse direction. The alignment result is:

Implementation
An implementation of the Smith–Waterman algorithm, SSEARCH, is available in the FASTA sequence analysis package from UVA FASTA Downloads. This implementation includes Altivec accelerated code for PowerPC G4 and G5 processors that speeds up comparisons 10–20-fold, using a modification of the Wozniak (1997) approach, and an SSE2 vectorization developed by Farrar, making optimal protein sequence database searches quite practical. A library, SSW, extends Farrar's implementation to return alignment information in addition to the optimal Smith–Waterman score.

Accelerated versions
FPGA
Cray demonstrated acceleration of the Smith–Waterman algorithm using a reconfigurable computing platform based on FPGA chips, with results showing up to 28x speed-up over standard microprocessor-based solutions.
Another FPGA-based version of the Smith–Waterman algorithm shows FPGA (Virtex-4) speedups up to 100x over a 2.2 GHz Opteron processor. The TimeLogic DeCypher and CodeQuest systems also accelerate Smith–Waterman and Framesearch using PCIe FPGA cards. A 2011 Master's thesis includes an analysis of FPGA-based Smith–Waterman acceleration. A 2016 publication presented a very efficient implementation in which OpenCL code compiled with Xilinx SDAccel accelerates genome sequencing and beats CPU/GPU performance per watt by 12-21x. Using one PCIe FPGA card equipped with a Xilinx Virtex-7 2000T FPGA, the performance-per-watt level was better than CPU/GPU by 12-21x.

GPU
Lawrence Livermore National Laboratory and the United States (US) Department of Energy's Joint Genome Institute implemented an accelerated version of Smith–Waterman local sequence alignment searches using graphics processing units (GPUs), with preliminary results showing a 2x speed-up over software implementations. A similar method had already been implemented in the Biofacet software since 1997, with the same speed-up factor. Several GPU implementations of the algorithm in NVIDIA's CUDA C platform are also available. When compared to the best known CPU implementation (using SIMD instructions on the x86 architecture) by Farrar, the performance tests of this solution using a single NVidia GeForce 8800 GTX card show a slight increase in performance for smaller sequences, but a slight decrease in performance for larger ones. However, the same tests running on dual NVidia GeForce 8800 GTX cards are almost twice as fast as the Farrar implementation for all sequence sizes tested. A newer GPU CUDA implementation of SW is now available that is faster than previous versions and also removes limitations on query lengths. See CUDASW++. Eleven different SW implementations on CUDA have been reported, three of which report speedups of 30x.

SIMD
In 2000, a fast implementation of the Smith–Waterman algorithm using the single instruction, multiple data (SIMD) technology available in Intel Pentium MMX processors and similar technology was described in a publication by Rognes and Seeberg. In contrast to the Wozniak (1997) approach, the new implementation was based on vectors parallel with the query sequence, not diagonal vectors. The company Sencel Bioinformatics has applied for a patent covering this approach. Sencel is developing the software further and provides executables for academic use free of charge. An SSE2 vectorization of the algorithm (Farrar, 2007) is now available, providing an 8-16-fold speedup on Intel/AMD processors with SSE2 extensions. When running on an Intel processor using the Core microarchitecture, the SSE2 implementation achieves a 20-fold increase. Farrar's SSE2 implementation is available as the SSEARCH program in the FASTA sequence comparison package. SSEARCH is included in the European Bioinformatics Institute's suite of similarity searching programs. The Danish bioinformatics company CLC bio has achieved speed-ups of close to 200 over standard software implementations with SSE2 on an Intel 2.17 GHz Core 2 Duo CPU, according to a publicly available white paper. An accelerated version of the Smith–Waterman algorithm, on Intel and Advanced Micro Devices (AMD) based Linux servers, is supported by the GenCore 6 package, offered by Biocceleration. Performance benchmarks of this software package show up to a 10-fold speed acceleration relative to standard software implementations on the same processor.
Currently the only company in bioinformatics to offer both SSE and FPGA solutions accelerating Smith–Waterman, CLC bio has achieved speed-ups of more than 110 over standard software implementations with CLC Bioinformatics Cube. The fastest implementation of the algorithm on CPUs with SSSE3 can be found in the SWIPE software (Rognes, 2011), which is available under the GNU Affero General Public License. In parallel, this software compares residues from sixteen different database sequences to one query residue. Using a 375-residue query sequence, a speed of 106 billion cell updates per second (GCUPS) was achieved on a dual Intel Xeon X5650 six-core processor system, which is over six times more rapid than software based on Farrar's 'striped' approach. It is faster than BLAST when using the BLOSUM50 matrix. An implementation of Smith–Waterman named diagonalsw, in C and C++, uses SIMD instruction sets (SSE4.1 for the x86 platform and AltiVec for the PowerPC platform). It is released under an open-source MIT License.

Cell Broadband Engine
In 2008, Farrar described a port of the Striped Smith–Waterman to the Cell Broadband Engine and reported speeds of 32 and 12 GCUPS on an IBM QS20 blade and a Sony PlayStation 3, respectively.

Limitations
The fast expansion of genetic data challenges the speed of current DNA sequence alignment algorithms. The need for an efficient and accurate method for DNA variant discovery demands innovative approaches for parallel processing in real time. Optical computing approaches have been suggested as promising alternatives to current electrical implementations. OptCAM is an example of such approaches and has been shown to be faster than the Smith–Waterman algorithm.

See also
Bioinformatics Sequence alignment Sequence mining Needleman–Wunsch algorithm Levenshtein distance BLAST FASTA

References

External links
JAligner — an open source Java implementation of the Smith–Waterman algorithm
B.A.B.A. — an applet (with source) which visually explains the algorithm
FASTA/SSEARCH — services page at the EBI
UGENE Smith–Waterman plugin — an open source SSEARCH compatible implementation of the algorithm with graphical interface written in C++
OPAL — a SIMD C/C++ library for massive optimal sequence alignment
diagonalsw — an open-source C/C++ implementation with SIMD instruction sets (notably SSE4.1) under the MIT license
SSW — an open-source C++ library providing an API to a SIMD implementation of the Smith–Waterman algorithm under the MIT license
melodic sequence alignment — a JavaScript implementation for melodic sequence alignment
DRAGMAP — a C++ port of the Illumina DRAGEN FPGA implementation

Bioinformatics algorithms Computational phylogenetics Sequence alignment algorithms Dynamic programming
48583314
https://en.wikipedia.org/wiki/Dorothy%20Vaughan
Dorothy Vaughan
Dorothy Jean Johnson Vaughan (September 20, 1910 – November 10, 2008) was an American mathematician and human computer who worked for the National Advisory Committee for Aeronautics (NACA), and NASA, at Langley Research Center in Hampton, Virginia. In 1949, she became acting supervisor of the West Area Computers, the first African-American woman to receive a promotion and supervise a group of staff at the center. She later was promoted officially to the position of supervisor. During her 28-year career, Vaughan prepared for the introduction of machine computers in the early 1960s by teaching herself and her staff the programming language of Fortran. She later headed the programming section of the Analysis and Computation Division (ACD) at Langley. Vaughan is one of the women featured in Margot Lee Shetterly's history Hidden Figures: The Story of the African-American Women Who Helped Win the Space Race (2016). It was adapted as a biographical film of the same name, also released in 2016. In 2019, Vaughan was awarded the Congressional Gold Medal posthumously.

Early life
Vaughan was born September 20, 1910, in Kansas City, Missouri, as Dorothy Jean Johnson. She was the daughter of Annie and Leonard Johnson. At the age of seven, her family moved to Morgantown, West Virginia, where she graduated from Beechurst High School in 1925 as her class valedictorian. Vaughan received a full-tuition scholarship from the West Virginia Conference of the A.M.E. Sunday School Convention to attend Wilberforce University in Wilberforce, Ohio. She joined the Alpha Kappa Alpha chapter at Wilberforce and graduated in 1929 with a B.A. in mathematics. In 1932, she married Howard Vaughan, who died in 1955. The couple moved to Newport News, Virginia, where they had six children: Ann, Maida, Leonard, Kenneth, Michael and Donald. The family also lived with Howard's wealthy and respected parents and grandparents on South Main Street in Newport News, Virginia. Vaughan was very devoted to family and the church, which would be a major factor in whether she would move to Hampton, Virginia, to work for NASA. As a college cum laude graduate and a teacher in mathematics, she was seen as a woman of superior intellect and as part of the elite of the African American community.

Career
Vaughan graduated from Wilberforce University in 1929. Although encouraged by professors to do graduate study at Howard University, Vaughan worked as a mathematics teacher at Robert Russa Moton High School in Farmville, Virginia, in order to assist her family during the Great Depression. During the 14 years of her teaching career, Virginia's public schools and other facilities were still racially segregated under Jim Crow laws. In 1943, Vaughan began a 28-year career as a mathematician and programmer at Langley Research Center, in which she specialized in calculations for flight paths, the Scout Project, and computer programming. Her career in this field began at the height of World War II. She came to the Langley Memorial Aeronautical Laboratory thinking that it would be a temporary war job. One of her children later worked at NACA. In 1941, President Franklin D. Roosevelt issued Executive Order 8802, to desegregate the defense industry, and Executive Order 9346 to end racial segregation and discrimination in hiring and promotion among federal agencies and defense contractors. These helped ensure the war effort drew from all of American society after the United States entered World War II in December 1941.
With the enactment of the two Executive Orders, and with many men being swept into service, federal agencies such as the National Advisory Committee for Aeronautics (NACA) also expanded their hiring and increased recruiting of women, including women of color, to support the war production of airplanes. Two years following the issuance of Executive Orders 8802 and 9346, the Langley Memorial Aeronautical Laboratory (Langley Research Center), a facility of the NACA, began hiring more black women to meet the drastic increase in demand for processing aeronautical research data. The US believed that the war was going to be won in the air. It had already ramped up airplane production, creating a great demand for engineers, mathematicians, craftsmen and skilled tradesmen. In 1935, the NACA had established a section of women mathematicians, who performed complex calculations. Vaughan began work for NACA at the Langley Research Center in Hampton, Virginia, in 1943. Vaughan was assigned to the West Area Computing, a segregated unit, which consisted of only African Americans. This was due to prevailing Jim Crow laws that required newly hired African American women to work separately from their white women counterparts. They were also required to use separate dining and bathroom facilities. This segregated group consisted of African-American women who made complex mathematical calculations by hand, using tools of the time. The West Computers made contributions to every area of research at Langley. Their work expanded in the postwar years to support research and design for the United States' space program, which was emphasized under President John F. Kennedy. In 1949, Vaughan was assigned as the acting head of the West Area Computers, taking over from a white woman who had died. She was the first black supervisor at NACA and one of few female supervisors. She led a group composed entirely of African-American women mathematicians. She served for years in an acting role before being promoted officially to the position as supervisor. Vaughan worked for opportunities for the women in West Computing as well as women in other departments. Seeing that machine computers were going to be the future, she taught the women programming languages and other concepts to prepare them for the transition. Mathematician Katherine Johnson was initially assigned to Vaughan's group, before being transferred to Langley's Flight Mechanics Division. Vaughan moved into the area of electronic computing in 1961, after NACA introduced the first digital (non-human) computers to the center. Vaughan became proficient in computer programming, teaching herself FORTRAN and teaching it to her coworkers to prepare them for the transition. She contributed to the space program through her work on the Scout Launch Vehicle Program. Vaughan continued after NASA, the successor agency, was established in 1958. When NACA became NASA, segregated facilities, including the West Computing office, were abolished. In a 1994 interview, Vaughan recalled that working at Langley during the Space Race felt like being on "the cutting edge of something very exciting". Regarding being an African American woman during that time in Langley, she remarked, "I changed what I could, and what I couldn't, I endured." Vaughan worked in the Numerical Techniques division through the 1960s. 
Dorothy Vaughan and many of the former West Computers joined the new Analysis and Computation Division (ACD), a racially and gender-integrated group on the frontier of electronic computing. She worked at NASA-Langley for 28 years. During her career at Langley, Vaughan was also raising her six children. One of them later also worked at NASA-Langley. Vaughan lived in Newport News, Virginia, and commuted to work at Hampton via public transportation.

Later years
Vaughan wanted to continue in another management position at NASA, but never received an offer. She retired from NASA in 1971, at the age of 61. During her years at Langley, Vaughan had also worked with mathematicians Katherine G. Johnson and Mary Jackson on astronaut John Glenn's 1962 launch into orbit. She died on November 10, 2008, aged 98. Vaughan was a member of Alpha Kappa Alpha, an African-American sorority. She was also an active member of the African Methodist Episcopal Church where she participated in music and missionary activities. She also wrote a song called "Math Math". At the time of her passing, she was survived by four of her six children, ten grandchildren and fourteen great-grandchildren.

Legacy
Vaughan is one of the women featured in Margot Lee Shetterly's 2016 non-fiction book Hidden Figures, and the feature film of the same name. She was portrayed by Academy Award winner Octavia Spencer. In 2019, Vaughan was awarded the Congressional Gold Medal. Also in 2019, the Vaughan crater on the far side of the Moon was named in her honor. On 6 November 2020, a satellite named after her (ÑuSat 12 or "Dorothy", COSPAR 2020-079D) was launched into space.

Awards and honors
1925: Beechurst High School – Class Valedictorian 1925: West Virginia Conference of the A.M.E. Sunday School Convention – Full Tuition Scholarship 1929: Wilberforce University – Mathematician Graduate Cum Laude 1949–1958: Head of National Advisory Committee of Aeronautics' Segregated West Computing Unit October 16, 2019: a lunar crater is named after her. This name was chosen by planetary scientist Ryan N. Watkins and her student, and submitted on what would have been Dorothy Vaughan's 109th birthday. November 8, 2019: Congressional Gold Medal On November 6, 2020, a satellite named after her was launched into space

References Sources External links
1910 births 2008 deaths 20th-century American mathematicians 20th-century American women scientists African-American mathematicians American women mathematicians West Area Computers Wilberforce University alumni African-American Methodists People from Kansas City, Missouri People from Morgantown, West Virginia Mathematicians from West Virginia Mathematicians from Missouri Computer programmers 20th-century women mathematicians Congressional Gold Medal recipients African-American computer scientists American women computer scientists American computer scientists 20th-century Methodists
2669232
https://en.wikipedia.org/wiki/David%20May%20%28computer%20scientist%29
David May (computer scientist)
Michael David May FRS FREng (born 24 February 1951) is a British computer scientist. He is a Professor in the Department of Computer Science at the University of Bristol and founder of XMOS Semiconductor, serving until February 2014 as the chief technology officer. May was lead architect for the transputer. As of 2017, he holds 56 patents, all in microprocessors and multi-processing.

Life and career
May was born in Holmfirth, Yorkshire, England and attended Queen Elizabeth Grammar School, Wakefield. From 1969 to 1972 he was a student at King's College, Cambridge, University of Cambridge, at first studying Mathematics and then Computer Science in the University of Cambridge Mathematical Laboratory, now the University of Cambridge Computer Laboratory. He moved to the University of Warwick and started research in robotics. The challenges of implementing sensing and control systems led him to design and implement an early concurrent programming language, EPL, which ran on a cluster of single-board microcomputers connected by serial communication links. This early work brought him into contact with Tony Hoare and Iann Barron, one of the founders of Inmos. When Inmos was formed in 1978, May joined to work on microcomputer architecture, becoming lead architect of the transputer and designer of the associated programming language Occam. This extended his earlier work and was also influenced by Tony Hoare, who was at the time working on CSP and acting as a consultant to Inmos. The prototype of the transputer was called the Simple 42 and was completed in 1982. The first production transputers, the T212 and T414, followed in 1985; the T800 floating point transputer in 1987. May initiated the design of one of the first VLSI packet switches, the C104, together with the communications system of the T9000 transputer. Working closely with Tony Hoare and the Programming Research Group at Oxford University, May introduced formal verification techniques into the design of the T800 floating point unit and the T9000 transputer. These were some of the earliest uses of formal verification in microprocessor design, involving specifications, correctness-preserving transformations and model checking, giving rise to the initial version of the FDR checker developed at Oxford. In 1995, May joined the University of Bristol as a professor of computer science. He was head of the computer science department from 1995 to 2006. He continues to be a professor at Bristol while supporting XMOS, a university spin-out he co-founded in 2005. Before XMOS he was involved in Picochip, where he wrote the original instruction set. May is married with three sons and lives in Bristol, United Kingdom.

Awards and recognition
In 1990, May received an Honorary DSc from the University of Southampton, followed in 1991 by his election as a Fellow of The Royal Society and the Clifford Paterson Medal and Prize of the Institute of Physics in 1992. In 2010, he was elected a Fellow of the Royal Academy of Engineering.

May's law
May's Law states, in reference to Moore's Law, that software efficiency halves every 18 months, compensating for Moore's Law.

References
Academics of the University of Bristol Alumni of King's College, Cambridge Alumni of the University of Warwick British computer scientists Chief technology officers Computer designers Computer hardware engineers Formal methods people Fellows of the Royal Society Fellows of the Royal Academy of Engineering History of computing in the United Kingdom 1951 births Living people People educated at Queen Elizabeth Grammar School, Wakefield
1834650
https://en.wikipedia.org/wiki/Criterion%20Games
Criterion Games
Criterion Games is a British video game developer based in Guildford. Founded in January 1996 as a division of Criterion Software, it was owned by Canon Inc. until Criterion Software was sold to Electronic Arts in October 2004. Many of Criterion Games' titles were built on the RenderWare engine, which Criterion Software developed. Notable games developed by Criterion Games include racing video games in the Burnout and Need for Speed series. As of April 2017, Criterion Games employ approximately 90 people.

History
Background and foundation (1993–1996)
David Lau-Kee, the founder and leader of Canon Inc.'s European research arm, established Criterion Software as a wholly owned subsidiary of Canon in December 1993 and assumed the managing director role for it. At the time, Canon was seeking to establish a multimedia tool development business, while Lau-Kee had been working on interactive 2D image processing techniques and was looking to extend this to 3D image processing and, in turn, "out-and-out" 3D graphics. Adam Billyard, who served as its chief technology officer, is also credited as a co-founder. Criterion Software's 3D texture mapping and rendering programme, RenderWare, was first released in 1993 as a software library for the C programming language and was adopted by 800 companies worldwide by October 1996. The firm also provided a demo game, CyberStreet, while fully-fledged games were developed by companies like 47Tek. Meanwhile, competitor Argonaut Software developed full games—including FX Fighter and Alien Odyssey—to showcase its BRender technology. In response, Criterion Software hired new staff in 1995 to establish a dedicated game development division. To support this expansion, Criterion Software moved to new offices within Guildford in late 1995. The division, Criterion Studios, was established in January 1996 and announced the month thereafter, at the time employing 25 people. The headcount expanded to around 35 by October. RenderWare was thereafter gradually retooled as a game development programme, with its third iteration, released in 2000, first providing full game engine capabilities. The first game to use this version was Burnout, which Criterion Studios developed in tandem. Publishing rights to the game were sold to Acclaim Entertainment, while Criterion Studios retained the intellectual property to the brand and technology. Acclaim published Burnout (2001) and its sequel, Burnout 2: Point of Impact (2002), which accumulated around 2 million sales. Despite this, Acclaim lacked the resources to market them in the United States, its home territory, leading to poor sales in the country. At the same time, Criterion Studios (now named Criterion Games) was frequently approached by Electronic Arts (EA), which eventually signed with Criterion Games for the third release in the series, Burnout 3: Takedown (2004).

Under Electronic Arts (2004–present)
In July 2004, EA announced that it had reached an agreement with Canon's European arm, Canon Europe, to acquire all of the Criterion Software group, including Criterion Games. The deal was finalised on 19 October 2004, with EA paying . After the purchase, both Criterion and Electronic Arts declared that RenderWare would continue to be made available to third-party customers. However, some clients decided it was too risky to rely on technology owned by a competitor. Electronic Arts have since withdrawn RenderWare from the commercial middleware market, although remnants are still used by internal developers.
In mid-2006, the company closed its Derby satellite office, making all of its programmers and support staff redundant. In early March 2007, Electronic Arts combined its Chertsey-based UK development studio and Criterion Games into a new building in central Guildford. Integration of the teams did not occur and the location housed two very separate development studios: Criterion Games and EA Bright Light, before Bright Light was shut permanently in 2011. In November 2007, co-founder and CEO David Lau-Kee made the decision to leave Electronic Arts to concentrate on advisory activities within the games industry. Adam Billyard also left Electronic Arts as CTO of EATech in 2007 to pursue other projects. On 14 June 2010, Criterion announced that Need for Speed: Hot Pursuit was set for release in November 2010. The software utilises a new game engine named Chameleon. On 1 June 2012, Electronic Arts announced Criterion's second Need for Speed title, Need for Speed: Most Wanted, which was released on 30 October 2012. At Electronic Entertainment Expo 2012, Criterion Games announced that it had taken sole ownership of the Need for Speed franchise. On 28 April 2013, Alex Ward announced via Twitter that the studio was planning to steer away from its tradition of developing racing games and instead focus on other genres for future projects. On 13 September 2013, Criterion elected to cut its staff numbers to 17 people total, as 80% (70 people) of the studio moved over to Ghost Games UK to work on Need for Speed games. On 3 January 2014, it was announced that Alex Ward and Fiona Sperry had left Criterion to found a new studio, Three Fields Entertainment. Their first game, Dangerous Golf, slated for release in May 2016, combined ideas from Burnout and Black and was intended to lead them towards a spiritual successor to Burnout. At the Electronic Entertainment Expo 2014, the company announced a new racing project. However, the project was cancelled as Criterion shifted its focus to providing additional support to other EA studios in creating future Star Wars games. Criterion worked on Star Wars Battlefront: X-Wing VR Mission, a new virtual reality mission for Star Wars Battlefront. In June 2015, news site Nintendo Life revealed that in early 2011 Nintendo of Europe had approached Criterion to work on a pitch for a new F-Zero game which they hoped to unveil at E3 that same year alongside the then-unreleased Wii U console, and potentially release the game during the console's launch period. However, the developer was unable to take on the pitch as, at the time, it had devoted much of its resources to the development of Need for Speed: Most Wanted for multiple platforms. The site was tipped by an anonymous, yet "reliable" source, and the information was confirmed when Criterion co-founder Alex Ward (who left the company in 2014) admitted that Nintendo of Europe did indeed approach the company for a potential F-Zero game on the Wii U. Alex Ward also noted on Twitter that Criterion was also offered the opportunity to work on the first Forza, Mad Max, a Vauxhall-only racer, a Command & Conquer first-person shooter and a Gone in 60 Seconds game. In 2018, EA announced that Battlefield V would have a battle royale mode that would be developed by Criterion. Following the release of the mode (later revealed to be called Firestorm), development on it was halted soon after, with the mode considered a failure by fans.
In 2020, it was announced that Criterion would return as the main developer of the Need for Speed series, but the project was put on hold in March 2021 as Criterion was assigned additional work, such as "vehicle gameplay", on the next Battlefield game.

Games developed

Accolades
GamesIndustry.biz named Criterion Games among the "best places to work in the UK video games industry" in the "Best Mid-sized Companies" category in 2017, 2018, and 2019.

References

External links
1996 establishments in England 2004 mergers and acquisitions British companies established in 1996 British subsidiaries of foreign companies Companies based in Guildford Electronic Arts Video game companies established in 1996 Video game companies of the United Kingdom Video game development companies
5453739
https://en.wikipedia.org/wiki/The%20Computer%20Wore%20Tennis%20Shoes
The Computer Wore Tennis Shoes
The Computer Wore Tennis Shoes is a 1969 American comedy film starring Kurt Russell, Cesar Romero, Joe Flynn and William Schallert. It was produced by Walt Disney Productions and distributed by Buena Vista Distribution Company. It was one of several films made by Disney using the setting of Medfield College, first used in the 1961 Disney film The Absent-Minded Professor and its sequel Son of Flubber. Both sequels to The Computer Wore Tennis Shoes, Now You See Him, Now You Don't and The Strongest Man in the World, were also set at Medfield. Plot Dexter Reilly (Kurt Russell) and his friends attend small, private Medfield College, which cannot afford to buy a computer. The students persuade wealthy businessman A. J. Arno (Cesar Romero) to donate an old computer to the college. Arno is secretly the head of a large illegal gambling ring which used the computer for its operations. While installing a replacement computer part during a thunderstorm, Reilly receives an electric shock and becomes a human computer. He now has superhuman mathematical talent, can read and remember the contents of an encyclopedia volume in a few minutes, and can speak a language fluently after reading one textbook. His new abilities make him a worldwide celebrity and Medfield's best chance to win a televised quiz tournament with a $100,000 prize. Reilly single-handedly leads Medfield's team in victories against other colleges. During the tournament, on live television, a trigger word ("applejack") causes him to unknowingly recite details of Arno's gambling ring. Arno's henchmen kidnap Reilly and plan to kill him, but his friends help him escape by locating the house in which he is being kept, posing as house painters to gain access, and sneaking him out in a large trunk. During the escape, he suffers a concussion which, during the tournament final against rival Springfield State, gradually returns his mental abilities to normal; however, one of his friends, Schuyler, is able to answer the final question ("A small Midwest city is located exactly on an area designated as the geographic center of the United States. For 10 points and $100,000, can you tell us the name of that city?" with the answer "Lebanon, Kansas"). Medfield wins the $100,000 prize. Arno and his henchmen are arrested when they attempt to escape the TV studio and crash head-on into a police car. Cast Kurt Russell as Dexter Reilly Cesar Romero as A. J. Arno Joe Flynn as Dean Higgins William Schallert as Professor Quigley Alan Hewitt as Dean Collingsgood Richard Bakalyan as Chillie Walsh Debbie Paine as Annie Hannah Frank Webb as Pete Michael McGreevey as Schuyler Jon Provost as Bradley Frank Welker as Henry W. Alex Clarke as Myles Bing Russell as Angelo Pat Harrington as Moderator Fabian Dean as Little Mac Fritz Feld as Sigmund van Dyke Pete Ronoudet as Lt. Charles "Charlie" Hannah Hillyard Anderson as J. Reedy David Canary* as Walski Robert Foul* as Police desk sergeant Ed Begley Jr.* as a Springfield State panelist * Not credited on-screen. Reception A. H. Weiler of The New York Times wrote, "This 'Computer' isn't I.B.M.'s kind but it's homey, lovable, as exciting as porridge and as antiseptic and predictable as any homey, half-hour TV family show." Gene Siskel of the Chicago Tribune reported, "I rather enjoyed 'The Computer Wore Tennis Shoes,' and I suspect children under 14 will like it, too." Arthur D. 
Murphy of Variety praised the film as "above-average family entertainment, enhanced in great measure by zesty, but never show-off, direction by Robert Butler, in a debut swing to pix from telefilm." Kevin Thomas of the Los Angeles Times wrote that "Disney Productions latched on to a terrific premise for some sharp satire only to flatten it out by jamming it into its familiar 'wholesome' formula. Alas, the movie itself comes out looking like it had been made by a computer." The film holds a score of 50% on Rotten Tomatoes based on six reviews. Legacy Sequels Now You See Him, Now You Don't (1972) The Strongest Man in the World (1975) Television films This film was remade as the television film The Computer Wore Tennis Shoes in 1995 starring Kirk Cameron as Dexter Riley. Other Disney Channel films carrying similar plot elements were the Not Quite Human film series, which aired in the late 1980s and early 1990s. The films were based on the series of novels with the same name. Title sequence The animated title sequence, by future Academy Award-winning British visual effects artist Alan Maley, reproduced the look of contemporary computer graphics using stop motion photography of paper cutouts. It has been cited as an early example of "computational kitsch." See also Dexter Riley (film series) List of American films of 1969 References External links 1969 films 1960s science fiction comedy films American science fiction comedy films American films Walt Disney Pictures films Films about computing Films directed by Robert Butler Medfield College films Films set in universities and colleges 1969 comedy films 1960s English-language films
161212
https://en.wikipedia.org/wiki/Mkdir
Mkdir
The mkdir (make directory) command in the Unix, DOS, DR FlexOS, IBM OS/2, Microsoft Windows, and ReactOS operating systems is used to make a new directory. It is also available in the EFI shell and in the PHP scripting language. In DOS, OS/2, Windows and ReactOS, the command is often abbreviated to md. The command is analogous to the Stratus OpenVOS create_dir command. MetaComCo TRIPOS and AmigaDOS provide a similar MakeDir command to create new directories. The numerical computing environments MATLAB and GNU Octave include an mkdir function with similar functionality.

History
In early versions of Unix (4.1BSD and early versions of System V), this command had to be setuid root as the kernel did not have an mkdir syscall. Instead, it made the directory with mknod and linked in the . and .. directory entries manually. The command is available in MS-DOS versions 2 and later. Digital Research DR DOS 6.0 and Datalight ROM-DOS also include an implementation of the mkdir and md commands. The version of mkdir bundled in GNU coreutils was written by David MacKenzie. It is also available in the open source MS-DOS emulator DOSBox and in KolibriOS.

Usage
Normal usage is as straightforward as follows:
mkdir name_of_directory
where name_of_directory is the name of the directory one wants to create. When typed as above (i.e. normal usage), the new directory would be created within the current directory. On Unix and Windows (with Command extensions enabled, the default), multiple directories can be specified, and mkdir will try to create all of them.

Options
On Unix-like operating systems, mkdir takes options. The options are:
-p (--parents): parents or path, will also create all directories leading up to the given directory that do not exist already. For example, mkdir -p a/b will create directory a if it doesn't exist, then will create directory b inside directory a. If the given directory already exists, ignore the error.
-m (--mode): mode, specify the octal permissions of directories created by mkdir.
-p is most often used when using mkdir to build up complex directory hierarchies, in case a necessary directory is missing or already there. -m is commonly used to lock down temporary directories used by shell scripts.

Examples
An example of -p in action is:
mkdir -p /tmp/a/b/c
If /tmp/a exists but /tmp/a/b does not, mkdir will create /tmp/a/b before creating /tmp/a/b/c. And an even more powerful command, creating a full tree at once (this however is a Shell extension, nothing mkdir does itself):
mkdir -p tmpdir/{trunk/sources/{includes,docs},branches,tags}
If one is using variables with mkdir in a bash script, the POSIX 'special' built-in command eval would serve its purpose.
DOMAIN_NAME=includes,docs
eval "mkdir -p tmpdir/{trunk/sources/{${DOMAIN_NAME}},branches,tags}"
This will create:
tmpdir
|-- branches
|-- tags
`-- trunk
    `-- sources
        |-- includes
        `-- docs

See also
Filesystem Hierarchy Standard GNU Core Utilities Find – The find command coupled with mkdir can be used to only recreate a directory structure (without files). List of Unix commands List of DOS commands

References
Further reading

External links
Microsoft TechNet Mkdir article

Unix SUS2008 utilities Plan 9 commands Inferno (operating system) commands Internal DOS commands MSX-DOS commands OS/2 commands ReactOS commands Windows commands Windows administration
31468172
https://en.wikipedia.org/wiki/Cardmobili
Cardmobili
CardMobili is a European-based mobile application development company founded in 2009. The primary application of the company, also named CardMobili, provided a platform for managing customer loyalty programs via mobile phones. The application provided for digitizing and storing loyalty and membership cards, allowing their barcodes to be scanned directly from the device screen. It supported Android, iPhone, BlackBerry, Windows Phone 7, Windows Mobile, Nokia, Vodafone, and other brands. CardMobili and the Portuguese Bank Banco Espírito Santo collaborated to launch the world's first digital credit card. On April 30, 2016, the application was removed from app stores and online features ceased functioning. Awards and reception In 2010, Cardmobili won Vodafone's "Mobile Clicks" contest for the best mobile internet startup. See also References Further reading "The technology that puts the wallet inside the phone" "Loyalty cards are in the digital world" "Cardmobili and RouletteCricket: great startups, Vodafone Mobile Clicks winners" "Customers are more loyal to brands with cards on your phone" "Portuguese want to do away with plastic loyalty cards" The Register article "Ten thousand Portuguese joined the digital copy of voter registration card in the phone" "Cardmobili wins global competition for applications in Barcelona" "Cardmobili represent Portugal in the final contest of Vodafone" The Register article External links Software companies established in 2009 Android (operating system) software IOS software BlackBerry software Windows Phone software 2009 establishments in Portugal Software companies disestablished in 2016 2016 disestablishments in Portugal
66177583
https://en.wikipedia.org/wiki/TUM%20Department%20of%20Informatics
TUM Department of Informatics
The TUM Department of Informatics (TUM IN) is a department of the Technical University of Munich, located at its Garching campus. Its field is computer science and related disciplines, with the German term informatics being practically synonymous with the Anglo-American computer science. With 7,444 students, it is the largest department or school at the university.

History
The first courses in computer science at the Technical University of Munich were offered in 1967 at the Department of Mathematics, when Friedrich L. Bauer introduced a two-semester lecture titled Information Processing. In 1968, Klaus Samelson started offering a second lecture cycle titled Introduction to Informatics. By 1992, the computer science department had separated from the Department of Mathematics to form an independent Department of Informatics. In 2002, the Department relocated from its old campus in the Munich city center to the new building on the Garching campus. In 2017, the Department celebrated 50 Years of Informatics Munich with a series of lectures and ceremonies, together with the Ludwig Maximilian University of Munich and the Bundeswehr University Munich.

Building
The Department of Informatics shares a building with the Department of Mathematics. In the building, two massive parabolic slides run from the fourth floor to the ground floor. Their shape corresponds to a parabolic equation and is supposed to represent the "connection of science and art".

Chairs
The department consists of 31 chairs:
Engineering Software for Decentralized Systems
Formal Languages, Compiler Construction, Software Construction
Database Systems
Software & Systems Engineering
Scientific Computing
Robotics, Artificial Intelligence and Real-time Systems
Foundations of Software Reliability and Theoretical Computer Science
Network Architectures and Services
Computer Vision and Artificial Intelligence
Computer Architecture and Parallel Systems
Connected Mobility
Bioinformatics
Application and Middleware Systems
Theoretical Computer Science
Computer Graphics and Visualization
Computer Aided Medical Procedures
Information Systems and Business Process Management
Decision Sciences & Systems
Software Engineering for Business Information Systems
IT Security
Logic and Verification
Software Engineering
Sensor Based Robot Systems and Intelligent Assistance Systems
Cyber Trust
Data Science and Engineering
Data Analytics and Machine Learning
Robotics Science and Systems Intelligence (joint appointment with the Department of Electrical Engineering)
Visual Computing
Computational Molecular Medicine
Law and Security in Digitization (joint appointment with the School of Governance)
Artificial Intelligence in Healthcare and Medicine (joint appointment with the Department of Electrical Engineering)

Rankings
The TUM Department of Informatics has been consistently rated the top computer science department in Germany by major rankings. Globally, it is rated No. 35 (QS), No. 14 (THE), and within No. 51-75 (ARWU). In the 2020 national CHE University Ranking, the department is among the top rated departments for computer science and business informatics, being rated in the top group for the majority of criteria.
Notable people Seven faculty members of the Department of Informatics have been awarded the Gottfried Wilhelm Leibniz Prize, one of the highest endowed research prizes in Germany with a maximum of €2.5 million per award: 2020 – Thomas Neumann 2016 – Daniel Cremers 2008 – Susanne Albers 1997 – Ernst Mayr 1995 – Gerd Hirzinger 1994 – Manfred Broy 1991 – Friedrich L. Bauer was awarded the 1988 IEEE Computer Society Computer Pioneer Award for inventing the stack data structure. Gerd Hirzinger was awarded the 2005 IEEE Robotics and Automation Society Pioneer Award. and Burkhard Rost were awarded the Alexander von Humboldt Professorship in 2011 and 2008, respectively. Rudolf Bayer was known for inventing the B-tree and Red–black tree. References External links 1992 establishments in Germany Educational institutions established in 1992 Computer science departments
5162251
https://en.wikipedia.org/wiki/AT%26T%20UNIX%20PC
AT&T UNIX PC
The AT&T UNIX PC is a Unix desktop computer originally developed by Convergent Technologies (later acquired by Unisys), and marketed by AT&T Information Systems in the mid- to late-1980s. The system was codenamed "Safari 4" and is also known as the PC 7300, and often dubbed the "3B1". Despite the latter name, the system had little in common with AT&T's line of 3B series computers. The system was tailored for use as a productivity tool in office environments and as an electronic communication center. Hardware configuration 10 MHz Motorola 68010 (16-bit external bus, 32-bit internal) with custom, discrete MMU Internal MFM hard drive, originally 10 MB, later models with up to 67 MB Internal 5-1/4" floppy drive At least 512 KB RAM on main board (1 MB or 2 MB were also options), expandable up to an additional 2 MB via expansion cards Monochrome green phosphor monitor Internal 300/1200 bit/s modem RS-232 serial port Centronics parallel port 3 S4BUS expansion slots 3 phone jacks PC 7300 The initial PC 7300 model offered a modest 512 KB of memory and a small, low performance 10 MB hard drive. This model, although progressive in offering a Unix system for desktop office operation, was underpowered and produced considerable fan and drive bearing noise even when idling. The modern-looking "wedge" design was innovative, and, in fact, the machine gained notoriety appearing in many movies as the token "computer." AT&T 3B1 A later enhanced model was renamed "3B1". The cover was redesigned to accommodate a full-height 67 MB hard drive. This cover change added a 'hump' to the case, expanded onboard memory to 1 or 2 MB, as well as added a better power supply. S/50 Convergent Technologies offered an S/50 which was a re-badged PC 7300. Olivetti AT&T 3B1 British Olivetti released the "Olivetti AT&T 3B1 Computer" in Europe. Operating system The operating system is based on Unix System V Release 2, with extensions from 4.1 and 4.2 BSD, System V Release 3 and Convergent Technologies. The last release was 3.51. Programming languages AT&T BASIC dBase III GNU C++ LISP LPI C LPI COBOL LPI DEBUG (debugger) LPI Fortran LPI Pascal LPI PL/I Microsoft BASIC RM COBOL RM Fortran SMC BASIC SVS Fortran SVS Pascal Application software Business Graphics (produces chart graphics from 20/20 spreadsheet data) dBASE III (DBM) Informix (DBM) Oracle (DBM) Paint Power (drawing package) Samna/AT&T Write Power 2 (word processor/spreadsheet) Samna Plus (word processor/spreadsheet) SMART System (Office Suite) Sound Presentations (presentation graphics) Spreadsheet software 20/20 (Supercomp 20) Microsoft Multiplan Word processors AT&T Word Processor Crystal Writer Microsoft Word Samna Word SMART Word Processor WordMarc WordStar 2000 Games Chess Klondike Life Mahjongg Moria NetHack Pac-Man clone Robots Rocks (Asteroids clone) Super-Rogue 9.0 Tetris clone Utility EMACS HoneyDanBer UUCP package KA9Q (implements SLIP, built-in FTP, telnet, SMTP, finger which are otherwise not available without installing the Ethernet software) Kermit MGR window system Pcomm (ProComm clone) SPICE/NUTMEG (circuit simulation tool) TeX Various Shells: Bourne, C, and Korn Expansion cards DOS-73 8086 co-processor card running at 8 MHz with 512 KB RAM, an RS-232 COM2 port and optional 8087 math co-processor. It included MS-DOS 3.1. This board was designed and built for AT&T by Alloy Computer Products of Framingham MA. RAM could be added using 512 KB RAM or 2 MB RAM cards, up to a maximum of 4 MB (2 MB on the motherboard and 2 MB on expansion cards). 
EIA/RAM combo cards contained extra RAM (512 KB, 1 MB, or 1.5 MB) and two RS-232 serial ports. Dual EIA port card (same card as the EIA/RAM but without the RAM sockets) StarLAN 1 Mbit/s (1BASE5) network over twisted-pair wire local area network typically used in star format Ethernet 10 Mbit/s LAN card (AMD Lance-based) using AUI connector and Wollongong TCP/IP stack/drivers AUDIX Voice Power (“Speech Processor”) card allowed for the capture and digital recording of voice conversations. This was an option of the "Integrated Solution" package for the AT&T System 25 PBX where the UNIX PC served as the "Master Controller". Floppy Tape card provided interface for 23 MB MFM Tape Cartridge Drive (e.g. Cipher FloppyTape 525) QIC-02 card for tape backup Expansion chassis card was hard-wired to Expansion Chassis (with five added slots) Piiceon Model SR-2048 (2 MB) RAM expansion card Public domain software The STORE! was a public domain software repository provided by AT&T and accessible via dialup UUCP. Emulation The FreeBee emulator is available at . See also AT&T 6300 Plus Convergent Technologies DEC Professional (computer) IBM RT PC IBM System 9000 References External links AT&T Leapfrogs IBM With the Unix PC., InfoWorld, April 15, 1985, pp. 15–17 The AT&T Unix PC, article from BYTE magazine Volume 10 Number 05: Multiprocessing (May 1985), pp. 98–106 The AT&T Unix PC Review, article from BYTE magazine Volume 11 Number 05: Multiprocessing (May 1986), pp. 254–262 comp.sys.3b1 FAQ AT&T 3B1/7300 (UNIX PC) Information AT&T UNIX PC at old-computers.com http://bitsavers.trailing-edge.com/pdf/att/3b1/ http://www.unixpc.org Computer-related introductions in 1985 UNIX PC Computer workstations 68k architecture 32-bit computers Articles containing video clips
52340783
https://en.wikipedia.org/wiki/Smart%20ring
Smart ring
A smart ring is a wearable electronics device with advanced mobile components that combine features of mobile devices with innovative features useful for mobile or handheld use. Smart rings, which are typically the size of traditional rings or larger, combine the features of a mobile device, such as the ability to make payments and manage access control, with popular innovative uses such as gesture control and activity tracking. Smart rings can communicate directly with smart phones or compatible devices (such as personal computers) through a variety of applications and websites. Some smart rings can operate without the need of a mobile phone, such as when interacting with back-end systems in the cloud or when performing standalone functions such as activity tracking. They typically do not have a display and operate by contextual relevance, such as by making a payment when near a payment terminal, unlocking an electronic lock when near the lock, or controlling home appliances when making gestures in the home. Some smart rings have physical or capacitive buttons to use as an activation mechanism, such as to initiate a gesture or make a phone call. In 2013, the English firm McLear, founded by John McLear, Chris Leach, and Joseph Prencipe, released the first smart ring for sale.

Use
One of the main features of the smart ring is to serve as a near-field communication device, effectively eliminating the need to carry credit cards, door keys, car keys, and potentially even ID cards or driver's licenses. Other uses include connection to a smartphone in order to notify the user of incoming calls, texts, emails, and more; use as a gesture-based controller, allowing the user to perform a variety of actions with a simple motion of the hand; and measuring steps, distance, sleep, and heart rate, and tracking how many calories the user consumes.

Security
Secure access control, such as for company entry and exit, home access, cars, and electronic devices, was the first use of smart rings. Smart rings change the status quo for secure access control by increasing ease of use, decreasing physical security flaws such as the ease of losing the device, and by adding two-factor authentication mechanisms including biometrics and key code entry.

Payments and ticketing
Smart rings can perform payments and metro ticketing similar to contactless cards, smart cards, and mobile phones. Security of the transaction is equal to or greater than that of contactless cards. The first smart ring to be created with contactless payments was the NFC Payment Ring, which was mass produced and unveiled at the Olympic Summer Games in Rio de Janeiro in August 2016.

Activity
Similar to smartwatches, smart rings utilise in-built sensors to provide activity and wellness tracking, for example step and heartbeat tracking, temperature and sleep tracking (through measuring heartbeats and movements), and blood flow. The smart ring form factor contains enough space for the same components as smartwatches. However, due to size constraints, smaller components are typically used in current smart ring products in market, such as smaller and less accurate accelerometers, and smaller batteries leading to lower battery life than smart watches.

Communications
Through the use of a small microphone, or bone conduction, some smart rings can allow the wearer to make phone calls while paired with a compatible mobile phone. Smart rings are also able to notify the wearer of incoming calls and messages, by means of vibrating or lighting up.
Communications Using a small microphone or bone conduction, some smart rings allow the wearer to make phone calls when paired with a compatible mobile phone. Smart rings can also notify the wearer of incoming calls and messages by vibrating or lighting up. Additionally, some smart rings allow the wearer to see and feel the real-time heartbeat of a second smart ring wearer, with the heartbeat conveyed on the ring in a similar way, by lighting up and vibrating. Such rings require a connection to a smartphone with an active data or Wi-Fi connection so that data can be transferred between the two rings. The idea behind this function is to build on the premise known as the Vena amoris and to serve as a digital alternative to classic wedding or engagement rings. Social Smart rings provide social feedback to users and can be used to engage with the user's environment in a way that other wearables and mobile devices do not permit. Some smart rings use lights or vibrations to notify the user when they receive a text message, phone call, or other notification, enabling the user to be aware of it without constantly checking their smartphone. See also Smartwatch Wearable computer Personal organizer Near-field communication Oura (company) References Mobile computers Human–computer interaction Ubiquitous computing Wearable computers Wearable devices Rings (jewellery)
12852979
https://en.wikipedia.org/wiki/Bill%20Kroyer
Bill Kroyer
William Kroyer is an American director of animation and computer graphics for commercials, short films, movie titles, and theatrical films. He and Jerry Rees were the main animators for the CGI sequences in Tron. He is currently the head of the Digital Arts department at the Lawrence and Kristina Dodge College of Film and Media Arts at Chapman University. Career Kroyer began his animation career in 1975, working in a small commercial studio. In 1977 he ended up at Disney Studios as an animator on The Fox and the Hound, but later left Disney because he did not want to work on The Black Cauldron. It was then that he met future Tron director Steven Lisberger, who was working on Animalympics. After Animalympics was completed, Lisberger developed Tron and sold it to Disney. After Tron was finished, Kroyer decided to stay with computer animation instead of traditional animation and worked at Robert Abel and Associates and Digital Productions. In 1986, he and his wife, Sue, started Kroyer Films to combine computer animation with hand-drawn animation. They made a short film titled Technological Threat; it was nominated for an Academy Award in 1988 and preserved by the Academy Film Archive in 2008. After Technological Threat was finished, Kroyer continued working in computer animation on films such as Jetsons: The Movie and Rugrats in Paris: The Movie. He directed Computer Warriors: The Adventure Begins in 1990 and then FernGully: The Last Rainforest in 1992. He was originally set to direct Quest for Camelot but left the project over creative differences. Soon after, he joined Rhythm and Hues Studios as Senior Animation Director and supervised the CGI animation for films such as Garfield, Scooby Doo, Cats & Dogs and The Flintstones in Viva Rock Vegas. In early 2009 he began teaching at Chapman University's Dodge College of Film and Media Arts in Orange, California. In 2017 he and his wife Susan became the first couple to receive the June Foray Award from the international Animation Society for their "contributions to the art and industry of animation." Filmography Rugrats in Paris: The Movie (CG animation director: Rhythm & Hues) The Flintstones in Viva Rock Vegas (CG animation director: Rhythm & Hues) The Green Mile (animation supervisor: Rhythm & Hues) FernGully: The Last Rainforest (director) Computer Warriors: The Adventure Begins (writer, director) Jetsons: The Movie (computer animator: vehicle animation) Technological Threat (writer, director, producer, computer animation) Starchaser: The Legend of Orin (computer animation planner, key animator) Tron (production storyboards, computer image choreography) Animalympics (animation director, animator) See also Kroyer Films Rhythm & Hues Robert Abel and Associates Digital Productions Silicon Graphics Wavefront Technologies Alias Research Symbolics Graphics Division Apple Macintosh Softimage 3D Cinetron Computer Systems CSRG, UC Berkeley References External links Creating the Memories by William Kroyer 20th-century births Living people American animators American animated film directors Year of birth missing (living people)
3110553
https://en.wikipedia.org/wiki/Statgraphics
Statgraphics
History Statgraphics Centurion is a statistics package that performs, and explains in plain language, both basic and highly advanced statistical analyses. The software was created in 1980 by Dr. Neil W. Polhemus while on the faculty at the Princeton University School of Engineering and Applied Science, as an advanced teaching tool for his statistics students. It soon became evident that the software would be useful to the business community at large, so it was made available to the public in 1982, becoming the highest-selling analytical program in the world for the rest of the decade and well into the 1990s, and the first data science software designed for use on the PC. Statgraphics Centurion 18.2.14 was initially released in October 2017, with subsequent free updates released periodically to improve the platform and add new functions. New upgrade versions, with added capabilities and newly designed features and enhancements, are released every two to three years. Statgraphics is used by quality professionals, research scientists, academics and industrial concerns, with global clients that include over 40% of the Fortune 500 companies, smaller firms pursuing best practices, and many prominent educational institutions around the world. It is designed to serve those whose profession requires the analysis of data for business intelligence, predictive analytics, Six Sigma and other sophisticated statistical protocols. Contents Version 18, offered in both 32-bit and 64-bit editions, is available in five languages: English, French, Spanish, German and Italian. The 64-bit edition can process very large data sets, bringing it into the realm of "big data" analytics. The current version is a Windows desktop application with extensive capabilities for regression analysis, ANOVA, multivariate statistics, design of experiments, statistical process control, life data analysis, data visualization and more. It contains more than 260 data analysis procedures, from summary statistics to advanced statistical models, including descriptive statistics, hypothesis testing, regression analysis, analysis of variance, survival analysis, time series analysis and forecasting, sample size determination, multivariate methods and Monte Carlo techniques. The SPC menu includes many procedures for quality assessment, capability analysis, control charts, measurement systems analysis, and acceptance sampling. The program also features a DOE Wizard that creates and analyzes statistically designed experiments. Version 18 added several machine learning methods, the big-data capability noted above, and an R interface. A Python interface will be included in the next version, Statgraphics Centurion 19, due for release in 2020. Stratus™ is a cloud-based SaaS program which runs on PC, Mac, Linux, smartphones and tablets and which contains most of the primary capabilities needed by analysts for routine data analysis anywhere an internet connection is available, allowing 24/7/365 access. Calculations are performed remotely on Statgraphics servers, with results returned to the user's browser as HTML. Sigma express™ is an Excel add-in that gives quick access to the entire Statgraphics Six Sigma toolbox of statistical techniques within Excel.
It is aimed at analysts who are comfortable in the Excel environment but need the more extensive statistical capability required to implement successful Six Sigma projects and other analytical assignments. Statbeans™ is a collection of JavaBeans which implement many commonly used statistical procedures and are designed to be embedded in user-created applications or placed on web pages. Their structure as a component library enables simple manipulation in various visual environments. Statgraphics®.Net Web Services enable web application developers to call Statgraphics procedures from their web pages. Data and instructions are passed to the web servers as XML. This product can be embedded in OEM proprietary applications or integrated into a dashboard. Applications Statgraphics is frequently used for Six Sigma process improvement. The program has also been used in various health and nutrition-related studies. The software is heavily used in the manufacture of chemicals, pharmaceuticals, medical devices, automobiles, food and consumer goods. It is also widely used in mining, environmental studies, and basic R&D. Government agencies such as NASA and the EPA also use the program, as do many colleges and universities around the world. Distribution Statgraphics is distributed by Statgraphics Technologies, Inc., a privately held company based in The Plains, Virginia. See also List of statistical packages Comparison of statistical packages List of information graphics software References Statistical software Science software for Windows
80527
https://en.wikipedia.org/wiki/Electra
Electra
Electra is one of the more popular mythological characters in tragedies. She is the main character in two Greek tragedies, Electra by Sophocles and Electra by Euripides. She is also the central figure in plays by Aeschylus, Alfieri, Voltaire, Hofmannsthal, and Eugene O'Neill. Her characteristic can be stated as a vengeful soul in The Libation Bearers, the second play of Aeschylus' Oresteia trilogy, because she plans out an attack with her brother to kill their mother, Clytemnestra. In psychology, the Electra complex is named after her. Family Electra's parents were King Agamemnon and Queen Clytemnestra. Her sisters were Iphigeneia and Chrysothemis, and her brother was Orestes. In the Iliad, Homer is understood to be referring to Electra in mentioning "Laodice" as a daughter of Agamemnon. Murder of Agamemnon Electra was absent from Mycenae when her father, King Agamemnon, returned from the Trojan War. When he came back, he brought with him his war prize, the Trojan princess Cassandra, who had already borne him twin sons. Upon their arrival, Agamemnon and Cassandra were murdered, by either Clytemnestra herself, her lover Aegisthus, or both. Clytemnestra had held a grudge against her husband for agreeing to sacrifice their eldest daughter, Iphigenia, to Artemis so he could send his ships to fight in the Trojan war. In some versions of this story, Iphigenia was saved by the goddess at the last moment. Eight years later, Electra returned home from Athens at the same time as her brother, Orestes. (Odyssey, iii. 306; X. 542). According to Pindar (Pythia, xi. 25), Orestes was saved either by his old nurse or by Electra, and was taken to Phanote on Mount Parnassus, where King Strophius took charge of him. When Orestes was twenty, the Oracle of Delphi ordered him to return home and avenge his father's death. Murder of Clytemnestra According to Aeschylus, Orestes recognized Electra's face before the tomb of Agamemnon, where both had gone to perform rites to the dead, and they arranged how Orestes should accomplish his revenge. Orestes and his friend Pylades, son of King Strophius of Phocis and Anaxibia, killed Clytemnestra and Aegisthus (in some accounts with Electra helping). Before her death, Clytemnestra cursed Orestes. The Erinyes or Furies, whose duty it is to punish any violation of the ties of family piety, fulfill this curse with their torment. They pursue Orestes, urging him to end his life. Electra was not hounded by the Erinyes. In Iphigeneia in Tauris, Euripides tells the tale somewhat differently. In his version, Orestes was led by the Furies to Tauris on the Black Sea, where his sister Iphigenia was being held. The two met when Orestes and Pylades were brought to Iphigenia to be prepared for sacrifice to Artemis. Iphigeneia, Orestes, and Pylades escaped from Tauris. The Furies, appeased by the reunion of the family, abated their persecution. Electra then married Pylades. 
Adaptations of the Electra story Plays The Oresteia, a trilogy of plays by Aeschylus Electra, play by Sophocles Electra, play by Euripides Orestes, play by Euripides Electra, a lost play by Quintus Tullius Cicero of which nothing is known but the name and that it was "a tragedy in the Greek style" Electra (1901) a play by Benito Pérez Galdós Elektra, a 1903 play by Hugo von Hofmannsthal, based on the Sophocles play Mourning Becomes Electra, 1931 play by Eugene O'Neill, based on Aeschylus Electra, 1937 play by Jean Giraudoux The Flies, a 1943 play by Jean-Paul Sartre, modernizing the Electra myth by introducing the theme of existentialism Electra (started in 1949, first performed 1987), a play by Ezra Pound and Rudd Fleming Electra, or The dropping of the masks (1954) a play by Marguerite Yourcenar Electra and Orestes, plays by Adrienne Kennedy, 1972 Electra (1974) a play by Robert Montgomery, directed by Joseph Chaikin Electra, 1995 drama by Danilo Kiš Electricidad, 2004 play by Luis Alfaro, modern adaptation of Electra based in the Chicano barrio Electra/Orestes, 2015 play by Jada Alberts and Anne-Louise Sarks Small and Tired, 2017 play by Kit Brookman Pitribhumi, 2021 play by Sandarbha Opera Elektra, by Richard Strauss, with libretto by Hugo von Hofmannsthal, based on his own play Electra, by Mikis Theodorakis Mourning Becomes Electra, by Marvin David Levy, based on Eugene O'Neill's play Idomeneo, by Wolfgang Amadeus Mozart, where she plays the role of rejected lover/villain Electra, opera by Johann Christian Friedrich Haeffner, Libretto by Adolf Fredrik Ristell after Nicolas Francois Guillard Électre, tragédie lyrique by Jean-Baptiste Lemoyne, set to a libretto by Nicolas-François Guillard Films Electra, a film by Michael Cacoyannis, starring Irene Papas, based on Euripides Mourning Becomes Electra (film), a film by Dudley Nichols, starring Rosalind Russell and Michael Redgrave The Forgotten Pistolero, a Spaghetti Western by Ferdinando Baldi starring Leonard Mann and Luciana Paluzzi Ellie, a film which transfers the story to a Southern U.S. locale Szerelmem, Elektra (Elektra, My Love), film by Miklós Jancsó, starring Mari Törőcsik Filha da Mãe and Mal Nascida, both by Portuguese film director João Canijo elektraZenSuite, medium-length film by Alessandro Brucini, based on texts by Aeschylus, Sophocles, William Shakespeare, Hugo von Hofmannsthal, Sylvia Plath, and the Zen Buddhist monk Takuan Soho Electra, film by Shyamaprasad, starring Nayanthara, Skanda Ashok, Manisha Koirala and Prakash Raj, based on Euripides Literature Elektra (Laodice) is the unnamed protagonist and speaker in Yannis Ritsos's long poem Beneath the Shadow of the Mountain. This poem forms part of the cycle colloquially referred to as the New Oresteia. Electra is the eponymous narrator of her story in the book 'Electra' by Henry Treece. (Bodley Head, 1963: Sphere Books., 1968). Electra on Azalea Path is the title of Sylvia Plath's poem published in 1959, in reference to the Electra Complex A central character in Donna Leon's crime fiction series is a present-day young woman named Elettra (the Italian form of "Electra"), who is highly resourceful and who bears some resemblance to the mythological character. House of Names, by Colm Tóibín. A retelling of the story of Agamemnon's death and the resulting events. (Simon and Schuster, May 9, 2017. 275 pages) Elektra a novel by Jennifer Saint that tells the story of Elektra's life, coming out April 2022 See also 130 Elektra - asteroid named after Electra. 
Bibliography References External links Women in Greek mythology Matricides Children of Agamemnon Princesses in Greek mythology
1010814
https://en.wikipedia.org/wiki/Minimally%20invasive%20education
Minimally invasive education
Minimally invasive education (MIE) is a form of learning in which children operate in unsupervised environments. The methodology arose from an experiment done by Sugata Mitra while at NIIT in 1999, often called The Hole in the Wall, which has since gone on to become a significant project with the formation of Hole in the Wall Education Limited (HiWEL), a cooperative effort between NIIT and the International Finance Corporation, employed in some 300 'learning stations', covering some 300,000 children in India and several African countries. The programme has been feted with the digital opportunity award by WITSA, and been extensively covered in the media. History Background Professor Mitra, Chief Scientist at NIIT, is credited with proposing and initiating the Hole-in-the-Wall programme. As early as 1982, he had been toying with the idea of unsupervised learning and computers. Finally, in 1999, he decided to test his ideas in the field. The experiment On 26 January 1999, Mitra's team carved a "hole in the wall" that separated the NIIT premises from the adjoining slum in Kalkaji, New Delhi. Through this hole, a freely accessible computer was put up for use. This computer proved to be popular among the slum children. With no prior experience, the children learned to use the computer on their own. This prompted Mitra to propose the following hypothesis: The acquisition of basic computing skills by any set of children can be achieved through incidental learning provided the learners are given access to a suitable computing facility, with entertaining and motivating content and some minimal (human) guidance. In the following comment on the TED website Mitra explains how they saw to it that the computer in this experiment was accessible to children only: "... We placed the computers 3 feet off the ground and put a shade on top, so if you are tall, you hit your head on it. Then we put a protective plastic cowl over the keyboard which had an opening such that small hands would go in. Then we put a seating rod in front that was close to the wall so that, if you are of adult height, your legs would splay when you sit. Then we painted the whole thing in bright colours and put a sign saying 'for children under 15'. Those design factors prevented adult access to a very large extent." Results Mitra has summarised the results of his experiment as follows. Given free and public access to computers and the Internet, a group of children can Become computer literate on their own, that is, they can learn to use computers and the Internet for most of the tasks done by lay users. Teach themselves enough English to use email, chat and search engines. Learn to search the Internet for answers to questions in a few months time. Improve their English pronunciation on their own. Improve their mathematics and science scores in school. Answer examination questions several years ahead of time. Change their social interaction skills and value systems. Form independent opinions and detect indoctrination. Current status and expansion outside India The first adopter of the idea was the Government of National Capital Territory of Delhi. In 2000, the Government of Delhi set up 30 Learning Stations in a resettlement colony. This project is ongoing and said to be achieving significant results. Encouraged by the initial success of the Kalkaji experiment, freely accessible computers were set up in Shivpuri (a town in Madhya Pradesh) and in Madantusi (a village in Uttar Pradesh). 
These experiments came to be known as Hole-in-the-Wall experiments. The findings from Shivpuri and Madantusi confirmed the results of the Kalkaji experiment. It appeared that the children in these two places picked up computer skills on their own. Dr. Mitra described this new way of learning as "Minimally Invasive Education". At this point, the International Finance Corporation joined hands with NIIT to set up Hole-in-the-Wall Education Ltd (HiWEL). The idea was to broaden the scope of the experiments and conduct research to prove and streamline Hole-in-the-Wall. The results show that children learn to operate, as well as play with, the computer with minimal intervention. They picked up skills and tasks by constructing their own learning environment. Today, more than 300,000 children have benefited from 300 Hole-in-the-Wall stations over the last 8 years. In India, Suhotra Banerjee (Head-Government Relations) has increased the reach of HiWEL learning stations in Nagaland, Jharkhand, Andhra Pradesh... and is slowly expanding their numbers. Besides India, HiWEL also has projects abroad. The first such project was established in Cambodia in 2004. The project currently operates in Botswana, Mozambique, Nigeria, Rwanda, Swaziland, Uganda, and Zambia, besides Cambodia. The idea, also called Open learning, is even being applied in Britain, albeit inside the classroom. HiWEL Hole-in-the-Wall Education Ltd. (HiWEL) is a joint venture between NIIT and the International Finance Corporation. Established in 2001, HiWEL was set up to research and propagate the idea of Hole-in-the-Wall, a path-breaking learning methodology created by Mitra, Chief Scientist of NIIT. Awards and recognition Digital Opportunity Award by the World Information Technology and Services Alliance (WITSA) in 2008, cited for "groundbreaking work in developing computer literacy and improving the quality of education at a grass root level." Coverage in the media The project has received extensive coverage from sources as diverse as UNESCO, Business Week, CNN, Reuters, and The Christian Science Monitor, besides being featured at the annual TED conference in 2007. The project received international publicity when it was found to be the inspiration behind the book Q & A, itself the inspiration for the Academy Award-winning film Slumdog Millionaire. HiWEL has also been covered by the Indian Reader's Digest. In school Minimally Invasive Education in school adduces that there are many reasons why children may have difficulty learning, especially when the learning is imposed and the subject is something the student is not interested in, a frequent occurrence in modern schools. Schools also label children as "learning disabled" and place them in special education even if the child does not have a learning disability, because the schools have failed to teach the children basic skills. Minimally Invasive Education in school asserts that there are many ways to study and learn. It argues that learning is a process you do, not a process that is done to you. The experience of schools taking this approach shows that there are many ways to learn without teaching, that is, without the intervention of a teacher being imperative. In the case of reading, for instance, in these schools some children learn from being read to, memorizing the stories and then ultimately reading them. Others learn from cereal boxes, others from game instructions, others from street signs. Some teach themselves letter sounds, others syllables, others whole words.
They adduce that in their schools no one child has ever been forced, pushed, urged, cajoled, or bribed into learning how to read or write, and they have had no dyslexia. None of their graduates are real or functional illiterates, and no one who meets their older students could ever guess the age at which they first learned to read or write. In a similar form students learn all the subjects, techniques and skills in these schools. Every person, children and youth included, has a different learning style and pace and each person, is unique, not only capable of learning but also capable of succeeding. These schools assert that applying the medical model of problem-solving to individual children who are pupils in the school system, and labeling these children as disabled—referring to a whole generation of non-standard children that have been labeled as dysfunctional, even though they suffer from nothing more than the disease of responding differently in the classroom than the average manageable student—systematically prevents the students' success and the improvement of the current educational system, thus requiring the prevention of academic failure through intervention. This, they clarify, does not refer to people who have a specific disability that affects their drives; nor is anything they say and write about education meant to apply to people who have specific mental impairments, which may need to be dealt with in special, clinical ways. Describing current instructional methods as homogenization and lockstep standardization, alternative approaches are proposed, such as the Sudbury model schools, an alternative approach in which children, by enjoying personal freedom thus encouraged to exercise personal responsibility for their actions, learn at their own pace rather than following a chronologically-based curriculum. These schools are organized to allow freedom from adult interference in the daily lives of students. As long as children do no harm to others, they can do whatever they want with their time in school. The adults in other schools plan a curriculum of study, teach the students the material and then test and grade their learning. The adults at Sudbury schools are "the guardians of the children's freedom to pursue their own interests and to learn what they wish," creating and maintaining a nurturing environment, in which children feel that they are cared for, and that does not rob children of their time to explore and discover their inner selves. They also are there to answer questions and to impart specific skills or knowledge when asked to by students. As Sudbury schools, proponents of unschooling have also claimed that children raised in this method do not suffer from learning disabilities, thus not requiring the prevention of academic failure through intervention. "If learning is an emergent phenomenon, then the teacher needs to provide stimulus — lots of it – in the form of “big” questions. These must include questions to which the teacher, or perhaps anyone, does not have the answer. These should be the sorts of questions that will occupy children’s minds perpetually. The teacher needs to help each child cultivate a vision of the future. Thus, a new primary curriculum needs to teach only three skills: 1. Reading comprehension: This is perhaps the most crucial skill a child needs to acquire while growing up. 2. Information search and analysis: First articulated at the National Institute of Technology in India by professor J.R. 
Isaac in the early 1990s — decades ahead of its time — this skill set is vital for children searching for answers in an infinite cyberspace. 3. A rational system of belief: If children know how to search, and if they know how to read, then they must learn how to believe. Each one of us has a belief system. How soon can a child acquire one? A rational belief system will be our children's protection against doctrine. Children who have these skills scarcely need schools as we define them today. They need a learning environment and a source of rich, big questions. Computers can give out answers, but they cannot, as of yet, make questions. Hence, the teacher's role becomes bigger and stranger than ever before: She must ask her "learners" about things she does not know herself. Then she can stand back and watch as learning emerges." See also Open learning Didactic method Response to intervention Positive Behavior Interventions and Supports Sudbury school Problem-based learning Notes and references External links The Hole in the Wall site https://web.archive.org/web/20080523112413/http://www.ascilite.org.au/ajet/ajet21/mitra.html https://web.archive.org/web/20070816042917/http://www.egovmonitor.com/node/5865 Live Conversation with Professor Sugata Mitra at WizIQ.com Classrooms in the cloud or castles in the air? Alternative education Computing and society Educational technology Human–computer interaction Pedagogy
5127870
https://en.wikipedia.org/wiki/GCOS
GCOS
GCOS may refer to: Affymetrix GeneChip Operating Software Global Climate Observing System General Comprehensive Operating System, a family of operating systems oriented toward mainframes, originally called GECOS (General Electric Comprehensive Operating Supervisor) See also GKOS keyboard Geckos GameCube OS, a homebrew operating system for the Nintendo GameCube and later the Nintendo Wii Google Chrome Operating System, an open-source, lightweight operating system initially targeted at netbooks General Cargo Operational System (GCOS), an in-house built operational system of Transnet Port Terminals
18496638
https://en.wikipedia.org/wiki/Universal%20Gen%C3%A8ve
Universal Genève
Universal Genève SA is a Swiss luxury watch company, founded in 1894 as Universal Watch. Since its beginnings, the company has produced complete watches with in-house movements, and throughout the 20th century, distributed many notable and important timepieces. Along with neighboring Geneva companies Audemars Piguet, Girard-Perregaux, Patek Philippe and Rolex, Universal is internationally regarded for its style of craftsmanship and manufacture. In addition, the brand also makes historical claim for creating the first-ever chronographic wristwatch in 1917. History 1894–1930s: Beginnings Started and briefly based in Le Locle, co-founder Ulysse Perret would relocate Universal to Geneva in 1919, solidifying the company's status as a Genève brand. During the company's tenure in much smaller Le Locle (Neuchâtel region), Perret had conceptualized the company as Universal Watch in 1894 with classmate Numa-Emile Descombes, both of whom were horology students at the time. Although Universal began only as a manufacturer and retailer of cases, crowns, dials and movements, the company whilst under Perret and Descombes patented the brand's first 24-hour indication watch. After Descombes' death in 1897 at the age of 34, Perret recruited Louis Edouard Berthoud as a co-manufacturer of complications, and both briefly operated under the registered name Perret & Berthoud before switching to Universal Watch et Company (UWEC) Genève, Ltd. after relocating to Geneva. Under both trademarks, the horologists created various pocket and trench watches for both sides during World War I. By 1925, the duo created the brand's first patented self-winding timepiece called the Auto Rem, an octagon-shaped men's wristwatch with lozenge-styled hands and a 15-jewel movement. Following Perret's passing in 1933, his son would take over management, and Universal would remain a family-run business for 30 more years. 1930s–1950s: The Chronograph and "Watch Couturier" era After the pocketwatch started to lose usefulness in favor of the more convenient wristwatch during the first world war, Universal seized the opportunity by creating the Compur in 1933 and the Aero Compax ("Aviator's Compact Chronograph") in 1936, shortly before the start of World War II. In addition to its automatic "smooth sweep" timekeeping, the Compax was also equipped with a built-in stopwatch which made it a suitable device for soldiers during training exercises and full-fledged combat operations. The Compax was produced in many variations including the Moon Phase, Medico, Tri-, Uni-, and Master Vortex. During the same period, Universal briefly collaborated with Parisian high fashion brand Hermès and designed the Pour Hermès ("For Hermès") chronographs, which featured square button registers, telemeters and tachometers, a movement containing a Breguet balance spring, and an Arabic-numeral dial. Hermès' Paris headquarters would in turn act as a major sales hub for all Universal brand watches in Europe until the 1950s, while the Henri Stern Watch Agency in Manhattan, the U.S. distributorship of Patek Philippe, would be an official Universal Genève dealer in North America. Universal's popularity with the chronographs caught the attention of high-ranking government officials throughout Europe, including the Dutch Royal Family, who granted the Swiss brand a Royal Warrant in 1939 to issue a military watch for the nation's army, with then-Queen Wilhelmina's initials embossed on the dial. 
The Dutch army utilized this watch until Nazi Germany bombed Rotterdam in May of the following year and occupied the Netherlands until 1945. For female civilians during that era, Universal distributed the art deco "Couture Diamond" watch, which featured a mother-of-pearl dial rimmed with diamonds and was manufactured in gold, stainless steel or platinum. The feminine cuff watch, which earned Universal Genève the title of "watch couturier", was sold in affluent boutiques worldwide and was most popular among actresses, socialites and wives of world leaders. The Martel Watch Company in Les Ponts-de-Martel had supplied movements for many of Universal Genève's chronographic timepieces since 1918, with Universal adapting the complications as Cal 285s. However, the mechanisms would be rechristened as Zenith 146s, 146Ds and 146Hs when the competing Le Locle watchmaker Zenith acquired Martel, and effectively all of its patents, by 1960. 1950s–1960s: Microtor automatics Arguably the best-known Universal watch of the post-war era was the Polerouter. Designed by Gérald Genta, it was originally produced as the Polarouter in 1954, appearing with a Cal 138SS Bumper movement. The following year it was replaced with the innovative Cal 215 microtor movement which, with minor changes and a name change (from Polarouter to Polerouter, in 1955), was produced until late 1969. In its initial fifteen years of production, the watch appeared in many variations including the Polerouter de luxe, Polerouter Jet, Polerouter Super, Polerouter Genève, Polerouter Compact, Polerouter "NS", Polerouter III, and the Polerouter Sub diver's watch. The Polerouter's durability under extreme temperatures and fluctuating altitudes made it a preferred timepiece among Scandinavian Airlines' pilots who made flights over the Arctic. The worldwide acclaim of the Polerouter Date was comparable to the reputation of similar Genève automatics like the Rolex Oyster Date and Omega Seamaster Date. The Golden Shadow and White Shadow were first produced in 1965 and contained the thinnest automatic watch movements at the time, with a thickness of only 2.3 mm, a record held until 1978. The Shadows were also designed by Genta and were available in 18K yellow and white gold as the Golden Shadow, and in stainless steel as the White Shadow. Both watches contained the Caliber 2-66 micro rotor movement until the late 1960s. 1970–1980s: Decrease in automatic and mechanical wind production During the 1970s, Universal was one of the few Swiss watch brands to introduce a quartz movement and phase out automatics, coinciding with an era now known as the "quartz crisis". While the company continued to use silver, gold, platinum and diamonds for its dials, cases, bands and bracelets, the switch to quartz oscillators was a cost-efficient alternative to automatic complications, which were considerably more expensive and time-consuming to produce and could not compete with mass-produced electronic movements. In particular, the Golden and White Shadows, which previously contained microtors, would be replaced with Unisonics and Accutrons. Because quartz technology had originated in Japan, and a significant portion of company revenue was already centered in Hong Kong, Universal began to focus most of its attention on the Asian watch market.
Although Forbes still ranked Universal (in price) to Corum, IWC and Rolex, and as being more expensive than Omega, Longines and Baume et Mercier, the international marketing strategies and venture to quartz proved economically devastating for the brand, causing loss of capital among its holding companies, and in effect, the popularity of the brand itself. 1990s–present: Comeback After a difficult period in the 1980s and 1990s, Universal Genève released a series of watches with a new micro-rotor caliber which revisited the company's earlier success. Although still headquartered in Geneva, Universal Genève was purchased in 1989 by Hong Kong-based investment firm Stelux Holdings International, Ltd., which also owns Cyma, another high-end Swiss watchmaker whose patents had been owned by Universal since 1918. During the late 1960s and early 1970s, Universal was owned by New York-based Bulova, an acquisition which expanded the manufacture's fame in Japanese and North American markets, and led to stylistic collaborations with other watch or jewelry firms such as Tiffany's, Cartier SA and Movado. As of 2011, Universal is an active member of the Federation of the Swiss Watch Industry, maintains three offices in Switzerland and oversees La Chaux-de-Fonds-based watchmaker Cyma. Notable wearers Many celebrities, writers, business executives and diplomats from around the globe have owned both contemporary and vintage Universal Genève watches: Athletes The 1997 and 1998 Universal Ayrton Senna sports chronograph watches were named after late Formula One race car driver Ayrton Senna. As the limited edition watch bears Senna's name, proceeds from sales were to endow Instituto Ayrton Senna, an anti-poverty charity started by his sister, Vivianne. Business and media In the late 1990s, Playboy illustrator LeRoy Neiman appeared in a print campaign promoting Universal Genève, with Neiman pictured in his studio wearing a Universal Genève Golden Janus and his oil paintings displayed in the foreground. In his book, Marking Time: Collecting Watches and Thinking about Time, Simon & Schuster editor-in-chief Michael Korda recalled receiving a pink gold Universal Genève Tri-Compax from his uncle while attending Le Rosey boarding school in Switzerland, and cited the brand as piquing his interest in watches. Diplomats and politicians 29th and 41st Argentine President Juan Perón was a wearer of a Universal Tri-Compax, as was 33rd U.S. President Harry S. Truman, who donned the popular model at the Potsdam Conference. 45th U.S. President Donald Trump owned a Universal Genève Senna watch before donating the timepiece to Antiquorum for charity. 40th President of the Dominican Republic Hector Trujillo owned a pink gold and enamel Universal Genève, ostensibly given as a diplomatic gift. Because of Swiss neutrality during World War II, Swiss made goods had been exported to international buyers with no basis on their country's alliance, reputation or political standing in the world. Although Hermann Göring owned many different brands of watches, the Reichsmarschall had given his Universal Genève Compax to Nuremberg Trials guard Lt. Jack Wheelis the night before his scheduled execution. While Wheelis' family maintained that the gift was a friendly gesture, historians have long attributed the wristwatch as being a bribe for the cyanide pill Göring ingested to escape the hangman. 
Fernando Aubel, a former Chilean Air Force General (1978–1990) under Augusto Pinochet's military dictatorship, recalled receiving a Universal Genève chronometer wristwatch as a young man and still wearing it at the present, according to a personal memoir. Film French writer, poet, filmmaker and one-time Cannes Film Festival president Jean Cocteau was an outspoken fan of Universal Genève's style of manufacture, and a limited line of Universal tourbillons had etched verses of Cocteau's poetry. Jon Voight owned a personalized Compax and Joan Rivers owned a Golden Shadow before both actors donated their watches to Antiquorum. Musicians English musician Eric Clapton donned a panda dial Tri-Compax during his years with the rock band Cream in the mid to late 1960s, leading to this particular model being nicknamed the "Eric Clapton" Price and value BusinessWeek cited the market value of most Universal Genève watches from the 1960s as approximately ranging between $2,500–$3,500 (figures adjusted to 2010 inflation) Among the rarest and most expensive of Universal Genève timepieces includes the 'Golden Janus', a 1994 centennial of the 1930s Cabriolet, which were limited to 10 in number and have realized at an upwards $50,000 (43,700 CHF) at auction. Universal Genève's 'A. Cairelli Rattrapante', an aviator's chronograph, consists of a 24-hour dial with a 16-minute register. Manufactured in Rome, the wristwatch was produced only sporadically between 1939 and 1945 and was originally meant for the Royal Italian Air Force (Regia Aeronautica Italiana). At private dealers and auction houses such as Sotheby's and Christie's, the Universal Genève Cairellis have closed between an estimated $90,000 and $130,000. References External links Universal Genève company website Luxury brands Swiss watch brands Watch manufacturing companies of Switzerland Manufacturing companies established in 1894 Design companies established in 1894 Swiss companies established in 1894
23654752
https://en.wikipedia.org/wiki/Information%20Technology%20Institute
Information Technology Institute
The Information Technology Institute (ITI) is a national institute specializing in IT, established in Egypt in 1993 by the Egyptian Information and Decision Support Center (IDSC). It provides specialized software development programs to fresh graduates, as well as professional training programs and IT courses for the Egyptian Government, ministries, and local decision support centers. In line with the government's objective of providing access and opportunity for all, ITI opened a second branch in Alexandria in 1996 to broaden coverage of its services, and in September 2007 it opened two further branches in Assiut and Mansoura to extend the reach of its training services. ITI Management The board of trustees of ITI is headed by the Minister of Communications and Information Technology. The board members include experts from the MCIT, academia, and information and telecommunication companies. External links About IDSC Information technology in Egypt
30639514
https://en.wikipedia.org/wiki/Stat-Ease
Stat-Ease
Stat-Ease, Inc. is a privately held company based in Minneapolis, Minnesota, that produces statistical software; it was founded by Pat Whitcomb in 1982. The company currently has 14 employees. The company provides software packages for engineers and scientists using design of experiments (DOE) methods to optimize the development of products and processes. It also provides DOE training and consulting services. History Stat-Ease was founded by Pat Whitcomb while at General Mills in 1982. He later brought in two of his General Mills colleagues: Tryg Helseth as programmer and Mark Anderson as business manager. Whitcomb and Anderson are principals of the company today. The company sold its first software in June 1985. Sales took off in 1987 when the software was described as "incredibly easy to use" in a review of DOE software. In 1988, the company released its first version of Design-Expert software, which provided the tools for response surface methods (RSM) for process optimization. This package complemented Design-Ease, which handled factorial designs, and also provided statistical tools for optimizing mixtures in the chemical process industries. In 1996, the firm added the features of Design-Ease into Design-Expert version 5 and translated it from DOS to Windows. Both packages are still marketed today. In 1996, Forbes Magazine said the "new mantra" for process improvement is multivariate testing (MVT) and added: "A Minneapolis software firm, Stat-Ease, sells most of the software these MVT types use." Felix Grant, reviewing Design-Expert version 7.1.3 in Scientific Computing Magazine, said: "In a mature, well-established product which dominates its market, upgrades should be expected to reflect developing practice and display evolutionary growth rather than piling on attention-seeking gimmicks; and so it is here.... Core functions have been extended usefully... the design editor in Design-Expert (DX) has long been a strong point, providing a central cockpit from which to intuitively refine and test most aspects, but various new control features have now been added to the package, small in themselves but significantly enhancing productive control." Statistical design of experiments Minimum-Run Resolution IV and Minimum-Run Resolution V experimental designs were invented by Pat Whitcomb and Gary Oehlert in 2004. Minimum-run resolution IV ("MR4") factorial designs estimate all main effects, clear of two-factor or higher interactions, in a minimum of experimental runs. MR4 designs work well for factor screening. Minimum-run resolution V ("MR5") factorial designs estimate all main effects and two-factor interactions in a minimum of experimental runs. MR5 designs are typically done after screening to a vital few factors, which then need to be studied in more depth in case they interact. Whitcomb and Oehlert won the Shewell Award in 2008 for the invention of the half-normal plot of effects for general factorials. They also developed statistical tools in 2008 to calculate power for a broad range of experimental designs, and precision as a power substitute via fraction of design space (FDS) for response surface methods (RSM) and mixture designs. Whitcomb and Anderson have written two non-academic books on DOE; the books include a free educational version of Stat-Ease software.
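For context on the resolution terminology above, the sketch below builds a textbook two-level fractional factorial of resolution IV in plain Python: a 2^(4-1) design with generator D = ABC (defining relation I = ABCD). Under this aliasing, each main effect is confounded only with a three-factor interaction, so all main effects are clear of two-factor interactions. This is the standard construction shown purely as an illustration; it is not the proprietary minimum-run design method developed by Whitcomb and Oehlert.

```python
# Illustrative sketch: a standard 2^(4-1) resolution IV fractional factorial.
# Factors A, B, C form a full 2^3 design; the fourth factor is generated as
# D = ABC, giving the defining relation I = ABCD. Each main effect is then
# aliased only with a three-factor interaction (e.g. A with BCD), so main
# effects are estimated clear of two-factor interactions in just 8 runs.
from itertools import product

def fractional_factorial_2_4_1():
    runs = []
    for a, b, c in product((-1, 1), repeat=3):   # full factorial in A, B, C
        d = a * b * c                            # generator: D = ABC
        runs.append((a, b, c, d))
    return runs

design = fractional_factorial_2_4_1()
for run in design:
    print(run)        # 8 runs covering 4 two-level factors
print(len(design))    # 8
```

Minimum-run resolution IV and V designs pursue the same aliasing goals while further trimming the number of runs, which is what distinguishes them from standard fractions like this one.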
Applications Alberto-Culver developed a new line of scrubs using Design-Expert software from Stat-Ease. Stat-Ease software was used by Los Alamos National Laboratory researchers to design a set of experiments demonstrating the application of model validation techniques to a structural dynamics problem. It was also used by researchers to design experiments optimizing the effects of storage on the physico-chemical, microbiological and sensory quality of bottlegourd-basil leaf juice. Invitrogen used Stat-Ease Design-Expert software to optimize a cell culture bioproduction system. The researcher states: "This experiment demonstrates how a robotically controlled microbioreactor system can be combined with DoE methods to optimize cell-culture media and feeding strategies. The new process is rich in information and provides a solid understanding of the most influential factors affecting performance of specific cell lines." The United States Environmental Protection Agency (EPA) evaluated the physicochemical properties of nine surfactants used in the remediation of perchloroethylene (PCE) in aqueous solutions using a quadratic response surface experimental design. Design-Expert software was used to generate the experimental design and perform the analysis. The research provided predictive models for alterations in the physicochemical properties of pore fluid relevant to surfactant-enhanced aquifer remediation of PCE. Researchers investigated the possibility of producing poly-3-hydroxybutyrate (P(3HB)) polyester using corn syrup; the concentrations of the different ingredients were optimized using DOE performed with Design-Expert software. Researchers at the University of Nottingham demonstrated that DNA extracted from both green and roasted beans could be used in a restriction fragment length polymorphism (RFLP)-based analysis to differentiate between Arabica and Robusta types of coffee. Design-Expert software was used for the design of experiments comparing and optimizing yields using a variety of commercial DNA extraction kits. References External links Stat-Ease official website Software companies based in Minnesota Companies based in Minnesota Software companies of the United States 1982 establishments in the United States 1982 establishments in Minnesota Software companies established in 1982 Companies established in 1982
38042649
https://en.wikipedia.org/wiki/University%20of%20South%20Wales
University of South Wales
The University of South Wales () is a public university in Wales, with campuses in Cardiff, Newport and Pontypridd. It was formed on 11 April 2013 from the merger of the University of Glamorgan and the University of Wales, Newport. The university is the second largest university in Wales in terms of its student numbers, and offers around 200 undergraduate and postgraduate courses. The university has three main faculties across its campuses in South Wales. History The university can trace its roots to the founding of the Newport Mechanics' Institute in 1841. The Newport Mechanics' Institute later become the University of Wales, Newport. In 1913 the South Wales and Monmouthshire School of Mines was formed. The school of mines was later to become the Polytechnic of Wales, before gaining the status of University of Glamorgan in 1992. The name for the new merged university was chosen following a research exercise amongst interested parties and announced in December 2012 by the prospective vice-chancellor of the university, Julie Lydon, who retired in 2021. In 2020 the university entered a strategic alliance with the University of Wales Trinity Saint David through a Deed of Association. A joint statement said that the two universities would be "working together on a national mission to strengthen Wales’ innovation capacity, supporting economic regeneration and the renewal of its communities",while retaining their autonomy and distinct identities. Notable dates 1841 Opening of Mechanics Institute, Newport 1913 Opening of South Wales and Monmouthshire School of Mines, Treforest 2013 Merger between the University of Glamorgan and the University of Wales, Newport 2014 Rowan Williams appointed Chancellor 2015 London Campus closes 2016 Caerleon Campus closes 2020 Dubai Campus closes Student numbers At formation it was reported that the university had more than 33,500 students from 122 countries and was then the sixth largest in the United Kingdom and the largest in Wales. Following the decline in student numbers reported by the HESA over the years since the formation of the university, for the academic year the University ranking was largest in the UK and the 2nd largest in Wales when measured by the numbers of students enrolled. Source:- The Higher Education Statistics Agency Organisation Associated organisations The university is part of the University of South Wales Group comprising the university, the Royal Welsh College of Music and Drama and the Merthyr Tydfil College. The university has a band of 106 partner colleges, universities, FE institutions or organisations, who deliver University of South Wales's higher education programmes or access courses in the UK and 18 other countries. Faculties The university has three faculties spread over its campuses in South East Wales. 
Faculty of Computing, Engineering and Science School of Computing and Mathematics School of Engineering School of Applied Sciences Faculty of Creative Industries Film and TV School Wales School of Drama and Music School of Art and Design South Wales Business School Faculty of Life Sciences and Education School of Psychology and Therapeutic Studies School of Education, Early Years and Social Work School of Health, Sport & Professional Practice School of Care Sciences The university has a film school, animation facilities, broadcasting studios, a photography school, a reputation for theatre design, poets, scriptwriters and authors as well as the national music and drama conservatoire, the Royal Welsh College of Music and Drama, as a wholly owned subsidiary. It offers a range of qualifications from further education to degrees to PhD study. As a Post 92 University it delivers a range of STEM subjects. Campuses The university has three main campuses located in South Wales: Cardiff The Faculty of Creative Industries is based at the Cardiff Campus, along with a smaller number of courses from the Faculty of Business and Society. The Atrium Building is the main building at the campus, originally opened by the University of Glamorgan in 2007 the building was recently extended at a cost of £14.7 million to replace the Caerleon campus. The building re-opened during September 2016. The campus also includes the Atlantic House building. Newport The university's newest campus is the £40 million campus on the west bank of the River Usk in Newport city centre. The 'City Campus' was built for the University of Wales, Newport and was opened in 2011 by Sir Terry Matthews. Originally built to house a variety of undergraduate and postgraduate courses for the Newport Business School, Newport Film School and the universities art and design department, it now hosts departments and courses from the Faculty of Life Sciences and Education, including teaching, social work and youth work as well as some courses in business together with the National Cyber Security Academy. Pontypridd This was formerly the main campus of the University of Glamorgan. Currently the university's largest campus, with a range of facilities, including an indoor sports centre and students' union. The campus is located in three parts:- 1) Treforest – Which hosts the School of Engineering, School of Computing and Mathematics and the South Wales Business School. The University's graduate school, main library and administrative departments are based on the Treforest site. 2) Glyntaff – Where nursing, science and sport departments are based. The campus is divided into Lower Glyntaff, where nursing is focused and Upper Glyntaff where Applied Sciences is based. The Alfred Russel Wallace building, named after the Welsh naturalist, is an impressive example of South Wales architecture, having been an Edwardian boys grammar school and built in typical dramatic style. 3) Tyn y Wern – The location of the University of South Wales' sport park. Former campuses Caerleon Caerleon is located on the northern outskirts of Newport. Formerly the second largest campus, it hosted a variety of undergraduate and postgraduate courses, including education, sports, history, fashion design, art and photography. The campus had extensive sports facilities, library, students' union shop and a students' union bar. It was formerly the main campus of the University of Wales, Newport. 
In 2014, it was announced by the University of South Wales that the Caerleon campus would close in 2016. The university cited the need to invest around £20 million to improve and upgrade facilities as the primary reason for its closure. The university relocated courses to the Newport City campus and the Cardiff Campus where it invested £14.7 million to extend and upgrade the Atrium building. The campus opened during 1914 and closed for the last time on 31 July 2016, after 102 years. The University is proposing to sell the campus for housing development but there is strong opposition to the planned re-development from local residents. The Caerleon Civic Society asked Cadw, the body that looks after historic monuments and buildings in Wales, to give the Edwardian main building Grade II Listed building status to save it from demolition. On 7 August 2016 the Welsh Government announced that they would recommend that the main building, gatehouses and gate-piers be listed as 'buildings of special architectural and historic interest'. The University of South Wales expressed their continued opposition to the proposed listing but the announcement was welcomed by local politicians and the Caerleon Civic Society. Grade II listing of the Main Building, the Principal's Residence, Gate Piers and Caretaker's / Gardener's Lodge was confirmed on 3 March 2017. Dubai, United Arab Emirates A new campus in Dubai was opened during September 2018 in Dubai South located near Al-Maktoum International Airport. The courses offered were British Bachelor degrees which include Aviation Maintenance Engineering and postgraduate courses including MSc International Logistics and Supply Chain Management. From September 2020 it was announced that the campus would not accept further applications and would close. In 2018 the University was criticised by human rights campaigners when it awarded honorary doctorates to two senior figures in the UAE government, Ahmed bin Saeed Al Maktoum and Nahyan bin Mubarak Al Nahyan, at the campus' opening ceremony. London In 2014, USW spent an estimated £300,000 developing a campus in the Docklands area of London, but in January 2015 cancelled the project before taking on any students. The university described this as a test of the market, but cited problems created by new UK visa regulations. Academic profile Awards The University of Wales, Newport received the 2013 Guardian Higher Education Award (with the University of Glamorgan) for widening participation through its Universities Heads of the Valleys Institute (UHOVI) initiative. The University of Glamorgan was recognised for providing outstanding student support, winning the 2012 Times Higher Award for Outstanding Support to Students. The vice-chancellor of the university, Julie Lydon, was appointed an OBE for services to higher education in Wales in the 2014 Queen's Birthday Honours. Rankings and reputation In 2017, the university entered the top five percent of universities in the world in the Times Higher Education World University Rankings. In the 2017 National Student Survey the University was placed equal 140 out of 149 universities and institutions surveyed. 
The Complete University Guide 2016/7 ranked the university as 99 out of 127 UK universities., however the ranking declined to 110 out of 129 UK Universities in 2017/8 The University came 35th in the 2017 What Uni Awards The University did not participate in the 2017 Teaching Excellence Framework which is a government assessment of the quality of undergraduate teaching in universities and other higher education providers. National Cyber Security Academy In 2016, the university launched its National Cyber Security Academy. This academy is a joint venture with industrial partners and Welsh Government and has been recognised by the UK's national security organisation GCHQ. Research The university is one of Wales's five major universities and a member of the St David's Day Group. Its precursor institutions have been recognised for producing some world-leading and internationally excellent research in specialist areas, such as mechanical, aeronautical & manufacturing engineering, social work, social policy & administration, education, history, art and design, nursing and midwifery, architecture and the built environment, English language and literature, communication, cultural & media studies, sports-related studies. The University has provided a partnership platform for think-tanks such as the Joseph Rowntree Foundation to develop debate on public policy reform in the UK. The Research Excellence Framework in 2014 concluded that the university's research output is 'world leading' or 'internationally excellent', placing the university's research strengths placed in the creative industries, social policy and criminology and sports and exercise science. Student life Students' Union University of South Wales Students' Union is the students' union of the university. It exists to support and represent the students of the university. It is a member-led organisation and all students are automatically members. Accommodation Pontypridd has halls of residence and facilities on its Treforest campus. Students studying at the university's Cardiff campus have access to private halls of residence, which are shared with the city's other universities. The Newport City building has nearby private student halls of residence. Notable alumni Artists and photographers Roger Cecil, painter, mixed media artist Maciej Dakowicz, photographer and photojournalist Ken Elias, artist Tracey Moberly, interdisciplinary artist Tish Murtha, documentary photographer Authors and creative writers Carole Bromley, poet Emma Darwin, novelist Philip Gross, poet, novelist, playwright and academic Paul Groves, poet Maria McCann, novelist Gareth L. 
Powell, science fiction author Dan Rhodes, writer Rachel Trezise, author Camilla Way, author Tine Wittler, writer and presenter Business and legal Joe Blackman, entrepreneur, Ambassador of The Princes Trust, CEO of Collection 26 Christopher Chung Shu-kun, BBS, JP, member of Hong Kong Legislative Council Trudy Norris-Grey, Microsoft Gemma Hallett, Entrepreneur and Founder of miFuture Film Gareth Evans, film director and screenwriter Philip John, director and screenwriter Kirk Jones, film director and screenwriter Asif Kapadia, film maker Justin Kerrigan, writer and director Teddy Soeriaatmadja, film director Peter Watkins-Hughes, BAFTA Cymru award-winning writer/director Scott Barley, film maker Healthcare professionals Sue Bale OBE, Director of South East Wales Academic Health Science Partnership Media personalities and performers Jayde Adams, comedian, actor, writer and singer Behnaz Akhgar, weather presenter Max Boyce MBE, entertainer Lorna Dunkley, newsreader and presenter Ben Green, comedy actor Harry Greene, television personality Mark Labbett, TV personality Nicola Miles-Wildin, performer Musicians Richard James Burgess, producer, musician, digital music innovator Martin Goldschmidt, co-founder and managing director of UK independent record label Cooking Vinyl Mike Howlett, musician and music producer Jon Maguire, songwriter and former member of duo Lilygreen & Maguire Sion Russell Jones, singer and songwriter Ian Watkins, singer from rock band Lostprophets Politicians Kevin Brennan, politician Suzy Davies Jill Evans, MEP for Wales Catherine Thomas Leanne Wood, party leader of Plaid Cymru and Welsh Assembly Group Leader Scientists Randii Wessen Brad Scottly Sports people Matthew Jarvis, rugby player Rupert Moon, rugby player and businessman Darren Morris, rugby player Gemma Hallett, rugby union player Jamie Robinson, rugby player Nigel Walker, former Olympian and rugby player for Wales, National Director at the English Institute of Sport References External links South Wales Education in Newport, Wales University Alliance 2013 establishments in Wales Educational institutions established in 2013 Universities established in the 21st century Chiropractic schools in the United Kingdom Universities and colleges formed by merger in the United Kingdom Organisations based in Newport, Wales Law schools in Wales Universities UK
57594760
https://en.wikipedia.org/wiki/9799%20Thronium
9799 Thronium
9799 Thronium, provisional designation: , is a large Jupiter trojan from the Greek camp and the parent body of a small, unnamed asteroid family , approximately in diameter. It was discovered on 8 September 1996, by American astronomer Timothy Spahr at the Catalina Station of the Steward Observatory near Tucson, Arizona, in the United States. The assumed C-type asteroid belongs to the 50 largest Jupiter trojans and has a relatively long rotation period of 21.52 hours. It was named for the ancient Greek city of Thronium mentioned in the Iliad. Orbit and classification Thronium is a dark Jovian asteroid orbiting in the leading Greek camp at Jupiter's Lagrangian point, 60° ahead of the Gas Giant's orbit in a 1:1 resonance (see Trojans in astronomy). It orbits the Sun at a distance of 4.9–5.4 AU once every 11 years and 10 months (4,320 days; semi-major axis of 5.19 AU). Its orbit has an eccentricity of 0.05 and an inclination of 31° with respect to the ecliptic. The body's observation arc begins in December 1986, with its first observation as at the Observatory of the University of St Andrews , Scotland, almost 10 years prior to its official discovery observation at Catalina Station. Parent of a small Trojan family Thronium is also the parent body of a small, unnamed asteroid family with the family identification number 006. The family seems to be young, compact and consist of only 7 known members. Only a few families have been identified among the Jovian asteroids; four of them in the Greek camp. This potentially collisional family was first characterized by Jakub Rozehnal and Miroslav Brož in 2014. The other members of this family include the unnamed Jovian asteroids , , , , and . Numbering and naming This minor planet was numbered on 8 December 1998 after its orbit had been sufficiently secured (). On 14 May 2021, the object was named by the Working Group Small Body Nomenclature (WGSBN), after the ancient Greek city of Thronium. In Greek mythology and mentioned in the Iliad (Catalogue of Ships), it was one of the places from which the Locrians joined the Achaeans. Physical characteristics is an assumed, carbonaceous C-type asteroid. Nesvorný does not give an overall spectral type for this unnamed family, but derives an albedo of 0.06 (see below), which is also typical for carbonaceous C-types. Rotation period In October 2009, a rotational lightcurve of was obtained from photometric observations by Stefano Mottola using a 1.2-meter telescope at the Calar Alto Observatory in Spain. Lightcurve analysis gave a longer-than average rotation period of 21.52 hours with a brightness amplitude of 0.16 magnitude (). Diameter and albedo According to the surveys carried out by the Infrared Astronomical Satellite IRAS, the Japanese Akari satellite and the NEOWISE mission of NASA's Wide-field Infrared Survey Explorer, measures between 64.87 and 72.42 kilometers in diameter and its surface has an albedo between 0.037 and 0.060. The Collaborative Asteroid Lightcurve Link derives an albedo of 0.0603 and a diameter of 65.06 kilometers based on an absolute magnitude of 9.6. References External links Long-term evolution of asteroid families among Jovian Trojans, Jakub Rozehnal and Miroslav Brož (2014) Asteroid Lightcurve Database (LCDB), query form (info ) Discovery Circumstances: Numbered Minor Planets (5001)-(10000) – Minor Planet Center 009799 Discoveries by Timothy B. Spahr Minor planets named for places Minor planets named from Greek mythology Named minor planets 19960908
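The figures quoted above can be reproduced with the standard relations used for such estimates: Kepler's third law gives the orbital period from the semi-major axis, and the usual diameter–albedo–magnitude relation D = (1329 km / √p_V) · 10^(−H/5) links absolute magnitude and geometric albedo to size. A minimal Python sketch, using only the published values cited above as inputs:

```python
import math

def diameter_km(H, albedo):
    """Standard asteroid size estimate from absolute magnitude H and geometric albedo."""
    return 1329.0 / math.sqrt(albedo) * 10 ** (-H / 5.0)

def orbital_period_years(a_au):
    """Kepler's third law for a body orbiting the Sun (semi-major axis in AU, period in years)."""
    return a_au ** 1.5

print(round(diameter_km(9.6, 0.0603), 1))    # ~65.1 km, matching the CALL-derived diameter
print(round(orbital_period_years(5.19), 2))  # ~11.82 years, i.e. about 11 years and 10 months
```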
30627354
https://en.wikipedia.org/wiki/Cyanippus
Cyanippus
In Greek mythology, the name Cyanippus (Ancient Greek: Κυάνιππος) may refer to: Cyanippus, son of Aegialeus and Comaetho, or else son of Adrastus and Amphithea and brother of Aegialeus. He fought in the Trojan War and was one of the men who entered the Trojan Horse. For a while, he ruled over Argos. He died childless and was succeeded by Cylarabes, son of Sthenelus. Cyanippus, son of Pharax, from Thessaly. He fell in love with the beautiful Leucone and married her, but he was so fond of hunting that he would not spend any time with his young wife. Leucone, suspecting her husband of infidelity, followed him to the woods to spy on him. Cyanippus' hounds scented her hiding in the thicket and, taking her for a wild animal, rushed at the woman and tore her to pieces. Cyanippus himself came up too late; he set up a funeral pyre for his wife, slew his dogs upon it and then killed himself. The story is similar to that of Cephalus and Procris. Cyanippus, a Syracusan who did not venerate Dionysus. The god punished him by making him drunk, in which state Cyanippus raped his own daughter Cyane. She managed to take a ring off the rapist's finger, so that she could recognize him later, and gave the ring to her nurse. Soon after that, the city was affected with plague, and the oracle of Apollo pronounced that there was an impious man in the city who was to be sacrificed in order to put an end to the calamity. Cyane was the only one to understand the prophecy. She grabbed her father by the hair, cut his throat and then killed herself in the same manner. Notes References Apollodorus, The Library with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes, Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1921. ISBN 0-674-99135-4. Online version at the Perseus Digital Library. Greek text available from the same website. Lucius Mestrius Plutarchus, Moralia with an English Translation by Frank Cole Babbitt. Cambridge, MA. Harvard University Press. London. William Heinemann Ltd. 1936. Online version at the Perseus Digital Library. Greek text available from the same website. Parthenius, Love Romances translated by Sir Stephen Gaselee (1882-1943), S. Loeb Classical Library Volume 69. Cambridge, MA. Harvard University Press. 1916. Online version at the Topos Text Project. Parthenius, Erotici Scriptores Graeci, Vol. 1. Rudolf Hercher. in aedibus B. G. Teubneri. Leipzig. 1858. Greek text available at the Perseus Digital Library. Pausanias, Description of Greece with an English Translation by W.H.S. Jones, Litt.D., and H.A. Ormerod, M.A., in 4 Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1918. . Online version at the Perseus Digital Library Pausanias, Graeciae Descriptio. 3 vols. Leipzig, Teubner. 1903. Greek text available at the Perseus Digital Library. Tryphiodorus, Capture of Troy translated by Mair, A. W. Loeb Classical Library Volume 219. London: William Heinemann Ltd, 1928. Online version at theoi.com Tryphiodorus, Capture of Troy with an English Translation by A.W. Mair. London, William Heinemann, Ltd.; New York: G.P. Putnam's Sons. 1928. Greek text available at the Perseus Digital Library. Kings of Argos Kings in Greek mythology People of the Trojan War Argive characters in Greek mythology Sicilian characters in Greek mythology Characters in Greek mythology Thessalian mythology
10236636
https://en.wikipedia.org/wiki/010%20Trojans
010 Trojans
The 010 Trojans (formerly known as the Rotterdam Trojans) are an American football team based in Rotterdam, the Netherlands. Founded in 1984, the Trojans are the second-oldest surviving team in the Netherlands, behind the Amsterdam Crusaders. Upon foundation, the Trojans immediately joined the national American football body at the time, the NAFF (Nederlandse American Football Federatie). After beginning in the second tier (NAFF Division One), the Trojans managed an undefeated championship season (12–0) in 1988 and gained promotion to the Premier Division. However, this step proved too much for the Trojans, who were immediately relegated. After a divisional reorganisation the Trojans moved back to the highest division, where they remain to this day. National Tulip Bowl titles were won in 1996 and 1997. Outside the NAFF, the Rotterdam Trojans also competed in the Eurobowl competitions in 1995 (losing in the first round to the Birmingham Bulls) and reached the Eurobowl B final in 1996, losing 7–0 in the final to the St. Gallen Seaside Vipers. In 1998 the Trojans joined the breakaway AFLN (American Football League Nederland) and won two AFLN "National" championships. In addition, participation in the Dutch-Belgian Benelux League saw Rotterdam pick up two Benelux Bowl titles in 1999 and 2000 to add to their first (non-official) win in 1995. In 2001 the NAFF and AFLN put aside their differences to merge into the current organisational structure: the AFBN (American Football Bond Nederland). A large rebuilding operation over the last 6 years has brought Rotterdam intermittent success, reaching the Tulip Bowl final in 2002 and the semi-finals several times. The Trojans have been very successful in recruitment recently and look to be on their way to becoming a challenger once more for the national championship of the Netherlands. In 2006 the Rotterdam Trojans returned to wearing their original team colours of green and white after several years of red and yellow. They played their home games at the renovated "City of Troje". In 2012 the Rotterdam Trojans filed for bankruptcy, due to financial mismanagement and fraud by the board then in office. A new club was founded and called the 010 Trojans (010 is the area code for the city of Rotterdam). The team, led by head coach Michel "Moose" Storm, won the Div. 3 title in their first season, and lost the semi-final a year later. When Storm stepped down after a 5–5 season in 2015, offensive coordinator Wouter van den Boogaard was promoted to head coach. Playing in the highest division in the Netherlands, the Trojans upset the Alpen Eagles 14–17 in the playoffs in Van den Boogaard's first year and advanced to Tulip Bowl XXXII, losing to the Amsterdam Crusaders 40–6. The following year saw the departure of head coach Wouter van den Boogaard and his staff and the hiring of new head coach Pascal Matla. The team duplicated their effort of the previous year, reaching the Tulip Bowl once again, only to lose once more to the Amsterdam Crusaders, this time by a score of 33–13. Head coach Pascal Matla left for health reasons, and the team had a down year the following season, failing to win a game and being relegated to the first division. See also List of American football teams in the Netherlands External links 010 Trojans Official Website American Football Bond Nederland Website Sports clubs in Rotterdam American football teams in the Netherlands American football teams established in 1984

2594707
https://en.wikipedia.org/wiki/List%20of%20assigned%20/8%20IPv4%20address%20blocks
List of assigned /8 IPv4 address blocks
Some large /8 blocks of IPv4 addresses, the former Class A network blocks, are assigned in whole to single organizations or related groups of organizations, either by the Internet Corporation for Assigned Names and Numbers (ICANN), through the Internet Assigned Numbers Authority (IANA), or a regional Internet registry. Each /8 block contains 256³ = 2²⁴ = 16,777,216 addresses, which covers the whole range of the last three delimited segments of an IP address. As IPv4 address exhaustion has advanced to its final stages, some organizations, such as Stanford University, formerly using 36.0.0.0/8, have returned their allocated blocks (in this case to APNIC) to assist in the delay of the exhaustion date. List of reserved /8 blocks List of assigned /8 blocks List of assigned /8 blocks to the United States Department of Defense List of assigned /8 blocks to the regional Internet registries The regional Internet registries (RIR) allocate IPs within a particular region of the world. Note that this list may not include current assignments of /8 blocks to all regional or national Internet registries. Original list of IPv4 assigned address blocks The original list of IPv4 address blocks can be found in RFC 790 (J. B. Postel, September 1981). In previous versions of the document (RFC 776, J. B. Postel, January 1981; RFC 750, J. B. Postel, 26 September 1978), network numbers were 8-bit numbers rather than the 32-bit numbers used in IPv4. RFC 790 also added three networks not listed in RFC 776: 42.rrr.rrr.rrr, 43.rrr.rrr.rrr, and 44.rrr.rrr.rrr. The relevant portion of RFC 790 is reproduced here with minor changes:
000.rrr.rrr.rrr Reserved [JBP]
001.rrr.rrr.rrr BBN-PR BBN Packet Radio Network [DCA2]
002.rrr.rrr.rrr SF-PR-1 SF Packet Radio Network [JEM]
003.rrr.rrr.rrr BBN-RCC BBN RCC Network [SGC]
004.rrr.rrr.rrr SATNET Atlantic Satellite Network [DM11]
005.rrr.rrr.rrr SILL-PR Ft. Sill Packet Radio Network [JEM]
006.rrr.rrr.rrr SF-PR-2 SF Packet Radio Network [JEM]
007.rrr.rrr.rrr CHAOS MIT CHAOS Network [MOON]
008.rrr.rrr.rrr CLARKNET SATNET subnet for Clarksburg [DM11]
009.rrr.rrr.rrr BRAGG-PR Ft. Bragg Packet Radio Net [JEM]
010.rrr.rrr.rrr ARPANET ARPANET [VGC]
011.rrr.rrr.rrr UCLNET University College London [PK]
012.rrr.rrr.rrr CYCLADES CYCLADES [VGC]
013.rrr.rrr.rrr Unassigned [JBP]
014.rrr.rrr.rrr TELENET TELENET [VGC]
015.rrr.rrr.rrr EPSS British Post Office EPSS [PK]
016.rrr.rrr.rrr DATAPAC DATAPAC [VGC]
017.rrr.rrr.rrr TRANSPAC TRANSPAC [VGC]
018.rrr.rrr.rrr LCSNET MIT LCS Network [DDC2]
019.rrr.rrr.rrr TYMNET TYMNET [VGC]
020.rrr.rrr.rrr DC-PR D.C. Packet Radio Network [VGC]
021.rrr.rrr.rrr EDN DCEC EDN [EC5]
022.rrr.rrr.rrr DIALNET DIALNET [MRC]
023.rrr.rrr.rrr MITRE MITRE Cablenet [APS]
024.rrr.rrr.rrr BBN-LOCAL BBN Local Network [SGC]
025.rrr.rrr.rrr RSRE-PPSN RSRE / PPSN [BD2]
026.rrr.rrr.rrr AUTODIN-II AUTODIN II [EC5]
027.rrr.rrr.rrr NOSC-LCCN NOSC / LCCN [KTP]
028.rrr.rrr.rrr WIDEBAND Wide Band Satellite Network [CJW2]
029.rrr.rrr.rrr DCN-COMSAT COMSAT Dist. Comp. Network [DLM1]
030.rrr.rrr.rrr DCN-UCL UCL Dist. Comp. Network [PK]
031.rrr.rrr.rrr BBN-SAT-TEST BBN SATNET Test Network [DM11]
032.rrr.rrr.rrr UCL-CR1 UCL Cambridge Ring 1 [PK]
033.rrr.rrr.rrr UCL-CR2 UCL Cambridge Ring 2 [PK]
034.rrr.rrr.rrr MATNET Mobile Access Terminal Net [DM11]
035.rrr.rrr.rrr NULL UCL/RSRE Null Network [BD2]
036.rrr.rrr.rrr SU-NET Stanford University Ethernet [MRC]
037.rrr.rrr.rrr DECNET Digital Equipment Network [DRL]
038.rrr.rrr.rrr DECNET-TEST Test Digital Equipment Net [DRL]
039.rrr.rrr.rrr SRINET SRI Local Network [GEOF]
040.rrr.rrr.rrr CISLNET CISL Multics Network [CH2]
041.rrr.rrr.rrr BBN-LN-TEST BBN Local Network Testbed [KTP]
042.rrr.rrr.rrr S1NET LLL-S1-NET [EAK]
043.rrr.rrr.rrr INTELPOST COMSAT INTELPOST [DLM1]
044.rrr.rrr.rrr AMPRNET Amateur Radio Experiment Net [HM]
See also Classless Inter-Domain Routing (CIDR) List of countries by IPv4 address allocation Notes References The authoritative up-to-date list of IANA assignments. Historical IP address lists: First version of IANA table with historical notes via the Internet Archive Wayback Machine. Last version of IANA table with historical notes via the Internet Archive Wayback Machine. Network addressing IPv4 IPv4 address blocks
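A short Python sketch, using only the standard-library ipaddress module, illustrates the arithmetic behind a /8 block described above: fixing the first octet leaves the remaining 24 bits free, giving 2²⁴ = 16,777,216 addresses. The example uses 36.0.0.0/8, the block formerly assigned to Stanford University; the probe addresses are arbitrary.

```python
import ipaddress

block = ipaddress.ip_network("36.0.0.0/8")            # former Stanford Class A block
print(block.num_addresses)                            # 16777216 == 2**24
print(ipaddress.ip_address("36.255.1.2") in block)    # True: only the first octet is fixed
print(ipaddress.ip_address("37.0.0.1") in block)      # False: different /8 block
```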
962678
https://en.wikipedia.org/wiki/ESET%20NOD32
ESET NOD32
ESET NOD32 Antivirus, commonly known as NOD32, is an antivirus software package made by the Slovak company ESET. ESET NOD32 Antivirus is sold in two editions, Home Edition and Business Edition. The Business Edition packages add ESET Remote Administrator allowing for server deployment and management, mirroring of threat signature database updates and the ability to install on Microsoft Windows Server operating systems. History NOD32 The acronym NOD stands for Nemocnica na Okraji Disku ("Hospital at the end of the disk"), a pun related to the Czechoslovak medical drama series Nemocnice na kraji města (Hospital at the End of the City). The first version of NOD32 - called NOD-ICE - was a DOS-based program. It was created in 1987 by Miroslav Trnka and Peter Paško at the time when computer viruses started to become increasingly prevalent on PCs running DOS. Due to the limitations of the OS (lack of multitasking among others) it didn't feature any on-demand/on-access protection nor most of the other features of the current versions. Besides the virus scanning and cleaning functionality it only featured heuristic analysis. With the increasing popularity of the Windows environment, advent of 32-bit CPUs, a shift in the PC market and increasing popularity of the Internet came the need for a completely different antivirus approach as well. Thus the original program was re-written and christened "NOD32" to emphasize both the radical shift from the previous version and its Win32 system compatibility. Initially the program gained popularity with IT workers in Eastern European countries, as ESET was based in Slovakia. Though the program's abbreviation was originally pronounced as individual letters, the worldwide use of the program led to the more common single-word pronunciation, sounding like the English word nod. Additionally, the "32" portion of the name was added with the release of a 32-bit version in the Windows 9x era. The company reached its 10000th update to virus definitions on June 25, 2014. Mail Security for Microsoft Exchange Server On March 10, 2010 ESET released ESET Mail Security for Microsoft Exchange Server, which contains both antimalware and antispam modules. It supports Microsoft Exchange 5.5, 2000, 2003, 2007 and 2010. Mobile Security ESET Mobile Security is the replacement for ESET Mobile Antivirus, which provided anti-malware and antispam functionality. ESET Mobile Security contains all the features of the older product and adds new anti-theft features such as SIM locking and remote wipe as well as a security audit and a firewall. Versions for Windows Mobile and Symbian OS were available as of September 2010, for both home and enterprise users. Remote Administrator ESET Remote Administrator is a central management console designed to allow network administrators to manage ESET software across a corporate network. Smart Security On November 5, 2007, ESET released an Internet security suite, ESET Smart Security version 3.0, to compete with other security suites by other companies such as McAfee, Symantec, AVG and Kaspersky. ESET Smart Security incorporates anti-spam and a bidirectional firewall along with traditional anti-malware features of ESET NOD32 Antivirus. On March 2, 2009, ESET Smart Security version 4.0 was released, adding integration of ESET SysInspector; support for Mozilla Thunderbird and Windows Live Mail; a new self-defense module, an updated firewall module, ESET SysRescue and a wizard for creating bootable CD and USB flash drives. 
There were initially compatibility problems between ESET Smart Security 4.0 and Windows Vista Service Pack 2, but these were remedied by an update. On August 17, 2010, ESET Smart Security version 4.2 was released with new features, enhancements and changes. On September 14, 2011, ESET Smart Security version 5.0 was released. On January 15, 2013, ESET Smart Security version 6.0 was released. This version included an Anti-Theft feature for tracking lost, misplaced or stolen laptops. On October 16, 2013, ESET Smart Security version 7.0 was released. It offers enhanced operating-memory scanning and blocks misuse of known exploits. On October 2, 2014, ESET Smart Security version 8.0 was released. It adds exploit blocking for Java and botnet protection. On October 13, 2015, ESET Smart Security version 9.0 was released. SysInspector ESET SysInspector is a diagnostic tool which allows in-depth analysis of various aspects of the operating system, including running processes, registry content, startup items and network connections. Anti-Stealth Technology is used to discover hidden objects (rootkits) in the Master Boot Record, boot sector, registry entries, drivers, services and processes. SysInspector logs are standard XML files and can be submitted to IT experts for further analysis. Two logs can be compared to find the set of items not common to both. A log file can be saved as a service script for removing malicious objects from a computer. SysRescue Live ESET SysRescue Live is a Linux-based bootable Live CD/USB image that can be used to boot and clean heavily infected computers independently of the installed operating system. The program is offered free of charge, and can download updates if a network connection is present. Other programs ESET has released free standalone removers for malware when they are widespread, such as Mebroot. Development File Security for Microsoft Windows Server On June 1, 2010, the first release candidate for ESET File Security for Microsoft Windows Server v4.3 was made available to the public. This program is an updated version of ESET NOD32 Antivirus Business Edition designed for Microsoft Windows Server operating systems and contains a revised user interface, automatic exclusions for critical directories and files, and unspecified optimizations for operation on servers. Mobile Security On April 22, 2010, ESET Mobile Security for Windows Mobile and Symbian OS went into public beta. The Home Edition was released on September 2, 2010, and on January 20, 2011, the Business Edition went into beta. On April 29, 2011, ESET released a beta test version for Android. On August 10, 2011, the release candidate was made available. NOD32 for Mac OS X and Linux Desktop On December 2, 2009, ESET NOD32 Antivirus 4 for Mac OS X Desktop and ESET NOD32 Antivirus 4 for Linux Desktop were released for public testing. ESET stated the release automatically detects and cleans cross-platform malware, scans archives, automatically scans removable media such as USB flash drives when mounted, performs real-time scanning, provides reports and offers a GUI similar to the Microsoft Windows version. The second beta test versions were released on January 9, 2010, and the third on June 10, 2010. On September 13, 2010, ESET released ESET NOD32 Antivirus for Mac OS X Business Edition
and announced a release candidate for ESET Cybersecurity for Mac OS X. On September 24, 2010, ESET released a Release Candidate for ESET Cybersecurity for Mac OS X, and on January 21, 2011, ESET released a Release Candidate for ESET NOD32 Antivirus for Linux Desktop. Smart Security On May 5, 2011, ESET released a beta test version of ESET Smart Security 5.0. The beta version adds parental control, a cloud-based file reputation service, gamer mode, HIPS and improvements to its antispam, firewall and removable media control functions. On June 14, 2011, ESET released a release candidate for ESET Smart Security version 5.0. On August 5, 2014, ESET Smart Security version 8.0 public beta 1 was released. It offers enhanced exploit blocking and botnet detection. Discontinued products Mobile Antivirus ESET Mobile Antivirus was aimed at protecting smartphones from viruses, spyware, adware, trojans, worms, rootkits, and other unwanted software. It also provided antispam filtering for SMS messages. Versions for Windows Mobile and Symbian OS were available. ESET discontinued ESET Mobile Antivirus in January 2011 and provides ESET Mobile Security as a free upgrade to licensed users of ESET Mobile Antivirus. NOD32 Antivirus v2.7 and older On 1 February 2010, ESET discontinued version 2.7 of NOD32 Antivirus and all previous versions of NOD32 Antivirus. They were removed from the ESET website, including product pages and e-Store. Version 2.7 was the last version supporting the Microsoft Windows 95/98/ME and Novell NetWare operating systems. Virus signature database updates and customer support were discontinued on February 1, 2012. Technical information On a network, NOD32 clients can update from a central "mirror server" on the network. Reception NOD32 Antivirus holds ICSA Labs certifications. NOD32 has accumulated one hundred eleven VB100 awards from Virus Bulletin; it has only failed to receive this award three times. In a comparative report that Virus Bulletin published on 2 September 2008, NOD32 detected 94.4% of all malware and 94.7% of spyware. It stood above competitors like Norton Internet Security and ZoneAlarm but below Windows Live OneCare and Avira AntiVir. In the RAP averages quadrant between December 2011 and June 2012, Virus Bulletin found that ESET remained at roughly the same level, about 94%, and was noted for its ability to block spam and phishing, earning an award that only 19 other antivirus companies were able to acquire. On 28 April 2008, Robert Vamosi of CNET.com reviewed version 3.0 of NOD32 and gave it a score of 3.5/5. On 6 March 2009, Seth Rosenblatt of Download.com reviewed the 4.0 version of NOD32 and gave it a rating of 4.6/5. On 15 September 2011, Seth Rosenblatt of CNET reviewed the 5.0 version of NOD32 and gave it a rating of 5/5. See also Antivirus software List of antivirus software Comparison of computer viruses References External links Virus Radar, a service run by ESET using NOD32 statistics The official German ESET Support Forum Antivirus software MacOS security software Linux security software Windows security software Computer security software
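The SysInspector log comparison described above amounts to taking the symmetric difference of two system snapshots: items present in only one of the two logs. A hypothetical Python sketch; the entries and their (category, item) shape are invented for illustration and are not ESET's actual XML log schema.

```python
# Two hypothetical snapshot logs, reduced to sets of (category, item) pairs.
snapshot_before = {("process", "explorer.exe"), ("service", "Spooler"), ("startup", "updater.exe")}
snapshot_after  = {("process", "explorer.exe"), ("service", "Spooler"), ("startup", "dropper.exe")}

# Symmetric difference: the set of items not common to both logs.
difference = snapshot_before ^ snapshot_after
print(difference)   # {('startup', 'updater.exe'), ('startup', 'dropper.exe')}
```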
1053555
https://en.wikipedia.org/wiki/University%20of%20Klagenfurt
University of Klagenfurt
The University of Klagenfurt ( or Alpen-Adria-Universität Klagenfurt, AAU) is a federal Austrian research university and the largest research and higher education institution in the state of Carinthia. It has its campus in Klagenfurt. Originally founded in 1970 and relaunched in 1993, the university today holds faculties of humanities & social sciences, management & economics, technical sciences, and interdisciplinary studies. It is listed in the ARWU, THE, and QS global rankings and holds rank 48 worldwide in THE's Young University Rankings 2021. The university has defined two research priority areas, Networked and Autonomous Systems and Social Ecology (until 2018, transferred to BOKU Vienna), with the latter spawning three ERC Grants. It has launched a new initiative, Humans in the Digital Age (HDA), in 2019, hosting an ERC Grant on cybersecurity. It also holds a number of central facilities such as the Robert Musil Institute (co-organizer of the Bachmann Prize), the Karl Popper Kolleg (an Institute for Advanced Study), the University Cultural Centre (UNIKUM), the build! Gründerzentrum (a start-up facilitation center), the University Sports Centre (USI), and the Klagenfurt University Library. Oliver Vitouch, a cognitive psychologist and former faculty member of the University of Vienna and the Max Planck Institute for Human Development in Berlin, is the university's Vice-Chancellor. Larissa Krainer chairs the Academic Senate; Werner Wutscher, Secretary General of the European Forum Alpbach, is chairman of the University Council. The University of Klagenfurt is situated 30 km from the Slovenian and 60 km from the Italian border and supports bi- and multilingualism, especially in the context of the Slovenian minority in Carinthia. Together with the Free University of Bozen-Bolzano (Italy) and the University of Fribourg (Switzerland), it is among the three southernmost universities in the German-speaking world. History With the Protestant collegium sapientiae et pietatis founded in 1552, Klagenfurt hosted one of the oldest gymnasiums in Austria (today's Europagymnasium), directed by Hieronymus Megiser from 1593 to 1601, but had no ancient university tradition. In 1970, the Austrian parliament passed a federal law allowing the establishment of an Educational Science College in Klagenfurt. The first doctoral degree was conferred in 1972. In 1975, new laws on higher education came into force, with the name of the college being changed into Universität für Bildungswissenschaften (University of Educational Sciences). In 1993, a fundamental relaunch took place: The institution's name was changed to Universität Klagenfurt (University of Klagenfurt), and a Faculty of Humanities and a (new) Faculty of Economics, Business Administration, and Informatics were inaugurated. The Faculty of Interdisciplinary Studies was inaugurated in 2004. The university adopted the official cognomen Alpen-Adria-Universität Klagenfurt in 2004 (with its legal name still being Universität Klagenfurt). It was extended with a fourth, technical sciences faculty in 2007 (with a focus on Informatics, Information Technology, and Networked & Autonomous Systems), engaging in research operations in collaboration with the Lakeside Science & Technology Park. In 2012, the number of students passed the 10,000 mark. On occasion of the institution's 40th anniversary, a Boat Race was held on Lake Wörth in 2010. 
Klagenfurt's Eight won against the University of Vienna by a boat length on a sprinting distance from the rowing clubs to Maria Loretto castle. In 2015, the university established commencement speeches at its graduation ceremonies. Among the speakers so far are Sabine Herlitschka, Josef Winkler, August-Wilhelm Scheer, and Johanna Rachinger. Presidents of Austria Heinz Fischer (formerly) and Alexander Van der Bellen (incumbent) are recurrent guests at doctoral graduations sub auspiciis Praesidentis. In 2020, the university celebrated its institutional 50 year jubilee. This included a lecture series together with the Austrian Academy of Sciences, Utopia! Is the world out of joint? Contributions to the art of Enlightenment opened by Barbara Stollberg-Rilinger and the bestowal of an honorary doctorate to Rae Langton. Several further jubilee events were virtualized or postponed due to the COVID-19 pandemic. On Nov 22, 2020, the Austrian Broadcasting Corporation showed the TV documentary Humans in the Digital Age: University of Klagenfurt—50th Anniversary. The jubilee exhibition ARTEFICIA was postponed to autumn 2021. It shows unique exhibits from honorary doctors Manfred Bockelmann, Michael Guttenbrunner, Maja Haderlap, Peter Handke, Maria Lassnig, Valentin Oman, Wolfgang Puschnig, Peter Turrini, and Josef Winkler. Technological developments of the University of Klagenfurt—leading contributions to the navigation system of the robotic helicopter Ingenuity—are part of NASAs Mars 2020 mission (Mars landing on 18 Feb 2021, maiden flight of the helicopter on 19 Apr 2021). Campus With its suburban setting, the university campus is in walking distance of both the renaissance-dominated historic city centre of Klagenfurt (capital of the state of Carinthia) and the east bay of the Wörthersee, a renowned Austrian summer resort. Also hiking, climbing and skiing possibilities in the Austrian Alps are nearby. Together with the adjacent Lakeside Science & Technology Park, a 60 acres start-up and spin-off park, the university campus forms the so-called Lakeside District. From 2016 to 2018, the university's central and north wing (13,000 m2) were fully refurbished with a budget of €26 million. As a result, the university was shortlisted for the Prix Versailles – Campuses 2019 (under UNESCO patronage), together with buildings of the University of Chicago in Hongkong, Barnard College, Stanford University, SPA Vijayawada, and Skoltech, which won the competition. Faculties and departments Faculty of Humanities The Faculty of Humanities currently encompasses 11 departments and a faculty centre. Their common ambition, beyond doing discipline-specific research, is to foster multilingualism and intercultural education, with particular emphasis on the Alps-Adriatic region. Department of English and American Studies Department of Cultural Analysis Department of Educational Sciences and Research Department of German Studies Department of History Department of Media and Communications Science Department of Philosophy Department of Psychology Department of Romance Studies Department of Slavonic Studies Robert Musil Institute for Literary Studies Faculty Centre for Sign Language and Communication of the Hearing Impaired Faculty of Management and Economics The Faculty of Management and Economics has a focus on applied business management while fostering interdisciplinary links with law, sociology, economics and application-oriented geography. 
Within these disciplines, the faculty concentrates on areas of research and development, teaching and consulting in fields where cultural, business and social factors interact. Department of Financial Management Department of Geography and Regional Studies Department of Innovation Management and Entrepreneurship Department of Public, Nonprofit, and Health Management Department of Organization, Human Resources, and Service Management Department of Operations, Energy, and Environmental Management Department of Law Department of Sociology Department of Business Management Department of Economics Faculty of Technical Sciences The Faculty of Technical Sciences is dedicated to research and training in the fields of informatics, information technology and technical mathematics. The faculty was founded in January 2007 and superseded the Faculty of Economics, Business Administration and Informatics as well as a newly established Department for information and communication technology. The faculty is headed by Dean Gerhard Friedrich (informatics) and Vice-Dean Clemens Heuberger (mathematics). It is organized into nine departments and offers four bachelor's degree programs, four master's degree programs, two teacher training degree programs and two doctoral programs. Department of Artificial Intelligence and Cybersecurity Department of Informatics Education Department of Informatics Systems Department of Information Technology Department of Mathematics Department of Mathematics Education Department of Networked and Embedded Systems Department of Smart Systems Technologies Department of Statistics The research cluster "self-organizing networked systems" closely collaborates with the research institute Lakeside Labs. Faculty of Interdisciplinary Studies The Faculty of Interdisciplinary Studies develops, tests and evaluates innovative ideas in the academic fields of research, training and organization. The objective of the faculty is to tackle prevailing social problem areas by creating adequate research and learning processes. Department of Instructional and School Development Department of Science and Technology Studies Department of Science Communication and Higher Education Research University centres Centre for Women's and Gender Studies Digital Age Research Centre (D!ARC) Karl Popper Kolleg M/O/T – School of Management, Organizational Development and Technology School of Education (SoE) UNIKUM (University Cultural Centre) Partnerships As of 2021, the University of Klagenfurt has strategic partnerships with the Austrian Academy of Sciences, the Ca' Foscari University of Venice, the Fraunhofer Austria Society, and with Silicon Austria Labs (SAL). It offers joint study programs with the Universities of Vienna, Graz, Udine, La Rochelle, and the Poznań University of Technology. Student mobility partnerships via Erasmus+ and other exchange programs exist with over 250 universities in more than 50 countries worldwide. At the beginning of 2022, the University of Klagenfurt joined YERUN, a European network of young research-intensive universities headquartered in Brussels. Rankings The THE World University Rankings 2022 list the University of Klagenfurt in the 351–400 group. This is the 2nd best rank of an Austrian university with a broader spectrum of studies, second only to the University of Vienna (137) and the Medical Universities of Vienna, Graz, and Innsbruck. THE uses the field-weighted citation impact, considering the different range of fields between universities. 
In the THE Young University Rankings 2021, Klagenfurt holds rank 48 worldwide. From the STEM fields, the University of Klagenfurt has technology, engineering, and mathematics in its spectrum, but not any classic sciences or life sciences, which is a handicap in the other large global university rankings. Still, it is listed in the 501–510 group in the QS World University Rankings, which aim to rank the 1,300 best universities in the world (out of > 26,000), ahead of the University of Graz (651–700) and the University of Salzburg (801–1,000). The University of Klagenfurt also ranks in the Academic Ranking of World Universities (Shanghai Ranking) since 2019 (901–1,000), and in U-Multirank since 2017 (honorable mention in 2021). At the Global Student Satisfaction Awards 2021, provided by Studyportals, the University of Klagenfurt came off as global winner for the best COVID-19 Crisis Management. Honorary doctors Hans Albert (2007) Manfred Bockelmann (2013) Joseph Buttinger (1977) Karl Corino (2014) Peter Eichhorn (2003) Helmut Engelbrecht (1998) Hertha Firnberg (1980) Adolf Frisé (1982) Gerda Fröhlich (1995) Manfred Max Gehring (1992) Ernst von Glasersfeld (1997) Georg Gottlob (2016) Michael Guttenbrunner (1994) Maja Haderlap (2012) Peter Handke (2002) Adolf Holl (2000) Johannes Huber (2017) Sigmund Kripp (1998) Rae Langton (2020) Maria Lassnig (1999 / 2013) Claudio Magris (1995) Ewald Nowotny (2008) Valentin Oman (1995) Paul Parin (1995) Wolfgang Petritsch (2013) Theodor Piffl-Perčević (1977) Janko Pleterski (2005) Wolfgang Puschnig (2004) Josef Rattner (2006) Siegfried J. Schmidt (2004) Klaus Tschira (1995) Peter Turrini (2010) Oswald Wiener (1995) Horst Wildemann (2003) Josef Winkler (2009) References External links Universities and colleges in Austria Educational institutions established in 1970 Buildings and structures in Carinthia (state) Education in Carinthia (state) 1970 establishments in Austria
711201
https://en.wikipedia.org/wiki/Snake%20oil%20%28cryptography%29
Snake oil (cryptography)
In cryptography, snake oil is any cryptographic method or product considered to be bogus or fraudulent. The name derives from snake oil, one type of patent medicine widely available in 19th century United States. Distinguishing secure cryptography from insecure cryptography can be difficult from the viewpoint of a user. Many cryptographers, such as Bruce Schneier and Phil Zimmermann, undertake to educate the public in how secure cryptography is done, as well as highlighting the misleading marketing of some cryptographic products. The Snake Oil FAQ describes itself as, "a compilation of common habits of snake oil vendors. It cannot be the sole method of rating a security product, since there can be exceptions to most of these rules. [...] But if you're looking at something that exhibits several warning signs, you're probably dealing with snake oil." Some examples of snake oil cryptography techniques This is not an exhaustive list of snake oil signs. A more thorough list is given in the references. Secret system Some encryption systems will claim to rely on a secret algorithm, technique, or device; this is categorized as security through obscurity. Criticisms of this are twofold. First, a 19th century rule known as Kerckhoffs's principle, later formulated as Shannon's maxim, teaches that "the enemy knows the system" and the secrecy of a cryptosystem algorithm does not provide any advantage. Second, secret methods are not open to public peer review and cryptanalysis, so potential mistakes and insecurities can go unnoticed. Technobabble Snake oil salespeople may use "technobabble" to sell their product since cryptography is a complicated subject. "Unbreakable"Claims of a system or cryptographic method being "unbreakable" are always false (or true under some limited set of conditions), and are generally considered a sure sign of snake oil. "Military-grade" There is no accepted standard or criterion for "military-grade" ciphers. One-time pads One-time pads are a popular cryptographic method to invoke in advertising, because it is well known that one-time pads, when implemented correctly, are genuinely unbreakable. The problem comes in implementing one-time pads, which is rarely done correctly. Cryptographic systems that claim to be based on one-time pads are considered suspect, particularly if they do not describe how the one-time pad is implemented, or they describe a flawed implementation. Unsubstantiated "bit" claims Cryptographic products are often accompanied with claims of using a high number of bits for encryption, apparently referring to the key length used. However key lengths are not directly comparable between symmetric and asymmetric systems. Furthermore, the details of implementation can render the system vulnerable. For example, in 2008 it was revealed that a number of hard drives sold with built-in "128-bit AES encryption" were actually using a simple and easily defeated "XOR" scheme. AES was only used to store the key, which was easy to recover without breaking AES. References External links Beware of Snake Oil — by Phil Zimmermann Google Search results for "The Doghouse" in Bruce Schneier's Crypto-Gram newsletters — the Doghouse section of the Crypto-Gram newsletter frequently describes various snake oil encryption products, commercial or otherwise. Cryptography Pejorative terms
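The one-time-pad caveat above is easy to demonstrate: XOR with a truly random, message-length, never-reused key is unbreakable, but reusing the key, the typical implementation mistake, immediately leaks the XOR of the two plaintexts. A minimal Python sketch using the standard secrets module; the messages are illustrative only.

```python
import secrets

def otp_xor(data: bytes, key: bytes) -> bytes:
    """XOR the data with a key at least as long as the data."""
    assert len(key) >= len(data)
    return bytes(d ^ k for d, k in zip(data, key))

msg1 = b"ATTACK AT DAWN"
msg2 = b"RETREAT AT TEN"
key = secrets.token_bytes(len(msg1))   # correct use: random, message-length, used once

c1 = otp_xor(msg1, key)
c2 = otp_xor(msg2, key)                # key reuse -- the classic implementation flaw

# The key cancels out, so an eavesdropper learns msg1 XOR msg2 without knowing the key.
leak = bytes(a ^ b for a, b in zip(c1, c2))
assert leak == bytes(a ^ b for a, b in zip(msg1, msg2))
```

The same sketch also illustrates the "unsubstantiated bit claims" point: the security here comes from how the key is generated and handled, not from quoting a key length.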
40171309
https://en.wikipedia.org/wiki/Aquaveo
Aquaveo
Aquaveo is a modeling software company based in Provo, Utah that develops software used to model and simulate groundwater, watershed, and surface water resources. Its main software products include SMS, GMS, WMS, and Arc Hydro Groundwater. History The Engineering Computer Graphics Laboratory (ECGL) was established in 1985 at Brigham Young University (BYU). Simulation data produced in the lab was critical to Anderson v. Cryovac, Inc., a 1986 federal lawsuit concerning toxic contamination of groundwater in Woburn, Massachusetts. After using another BYU product—Movie BYU—to animate a hydrology problem to make it easier to understand, Norm Jones decided to start work on software that helped engineers visualize problems and solutions. Use of the tools created by Jones allowed the U.S. Army Corps of Engineers to save "hundreds of thousands of dollars". Alan Zundel and James Nelson, two other professors at BYU, also became involved in the process of developing the software in 1991. By 2002, the company's software was being used in "more than 100 countries and [by] more than 9,000 organizations." Royd Nelson created Environmental Modeling Systems (EMSI), a private company, in October 1995 to distribute the software created in the ECGL. The name of the lab was changed in September 1998 to the Environmental Modeling Research Laboratory (EMRL). The government of India used early versions of the software developed by the EMRL to help with finding sources of clean groundwater, setting groundwater policy, and setting a course of action to clean up the contaminated groundwater in their country. State and local governments in Utah and California have used the software to help trace contaminant sources at farms and mining facilities, and to determine how to manage water resources in drier climates like the Los Angeles area. The planners of the 2002 Winter Olympics, held in Salt Lake City, Utah, used the Watershed Modeling System (WMS) software to simulate terrorist attacks on water infrastructure such as the Jordanelle Reservoir. In April 2007, a private company named Aquaveo was created to develop the work of the EMRL as commercial products, and the main people at the lab moved to the new company. EMSI and Aquaveo merged in October 2008, keeping the company name of "Aquaveo". Local and federal government agencies, including the US Army Corps of Engineers, the US Federal Highway Administration, Los Angeles County, the USGS, the US Department of Energy, and the USEPA, have software and consulting contracts with Aquaveo. In 2011, the government of Australia used Arc Hydro Groundwater to help in developing a national groundwater information system. Arc Hydro was used to convert bore and construction log data and to create geovolumes from georasters. Aquaveo worked with the Office of Naval Research, the Carderock Division of the Naval Surface Warfare Center, and several universities and companies to create the Environmental and Ship Motion Forecasting system. This system was designed to "provide sea-based forces with new capabilities for difficult operations like ship-to-ship transfer of personnel, vehicles or material-giving operators sea condition information at levels of accuracy never possible before". Products Aquaveo is primarily a software development company for water modeling that allow water to be modeled in most situations: watershed, rivers, lakes, ocean tides, flooding, and so on. Its flagship products are SMS and GMS, which are used by municipalities and universities around the world. 
It also produces WMS and Arc Hydro Groundwater. Their software is used by "over 12,000 firms, government institutions, and universities in over 120 countries". Arc Hydro Groundwater Arc Hydro Groundwater allows for managing groundwater and subsurface data within ArcGIS. The software was created in cooperation with ESRI to allow groundwater and subsurface analysis, as well as using MODFLOW to analyze results. CityWater CityWater is a cloud-based water distribution management tool. It uses EPANET model files to allow users via any modern browser to access current information on a water distribution system such as a municipal water system. GMS The Groundwater Modeling System (GMS) is a computer application designed to build and simulate groundwater models. It uses 2D and 3D geostatistics and stratigraphic modeling to show how water and contaminants can move through various soil structures. The software supports many standard models, including MODFLOW, MODPATH, MT3DMS, RT3D, FEMWATER, SEEP2D, and UTEXAS. SMS The Surface-water Modeling System (SMS) is an application used for building and simulating surface water models within the hydrological cycle, including river and stream flow models, flooding, and sediment and particle flow in lakes and oceans. It features 1D and 2D conceptual modeling. The software supports standard models, including ADCIRC, CMS-FLOW2D, FESWMS, TABS, TUFLOW, BOUSS-2D, CGWAVE, STWAVE, CMS-WAVE (WABED), GENESIS, and PTM. WMS The Watershed Modeling System (WMS) is an application used for developing watershed simulations of river hydraulics, municipal storm drain systems, floodplains, and watersheds. WMS supports lumped parameter, regression, and 2D hydrologic modeling of watersheds, and can be used to model both water quantity and water quality. It supports standard models such as HEC-1, HEC-RAS, HEC-HMS, TR-20, TR-55 hydrologic model, National Flood Frequency Model, rational hydrologic model, MODRAT, HSPF, CE-QUAL-W2, GSSHA, and SMPDBK. XMDF XMDF (eXtensible Model Data Format) is a library providing a standard format for the geometric data storage of river cross-sections, 2D/3D structured and unstructured meshes, geometric paths through space, and associated time data. XMDF uses HDF5 for cross-platform data storage and compression. API includes interfaces for C/C++ and Fortran. Associations Aquaveo is a member of the American Water Resources Association. References Companies based in Provo, Utah Engineering companies of the United States Environmental research Hydrology and urban planning Software companies based in Utah Water resource management in the United States 2007 establishments in Utah Software companies of the United States
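As a rough illustration of the kind of storage XMDF builds on, the sketch below writes a tiny unstructured mesh to an HDF5 file with the h5py package. It shows generic HDF5 usage (groups, compressed datasets, attributes), not the actual XMDF group layout; the group and dataset names are invented for the example.

```python
import numpy as np
import h5py

# Toy unstructured mesh: 4 nodes and 2 triangles (node indices into the coordinate array).
nodes = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 1.0, 0.0], [0.0, 1.0, 0.5]])
triangles = np.array([[0, 1, 2], [0, 2, 3]], dtype=np.int32)

with h5py.File("mesh_example.h5", "w") as f:
    grp = f.create_group("Mesh2D")                                     # illustrative name only
    grp.create_dataset("NodeCoordinates", data=nodes, compression="gzip")
    grp.create_dataset("Elements", data=triangles, compression="gzip")
    grp.attrs["Description"] = "Illustrative mesh layout, not the XMDF schema"
```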
1258112
https://en.wikipedia.org/wiki/RETAIN
RETAIN
RETAIN is a mainframe based database system, accessed via IBM 3270 terminals (or more likely, emulators), used internally within IBM providing service support to IBM field personnel and customers. The acronym RETAIN stands for Remote Technical Assistance Information Network. Predecessor system Historically, two different, but similar, systems were called RETAIN. The first, dating to the mid-1960s was a system that provided technical information to people in the IBM Field Engineering Division in the form of short bulletins or tips, organized according to machine type number or, for software, according to software component ID number. This information was accessible using simple query commands from IBM service branch office terminals. The terminals supported by this early RETAIN system were typewriter-type terminals, such as the IBM 2740. These same terminals were also used to access the IBM Field Instruction System (FIS), which provided education in the form of programmed instruction courseware. The RETAIN system was built on the same software framework as that of FIS. In fact, most of the early support for RETAIN was actually written in the language of a "course". The system was primarily used to provide field support for the System/360 family of mainframe systems, although it was used also to disseminate some technical information on other older systems. RETAIN/370 In 1970, concurrent with the announcement of System/370, the next generation of mainframes after System/360, a new system was announced, called RETAIN/370. This system was designed for use by special Technical Support Centers located in regional centers, rather than by the branch office. This new system was designed to support display terminals, rather than the old typewriter-based ones. A special version of the 2915 display, originally designed for the airline reservations systems, such as SABRE, was used. The 2915 was a small keyboard-display driven by a large electronic controller and data interchange unit, the IBM 2948. Each 2948 could control up to 31 display terminals, which had to be located within a few hundred feet. The cost of this display system, with its large controller, prevented the 2915 terminals from being used in branch offices, so they were used in regional support centers instead. The older RETAIN system continued to be used for several years afterwards, running in parallel with RETAIN/370, still connected to branch-office terminals. It was sometimes called the "RETAIN/360" system, although that designation was never formalized. In time, after RETAIN/370 became available via 3270 terminals in the branch offices, the old RETAIN system was phased out, and RETAIN/370 was renamed to simply RETAIN. Search engine RETAIN/370 ran special applications designed for technical support center use. Its most powerful feature was a full-text search engine, enabling most text documents in the system to be retrieved by using boolean search requests, similar in concept to full-text search engines in use today on the Internet, such as Google or AltaVista, although limited only to searching for individual words, or combinations of words, without reference to word-adjacency. RETAIN/370 was the first IBM system deployed on a large scale that had such a capability. The search engine component of RETAIN is called IRIS, for Interpretive Retrieval Information System (not to be confused with other non-IBM software systems of that name... IBM never sold this search engine as a product, so there was no trademark issue). 
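Word-level Boolean retrieval of the kind described for IRIS is conventionally built on an inverted index that maps each word to the set of documents containing it; with no stored word positions, adjacency queries are impossible, which matches the limitation noted above. A minimal Python sketch with invented sample "documents":

```python
from collections import defaultdict

documents = {
    1: "tape drive read error on model 3420",
    2: "printer paper jam recovery procedure",
    3: "read error recovery on disk drive",
}

# Build an inverted index: word -> set of document ids (individual words only, no positions).
index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.split():
        index[word].add(doc_id)

def search_and(*words):
    """Boolean AND over individual words, in the spirit of the retrieval described above."""
    sets = [index[w] for w in words]
    return set.intersection(*sets) if sets else set()

print(search_and("read", "error"))               # {1, 3}
print(search_and("read", "error", "recovery"))   # {3}
```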
Mirrored database In the mid-1970s, RETAIN was expanded to permit multiple copies of the database to be hosted on geographically distributed systems. RETAIN's custom-built Data Bank Manager, which served as the foundation for all RETAIN applications and for the IRIS search engine, was modified to support "mirroring", so that file updates took place automatically across the network in a manner nearly invisible to the application programs while providing a high level of data integrity. After this change, RETAIN hosts were created in two US locations, two in Europe, two in South America, and two in Japan. Most applications were developed by IBM programmers in Raleigh, NC (moved to Boulder, Colorado, in 1976), with some work being done in North Harbour, UK. Registered users of the system numbered in the thousands, in over 60 countries. Remote support At the time System/370 was announced, along with the corresponding RETAIN/370 system, IBM announced that the new family of computers would be equipped to permit remote diagnosis of hardware problems. Each System/370 installation of model 145 and above had a telecommunications adapter included, capable of being used for remote support. The hardware diagnostic programs were written to allow control, via a remote connection to applications on the RETAIN system, by IBM specialists located at the IBM support center in Chicago, managed by Paul Rushton, and also at the original plant of manufacture of the CPU. This form of support was dubbed "Data Link / Hardware". The connection was made through a communications device called an IBM 2955 adapter, a stripped-down variant of the 2701 communications controller. It could connect at 600 bit/s to the RETAIN system to run diagnostics. Mainly, this was used to run the same diagnostics that could be run locally by an IBM CE, but in time other specialized applications were developed, such as programs to analyze "logouts" generated by hardware malfunctions, i.e. "machine check" interruptions. In time, the concept of remote support was extended to software as well (about 1973 or 1974). Through a special application, an MVS system could be connected, via RETAIN, to an IBM support center, and memory dumps and other system data could be examined remotely. The application also permitted download of software fixes, or IBM Program Temporary Fixes. Although the 2955 only supported a 6-bit character code (similar to the 2740 terminal), binary transfer of memory dumps and software updates was accomplished through a protocol similar to the base-64 encoding scheme used today on the Internet for email attachments. Over the years, several projects have aimed to supplant RETAIN's functionality, but it has remained in use despite them. External links Database engines IBM software
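The encoding mentioned in the remote-support section, in its modern Base64 form, maps arbitrary bytes onto a small, transmission-safe character set at the cost of roughly one-third expansion. A short Python illustration using the standard base64 module; the "memory dump" here is just arbitrary bytes:

```python
import base64

dump_bytes = bytes(range(256))                            # arbitrary binary data
encoded = base64.b64encode(dump_bytes).decode("ascii")    # text-safe characters only
restored = base64.b64decode(encoded)

assert restored == dump_bytes
print(len(dump_bytes), len(encoded))   # 256 -> 344: about 4/3 expansion for a text-only channel
```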
1982312
https://en.wikipedia.org/wiki/SpaceWire
SpaceWire
SpaceWire is a spacecraft communication network based in part on the IEEE 1355 standard of communications. It is coordinated by the European Space Agency (ESA) in collaboration with international space agencies including NASA, JAXA, and RKA. Within a SpaceWire network, the nodes are connected through low-cost, low-latency, full-duplex, point-to-point serial links and packet-switching wormhole routers. SpaceWire covers two of the seven layers of the OSI model for communications: the physical layer and the data-link layer.
Architecture
Physical layer
SpaceWire's modulation and data formats generally follow the data-strobe encoding / differential ended signaling (DS-DE) part of IEEE Std 1355-1995. SpaceWire utilizes asynchronous communication and allows speeds between 2 Mbit/s and 200 Mbit/s, with an initial signalling rate of 10 Mbit/s. DS-DE is well favored because it describes modulation, bit formats, routing, flow control, and error detection in hardware, with little need for software. SpaceWire also has very low error rates, deterministic system behavior, and relatively simple digital electronics. SpaceWire replaced the old PECL differential drivers in the physical layer of IEEE 1355 DS-DE with low-voltage differential signaling (LVDS). SpaceWire also proposes the use of space-qualified 9-pin connectors. SpaceWire and IEEE 1355 DS-DE allow for a wider set of data-transmission speeds and some new features for automatic failover. The fail-over features let data find alternate routes, so a spacecraft can have multiple data buses and be made fault-tolerant. SpaceWire also allows the propagation of time interrupts over SpaceWire links, eliminating the need for separate time discretes.
Link layer
Each transmitted character starts with a parity bit and a data-control flag bit. If the data-control flag is a 0-bit, an 8-bit data character follows, transmitted least significant bit first. Otherwise, one of the control codes follows, including end of packet (EOP).
Network layer
Each network-level packet begins with one or more address bytes used for routing, followed by the packet's cargo and an end-of-packet marker. Addresses are either physical ones (0–31) or logical ones. The difference is that physical addresses are deleted from the packet header during routing, which provides path-based (hop-by-hop) routing along the path specified in the packet itself. Logical addresses may be deleted as well, depending on the router configuration.
Interconnection
Hardware devices may be connected either directly or via a SpaceWire router. In the former case, redundant pairs of devices are usually used to guarantee fail-safe operation, although the fail-over itself is handled by software. A SpaceWire router is usually a crossbar switch-type device operating in wormhole switching mode. This may also limit the speed of the communication to the lowest common link speed. Routing decisions are based on the programmed routing table and the leading contents of the incoming packet.
Uses
SpaceWire is used all around the globe. Its use began primarily in ESA projects, but it is currently used by NASA, JAXA, RKA, and many other organizations and companies. Some NASA projects using it include the James Webb Space Telescope, Swift's Burst Alert Telescope, the Lunar Reconnaissance Orbiter, LCROSS, the Geostationary Operational Environmental Satellite (GOES-R), and the SCaN Testbed, previously known as the Communications, Navigation, and Networking Reconfigurable Testbed (CoNNeCT). It has also been selected by the United States Department of Defense for Operationally Responsive Space.
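The data-strobe (DS) signalling described under Physical layer above can be sketched in a few lines. In DS encoding the data signal carries the bit values directly, while the strobe signal changes state whenever the data signal does not, so the receiver can recover a clock by XOR-ing the two lines. The Python sketch below is a minimal illustration of that principle only; it models the two signals as lists of levels and ignores electrical details, link initialisation and flow control.

def ds_encode(bits):
    """Return (data, strobe) signal levels for a bit sequence.
    The strobe toggles exactly when the data line does not change,
    so that data XOR strobe toggles on every bit period (a recoverable clock)."""
    data, strobe = [], []
    prev_data, prev_strobe = 0, 0
    for bit in bits:
        if bit == prev_data:
            prev_strobe ^= 1      # data unchanged -> toggle strobe
        prev_data = bit
        data.append(prev_data)
        strobe.append(prev_strobe)
    return data, strobe

def ds_recover_clock(data, strobe):
    """XOR of data and strobe yields a signal that toggles every bit period."""
    return [d ^ s for d, s in zip(data, strobe)]

bits = [1, 0, 0, 1, 1, 1, 0]
d, s = ds_encode(bits)
print(d)                       # the data line simply follows the bit values
print(s)                       # the strobe line toggles only when data does not
print(ds_recover_clock(d, s))  # alternates 1,0,1,0,... - one transition per bit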
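The path (physical) addressing behaviour described under Network layer above, in which a router consumes the leading address byte of a packet and forwards the remainder out of the indicated port, can likewise be illustrated with a short, purely conceptual sketch. The function and packet representation below are invented for the example; they model only the header handling of a wormhole router, not logical addressing, routing tables or flow control.

def route_physical(packet, hops):
    """Simulate path addressing: at each hop the leading address byte selects
    the output port and is stripped from the packet, so the remaining bytes
    (the cargo) arrive at the destination unchanged.

    'packet' is a list of byte values; 'hops' is the number of routers crossed.
    Returns (ports_taken, delivered_cargo)."""
    ports = []
    remaining = list(packet)
    for _ in range(hops):
        port = remaining.pop(0)   # physical addresses 0-31 select an output port
        ports.append(port)
        # a real wormhole router starts forwarding before the whole packet
        # has arrived; that aspect is not modelled here
    return ports, remaining

# A packet routed through two routers (ports 3 then 17), carrying three cargo bytes.
packet = [3, 17, 0xCA, 0xFE, 0x42]
ports, cargo = route_physical(packet, hops=2)
print(ports)   # [3, 17]
print(cargo)   # [202, 254, 66] - the cargo as delivered to the destination node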
SpaceWire initiatives are being coordinated among several space agencies within the framework of CCSDS in order to extend its communication model to the network and transport layers of the OSI model. SpaceWire supports highly fault-tolerant networks and systems, which is one reason for its popularity.
Protocols
ESA has a draft specification in place for the protocol identifier (Protocol ID). A number of protocol IDs have been assigned in ECSS-E-ST-50-11.
References
Other sources
ECSS-E-ST-50-12C - SpaceWire - Links, nodes, routers, and networks, ESA-ESTEC.
ECSS-E-50-12A - SpaceWire - Nodes, links, and networks, ESA-ESTEC (superseded; only the document number has been changed, to ECSS-E-ST-50-12C).
ECSS-E-ST-50-11C Draft 1.3 "Space engineering - SpaceWire protocols"
External links
SpaceWire Homepage (ESA)
European Cooperation for Space Standardisation - ECSS
4Links Publications
International SpaceWire Conference 2007
International SpaceWire Conference 2008
International SpaceWire Conference 2010
International SpaceWire Conference 2011
International SpaceWire Conference 2013
STAR-Dundee Knowledge Database
http://www.interfacebus.com/SpaceWire_Avionics_Bus.html
Commercial providers of SpaceWire equipment:
STAR-Dundee
Spacewire.fr
Aeroflex
Aeroflex Gaisler
Astrium
Microchip
Aurelia Microelettronica
Ingespace
Dynamic Engineering
4Links
SKYLAB Industries
RUAG Space
PnP Innovations
TELETEL SA
TTTech - Gateway for SpaceWire to 1GbE Ethernet, with Leon-2FT CPU
SpaceWire IP Cores:
4Links
STAR-Dundee
Aeroflex Gaisler
Astrium
SpaceWire RMAP CEA IRFU CESR CNRS (CeCILL-C license)
NASA Goddard - tech transfer
OpenCores.org (SpaceWire and SpaceWire Light)
SpaceWire UK
European Space Agency
PnP Innovations
Articles:
NASA article on SpaceWire used on James Webb Space Telescope
Computer buses
ECSS standards
Fault-tolerant computer systems
James Webb Space Telescope
21577832
https://en.wikipedia.org/wiki/Embedded%20hypervisor
Embedded hypervisor
An embedded hypervisor is a hypervisor that supports the requirements of embedded systems. The requirements for an embedded hypervisor are distinct from those for hypervisors targeting server and desktop applications. An embedded hypervisor is designed into the embedded device from the outset, rather than loaded subsequent to device deployment. While desktop and enterprise environments use hypervisors to consolidate hardware and isolate computing environments from one another, in an embedded system the various components typically function collectively to provide the device's functionality. Mobile virtualization overlaps with embedded system virtualization and shares some use cases. Typical attributes of embedded virtualization include efficiency, security, communication, isolation and real-time capabilities.
Background
Software virtualization has been a major topic in the enterprise space since the late 1960s, but only since the early 2000s has its use appeared in embedded systems. The use of virtualization and its implementation in the form of a hypervisor in embedded systems are very different from enterprise applications. An effective implementation of an embedded hypervisor must deal with a number of issues specific to such applications. These issues include the highly integrated nature of embedded systems, the requirement for isolated functional blocks within the system to communicate rapidly, the need for real-time/deterministic performance, the resource-constrained target environment and the wide range of security and reliability requirements.
Hypervisor
A hypervisor provides one or more software virtualization environments in which other software, including operating systems, can run with the appearance of full access to the underlying system hardware, where in fact such access is under the complete control of the hypervisor. These virtual environments are called virtual machines (VMs), and a hypervisor will typically support multiple VMs managed simultaneously.
Classification
Hypervisors are generally classed as either type 1 or type 2, depending on whether the hypervisor runs exclusively in supervisor (privileged) mode (type 1) or is itself hosted by an operating system as a regular application (type 2). Type 1 hypervisors manage key system resources required to maintain control over the virtual machines, and facilitate a minimal trusted computing base (TCB). Type 2 hypervisors typically run as an application within a more general-purpose operating system, relying on the services of the OS to manage system resources; nowadays, kernel extensions are often loaded to take advantage of hardware with virtualization support.
Embedded hypervisor
An embedded hypervisor is most often a type 1 hypervisor which supports the requirements of embedded systems development. See the references for a more detailed discussion. These requirements are summarized below:
A small, fast hypervisor with support for multiple isolated VMs;
Support for lightweight but secure encapsulation of medium-grain subsystem components that interact strongly;
High-bandwidth, low-latency communication between system components, subject to a configurable, system-wide security policy;
Minimal impact on system resources and support for real-time latency guarantees;
Ability to implement a scheduling policy between VMs and provide support for real-time system components.
Implementation
An embedded hypervisor typically provides multiple VMs, each of which emulates a hardware platform on which the virtualised software executes.
The VM may emulate the underlying native hardware, in which case embedded code that runs on the real machine will run on the virtual machine and vice versa. An emulation of the native hardware is not always possible or desired, and a virtual platform may be defined instead. When a VM provides a virtual platform, guest software has to be ported to run in this environment; however, since a virtual platform can be defined without reliance on the native hardware, guest software supporting a virtual platform can be run unmodified across the various distinct hardware platforms supported by the hypervisor. Embedded hypervisors employ either paravirtualization or the hardware virtualization features of the underlying CPU. Paravirtualization is required where the hardware does not provide virtualization support, and often involves extensive modifications to the core architecture-support code of the guest kernels. Emulation of hardware at the register level is rarely seen in embedded hypervisors, as this is very complex and slow. The custom nature of embedded systems means that the need to support unmodified, binary-only guest software requiring these techniques is rare. The size and efficiency of the implementation are also an issue for an embedded hypervisor, as embedded systems are often much more resource-constrained than desktop and server platforms. It is also desirable for the hypervisor to maintain, as closely as possible, the native speed, real-time response, determinism and power efficiency of the underlying hardware platform.
Hypervisor design
Implementations for embedded systems applications have most commonly been based on small microkernel and separation kernel designs, with virtualization built in as an integral capability. This was introduced with PikeOS in 2005. Examples of these approaches have been produced by companies such as Open Kernel Labs (a microkernel followed by a separation kernel) and LynuxWorks (a separation kernel). VirtualLogix appears to take the position that an approach based on a dedicated Virtual Machine Monitor (VMM) would be even smaller and more efficient. This issue is the subject of some ongoing debate, but the main point at issue is the same on all sides of the discussion: the speed and size of the implementation (for a given level of functionality) are of major importance. For example: "... hypervisors for embedded use must be real-time capable, as well as resource-miserly."
Resource requirements
Embedded systems are typically highly resource-constrained due to cost and technical limitations of the hardware. It is therefore important for an embedded hypervisor to be as efficient as possible. Microkernel- and separation-kernel-based designs allow for small and efficient hypervisors; embedded hypervisors therefore usually have a memory footprint of several tens to several hundred kilobytes, depending on the efficiency of the implementation and the level of functionality provided. An implementation requiring several megabytes of memory (or more) is generally not acceptable. With the small TCB of a type 1 embedded hypervisor, the system can be made highly secure and reliable. Standard software-engineering techniques, such as code inspections and systematic testing, can be used to reduce the number of bugs in such a small code base to a tiny fraction of the defects that must be expected for a hypervisor and guest OS combination that may total 100,000–300,000 lines of code.
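The paravirtualization approach mentioned under Implementation above - modifying the guest so that privileged operations are requested from the hypervisor rather than executed directly - can be sketched conceptually. The toy Python classes below are purely illustrative and correspond to no real hypervisor API; they show only the control-flow idea: the guest's sensitive operation is replaced by an explicit hypercall, which a type 1-style hypervisor validates against the memory region it has assigned to that VM.

class ToyHypervisor:
    """Owns physical memory and grants each VM a fixed region of it."""
    def __init__(self, mem_size):
        self.memory = bytearray(mem_size)
        self.regions = {}                      # vm_id -> (base, size)

    def create_vm(self, vm_id, base, size):
        self.regions[vm_id] = (base, size)

    def hypercall_write(self, vm_id, guest_addr, value):
        """Paravirtualized replacement for a privileged memory write:
        the hypervisor checks the access against the VM's assigned region."""
        base, size = self.regions[vm_id]
        if not 0 <= guest_addr < size:
            raise MemoryError(f"VM {vm_id}: access outside its region")
        self.memory[base + guest_addr] = value

class ToyGuest:
    """A 'paravirtualized' guest: instead of touching hardware directly,
    its low-level write routine has been modified to issue a hypercall."""
    def __init__(self, vm_id, hypervisor):
        self.vm_id, self.hv = vm_id, hypervisor

    def write(self, addr, value):
        self.hv.hypercall_write(self.vm_id, addr, value)

hv = ToyHypervisor(mem_size=1024)
hv.create_vm("rtos", base=0, size=512)
hv.create_vm("gpos", base=512, size=512)

ToyGuest("rtos", hv).write(10, 0x55)       # allowed: inside the RTOS VM's region
try:
    ToyGuest("gpos", hv).write(600, 0x7F)  # rejected: outside the 512-byte region
except MemoryError as err:
    print(err)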
VM communication
One of the most important functions required in an embedded hypervisor is a secure message-passing mechanism, which is needed to support real-time communication between processes. In the embedded environment, a system will typically have a number of closely coupled tasks, some of which may require secure isolation from each other. In a virtualized environment, the embedded hypervisor will support and enforce this isolation between multiple VMs. These VMs will therefore require access to a mechanism that provides low-latency communication between the tasks. An inter-process communication (IPC) mechanism can be used to provide these functions, as well as to invoke all system services, and can be implemented in a manner which ensures that the desired level of VM isolation is maintained. Also, due to its significant impact on system performance, such an IPC mechanism should be highly optimised for minimal latency.
Hardware requirements
An embedded hypervisor needs to be in complete control of system resources, including memory accesses, to ensure that software cannot break out of the VM. A hypervisor therefore requires the target CPU to provide memory management support (typically using an MMU). Many embedded processors, such as ARM, MIPS and PowerPC, have followed desktop and server chip vendors in adding hardware support for virtualization. A large proportion of embedded processors, however, still do not provide such support, and a hypervisor supporting paravirtualization is then required. ARM processors are notable in that most of their application-class processor designs support a technology called ARM TrustZone, which essentially provides hardware support for one privileged and one unprivileged VM. Normally a minimal Trusted Execution Environment (TEE) OS runs in the Secure World and a native kernel runs in the Non-secure World.
Use cases
Some of the most common use cases for an embedded hypervisor are:
1. OS independence
Designers of embedded systems may have many hardware drivers and system services which are specific to a target platform. If support for more than one OS is required on the platform, either concurrently or consecutively using a common hardware design, an embedded hypervisor can greatly simplify the task. Such drivers and system services can be implemented just once for the virtualized environment; these services are then available to any hosted OS. This level of abstraction also allows the embedded developer to implement or change a driver or service in either hardware or software at any point, without this being apparent to the hosted OS.
2. Support for multiple operating systems on a single processor
Typically this is used to run a real-time operating system (RTOS) for low-level real-time functionality (such as the communication stack) while at the same time running a general-purpose OS (GPOS), such as Linux or Windows, to support user applications such as a web browser or calendar. The objective might be to upgrade an existing design without the added complexity of a second processor, or simply to minimize the bill of materials (BoM).
3. System security
An embedded hypervisor is able to provide secure encapsulation for any subsystem defined by the developer, so that a compromised subsystem cannot interfere with other subsystems. For example, an encryption subsystem needs to be strongly shielded from attack to prevent leaking the information the encryption is supposed to protect.
As the embedded hypervisor can encapsulate a subsystem in a VM, it can then enforce the required security policies for communication to and from that subsystem.
4. System reliability
The encapsulation of subsystem components into a VM ensures that failure of any subsystem cannot impact other subsystems. This encapsulation keeps faults from propagating from a subsystem in one VM to a subsystem in another VM, improving reliability. It may also allow a subsystem to be automatically shut down and restarted on fault detection. This can be particularly important for embedded device drivers, as these are where the highest density of fault conditions is seen to occur and are thus the most common cause of OS failure and system instability. It also allows the encapsulation of operating systems that were not necessarily built to the reliability standards demanded of the new system design.
5. Dynamic update of system software
Subsystem software or applications can be securely updated and tested for integrity by downloading to a secure VM before “going live” in an executing system. Even if this process then fails, the system can revert to its former state by restarting the original software subsystem/application, without halting system operation.
6. Legacy code re-use
Virtualization allows legacy embedded code to be used with the OS environment it has been developed and validated with, while freeing the developer to use a different OS environment in a separate VM for new services and applications. Legacy embedded code, written for a particular system configuration, may assume exclusive control of all system resources of memory, I/O and processor. This code base can be re-used unchanged on alternative system configurations of I/O and memory through the use of a VM to present a resource map and functionality that is consistent with the original system configuration, effectively de-coupling the legacy code from the specifics of a new or modified hardware design. Where access to the operating system source code is available, paravirtualization is commonly used to virtualize the OS on processors without hardware virtualization support, so that the applications supported by the OS can also run unmodified and without re-compilation on new hardware platform designs. Even without source access, legacy binary code can be executed in systems running on processors with hardware virtualization support, such as AMD-V and Intel VT, and the latest ARM processors with virtualization support. The legacy binary code could run completely unmodified in a VM with all resource mapping handled by the embedded hypervisor, assuming the system hardware provides equivalent functionality.
7. IP protection
Valuable proprietary IP may need protection from theft or misuse when an embedded platform is being shipped for further development work by (for example) an OEM customer. An embedded hypervisor makes it possible to restrict access by other system software components to a specific part of the system containing IP that needs to be protected.
8. Software license segregation
Software IP operating under one licensing scheme can be separated from other software IP operating under a different scheme. For example, the embedded hypervisor can provide an isolated execution environment for proprietary software sharing the processor with open-source software subject to the GPL.
9. Migration of applications from uni-core to multi-core systems
As new processors utilise multi-core architectures to increase performance, the embedded hypervisor can manage the underlying architecture and present a uni-processor environment to legacy applications and operating systems, while efficiently using the new multiprocessor system design. In this way a change in hardware environment does not require a change to the existing software.
Commercial products
Crucible by Star Lab Corp.
Cross-OS Hypervisor - allows applications to run natively on a single OS platform; from MapuSoft Technologies, Inc.
OKL4 Hypervisor - supports ARM-based smart connected devices (embedded, mobile); used in defense and security-sensitive applications; supported commercially by Cog Systems.
References
Embedded systems
Virtualization software