Apple Inc. is an American multinational technology company headquartered in Cupertino, California. Apple is the world's biggest company by market capitalization and the largest technology company by 2022 revenue. It is the fourth-largest personal computer vendor by unit sales, the largest manufacturing company by revenue, and the second-largest mobile phone manufacturer in the world. It is considered one of the Big Five American information technology companies, alongside Alphabet (parent company of Google), Amazon, Meta Platforms, and Microsoft.
Apple was founded as Apple Computer Company on April 1, 1976, by Steve Wozniak, Steve Jobs and Ronald Wayne to develop and sell Wozniak's Apple I personal computer. It was incorporated by Jobs and Wozniak as Apple Computer, Inc. in 1977. The company's second computer, the Apple II, became a best seller and one of the first mass-produced microcomputers. Apple went public in 1980 to instant financial success. The company developed computers featuring innovative graphical user interfaces, including the 1984 original Macintosh, announced that year in a critically acclaimed advertisement called "1984". By 1985, the high cost of its products, and power struggles between executives, caused problems. Wozniak stepped back from Apple and pursued other ventures, while Jobs resigned and founded NeXT, taking some Apple employees with him.
As the market for personal computers expanded and evolved throughout the 1990s, Apple lost considerable market share to the lower-priced duopoly of the Microsoft Windows operating system on Intel-powered PC clones (also known as "Wintel"). In 1997, weeks away from bankruptcy, the company bought NeXT to resolve Apple's unsuccessful operating system strategy and entice Jobs back to the company. Over the next decade, Jobs guided Apple back to profitability through a number of tactics including introducing the iMac, iPod, iPhone and iPad to critical acclaim, launching the "Think different" campaign and other memorable advertising campaigns, opening the Apple Store retail chain, and acquiring numerous companies to broaden the company's product portfolio. When Jobs resigned in 2011 for health reasons, and died two months later, he was succeeded as CEO by Tim Cook.
Apple became the first publicly traded U.S. company to be valued at over $1 trillion in August 2018, then at $2 trillion in August 2020, and at $3 trillion in January 2022. In June 2023, it was valued at just over $3 trillion. The company receives criticism regarding the labor practices of its contractors, its environmental practices, and its business ethics, including anti-competitive practices and materials sourcing. Nevertheless, the company has a large following and enjoys a high level of brand loyalty. It has also been consistently ranked as one of the world's most valuable brands.
History
1976–1980: Founding and incorporation
Apple Computer Company was founded on April 1, 1976, by Steve Jobs, Steve Wozniak, and Ronald Wayne as a partnership. The company's first product was the Apple I, a computer designed and hand-built entirely by Wozniak. To finance its creation, Jobs sold his Volkswagen Bus, and Wozniak sold his HP-65 calculator. Wozniak debuted the first prototype Apple I at the Homebrew Computer Club in July 1976. The Apple I was sold as a motherboard with CPU, RAM, and basic textual-video chips—a base kit concept not yet marketed as a complete personal computer. It went on sale soon after its debut for $666.66. Wozniak later said he was unaware of the coincidental mark of the beast in the number 666, and that he came up with the price because he liked "repeating digits".
Apple Computer, Inc. was incorporated on January 3, 1977, without Wayne, who had left and sold his share of the company back to Jobs and Wozniak for $800 only twelve days after having co-founded Apple. Multimillionaire Mike Markkula provided essential business expertise and funding of $250,000 to Jobs and Wozniak during the incorporation of Apple. During the first five years of operations, revenues grew exponentially, doubling about every four months. Between September 1977 and September 1980, yearly sales grew from $775,000 to $118 million, an average annual growth rate of 533%.
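A brief worked check, reading the 533% figure as the average year-over-year sales multiple implied by those endpoints (an interpretation, not a sourced definition):

$$\left(\frac{118{,}000{,}000}{775{,}000}\right)^{1/3} \approx 152.3^{1/3} \approx 5.3,$$

i.e., sales in each of those three years averaged roughly 5.3 times, or about 533% of, the prior year's level.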
The Apple II, also invented by Wozniak, was introduced on April 16, 1977, at the first West Coast Computer Faire. It differed from its major rivals, the TRS-80 and Commodore PET, because of its character cell-based color graphics and open architecture. While the Apple I and early Apple II models used ordinary audio cassette tapes as storage devices, they were superseded by the introduction of a 5.25-inch floppy disk drive and interface called the Disk II in 1978.
The Apple II was chosen to be the desktop platform for the first "killer application" of the business world: VisiCalc, a spreadsheet program released in 1979. VisiCalc created a business market for the Apple II and gave home users an additional reason to buy an Apple II: compatibility with the office, but Apple II market share remained behind home computers made by competitors such as Atari, Commodore, and Tandy.
On December 12, 1980, Apple (ticker symbol "AAPL") went public, selling 4.6 million shares at $22 per share ($0.10 per share when adjusting for stock splits) and generating over $100 million, more capital than any IPO since Ford Motor Company in 1956. By the end of the day, the stock had risen to $29 per share, giving Apple a market cap of $1.778 billion and creating 300 millionaires.
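The split-adjusted figure can be checked against Apple's five subsequent stock splits (2-for-1 in 1987, 2000, and 2005, 7-for-1 in 2014, and 4-for-1 in 2020), which compound to a factor of 224:

$$2 \times 2 \times 2 \times 7 \times 4 = 224, \qquad \frac{\$22}{224} \approx \$0.098 \approx \$0.10 \text{ per share}.$$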
1980–1990: Success with Macintosh
A critical moment in the company's history came in December 1979 when Jobs and several Apple employees, including human–computer interface expert Jef Raskin, visited Xerox PARC to see a demonstration of the Xerox Alto, a computer using a graphical user interface. Xerox granted Apple engineers three days of access to the PARC facilities in return for the option to buy 100,000 shares (22.4 million split-adjusted shares) of Apple at the pre-IPO price of $10 a share. After the demonstration, Jobs was immediately convinced that all future computers would use a graphical user interface, and development of a GUI began for the Apple Lisa, named after Jobs's daughter.
The Lisa division would be plagued by infighting, and in 1982 Jobs was pushed off the project. The Lisa launched in 1983 and became the first personal computer sold to the public with a GUI, but was a commercial failure due to its high price and limited software titles.
Jobs, angered by being pushed off the Lisa team, took over the company's Macintosh division. Wozniak and Raskin had envisioned the Macintosh as a low-cost computer with a text-based interface like the Apple II, but a plane crash in 1981 forced Wozniak to step back from the project. Jobs quickly redefined the Macintosh as a graphical system that would be cheaper than the Lisa, undercutting his former division. Jobs was also hostile to the Apple II division, which at the time generated most of the company's revenue.
In 1984, Apple launched the Macintosh, the first personal computer to be sold without a programming language. Its debut was signified by "1984", a $1.5 million television advertisement directed by Ridley Scott that aired during the third quarter of Super Bowl XVIII on January 22, 1984. This is now hailed as a watershed event for Apple's success and was called a "masterpiece" by CNN and one of the greatest TV advertisements of all time by TV Guide.
The advertisement created great interest in the original Macintosh, and sales were initially good, but began to taper off dramatically after the first three months as reviews started to come in. Jobs had decided to equip the original Macintosh with 128 kilobytes of RAM in an attempt to reach a price point, which limited its speed and the software that could be used. The Macintosh would eventually ship for $2,495, a price panned by critics in light of its slow performance. In early 1985, this sales slump triggered a power struggle between Steve Jobs and CEO John Sculley, who had been hired away from Pepsi two years earlier by Jobs with the pitch, "Do you want to sell sugar water for the rest of your life or come with me and change the world?" Sculley decided to remove Jobs as the head of the Macintosh division, with unanimous support from the Apple board of directors.
The board of directors instructed Sculley to contain Jobs and his ability to launch expensive forays into untested products. Rather than submit to Sculley's direction, Jobs attempted to oust him from his leadership role at Apple. Informed by Jean-Louis Gassée, Sculley found out that Jobs had been attempting to organize a boardroom coup and called an emergency meeting at which Apple's executive staff sided with Sculley and stripped Jobs of all operational duties. Jobs resigned from Apple in September 1985 and took a number of Apple employees with him to found NeXT. Wozniak had also quit his active employment at Apple earlier in 1985 to pursue other ventures, expressing his frustration with Apple's treatment of the Apple II division and stating that the company had "been going in the wrong direction for the last five years." Despite Wozniak's grievances, he officially remained employed by Apple, and to this day continues to work for the company as a representative, receiving a stipend estimated to be $120,000 per year for this role. Both Jobs and Wozniak remained Apple shareholders after their departures.
After the departures of Jobs and Wozniak, Sculley worked to improve the Macintosh in 1985 by quadrupling the RAM and introducing the LaserWriter, the first reasonably priced PostScript laser printer. PageMaker, an early desktop publishing application taking advantage of the PostScript language, was also released by Aldus Corporation in July 1985. It has been suggested that the combination of Macintosh, LaserWriter and PageMaker was responsible for the creation of the desktop publishing market.
This dominant position in the desktop publishing market allowed the company to focus on higher price points, the so-called "high-right policy" named for the position on a chart of price vs. profits. Newer models selling at higher price points offered higher profit margin, and appeared to have no effect on total sales as power users snapped up every increase in speed. Although some worried about pricing themselves out of the market, the high-right policy was in full force by the mid-1980s, notably due to Jean-Louis Gassée's mantra of "fifty-five or die", referring to the 55% profit margins of the Macintosh II.
This policy began to backfire in the last years of the decade as desktop publishing programs appeared on PC clones that offered some or much of the same functionality as the Macintosh, but at far lower price points. The company lost its dominant position in the desktop publishing market and alienated many of its original consumer customers, who could no longer afford its high-priced products. The Christmas season of 1989 was the first in the company's history to have declining sales, which led to a 20% drop in Apple's stock price. During this period, the relationship between Sculley and Gassée deteriorated, leading Sculley to effectively demote Gassée in January 1990 by appointing Michael Spindler as the chief operating officer. Gassée left the company later that year.
1990–1997: Decline and restructuring
The company pivoted strategy and in October 1990 introduced three lower-cost models, the Macintosh Classic, the Macintosh LC, and the Macintosh IIsi, all of which saw significant sales due to pent-up demand. In 1991, Apple introduced the hugely successful PowerBook with a design that set the current shape for almost all modern laptops. The same year, Apple introduced System 7, a major upgrade to the Macintosh operating system, adding color to the interface and introducing new networking capabilities.
The success of the lower-cost Macs and PowerBook brought increasing revenue. For some time, Apple was doing incredibly well, introducing fresh new products and generating increasing profits in the process. The magazine MacAddict named the period between 1989 and 1991 as the "first golden age" of the Macintosh.
The success of Apple's lower-cost consumer models, especially the LC, also led to the cannibalization of their higher-priced machines. To address this, management introduced several new brands, selling largely identical machines at different price points, aimed at different markets: the high-end Quadra models, the mid-range Centris line, and the consumer-marketed Performa series. This led to significant market confusion, as customers did not understand the difference between models.
The early 1990s also saw the discontinuation of the Apple II series, which was expensive to produce and which the company felt was still taking sales away from lower-cost Macintosh models. After the launch of the LC, Apple began encouraging developers to create applications for Macintosh rather than Apple II, and authorized salespersons to direct consumers towards Macintosh and away from Apple II. The Apple IIe was discontinued in 1993.
Throughout this period, Microsoft continued to gain market share with its Windows graphical user interface that it sold to manufacturers of generally less expensive PC clones. While the Macintosh was more expensive, it offered a more tightly integrated user experience, but the company struggled to make the case to consumers.
Apple also experimented with a number of other unsuccessful consumer targeted products during the 1990s, including digital cameras, portable CD audio players, speakers, video game consoles, the eWorld online service, and TV appliances. Most notably, enormous resources were invested in the problem-plagued Newton tablet division, based on John Sculley's unrealistic market forecasts.
Throughout this period, Microsoft continued to gain market share with Windows by focusing on delivering software to inexpensive personal computers, while Apple was delivering a richly engineered but expensive experience. Apple relied on high profit margins and never developed a clear response; instead, they sued Microsoft for using a GUI similar to the Apple Lisa in Apple Computer, Inc. v. Microsoft Corp. The lawsuit dragged on for years before it was finally dismissed.
The major product flops and the rapid loss of market share to Windows sullied Apple's reputation, and in 1993 Sculley was replaced as CEO by Michael Spindler.
With Spindler at the helm, Apple, IBM, and Motorola formed the AIM alliance in 1994 with the goal of creating a new computing platform (the PowerPC Reference Platform; PReP), which would use IBM and Motorola hardware coupled with Apple software. The AIM alliance hoped that PReP's performance and Apple's software would leave the PC far behind and thus counter the dominance of Windows. The same year, Apple introduced the Power Macintosh, the first of many Apple computers to use Motorola's PowerPC processor.
In the wake of the alliance, Apple opened up to the idea of allowing Motorola and other companies to build Macintosh clones. Over the next two years, 75 distinct Macintosh clone models were introduced. However, by 1996, Apple executives were worried that the clones were cannibalizing sales of their own high-end computers, where profit margins were highest.
In 1996, Spindler was replaced by Gil Amelio as CEO. Hired for his reputation as a corporate rehabilitator, Amelio made deep changes, including extensive layoffs and cost-cutting.
This period was also marked by numerous failed attempts to modernize the Macintosh operating system (Mac OS). The original Macintosh operating system (System 1) was not built for multitasking (running several applications at once). The company attempted to correct this by introducing cooperative multitasking in System 5, but still felt it needed a more modern approach. This led to the Pink project in 1988, A/UX that same year, Copland in 1994, and the attempted purchase of BeOS in 1996. Talks with Be stalled when the CEO, former Apple executive Jean-Louis Gassée, demanded $300 million instead of the $125 million Apple wanted to pay.
Only weeks away from bankruptcy, Apple's board decided NeXTSTEP was a better choice for its next operating system and purchased NeXT in late 1996 for $400 million, bringing back Apple co-founder Steve Jobs.
1997–2007: Return to profitability
The NeXT acquisition was finalized on February 9, 1997, and the board brought Jobs back to Apple as an advisor. On July 9, 1997, Jobs staged a boardroom coup that resulted in Amelio's resignation after overseeing a three-year record-low stock price and crippling financial losses.
The board named Jobs as interim CEO and he immediately began a review of the company's products. Jobs would order 70% of the company's products to be cancelled, resulting in the loss of 3,000 jobs, and taking Apple back to the core of its computer offerings. The next month, in August 1997, Steve Jobs convinced Microsoft to make a $150 million investment in Apple and a commitment to continue developing software for the Mac. The investment was seen as an "antitrust insurance policy" for Microsoft, which had recently settled with the Department of Justice over anti-competitive practices. Jobs also ended the Mac clone deals and, in September 1997, purchased the largest clone maker, Power Computing. On November 10, 1997, Apple introduced the Apple Store website, which was tied to a new build-to-order manufacturing strategy that had been successfully used by PC manufacturer Dell.
The moves paid off for Jobs; at the end of his first year as CEO, the company turned a $309 million profit.
On May 6, 1998, Apple introduced a new all-in-one computer reminiscent of the original Macintosh: the iMac. The iMac was a huge success for Apple, selling 800,000 units in its first five months, and ushered in major shifts in the industry by abandoning legacy technologies like the 3.5-inch diskette, being an early adopter of the USB connector, and coming pre-installed with internet connectivity (the "i" in iMac) via Ethernet and a dial-up modem. The device also had a striking teardrop shape and translucent materials, designed by Jonathan Ive, who, although hired by Amelio, would go on to work collaboratively with Jobs for the next decade to chart a new course for the design of Apple's products.
A little more than a year later on July 21, 1999, Apple introduced the iBook, a laptop for consumers. It was the culmination of a strategy established by Jobs to produce only four products: refined versions of the Power Macintosh G3 desktop and PowerBook G3 laptop for professionals, along with the iMac desktop and iBook laptop for consumers. Jobs felt the small product line allowed for a greater focus on quality and innovation.
At around the same time, Apple also completed numerous acquisitions to create a portfolio of digital media production software for both professionals and consumers. Apple acquired Macromedia's Key Grip digital video editing software project, which was renamed Final Cut Pro when it was launched on the retail market in April 1999. The development of Key Grip also led to Apple's release of the consumer video-editing product iMovie in October 1999. Next, in April 2000, Apple acquired the German company Astarte, which had developed the DVD authoring software DVDirector; Apple sold it as the professional-oriented DVD Studio Pro and used the same technology to create iDVD for the consumer market. In 2000, Apple purchased the SoundJam MP audio player software from Casady & Greene. Apple renamed the program iTunes, while simplifying the user interface and adding the ability to burn CDs.
2001 would be a pivotal year for Apple, with the company making three announcements that would change its course.
The first announcement came on March 24, 2001, that Apple was nearly ready to release a new modern operating system, Mac OS X. The announcement came after numerous failed attempts in the early 1990s, and several years of development. Mac OS X was based on NeXTSTEP, OPENSTEP, and BSD Unix, with Apple aiming to combine the stability, reliability, and security of Unix with the ease of use afforded by an overhauled user interface, heavily influenced by NeXTSTEP. To aid users in migrating from Mac OS 9, the new operating system allowed the use of OS 9 applications within Mac OS X via the Classic Environment.
In May 2001, the company opened its first two Apple Store retail locations in Virginia and California, offering an improved presentation of the company's products. At the time, many speculated that the stores would fail, but they went on to become highly successful, and the first of more than 500 stores around the world.
On October 23, 2001, Apple debuted the iPod portable digital audio player. The product, which was first sold on November 10, 2001, was phenomenally successful with over 100 million units sold within six years.
In 2003, Apple's iTunes Store was introduced. The service offered music downloads for 99¢ a song and integration with the iPod. The iTunes Store quickly became the market leader in online music services, with over five billion downloads by June 19, 2008. Two years later, the iTunes Store was the world's largest music retailer.
In 2002, Apple purchased Nothing Real for their advanced digital compositing application Shake, as well as Emagic for the music productivity application Logic. The purchase of Emagic made Apple the first computer manufacturer to own a music software company. The acquisition was followed by the development of Apple's consumer-level GarageBand application. The release of iPhoto in the same year completed the iLife suite.
At the Worldwide Developers Conference keynote address on June 6, 2005, Jobs announced that Apple would move away from PowerPC processors, and the Mac would transition to Intel processors in 2006. On January 10, 2006, the new MacBook Pro and iMac became the first Apple computers to use Intel's Core Duo CPU. By August 7, 2006, Apple made the transition to Intel chips for the entire Mac product line—over one year sooner than announced. The Power Mac, iBook, and PowerBook brands were retired during the transition; the Mac Pro, MacBook, and MacBook Pro became their respective successors. On April 29, 2009, The Wall Street Journal reported that Apple was building its own team of engineers to design microchips. Apple also introduced Boot Camp in 2006 to help users install Windows XP or Windows Vista on their Intel Macs alongside Mac OS X.
Apple's success during this period was evident in its stock price. Between early 2003 and 2006, the price of Apple's stock increased more than tenfold, from around $6 per share (split-adjusted) to over $80. When Apple surpassed Dell's market cap in January 2006, Jobs sent an email to Apple employees saying Dell's CEO Michael Dell should eat his words. Nine years prior, Dell had said that if he ran Apple he would "shut it down and give the money back to the shareholders".
2007–2011: Success with mobile devices
During his keynote speech at the Macworld Expo on January 9, 2007, Jobs announced that Apple Computer, Inc. would thereafter be known as "Apple Inc.", because the company had shifted its emphasis from computers to consumer electronics. This event also saw the announcement of the iPhone and the Apple TV. The company sold 270,000 iPhone units during the first 30 hours of sales, and the device was called "a game changer for the industry".
In an article posted on Apple's website on February 6, 2007, Jobs wrote that Apple would be willing to sell music on the iTunes Store without digital rights management (DRM), thereby allowing tracks to be played on third-party players, if record labels would agree to drop the technology. On April 2, 2007, Apple and EMI jointly announced the removal of DRM technology from EMI's catalog in the iTunes Store, effective in May 2007. Other record labels eventually followed suit, and Apple published a press release in January 2009 announcing that all songs on the iTunes Store were available without FairPlay DRM.
In July 2008, Apple launched the App Store to sell third-party applications for the iPhone and iPod Touch. Within a month, the store sold 60 million applications and registered an average daily revenue of $1 million, with Jobs speculating in August 2008 that the App Store could become a billion-dollar business for Apple. By October 2008, Apple was the third-largest mobile handset supplier in the world due to the popularity of the iPhone.
On January 14, 2009, Jobs announced in an internal memo that he would be taking a six-month medical leave of absence from Apple until the end of June 2009 and would spend the time focusing on his health. In the email, Jobs stated that "the curiosity over my personal health continues to be a distraction not only for me and my family, but everyone else at Apple as well", and explained that the break would allow the company "to focus on delivering extraordinary products". Though Jobs was absent, Apple recorded its best non-holiday quarter (Q1 FY 2009) during the recession with revenue of $8.16 billion and profit of $1.21 billion.
After years of speculation and multiple rumored "leaks", Apple unveiled a large-screen, tablet-like media device known as the iPad on January 27, 2010. The iPad ran the same touch-based operating system as the iPhone, and all iPhone apps were compatible with the iPad. This gave the iPad a large app catalog at launch, despite the very short development time before release. The iPad launched in the U.S. on April 3, 2010, selling more than 300,000 units on its first day and 500,000 by the end of the first week. In May of the same year, Apple's market cap exceeded that of competitor Microsoft for the first time since 1989.
In June 2010, Apple released the iPhone 4, which introduced video calling using FaceTime, multitasking, and a new uninsulated stainless steel design that acted as the phone's antenna. Later that year, Apple again refreshed its iPod line of MP3 players by introducing a multi-touch iPod Nano, an iPod Touch with FaceTime, and an iPod Shuffle that brought back the clickwheel buttons of earlier generations. It also introduced the smaller, cheaper second generation Apple TV which allowed renting of movies and shows.
On January 17, 2011, Jobs announced in an internal Apple memo that he would take another medical leave of absence for an indefinite period to allow him to focus on his health. Chief operating officer Tim Cook assumed Jobs's day-to-day operations at Apple, although Jobs would still remain "involved in major strategic decisions". Apple became the most valuable consumer-facing brand in the world. In June 2011, Jobs surprisingly took the stage and unveiled iCloud, an online storage and syncing service for music, photos, files, and software which replaced MobileMe, Apple's previous attempt at content syncing. This would be the last product launch Jobs would attend before his death.
On August 24, 2011, Jobs resigned his position as CEO of Apple. He was replaced by Cook and Jobs became Apple's chairman. Apple did not have a chairman at the time and instead had two co-lead directors, Andrea Jung and Arthur D. Levinson, who continued with those titles until Levinson replaced Jobs as chairman of the board in November after Jobs' death.
2011–present: Post-Jobs era, Cook's leadership
On October 5, 2011, Steve Jobs died, marking the end of an era for Apple. The first major product announcement by Apple following Jobs's passing occurred on January 19, 2012, when Apple's Phil Schiller introduced iBooks Textbooks for iOS and iBooks Author for Mac OS X in New York City. Jobs had stated in the biography "Steve Jobs" that he wanted to reinvent the textbook industry and education.
From 2011 to 2012, Apple released the iPhone 4S and iPhone 5, which featured improved cameras, an intelligent software assistant named Siri, and cloud-synced data with iCloud; the third- and fourth-generation iPads, which featured Retina displays; and the iPad Mini, which featured a 7.9-inch screen in contrast to the iPad's 9.7-inch screen. These launches were successful, with the iPhone 5 (released September 21, 2012) becoming Apple's biggest iPhone launch with over two million pre-orders and sales of three million iPads in three days following the launch of the iPad Mini and fourth-generation iPad (released November 3, 2012). Apple also released a third-generation 13-inch MacBook Pro with a Retina display and new iMac and Mac Mini computers.
On August 20, 2012, Apple's rising stock price increased the company's market capitalization to a then-record $624 billion. This beat the non-inflation-adjusted record for market capitalization previously set by Microsoft in 1999. On August 24, 2012, a US jury ruled that Samsung should pay Apple $1.05 billion (£665m) in damages in an intellectual property lawsuit. Samsung appealed the damages award, which was reduced by $450 million; the court also granted Samsung's request for a new trial. On November 10, 2012, Apple confirmed a global settlement that dismissed all existing lawsuits between Apple and HTC up to that date, in favor of a ten-year license agreement for current and future patents between the two companies. It was predicted that Apple would make $280 million a year from the deal with HTC.
In May 2014, the company confirmed its intent to acquire Dr. Dre and Jimmy Iovine's audio company Beats Electronics—producer of the "Beats by Dr. Dre" line of headphones and speaker products, and operator of the music streaming service Beats Music—for $3 billion, and to sell their products through Apple's retail outlets and resellers. Iovine believed that Beats had always "belonged" with Apple, as the company modeled itself after Apple's "unmatched ability to marry culture and technology." The acquisition was the largest purchase in Apple's history.
During a press event on September 9, 2014, Apple introduced a smartwatch, the Apple Watch. Initially, Apple marketed the device as a fashion accessory and a complement to the iPhone that would allow people to look at their smartphones less. Over time, the company has focused on developing health and fitness-oriented features on the watch, in an effort to compete with dedicated activity trackers.
In January 2016, it was announced that one billion Apple devices were in active use worldwide.
On June 6, 2016, Fortune released its Fortune 500 list of companies ranked by revenue. Based on fiscal 2015 revenue of $233 billion, Apple was the top technology company on the list and ranked third overall, a move up of two spots from the previous year's list.
In June 2017, Apple announced the HomePod, its smart speaker aimed to compete against Sonos, Google Home, and Amazon Echo. Towards the end of the year, TechCrunch reported that Apple was acquiring Shazam, a company that had introduced its products at WWDC and specialized in music, TV, film, and advertising recognition. The acquisition was confirmed a few days later, reportedly costing Apple $400 million, with media reports noting that the purchase looked like a move to acquire data and tools to bolster the Apple Music streaming service. The purchase was approved by the European Union in September 2018.
Also in June 2017, Apple appointed Jamie Erlicht and Zack Van Amburg to head the newly formed worldwide video unit. In November 2017, Apple announced it was branching out into original scripted programming: a drama series starring Jennifer Aniston and Reese Witherspoon, and a reboot of the anthology series Amazing Stories with Steven Spielberg. In June 2018, Apple signed the Writers Guild of America's minimum basic agreement and Oprah Winfrey to a multi-year content partnership. Additional partnerships for original series include Sesame Workshop and DHX Media and its subsidiary Peanuts Worldwide, as well as a partnership with A24 to create original films.
During the Apple Special Event in September 2017, the AirPower wireless charger was announced alongside the iPhone X, iPhone 8, and Watch Series 3. The AirPower was intended to wirelessly charge multiple devices simultaneously. Though initially set to release in early 2018, the AirPower was canceled in March 2019, marking the first cancellation of a device under Cook's leadership.
On August 19, 2020, Apple's share price briefly topped $467.77, making Apple the first US company with a market capitalization of $2 trillion.
During its annual WWDC keynote speech on June 22, 2020, Apple announced it would move away from Intel processors, and the Mac would transition to processors developed in-house. The announcement was expected by industry analysts, who noted that Macs featuring Apple's processors would allow for big increases in performance over the Intel-based models being replaced. On November 10, 2020, the MacBook Air, MacBook Pro, and the Mac Mini became the first Mac devices powered by an Apple-designed processor, the Apple M1.
In April 2022, it was reported that Samsung Electro-Mechanics would be collaborating with Apple on its M2 chip instead of LG Innotek. Developer logs showed that at least nine Mac models with four different M2 chips were being tested.
The Wall Street Journal reported that an effort to develop its own chips left Apple better prepared to deal with the semiconductor shortage that emerged during the pandemic era and led to increased profitability, with sales of Mac computers that included M1 chips rising sharply in 2020 and 2021. It also inspired other companies like Tesla, Amazon, and Meta Platforms to pursue a similar path.
In April 2022, Apple opened an online store that allowed anyone in the US to view repair manuals and order replacement parts for specific recent iPhones, although the difference in cost between this method and official repair is anticipated to be minimal.
In May 2022, a trademark was filed for RealityOS, an operating system reportedly intended for virtual and augmented reality headsets, first mentioned in 2017. According to Bloomberg, the headset may come out in 2023. Further insider reports state that the device uses iris scanning for payment confirmation and signing into accounts.
On June 18, 2022, the Apple Store in Towson, Maryland became the first to unionize in the U.S., with the employees voting to join the International Association of Machinists and Aerospace Workers.
On July 7, 2022, Apple added Lockdown Mode to macOS 13 and iOS 16, as a response to the earlier Pegasus revelations; the mode increases security protections for high-risk users against targeted zero-day malware.
Apple launched a buy now, pay later service called 'Apple Pay Later' for its Apple Wallet users in March 2023. The program allows users to apply for loans between $50 and $1,000 to make online or in-app purchases and then repay them in four installments spread over six weeks, without any interest or fees.
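As an illustration of the repayment arithmetic only (the payment cadence and rounding behavior here are assumptions, not Apple's published implementation), a purchase in that range could be split into four equal, interest-free installments like this:

```swift
import Foundation

// Hypothetical sketch: split a purchase (in cents) into four equal,
// interest-free installments, spreading any leftover cents across the
// earliest payments. Assumes one payment at purchase and one every two
// weeks thereafter, covering six weeks in total.
func installmentSchedule(purchaseCents: Int) -> [Int]? {
    guard (5_000...100_000).contains(purchaseCents) else { return nil } // $50–$1,000
    let base = purchaseCents / 4
    let remainder = purchaseCents % 4
    return (0..<4).map { base + ($0 < remainder ? 1 : 0) }
}

// Example: a $249.99 purchase -> [$62.50, $62.50, $62.50, $62.49]
print(installmentSchedule(purchaseCents: 24_999) ?? [])
```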
Products
Mac
The Mac is Apple's family of personal computers. Macs are known for their ease of use and distinctive aluminium, minimalist designs. Macs have been popular among students, creative professionals, and software engineers. The current lineup consists of the MacBook Air and MacBook Pro laptops, and the iMac, Mac mini, Mac Studio and Mac Pro desktop computers.
Often described as a walled garden, Macs use Apple silicon chips, run the macOS operating system, and include Apple software like the Safari web browser, iMovie for home movie editing, GarageBand for music creation, and the iWork productivity suite. Apple also sells pro apps: Final Cut Pro for video production, Logic Pro for musicians and producers, and Xcode for software developers.
Apple also sells a variety of accessories for Macs, including the Pro Display XDR, Apple Studio Display, Magic Mouse, Magic Trackpad, and Magic Keyboard.
iPhone
The iPhone is Apple's line of smartphones, which run the iOS operating system. The first iPhone was unveiled by Steve Jobs on January 9, 2007. Since then, new models have been released annually. When it was introduced, its multi-touch screen was described as "revolutionary" and a "game-changer" for the mobile phone industry. The device has been credited with creating the app economy.
The iPhone has a 15% share of the global smartphone market, yet accounts for 50% of global smartphone revenues, with Android phones accounting for the rest. The iPhone has generated large profits for the company and is credited with helping to make Apple one of the world's most valuable publicly traded companies.
The most recent iPhones are the iPhone 15, iPhone 15 Plus, iPhone 15 Pro and iPhone 15 Pro Max.
iPad
The iPad is Apple's line of tablets which run iPadOS. The first-generation iPad was announced on January 27, 2010. The iPad is mainly marketed for consuming multimedia, creating art, working on documents, videoconferencing, and playing games. The iPad lineup consists of several base iPad models, and the smaller iPad Mini, upgraded iPad Air, and high-end iPad Pro. Apple has consistently improved the iPad's performance, with the iPad Pro adopting the same M1 and M2 chips as the Mac; but the iPad still receives criticism for its limited OS.
Apple has sold more than 500 million iPads, though sales peaked in 2013. The iPad remains the most popular tablet computer by sales and accounted for nine percent of the company's revenue.
Apple sells several iPad accessories, including the Apple Pencil, Smart Keyboard, Smart Keyboard Folio, Magic Keyboard, and several adapters.
Other products
Apple also makes several other products that it categorizes as "Wearables, Home and Accessories". These products include the AirPods line of wireless headphones, Apple TV digital media players, Apple Watch smartwatches, Beats headphones and HomePod Mini smart speakers.
This broad line of products comprises about 11% of the company's revenues.
At WWDC 2023, Apple introduced its new VR headset, Vision Pro, along with visionOS. Apple announced that it will be partnering with Unity to bring existing 3D apps to Vision Pro using Unity's PolySpatial technology.
Services
Apple also offers a broad line of services from which it earns revenue, including advertising in the App Store and Apple News app, the AppleCare+ extended warranty plan, the iCloud+ cloud-based data storage service, payment services through the Apple Card credit card and the Apple Pay processing platform, and digital content services including Apple Books, Apple Fitness+, Apple Music, Apple News+, Apple TV+, and the iTunes Store.
Services comprise about 19% of the company's revenue. Many of these services were launched after Apple announced it would be making a concerted effort to expand its services revenue.
Marketing
Branding
According to Steve Jobs, the company's name was inspired by his visit to an apple farm while he was on a fruitarian diet. Jobs thought the name "Apple" was "fun, spirited and not intimidating." Steve Jobs and Steve Wozniak were fans of the Beatles, but Apple Inc. had name and logo trademark issues with Apple Corps Ltd., a multimedia company started by the Beatles in 1968. This resulted in a series of lawsuits and tension between the two companies. These issues ended with the settling of their lawsuit in 2007.
Apple's first logo, designed by Ron Wayne, depicts Sir Isaac Newton sitting under an apple tree. It was almost immediately replaced by Rob Janoff's "rainbow Apple", the now-familiar rainbow-colored silhouette of an apple with a bite taken out of it. On August 27, 1999, Apple officially dropped the rainbow scheme and began to use monochromatic logos nearly identical in shape to the previous rainbow incarnation.
Apple evangelists were actively engaged by the company at one time, but this was after the phenomenon had already been firmly established. Apple evangelist Guy Kawasaki has called the brand fanaticism "something that was stumbled upon," while Ive claimed in 2014 that "people have an incredibly personal relationship" with Apple's products.
Fortune magazine named Apple the most admired company in the United States in 2008, and in the world from 2008 to 2012. On September 30, 2013, Apple surpassed Coca-Cola to become the world's most valuable brand in the Omnicom Group's "Best Global Brands" report. Boston Consulting Group has ranked Apple as the world's most innovative brand every year.
By the start of 2021, there were 1.65 billion Apple products in active use; in February 2023, that number exceeded 2 billion devices.
Advertising
Apple's first slogan, "Byte into an Apple", was coined in the late 1970s. From 1997 to 2002, the slogan "Think different" was used in advertising campaigns, and is still closely associated with Apple. Apple also has slogans for specific product lines—for example, "iThink, therefore iMac" was used in 1998 to promote the iMac, and "Say hello to iPhone" has been used in iPhone advertisements. "Hello" was also used to introduce the original Macintosh, Newton, iMac ("hello (again)"), and iPod.
From the 1984 Super Bowl advertisement that introduced the Macintosh to the more modern Get a Mac adverts, Apple has been recognized for its efforts towards effective advertising and marketing for its products. However, claims made by later campaigns were criticized, particularly the 2005 Power Mac ads. Apple's product advertisements gained significant attention as a result of their eye-popping graphics and catchy tunes. Musicians who benefited from an improved profile as a result of their songs being included in Apple advertisements include Canadian singer Feist with the song "1234" and Yael Naïm with the song "New Soul".
Stores
The first two Apple Stores opened in May 2001 under then-CEO Steve Jobs, after years of attempting but failing at store-within-a-store concepts. Seeing a need for improved retail presentation of the company's products, Jobs began an effort in 1997 to revamp the retail program and improve the company's relationship with consumers, relaunching Apple's online store that year and hiring Ron Johnson in 2000. The media initially speculated that Apple would fail, but its stores were highly successful, bypassing the sales numbers of competing nearby stores and within three years reaching US$1 billion in annual sales, becoming the fastest retailer in history to do so.
Over the years, Apple has expanded the number of retail locations and its geographical coverage, with 499 stores across 22 countries worldwide. Strong product sales have placed Apple among the top-tier retail stores, with sales over $16 billion globally in 2011. Apple Stores underwent a period of significant redesign, beginning in May 2016. This redesign included physical changes to the Apple Stores, such as open spaces and re-branded rooms, as well as changes in function to facilitate interaction between consumers and professionals.
Many Apple Stores are located inside shopping malls, but Apple has built several stand-alone "flagship" stores in high-profile locations. It has been granted design patents and received architectural awards for its stores' designs and construction, specifically for its use of glass staircases and cubes. The success of Apple Stores has had significant influence over other consumer electronics retailers, who have lost traffic, control, and profits due to a perceived higher quality of service and products at Apple Stores. Due to the popularity of the brand, Apple receives a large number of job applications, many of which come from young workers. Although Apple Store employees receive above-average pay, are offered money toward education and health care, and receive product discounts, there are limited or no paths of career advancement.
Market power
On March 16, 2020, France fined Apple €1.1 billion for colluding with two wholesalers to stifle competition and keep prices high by handicapping independent resellers. The arrangement created aligned prices for Apple products such as iPads and personal computers for about half the French retail market. According to the French regulators, the abuses occurred between 2005 and 2017 but were first discovered after a complaint by an independent reseller, eBizcuss, in 2012.
On August 13, 2020, Epic Games, the maker of the popular game Fortnite, sued Apple and Google after its hugely popular video game was removed from Apple's and Google's app stores. The suits came after both Apple and Google blocked the game when it introduced a direct payment system, effectively shutting the tech titans out of collecting fees. In September 2020, Epic Games founded the Coalition for App Fairness together with thirteen other companies, which aims for better conditions for the inclusion of apps in the app stores. In December 2020, Facebook agreed to assist Epic in its legal fight against Apple by providing materials and documents to Epic; Facebook stated that it would not participate directly in the lawsuit, but did commit to helping with the discovery of evidence for the 2021 trial. In the months prior to their agreement, Facebook had been feuding with Apple over the prices of paid apps and privacy rule changes. Commenting on the full-page ads Facebook placed in various newspapers in December 2020, Facebook's head of ad products, Dan Levy, said, "this is not really about privacy for them, this is about an attack on personalized ads and the consequences it's going to have on small-business owners."
Customer privacy
Apple has a notable pro-privacy stance, actively making privacy-conscious features and settings part of its conferences, promotional campaigns, and public image. With its iOS 8 mobile operating system in 2014, the company started encrypting all contents of iOS devices through users' passcodes, making it impossible at the time for the company to provide customer data in response to law enforcement requests seeking such information. With the rise in popularity of cloud storage solutions, Apple began in 2016 to run deep learning scans for facial data in photos on the user's local device and to encrypt the content before uploading it to Apple's iCloud storage system. It also introduced "differential privacy", a way to collect crowdsourced data from many users while keeping individual users anonymous, in a system that Wired described as "trying to learn as much as possible about a group while learning as little as possible about any individual in it". Users are explicitly asked if they want to participate, and can actively opt in or opt out.
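The randomized-response idea behind this kind of collection can be sketched in a few lines of Swift; this is a generic illustration of the technique with made-up parameters, not Apple's actual differential-privacy implementation:

```swift
import Foundation

// Each user flips a coin: with probability p they report the truth, otherwise
// they report a random answer. No single report is trustworthy on its own,
// but the aggregate "yes" rate can be recovered statistically.
func randomizedResponse(truth: Bool, p: Double = 0.5) -> Bool {
    Double.random(in: 0..<1) < p ? truth : Bool.random()
}

// Expected observed rate = p * trueRate + (1 - p) * 0.5; solve for trueRate.
func estimateTrueRate(reports: [Bool], p: Double = 0.5) -> Double {
    let observed = Double(reports.filter { $0 }.count) / Double(reports.count)
    return (observed - (1 - p) * 0.5) / p
}

// Simulation: 10,000 users, 30% of whom truly answer "yes".
let truths = (0..<10_000).map { _ in Double.random(in: 0..<1) < 0.3 }
let reports = truths.map { randomizedResponse(truth: $0) }
print(estimateTrueRate(reports: reports)) // ≈ 0.3
```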
With Apple's release of an update to iOS 14, Apple required all developers of iPhone, iPad, and iPod Touch applications to directly ask users for permission to track them. The feature, titled "App Tracking Transparency", received heavy criticism from Facebook, whose primary business model revolves around tracking users' data and sharing such data with advertisers so users can see more relevant ads, a technique commonly known as targeted advertising. Despite Facebook's measures, including purchasing full-page newspaper advertisements protesting App Tracking Transparency, Apple released the update in mid-spring 2021. A study by Verizon subsidiary Flurry Analytics reported that only 4% of iOS users in the United States and 12% worldwide had opted into tracking.
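From the developer's side, the permission request goes through the AppTrackingTransparency framework. A minimal sketch follows; the surrounding function and log messages are illustrative only, and a real app must also declare an NSUserTrackingUsageDescription string in its Info.plist:

```swift
import AppTrackingTransparency

// Ask the user whether this app may track them across other companies' apps
// and websites. iOS shows the system prompt only once; later calls simply
// report the stored decision.
func requestTrackingPermission() {
    ATTrackingManager.requestTrackingAuthorization { status in
        switch status {
        case .authorized:
            print("Tracking allowed; the advertising identifier is available.")
        case .denied, .restricted:
            print("Tracking refused; the advertising identifier is zeroed out.")
        case .notDetermined:
            print("The user has not answered the prompt yet.")
        @unknown default:
            print("Unhandled authorization status.")
        }
    }
}
```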
However, Apple aids law enforcement in criminal investigations by providing iCloud backups of users' devices, and the company's commitment to privacy has been questioned by its efforts to promote biometric authentication technology in its newer iPhone models, which do not have the same level of constitutional privacy as a passcode in the United States.
Prior to the release of iOS 15, Apple announced new efforts at combating child sexual abuse material on iOS and Mac platforms. Parents of minor iMessage users could be alerted if their child sent or received nude photographs. Additionally, on-device hashing would take place on media destined for upload to iCloud, and hashes would be compared to a list of known abusive images provided by law enforcement; if enough matches were found, Apple would be alerted and authorities informed. The new features received praise from law enforcement and victims' rights advocates; however, privacy advocates, including the Electronic Frontier Foundation, condemned the new features as invasive and highly prone to abuse by authoritarian governments.
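In outline, the on-device step amounts to hashing each pending upload and counting matches against a provided list before anything is reported. The following is a deliberately simplified Swift sketch using exact SHA-256 set membership; Apple's announced design instead used a perceptual NeuralHash with private set intersection and threshold secret sharing, which this sketch does not reproduce:

```swift
import CryptoKit
import Foundation

// Simplified illustration only: count uploads whose hash appears in a set of
// known image hashes, and flag the account only past a match threshold.
struct UploadScanner {
    let knownHashes: Set<Data>   // hashes of known abusive images (assumed to be provided)
    let reportingThreshold: Int  // number of matches required before flagging

    func matchCount(in uploads: [Data]) -> Int {
        uploads.reduce(0) { count, imageData in
            let digest = Data(SHA256.hash(data: imageData))
            return count + (knownHashes.contains(digest) ? 1 : 0)
        }
    }

    func shouldFlag(uploads: [Data]) -> Bool {
        matchCount(in: uploads) >= reportingThreshold
    }
}
```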
Ireland's Data Protection Commission launched a privacy investigation to examine whether Apple complied with the EU's GDPR law following an investigation into how the company processes personal data with targeted ads on its platform.
In December 2019, a report found that the iPhone 11 Pro continues tracking location and collecting user data even after users have disabled location services. In response, an Apple engineer said the Location Services icon "appears for system services that do not have a switch in settings."
According to published reports by Bloomberg News on March 30, 2022, Apple turned over data such as phone numbers, physical addresses, and IP addresses to hackers posing as law enforcement officials using forged documents. The law enforcement requests sometimes included forged signatures of real or fictional officials. When asked about the allegations, an Apple representative referred the reporter to a section of the company policy for law enforcement guidelines, which stated, "We review every data request for legal sufficiency and use advanced systems and processes to validate law enforcement requests and detect abuse."
Corporate affairs
Leadership
Senior management
As of March 16, 2021, the management of Apple Inc. includes:
Tim Cook (chief executive officer)
Jeff Williams (chief operating officer)
Luca Maestri (senior vice president and chief financial officer)
Katherine L. Adams (senior vice president and general counsel)
Eddy Cue (senior vice president – Internet Software and Services)
Craig Federighi (senior vice president – Software Engineering)
John Giannandrea (senior vice president – Machine Learning and AI Strategy)
Deirdre O'Brien (senior vice president – Retail + People)
John Ternus (senior vice president – Hardware Engineering)
Greg Joswiak (senior vice president – Worldwide Marketing)
Johny Srouji (senior vice president – Hardware Technologies)
Sabih Khan (senior vice president – Operations)
Board of directors
As of January 20, 2023, the board of directors of Apple Inc. includes:
Arthur D. Levinson (chairman)
Tim Cook (executive director and CEO)
James A. Bell
Al Gore
Alex Gorsky
Andrea Jung
Monica Lozano
Ronald Sugar
Susan Wagner
Previous CEOs
Michael Scott (1977–1981)
Mike Markkula (1981–1983)
John Sculley (1983–1993)
Michael Spindler (1993–1996)
Gil Amelio (1996–1997)
Steve Jobs (1997–2011)
Corporate culture
Apple is one of several highly successful companies founded in the 1970s that bucked the traditional notions of corporate culture. Jobs often walked around the office barefoot even after Apple became a Fortune 500 company. By the time of the "1984" television advertisement, Apple's informal culture had become a key trait that differentiated it from its competitors. According to a 2011 report in Fortune, this has resulted in a corporate culture more akin to a startup rather than a multinational corporation. In a 2017 interview, Wozniak credited watching Star Trek and attending Star Trek conventions while in his youth as a source of inspiration for his co-founding Apple.
As the company has grown and been led by a series of differently opinionated chief executives, it has arguably lost some of its original character. Nonetheless, it has maintained a reputation for fostering individuality and excellence that reliably attracts talented workers, particularly after Jobs returned to the company. Numerous Apple employees have stated that projects without Jobs's involvement often took longer than projects with it.
To recognize the best of its employees, Apple created the Apple Fellows program which awards individuals who make extraordinary technical or leadership contributions to personal computing while at the company. The Apple Fellowship has so far been awarded to individuals including Bill Atkinson, Steve Capps, Rod Holt, Alan Kay, Guy Kawasaki, Al Alcorn, Don Norman, Rich Page, Steve Wozniak, and Phil Schiller.
At Apple, employees are intended to be specialists who are not exposed to functions outside their area of expertise. Jobs saw this as a means of having "best-in-class" employees in every role. For instance, Ron Johnson—Senior Vice President of Retail Operations until November 1, 2011—was responsible for site selection, in-store service, and store layout, yet had no control of the inventory in his stores; that was managed by Tim Cook, who had a background in supply-chain management. Apple is known for strictly enforcing accountability. Each project has a "directly responsible individual", or "DRI" in Apple jargon. As an example, when iOS senior vice president Scott Forstall refused to sign Apple's official apology for numerous errors in the redesigned Maps app, he was forced to resign. Unlike other major U.S. companies, Apple provides a relatively simple compensation policy for executives that does not include perks enjoyed by other CEOs like country club fees or private use of company aircraft. The company typically grants stock options to executives every other year.
In 2015, Apple had 110,000 full-time employees. This increased to 116,000 full-time employees the next year, a notable slowdown in hiring, largely due to its first revenue decline. Apple does not specify how many of its employees work in retail, though its 2014 SEC filing put the number at approximately half of its employee base. In September 2017, Apple announced that it had over 123,000 full-time employees.
Apple has a strong culture of corporate secrecy, and has an anti-leak Global Security team that recruits from the National Security Agency, the Federal Bureau of Investigation, and the United States Secret Service.
In December 2017, Glassdoor said Apple was the 48th best place to work, having originally entered at rank 19 in 2009, peaking at rank 10 in 2012, and falling down the ranks in subsequent years.
In 2023, Bloomberg's Mark Gurman revealed the existence of Apple's Exploratory Design Group (XDG), which was working to add glucose monitoring to the Apple Watch. Gurman compared XDG to Alphabet's X "moonshot factory".
Offices
Apple Inc.'s world corporate headquarters are located in Cupertino, in the middle of California's Silicon Valley, at Apple Park, a massive circular "groundscraper" building. The building opened in April 2017 and houses more than 12,000 employees. Apple co-founder Steve Jobs wanted Apple Park to look less like a business park and more like a nature refuge, and personally appeared before the Cupertino City Council in June 2011 to make the proposal, in his final public appearance before his death.
Apple also operates from the Apple Campus (also known by its address, 1 Infinite Loop), a grouping of six buildings in Cupertino located to the west of Apple Park. The Apple Campus was the company's headquarters from its opening in 1993 until the opening of Apple Park in 2017. The buildings, located at 1–6 Infinite Loop, are arranged in a circular pattern around a central green space, in a design that has been compared to that of a university.
In addition to Apple Park and the Apple Campus, Apple occupies an additional thirty office buildings scattered throughout the city of Cupertino, including three buildings that also served as prior headquarters: "Stephens Creek Three" (1977–1978), "Bandley One" (1978–1982), and "Mariani One" (1982–1993). In total, Apple occupies almost 40% of the available office space in the city.
Apple's headquarters for Europe, the Middle East and Africa (EMEA) are located in Cork in the south of Ireland, called the Hollyhill campus. The facility, which opened in 1980, houses 5,500 people and was Apple's first location outside of the United States. Apple's international sales and distribution arms operate out of the campus in Cork.
Apple has two campuses near Austin, Texas: a campus opened in 2014 houses 500 engineers who work on Apple silicon and a campus opened in 2021 where 6,000 people work in technical support, supply chain management, online store curation, and Apple Maps data management.
The company also has several other locations in Boulder, Colorado, Culver City, California, Herzliya (Israel), London, New York, Pittsburgh, San Diego, and Seattle that each employ hundreds of people.
Litigation
Apple has been a participant in various legal proceedings and claims since it began operation. In particular, Apple is known for and promotes itself as actively and aggressively enforcing its intellectual property interests. Some litigation examples include Apple v. Samsung, Apple v. Microsoft, Motorola Mobility v. Apple Inc., and Apple Corps v. Apple Computer. Apple has also had to defend itself on numerous occasions against charges of violating intellectual property rights. Most such claims, brought by shell companies known as patent trolls, have been dismissed in the courts for lack of evidence that the patents in question were actually being used. On December 21, 2016, Nokia announced that it had filed suit against Apple in the U.S. and Germany, claiming that Apple's products infringe on Nokia's patents. In November 2017, the United States International Trade Commission announced an investigation into allegations of patent infringement regarding Apple's remote desktop technology; Aqua Connect, a company that builds remote desktop software, claimed that Apple infringed on two of its patents. In January 2022, Ericsson sued Apple over royalty payments for 5G technology.
Finances
Apple is the world's largest technology company by revenue, the world's largest technology company by total assets, and the world's second-largest mobile phone manufacturer after Samsung.
In its fiscal year ending in September 2011, Apple Inc. reported a total of $108 billion in annual revenues—a significant increase from its 2010 revenues of $65 billion—and nearly $82 billion in cash reserves. On March 19, 2012, Apple announced plans for a $2.65-per-share dividend beginning in the fourth quarter of 2012, as approved by its board of directors.
The company's worldwide annual revenue in 2013 totaled $170 billion. In May 2013, Apple entered the top ten of the Fortune 500 list of companies for the first time, rising 11 places above its 2012 ranking to take the sixth position. Apple has around US$234 billion of cash and marketable securities, of which 90% is held outside the United States for tax purposes.
Apple amassed 65% of all profits made by the eight largest worldwide smartphone manufacturers in the first quarter of 2014, according to a report by Canaccord Genuity. In the first quarter of 2015, the company garnered 92% of all such profits.
On April 30, 2017, The Wall Street Journal reported that Apple had cash reserves of $250 billion, officially confirmed by Apple as specifically $256.8 billion a few days later.
By this time, Apple was the largest publicly traded corporation in the world by market capitalization. On August 2, 2018, Apple became the first publicly traded U.S. company to reach a $1 trillion market value. Apple was ranked No. 4 on the 2018 Fortune 500 rankings of the largest United States corporations by total revenue.
In July 2022, Apple reported an 11% decline in Q3 profits compared to 2021. Its revenue in the same period rose 2% year-on-year to $83 billion, a much smaller increase than the 36% growth recorded a year earlier. The general downturn was reportedly caused by the slowing global economy and supply chain disruptions in China.
In May 2023, Apple reported a decline in sales for the first quarter of 2023: revenue fell by 3% compared with the same quarter of 2022, the company's second consecutive quarter of declining sales. The fall was attributed to the slowing economy and to consumers putting off purchases of iPads and computers due to increased pricing. iPhone sales, however, held up, with a year-on-year increase of 1.5%; according to Apple, demand for the devices was strong, particularly in Latin America and South Asia.
Taxes
Apple has created subsidiaries in low-tax places such as Ireland, the Netherlands, Luxembourg, and the British Virgin Islands to cut the taxes it pays around the world. According to The New York Times, in the 1980s Apple was among the first tech companies to designate overseas salespeople in high-tax countries in a manner that allowed the company to sell on behalf of low-tax subsidiaries on other continents, sidestepping income taxes. In the late 1980s, Apple was a pioneer of an accounting technique known as the "Double Irish with a Dutch sandwich", which reduces taxes by routing profits through Irish subsidiaries and the Netherlands and then to the Caribbean.
British Conservative Party Member of Parliament Charlie Elphicke published research on October 30, 2012, which showed that some multinational companies, including Apple Inc., were making billions of pounds of profit in the UK, but were paying an effective tax rate to the UK Treasury of only 3 percent, well below standard corporate tax rates. He followed this research by calling on the Chancellor of the Exchequer George Osborne to force these multinationals, which also included Google and The Coca-Cola Company, to state the effective rate of tax they pay on their UK revenues. Elphicke also said that government contracts should be withheld from multinationals who do not pay their fair share of UK tax.
According to a US Senate report on the company's offshore tax structure concluded in May 2013, Apple had held billions of dollars in profits in Irish subsidiaries, paying little or no tax to any government by using an unusual global tax structure. The main subsidiary, a holding company that includes Apple's retail stores throughout Europe, had not paid any corporate income tax in the preceding five years. "Apple has exploited a difference between Irish and U.S. tax residency rules", the report said.
On May 21, 2013, Apple CEO Tim Cook defended his company's tax tactics at a Senate hearing.
Apple says that it is the single largest taxpayer in the U.S., with an effective tax rate of approximately 26% as of Q2 FY2016. In an interview with the German newspaper FAZ in October 2017, Tim Cook stated that Apple was the biggest taxpayer worldwide.
In 2016, after a two-year investigation, the European Commission claimed that Apple's use of a hybrid Double Irish tax arrangement constituted "illegal state aid" from Ireland, and ordered Apple to pay 13 billion euros ($14.5 billion) in unpaid taxes, the largest corporate tax fine in history. This was later annulled, after the European General Court ruled that the Commission had provided insufficient evidence. In 2018, Apple repatriated $285 billion to America, resulting in a $38 billion tax payment spread over the following 8 years.
Charity
Apple is a partner of (PRODUCT)RED, a fundraising campaign to fight AIDS. In November 2014, Apple arranged for all App Store revenue in a two-week period to go to the fundraiser, generating more than US$20 million, and in March 2017, it released an iPhone 7 with a red color finish.
Apple contributes financially to fundraisers in times of natural disasters. In November 2012, it donated $2.5 million to the American Red Cross to aid relief efforts after Hurricane Sandy, and in 2017 it donated $5 million to relief efforts for both Hurricane Irma and Hurricane Harvey, as well as for the 2017 Central Mexico earthquake. The company has also used its iTunes platform to encourage donations in the wake of environmental disasters and humanitarian crises, such as the 2010 Haiti earthquake, the 2011 Japan earthquake, Typhoon Haiyan in the Philippines in November 2013, and the 2015 European migrant crisis. Apple emphasizes that it does not incur any processing or other fees for iTunes donations, sending 100% of the payments directly to relief efforts, though it also acknowledges that the Red Cross does not receive any personal information on the users donating and that the payments may not be tax deductible.
On April 14, 2016, Apple and the World Wide Fund for Nature (WWF) announced a partnership to "help protect life on our planet." Apple released a special page in the iTunes App Store, Apps for Earth. Under the arrangement, Apple committed that through April 24, WWF would receive 100% of the proceeds from the participating applications, covering both purchases of paid apps and in-app purchases. The Apps for Earth campaign raised more than $8 million in total proceeds to support WWF's conservation work, and WWF announced the results at WWDC 2016 in San Francisco.
During the COVID-19 pandemic, Apple's CEO Cook announced that the company would be donating "millions" of masks to health workers in the United States and Europe.
On January 13, 2021, Apple announced a $100 million "Racial Equity and Justice Initiative" to help combat institutional racism worldwide.
Environment
Apple Energy
Apple Energy, LLC is a wholly owned subsidiary of Apple Inc. that sells solar energy. Apple's solar farms in California and Nevada have been reported to provide 217.9 megawatts of solar generation capacity. In addition to the company's solar energy production, Apple has received regulatory approval to construct a landfill gas energy plant in North Carolina. Apple will use the methane emissions to generate electricity. Apple's North Carolina data center is already powered entirely with energy from renewable sources.
Energy and resources
In 2010, Climate Counts, a nonprofit organization dedicated to directing consumers toward the greenest companies, gave Apple a score of 52 points out of a possible 100, which puts Apple in their top category "Striding". This was an increase from May 2008, when Climate Counts only gave Apple 11 points out of 100, which placed the company last among electronics companies, at which time Climate Counts also labeled Apple with a "stuck icon", adding that Apple at the time was "a choice to avoid for the climate-conscious consumer".
Following a Greenpeace protest, Apple released a statement on April 17, 2012, committing to ending its use of coal and shifting to 100% renewable clean energy. By 2013, Apple was using 100% renewable energy to power their data centers. Overall, 75% of the company's power came from clean renewable sources.
In May 2015, Greenpeace evaluated the state of the Green Internet and commended Apple on their environmental practices saying, "Apple's commitment to renewable energy has helped set a new bar for the industry, illustrating in very concrete terms that a 100% renewable Internet is within its reach, and providing several models of intervention for other companies that want to build a sustainable Internet."
Apple states that 100% of its U.S. operations run on renewable energy, 100% of Apple's data centers run on renewable energy and 93% of Apple's global operations run on renewable energy. However, the facilities are connected to the local grid, which usually contains a mix of fossil and renewable sources, so Apple carbon-offsets its electricity use. The Electronic Product Environmental Assessment Tool (EPEAT) allows consumers to see the effect a product has on the environment. Each product receives a Gold, Silver, or Bronze rank depending on its efficiency and sustainability. Every Apple tablet, notebook, desktop computer, and display that EPEAT ranks achieves a Gold rating, the highest possible. Although Apple's data centers recycle water 35 times, increased activity in retail, corporate, and data centers also increased the company's water use in 2015.
During an event on March 21, 2016, Apple provided a status update on its environmental initiative to be 100% renewable in all of its worldwide operations. Lisa P. Jackson, Apple's vice president of Environment, Policy and Social Initiatives, who reports directly to CEO Tim Cook, announced that 93% of Apple's worldwide operations were powered with renewable energy. Also featured were the company's efforts to use sustainable paper in its product packaging; 99% of all paper used by Apple in product packaging comes from post-consumer recycled paper or sustainably managed forests, as the company continues its move to all-paper packaging for all of its products. Apple, working in partnership with the Conservation Fund, has preserved 36,000 acres of working forests in Maine and North Carolina. Another partnership announced is with the World Wildlife Fund to preserve up to 1 million acres of forests in China. Also featured was the company's installation of a 40 MW solar power plant in the Sichuan province of China that was tailor-made to coexist with the indigenous yaks that eat hay produced on the land: the panels are raised several feet off the ground so the yaks and their feed are unharmed grazing beneath the array. This installation alone compensates for more than all of the energy used in Apple's stores and offices in the whole of China, negating the company's energy carbon footprint in the country. In Singapore, Apple has worked with the Singaporean government to cover the rooftops of 800 buildings in the city-state with solar panels, allowing Apple's Singapore operations to be run on 100% renewable energy. Apple also introduced Liam, an advanced robotic disassembler and sorter designed by Apple engineers in California specifically for recycling outdated or broken iPhones, which reuses and recycles parts from traded-in products.
Apple announced on August 16, 2016, that Lens Technology, one of its major suppliers in China, has committed to power all its glass production for Apple with 100 percent renewable energy by 2018. The commitment is a large step in Apple's efforts to help manufacturers lower their carbon footprint in China. Apple also announced that all 14 of its final assembly sites in China are now compliant with UL's Zero Waste to Landfill validation. The standard, which started in January 2015, certifies that all manufacturing waste is reused, recycled, composted, or converted into energy (when necessary). Since the program began, nearly 140,000 metric tons of waste have been diverted from landfills.
On July 21, 2020, Apple announced its plan to become carbon neutral across its entire business, manufacturing supply chain, and product life cycle by 2030. In the next 10 years, Apple will try to lower emissions with a series of innovative actions, including: low carbon product design, expanding energy efficiency, renewable energy, process and material innovations, and carbon removal.
In April 2021, Apple said that it had started a $200 million fund in order to combat climate change by removing 1 million metric tons of carbon dioxide from the atmosphere each year.
In February 2022, the NewClimate Institute, a German environmental policy think tank, published a study evaluating the transparency and progress of the climate strategies and carbon neutrality pledges announced by 25 major multinational companies; the study found that Apple's carbon neutrality pledge and climate strategy were unsubstantiated and misleading.
Toxins
Following further campaigns by Greenpeace, in 2008, Apple became the first electronics manufacturer to eliminate all polyvinyl chloride (PVC) and brominated flame retardants (BFRs) in its complete product line. In June 2007, Apple began replacing the cold cathode fluorescent lamp (CCFL) backlit LCD displays in its computers with mercury-free LED-backlit LCD displays and arsenic-free glass, starting with the upgraded MacBook Pro. Apple publishes comprehensive and transparent information about the CO2e emissions, materials, and electricity usage of every product it currently produces or has sold in the past (where it has enough data to produce the report) on its website, allowing consumers to make informed purchasing decisions about the products it offers for sale. In June 2009, Apple's iPhone 3GS was free of PVC, arsenic, and BFRs. All Apple products now have mercury-free LED-backlit LCD displays, arsenic-free glass, and non-PVC cables. All Apple products have EPEAT Gold status and beat the latest Energy Star guidelines in each product's respective regulatory category.
In November 2011, Apple was featured in Greenpeace's Guide to Greener Electronics, which ranks electronics manufacturers on sustainability, climate and energy policy, and how "green" their products are. The company ranked fourth of fifteen electronics companies (moving up five places from the previous year) with a score of 4.6/10. Greenpeace praised Apple's sustainability, noting that the company exceeded its 70% global recycling goal in 2010. Apple continues to score well on product ratings, with all of their products now being free of PVC plastic and BFRs. However, the guide criticized Apple on the Energy criteria for not seeking external verification of its greenhouse gas emissions data, and for not setting any targets to reduce emissions. In January 2012, Apple requested that its cable maker, Volex, begin producing halogen-free USB and power cables.
Green bonds
In February 2016, Apple issued a US$1.5 billion green bond (climate bond), the first ever of its kind by a U.S. tech company. The green bond proceeds are dedicated to the financing of environmental projects.
Supply chain
Apple products were made in America in Apple-owned factories until the late 1990s; however, as a result of outsourcing initiatives in the 2000s, almost all of its manufacturing is now handled abroad. According to a report by The New York Times, Apple insiders "believe the vast scale of overseas factories, as well as the flexibility, diligence and industrial skills of foreign workers, have so outpaced their American counterparts that 'Made in the USA' is no longer a viable option for most Apple products".
The company's manufacturing, procurement, and logistics enable it to execute massive product launches without having to maintain large, profit-sapping inventories. In 2011, Apple's profit margins were 40 percent, compared with between 10 and 20 percent for most other hardware companies. Cook's catchphrase to describe his focus on the company's operational arm is: "Nobody wants to buy sour milk."
In May 2017, the company announced a $1 billion funding project for "advanced manufacturing" in the United States, and subsequently invested $200 million in Corning Inc., a manufacturer of toughened Gorilla Glass technology used in its iPhone devices. The following December, Apple's chief operating officer, Jeff Williams, told CNBC that the "$1 billion" amount was "absolutely not" the final limit on its spending, elaborating that "We're not thinking in terms of a fund limit... We're thinking about, where are the opportunities across the U.S. to help nurture companies that are making the advanced technology— and the advanced manufacturing that goes with that— that quite frankly is essential to our innovation."
Apple uses components from 43 countries. The majority of assembly is done by Taiwanese original design manufacturer firms Foxconn, Pegatron, Wistron, and Compal Electronics, with factories mostly located inside China but also in Brazil and India.
Taiwan Semiconductor Manufacturing Company (TSMC) is a pure-play semiconductor manufacturing company. It makes the majority of Apple's smartphone SoCs, with Samsung Semiconductor playing a minority role. Apple alone accounted for over 25% of TSMC's total income in 2021. Apple's smartphone SoCs are currently made exclusively by TSMC; previously, manufacturing was shared with Samsung. The M series of Apple SoCs for consumer computers and tablets is made by TSMC as well.
During the Mac's early history Apple generally refused to adopt prevailing industry standards for hardware, instead creating their own. This trend was largely reversed in the late 1990s, beginning with Apple's adoption of the PCI bus in the 7500/8500/9500 Power Macs. Apple has since joined the industry standards groups to influence the future direction of technology standards such as USB, AGP, HyperTransport, Wi-Fi, NVMe, PCIe and others in its products. FireWire is an Apple-originated standard that was widely adopted across the industry after it was standardized as IEEE 1394 and is a legally mandated port in all Cable TV boxes in the United States.
Apple has gradually expanded its efforts to get its products into the Indian market. In July 2012, during a conference call with investors, CEO Tim Cook said that he "[loves] India", but that Apple saw larger opportunities outside the region. India's requirement that 30% of products sold be manufactured in the country was described as something that "really adds cost to getting product to market". In May 2016, Apple opened an iOS app development center in Bangalore and a maps development office for 4,000 staff in Hyderabad. In March 2017, The Wall Street Journal reported that Apple would begin manufacturing iPhone models in India "over the next two months", and in May the Journal wrote that an Apple manufacturer had begun production of the iPhone SE in the country, while Apple told CNBC that the manufacturing was for a "small number" of units. In April 2019, Apple initiated manufacturing of the iPhone 7 at its Bengaluru facility, keeping in mind demand from local customers even as it sought more incentives from the government of India. At the beginning of 2020, Tim Cook announced that Apple planned to open its first physical outlet in India in 2021, with an online store to be launched by the end of the year.
During the 2022 COVID-19 protests in China, Chinese state-owned company Wingtech was reported by The Wall Street Journal to have gained an additional foothold in Apple's supply chain following protests at a Foxconn factory in the Zhengzhou Airport Economy Zone.
Worker organizations
In 2006, one complex of factories in Shenzhen, China that assembled the iPod and other items had over 200,000 workers living and working within it. Employees regularly worked more than 60 hours per week and made around $100 per month. A little over half of the workers' earnings was required to pay for rent and food from the company.
Apple immediately launched an investigation after the 2006 media report, and worked with its manufacturers to ensure acceptable working conditions. In 2007, Apple started yearly audits of all its suppliers regarding workers' rights, slowly raising standards and pruning suppliers that did not comply. Yearly progress reports have been published since then. In 2011, Apple admitted that its suppliers' child labor practices in China had worsened.
The Foxconn suicides occurred between January and November 2010, when 18 Foxconn employees attempted suicide, resulting in 14 deaths—the company was the world's largest contract electronics manufacturer, for clients including Apple, at the time. The suicides drew media attention, and employment practices at Foxconn were investigated by Apple, which issued a public statement about the suicides through company spokesperson Steven Dowling.
The statement was released after the results from the company's probe into its suppliers' labor practices were published in early 2010. Foxconn was not specifically named in the report, but Apple identified a series of serious violations of labor laws and of its own rules, and found that child labor existed in a number of factories. Apple committed to the implementation of changes following the suicides.
Also in 2010, workers in China planned to sue iPhone contractors over poisoning by a chemical used to clean LCD screens. One worker claimed that he and his coworkers had not been informed of possible occupational illnesses. After a high suicide rate in a Foxconn facility in China making iPads and iPhones, albeit a lower rate than that of China as a whole, workers were forced to sign a legally binding document guaranteeing that they would not kill themselves. Workers in factories producing Apple products have also been exposed to hexane, a neurotoxin that is a cheaper alternative to alcohol for cleaning the products.
A 2014 BBC investigation found excessive hours and other problems persisted, despite Apple's promise to reform factory practice after the 2010 Foxconn suicides. The Pegatron factory was once again the subject of review, as reporters gained access to the working conditions inside through recruitment as employees. While the BBC maintained that the experiences of its reporters showed that labor violations were continuing, Apple publicly disagreed with the BBC and stated: "We are aware of no other company doing as much as Apple to ensure fair and safe working conditions".
In December 2014, the Institute for Global Labour and Human Rights published a report which documented inhumane conditions for the 15,000 workers at a Zhen Ding Technology factory in Shenzhen, China, which serves as a major supplier of circuit boards for Apple's iPhone and iPad. According to the report, workers are pressured into 65-hour work weeks which leaves them so exhausted that they often sleep during lunch breaks. They are also made to reside in "primitive, dark and filthy dorms" where they sleep "on plywood, with six to ten workers in each crowded room." Omnipresent security personnel also routinely harass and beat the workers.
In 2019, there were reports stating that some of Foxconn's managers had used rejected parts to build iPhones and that Apple was investigating the issue.
See also
List of Apple Inc. media events
Pixar
The aspect ratio of a geometric shape is the ratio of its sizes in different dimensions. For example, the aspect ratio of a rectangle is the ratio of its longer side to its shorter side—the ratio of width to height, when the rectangle is oriented as a "landscape".
The aspect ratio is most often expressed as two integers separated by a colon (x:y), less commonly as a simple or decimal fraction. The values x and y do not represent actual widths and heights but, rather, the proportion between width and height. As an example, 8:5, 16:10, 1.6:1, and 1.6 are all ways of representing the same aspect ratio.
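For instance, a pixel resolution can be reduced to this x:y notation by dividing both dimensions by their greatest common divisor. The following is a minimal illustrative sketch; the function name and example resolutions are chosen here purely for illustration and are not taken from any standard library.

```python
from math import gcd

def aspect_ratio(width: int, height: int) -> str:
    """Reduce pixel dimensions to an x:y aspect ratio and its decimal value."""
    d = gcd(width, height)
    x, y = width // d, height // d
    return f"{x}:{y} ({width / height:.3f})"

# 1920x1200 and 1280x800 both reduce to 8:5 (i.e. 16:10, or 1.6)
print(aspect_ratio(1920, 1200))  # 8:5 (1.600)
print(aspect_ratio(1280, 800))   # 8:5 (1.600)
print(aspect_ratio(1920, 1080))  # 16:9 (1.778)
```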
In objects of more than two dimensions, such as hyperrectangles, the aspect ratio can still be defined as the ratio of the longest side to the shortest side.
Applications and uses
The term is most commonly used with reference to:
Graphic / image
Image aspect ratio
Display aspect ratio
Paper size
Standard photographic print sizes
Motion picture film formats
Standard ad size
Pixel aspect ratio
Photolithography: the aspect ratio of an etched or deposited structure is the ratio of the height of its vertical side wall to its width.
HARMST (high-aspect-ratio microstructure technology), in which high aspect ratios allow the construction of tall microstructures without slant
Tire code
Tire sizing
Turbocharger impeller sizing
Wing aspect ratio of an aircraft or bird
Astigmatism of an optical lens
Nanorod dimensions
Shape factor (image analysis and microscopy)
Finite Element Analysis
Aspect ratios of simple shapes
Rectangles
For a rectangle, the aspect ratio denotes the ratio of the width to the height of the rectangle. A square has the smallest possible aspect ratio of 1:1.
Examples:
4:3 ≈ 1.33: Some (not all) 20th century computer monitors (VGA, XGA, etc.), standard-definition television
√2:1 ≈ 1.414: international paper sizes (ISO 216)
3:2 = 1.5: 35mm still camera film, iPhone (until iPhone 5) displays
16:10 = 1.6: commonly used widescreen computer displays (WXGA)
Φ:1 = 1.618...: golden ratio, close to 16:10
5:3 ≈ 1.67: super 16 mm, a standard film gauge in many European countries
16:9 ≈ 1.78: widescreen TV and most laptops
2:1 = 2: dominoes
64:27 ≈ 2.37: ultra-widescreen, 21:9
32:9 ≈ 3.56: super ultra-widescreen
Ellipses
For an ellipse, the aspect ratio denotes the ratio of the major axis to the minor axis. An ellipse with an aspect ratio of 1:1 is a circle.
Aspect ratios of general shapes
In geometry, there are several alternative definitions to aspect ratios of general compact sets in a d-dimensional space:
The diameter-width aspect ratio (DWAR) of a compact set is the ratio of its diameter to its width. A circle has the minimal DWAR, which is 1. A square has a DWAR of √2.
The cube-volume aspect ratio (CVAR) of a compact set is the d-th root of the ratio of the d-volume of the smallest enclosing axes-parallel d-cube to the set's own d-volume. A square has the minimal CVAR, which is 1. A circle has a CVAR of 2/√π ≈ 1.13. An axis-parallel rectangle of width W and height H, where W > H, has a CVAR of √(W/H).
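As a concrete illustration of these two definitions, the sketch below computes the DWAR and CVAR of an axis-parallel rectangle directly from the descriptions above; the function names are chosen here for illustration only.

```python
from math import sqrt, pi

def rectangle_dwar(w: float, h: float) -> float:
    """Diameter-width aspect ratio: the diagonal (diameter) divided by the shorter side (width)."""
    long_side, short_side = max(w, h), min(w, h)
    return sqrt(long_side**2 + short_side**2) / short_side

def rectangle_cvar(w: float, h: float) -> float:
    """Cube-volume aspect ratio in 2D: sqrt(area of smallest enclosing axis-parallel square / own area)."""
    long_side, short_side = max(w, h), min(w, h)
    # The smallest enclosing axis-parallel square has side equal to the long side,
    # so this simplifies to sqrt(long_side / short_side).
    return sqrt(long_side**2 / (long_side * short_side))

print(rectangle_dwar(1, 1))  # square: sqrt(2) ≈ 1.414
print(rectangle_cvar(1, 1))  # square: 1.0 (the minimal CVAR)
print(rectangle_cvar(2, 1))  # 2:1 rectangle: sqrt(2) ≈ 1.414
print(2 / sqrt(pi))          # a circle's CVAR ≈ 1.128, for comparison
```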
If the dimension d is fixed, then all reasonable definitions of aspect ratio are equivalent to within constant factors.
Notations
Aspect ratios are mathematically expressed as x:y (pronounced "x-to-y").
Cinematographic aspect ratios are usually denoted as a (rounded) decimal multiple of width vs unit height, while photographic and videographic aspect ratios are usually defined and denoted by whole number ratios of width to height. In digital images there is a subtle distinction between the display aspect ratio (the image as displayed) and the storage aspect ratio (the ratio of pixel dimensions); see Distinctions.
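One way to make that distinction concrete: for non-square pixels, the displayed aspect ratio equals the stored pixel-dimension ratio multiplied by the pixel aspect ratio. The short sketch below assumes example values typical of PAL standard-definition video, used here purely for illustration.

```python
from fractions import Fraction

def display_aspect_ratio(stored_width_px: int, stored_height_px: int, pixel_aspect: Fraction) -> Fraction:
    """Display aspect ratio = storage aspect ratio x pixel aspect ratio."""
    storage_ar = Fraction(stored_width_px, stored_height_px)
    return storage_ar * pixel_aspect

# A 704x576 frame with a 12:11 pixel aspect ratio displays at 4:3.
print(display_aspect_ratio(704, 576, Fraction(12, 11)))  # 4/3
```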
See also
Axial ratio
Ratio
Equidimensional ratios in 3D
List of film formats
Squeeze mapping
Scale (ratio)
Vertical orientation
Advanced Micro Devices, Inc., commonly abbreviated as AMD, is an American multinational semiconductor company based in Santa Clara, California, that develops computer processors and related technologies for business and consumer markets.
The company was founded in 1969 by Jerry Sanders and a group of other technology professionals. AMD's early products were primarily memory chips and other components for computers. The company later expanded into the microprocessor market, competing with Intel, its main rival in the industry. In the early 2000s, AMD experienced significant growth and success, thanks in part to its strong position in the PC market and the success of its Athlon and Opteron processors. However, the company faced challenges in the late 2000s and early 2010s, as it struggled to keep up with Intel in the race to produce faster and more powerful processors. In the late 2010s, AMD regained some of its market share thanks to the success of its Ryzen processors which are now widely regarded as superior to Intel products in business applications including cloud applications. AMD's processors are used in a wide range of computing devices, including personal computers, servers, laptops, and gaming consoles. While it initially manufactured its own processors, the company later outsourced its manufacturing, a practice known as going fabless, after GlobalFoundries was spun off in 2009.
AMD's main products include microprocessors, motherboard chipsets, embedded processors, graphics processors, and FPGAs for servers, workstations, personal computers, and embedded system applications. The company has also expanded into new markets, such as the data center and gaming markets, and has announced plans to enter the high-performance computing market.
History
First twelve years
Advanced Micro Devices was formally incorporated by Jerry Sanders, along with seven of his colleagues from Fairchild Semiconductor, on May 1, 1969. Sanders, an electrical engineer who was the director of marketing at Fairchild, had, like many Fairchild executives, grown frustrated with the increasing lack of support, opportunity, and flexibility within the company. He later decided to leave to start his own semiconductor company, following the footsteps of Robert Noyce (developer of the first silicon integrated circuit at Fairchild in 1959) and Gordon Moore, who together founded the semiconductor company Intel in July 1968.
In September 1969, AMD moved from its temporary location in Santa Clara to Sunnyvale, California. To immediately secure a customer base, AMD initially became a second source supplier of microchips designed by Fairchild and National Semiconductor. AMD first focused on producing logic chips. The company guaranteed quality control to United States Military Standard, an advantage in the early computer industry since unreliability in microchips was a distinct problem that customers – including computer manufacturers, the telecommunications industry, and instrument manufacturers – wanted to avoid.
In November 1969, the company manufactured its first product: the Am9300, a 4-bit MSI shift register, which began selling in 1970. Also in 1970, AMD produced its first proprietary product, the Am2501 logic counter, which was highly successful. Its bestselling product in 1971 was the Am2505, the fastest multiplier available.
In 1971, AMD entered the RAM chip market, beginning with the Am3101, a 64-bit bipolar RAM. That year AMD also greatly increased the sales volume of its linear integrated circuits, and by year-end the company's total annual sales reached US$4.6 million.
AMD went public in September 1972. The company was a second source for Intel MOS/LSI circuits by 1973, with products such as Am14/1506 and Am14/1507, dual 100-bit dynamic shift registers. By 1975, AMD was producing 212 products – of which 49 were proprietary, including the Am9102 (a static N-channel 1024-bit RAM) and three low-power Schottky MSI circuits: Am25LS07, Am25LS08, and Am25LS09.
Intel had created the first microprocessor, its 4-bit 4004, in 1971. By 1975, AMD entered the microprocessor market with the Am9080, a reverse-engineered clone of the Intel 8080, and the Am2900 bit-slice microprocessor family. When Intel began installing microcode in its microprocessors in 1976, it entered into a cross-licensing agreement with AMD, which was granted a copyright license to the microcode in its microprocessors and peripherals, effective October 1976.
In 1977, AMD entered into a joint venture with Siemens, a German engineering conglomerate wishing to enhance its technology expertise and enter the American market. Siemens purchased 20% of AMD's stock, giving the company an infusion of cash to increase its product lines. The two companies also jointly established Advanced Micro Computers (AMC), located in Silicon Valley and in Germany, allowing AMD to enter the microcomputer development and manufacturing field, in particular based on AMD's second-source Zilog Z8000 microprocessors. When the two companies' vision for Advanced Micro Computers diverged, AMD bought out Siemens' stake in the American division in 1979. AMD closed Advanced Micro Computers in late 1981 after switching focus to manufacturing second-source Intel x86 microprocessors.
Total sales in fiscal year 1978 topped $100 million, and in 1979, AMD debuted on the New York Stock Exchange. In 1979, production also began on AMD's new semiconductor fabrication plant in Austin, Texas; the company already had overseas assembly facilities in Penang and Manila, and began construction on a fabrication plant in San Antonio in 1981. In 1980, AMD began supplying semiconductor products for telecommunications, an industry undergoing rapid expansion and innovation.
Technology exchange agreement with Intel
Intel had introduced the first x86 microprocessors in 1978. In 1981, IBM created its PC, and wanted Intel's x86 processors, but only under the condition that Intel also provide a second-source manufacturer for its patented x86 microprocessors. Intel and AMD entered into a 10-year technology exchange agreement, first signed in October 1981 and formally executed in February 1982. The terms of the agreement were that each company could acquire the right to become a second-source manufacturer of semiconductor products developed by the other; that is, each party could "earn" the right to manufacture and sell a product developed by the other, if agreed to, by exchanging the manufacturing rights to a product of equivalent technical complexity. The technical information and licenses needed to make and sell a part would be exchanged for a royalty to the developing company. The 1982 agreement also extended the 1976 AMD–Intel cross-licensing agreement through 1995. The agreement included the right to invoke arbitration of disagreements, and after five years the right of either party to end the agreement with one year's notice. The main result of the 1982 agreement was that AMD became a second-source manufacturer of Intel's x86 microprocessors and related chips, and Intel provided AMD with database tapes for its 8086, 80186, and 80286 chips. However, in the event of a bankruptcy or takeover of AMD, the cross-licensing agreement would be effectively canceled.
Beginning in 1982, AMD began volume-producing second-source Intel-licensed 8086, 8088, 80186, and 80188 processors, and by 1984, its own Am286 clone of Intel's 80286 processor, for the rapidly growing market of IBM PCs and IBM clones. It also continued its successful concentration on proprietary bipolar chips.
The company continued to spend greatly on research and development, and created the world's first 512K EPROM in 1984. That year, AMD was listed in the book The 100 Best Companies to Work for in America, and later made the Fortune 500 list for the first time in 1985.
By mid-1985, the microchip market experienced a severe downturn, mainly due to long-term aggressive trade practices (dumping) from Japan, but also due to a crowded and non-innovative chip market in the United States. AMD rode out the mid-1980s crisis by aggressively innovating and modernizing, devising the Liberty Chip program of designing and manufacturing one new chip or chipset per week for 52 weeks in fiscal year 1986, and by heavily lobbying the U.S. government until sanctions and restrictions were put in place to prevent predatory Japanese pricing. During this time, AMD withdrew from the DRAM market, and made some headway into the CMOS market, which it had lagged in entering, having focused instead on bipolar chips.
AMD had some success in the mid-1980s with the AMD7910 and AMD7911 "World Chip" FSK modem, one of the first multi-standard devices that covered both Bell and CCITT tones at up to 1200 baud half duplex or 300/300 full duplex. Beginning in 1986, AMD embraced the perceived shift toward RISC with their own AMD Am29000 (29k) processor; the 29k survived as an embedded processor. The company also increased its EPROM memory market share in the late 1980s. Throughout the 1980s, AMD was a second-source supplier of Intel x86 processors. In 1991, it introduced its own 386-compatible Am386, an AMD-designed chip. Creating its own chips, AMD began to compete directly with Intel.
AMD had a large, successful flash memory business, even during the dotcom bust. In 2003, to divest some manufacturing and aid its overall cash flow, which was under duress from aggressive microprocessor competition from Intel, AMD spun off its flash memory business and manufacturing into Spansion, a joint venture with Fujitsu, which had been co-manufacturing flash memory with AMD since 1993. In December 2005, AMD divested itself of Spansion to focus on the microprocessor market, and Spansion went public in an IPO.
Acquisition of ATI, spin-off of GlobalFoundries, and acquisition of Xilinx
On July 24, 2006, AMD announced its acquisition of the Canadian 3D graphics card company ATI Technologies. AMD paid $4.3 billion and 58 million shares of its capital stock, for a total of approximately $5.4 billion. The transaction was completed on October 25, 2006. On August 30, 2010, AMD announced that it would retire the ATI brand name for its graphics chipsets in favor of the AMD brand name.
In October 2008, AMD announced plans to spin off manufacturing operations in the form of GlobalFoundries Inc., a multibillion-dollar joint venture with Advanced Technology Investment Co., an investment company formed by the government of Abu Dhabi. The partnership and spin-off gave AMD an infusion of cash and allowed it to focus solely on chip design. To assure the Abu Dhabi investors of the new venture's success, AMD's CEO Hector Ruiz stepped down in July 2008, while remaining executive chairman, in preparation for becoming chairman of GlobalFoundries in March 2009. President and COO Dirk Meyer became AMD's CEO. Recessionary losses necessitated AMD cutting 1,100 jobs in 2009.
In August 2011, AMD announced that former Lenovo executive Rory Read would be joining the company as CEO, replacing Meyer. In November 2011, AMD announced plans to lay off more than 10% (1,400) of its employees from across all divisions worldwide. In October 2012, it announced plans to lay off an additional 15% of its workforce to reduce costs in the face of declining sales revenue.
AMD acquired the low-power server manufacturer SeaMicro in early 2012, with an eye to bringing out an Arm64 server chip.
On October 8, 2014, AMD announced that Rory Read had stepped down after three years as president and chief executive officer. He was succeeded by Lisa Su, a key lieutenant who had been serving as chief operating officer since June.
On October 16, 2014, AMD announced a new restructuring plan along with its Q3 results. Effective July 1, 2014, AMD reorganized into two business groups: Computing and Graphics, which primarily includes desktop and notebook processors and chipsets, discrete GPUs, and professional graphics; and Enterprise, Embedded, and Semi-Custom, which primarily includes server and embedded processors, dense servers, semi-custom SoC products (including solutions for gaming consoles), engineering services, and royalties. As part of this restructuring, AMD announced that 7% of its global workforce would be laid off by the end of 2014.
After the GlobalFoundries spin-off and subsequent layoffs, AMD was left with significant vacant space at 1 AMD Place, its aging Sunnyvale headquarters office complex. In August 2016, AMD's 47 years in Sunnyvale came to a close when it signed a lease with the Irvine Company for a new 220,000 sq. ft. headquarters building in Santa Clara. AMD's new location at Santa Clara Square faces the headquarters of archrival Intel across the Bayshore Freeway and San Tomas Aquino Creek. Around the same time, AMD also agreed to sell 1 AMD Place to the Irvine Company. In April 2019, the Irvine Company secured approval from the Sunnyvale City Council of its plans to demolish 1 AMD Place and redevelop the entire 32-acre site into townhomes and apartments.
In October 2020, AMD announced that it was acquiring Xilinx in an all-stock transaction. The acquisition was completed in February 2022, with an estimated acquisition price of $50 billion.
In October 2023, AMD acquired an open-source AI software provider, Nod.ai, to bolster its AI software ecosystem.
Products
CPUs and APUs
IBM PC and the x86 architecture
In February 1982, AMD signed a contract with Intel, becoming a licensed second-source manufacturer of 8086 and 8088 processors. IBM wanted to use the Intel 8088 in its IBM PC, but its policy at the time was to require at least two sources for its chips. AMD later produced the Am286 under the same arrangement. In 1984, Intel internally decided to no longer cooperate with AMD in supplying product information to shore up its advantage in the marketplace, and delayed and eventually refused to convey the technical details of the Intel 80386. In 1987, AMD invoked arbitration over the issue, and Intel reacted by canceling the 1982 technological-exchange agreement altogether. After three years of testimony, AMD eventually won in arbitration in 1992, but Intel disputed this decision. Another long legal dispute followed, ending in 1994 when the Supreme Court of California sided with the arbitrator and AMD.
In 1990, Intel countersued AMD, renegotiating AMD's right to use derivatives of Intel's microcode for its cloned processors. In the face of uncertainty during the legal dispute, AMD was forced to develop clean-room versions of Intel's code for its 386 and 486 processors, the former long after Intel had released its own 386 in 1985. In March 1991, AMD released the Am386, its clone of the Intel 386 processor. By October of the same year it had sold one million units.
In 1993, AMD introduced the first of the Am486 family of processors, which proved popular with a large number of original equipment manufacturers, including Compaq, which signed an exclusive agreement using the Am486. The Am5x86, another Am486-based processor, was released in November 1995, and continued AMD's success as a fast, cost-effective processor.
Finally, in an agreement effective 1996, AMD received the rights to the microcode in Intel's x386 and x486 processor families, but not the rights to the microcode in the following generations of processors.
K5, K6, Athlon, Duron, and Sempron
AMD's first in-house x86 processor was the K5, launched in 1996. The "K" in its name was a reference to Kryptonite, the only substance known to harm the comic book character Superman; this was a jab at Intel's dominance of the market, casting Intel as Superman. The number "5" was a reference to the fifth generation of x86 processors; rival Intel had previously introduced its line of fifth-generation x86 processors as Pentium because the U.S. Patent and Trademark Office had ruled that mere numbers could not be trademarked.
In 1996, AMD purchased NexGen, specifically for the rights to their Nx series of x86-compatible processors. AMD gave the NexGen design team their own building, left them alone, and gave them time and money to rework the Nx686. The result was the K6 processor, introduced in 1997. Although it was based on Socket 7, variants such as K6-III/450 were faster than Intel's Pentium II (sixth-generation processor).
The K7 was AMD's seventh-generation x86 processor, making its debut under the brand name Athlon on June 23, 1999. Unlike previous AMD processors, it could not be used on the same motherboards as Intel's, due to licensing issues surrounding Intel's Slot 1 connector; it instead used a Slot A connector, derived from the Alpha processor bus. The Duron was a lower-cost and limited version of the Athlon (64 KB instead of 256 KB L2 cache) in a 462-pin socketed PGA (Socket A) or soldered directly onto the motherboard. The Sempron was released as a lower-cost Athlon XP, replacing the Duron in the Socket A PGA era. It has since been migrated upward to all new sockets, up to AM3.
On October 9, 2001, the Athlon XP was released. On February 10, 2003, the Athlon XP with 512KB L2 Cache was released.
Athlon 64, Opteron and Phenom
The K8 was a major revision of the K7 architecture, with the most notable features being the addition of a 64-bit extension to the x86 instruction set (called x86-64, AMD64, or x64), the incorporation of an on-chip memory controller, and the implementation of an extremely high-performance point-to-point interconnect called HyperTransport, as part of the Direct Connect Architecture. The technology was initially launched as the Opteron server-oriented processor on April 22, 2003. Shortly thereafter, it was incorporated into a product for desktop PCs, branded Athlon 64.
On April 21, 2005, AMD released the first dual-core Opteron, an x86-based server CPU. A month later, it released the Athlon 64 X2, the first desktop-based dual-core processor family. In May 2007, AMD abandoned the string "64" in its dual-core desktop product branding, becoming Athlon X2, downplaying the significance of 64-bit computing in its processors. Further updates involved improvements to the microarchitecture, and a shift of the target market from mainstream desktop systems to value dual-core desktop systems. In 2008, AMD started to release dual-core Sempron processors exclusively in China, branded as the Sempron 2000 series, with lower HyperTransport speed and smaller L2 cache. AMD completed its dual-core product portfolio for each market segment.
In September 2007, AMD released the first server Opteron K10 processors, followed in November by the Phenom processor for desktop. K10 processors came in dual-core, triple-core, and quad-core versions, with all cores on a single die. AMD released a new platform codenamed "Spider", which used the new Phenom processor, as well as an R770 GPU and a 790 GX/FX chipset from the AMD 700 chipset series. However, the Spider platform's processors were built on a 65 nm process, which was uncompetitive with Intel's smaller and more power-efficient 45 nm process.
In January 2009, AMD released a new processor line dubbed Phenom II, a refresh of the original Phenom built using the 45 nm process. AMD's new platform, codenamed "Dragon", used the new Phenom II processor, and an ATI R770 GPU from the R700 GPU family, as well as a 790 GX/FX chipset from the AMD 700 chipset series. The Phenom II came in dual-core, triple-core and quad-core variants, all using the same die, with cores disabled for the triple-core and dual-core versions. The Phenom II resolved issues that the original Phenom had, including a low clock speed, a small L3 cache, and a Cool'n'Quiet bug that decreased performance. The Phenom II cost less but was not performance-competitive with Intel's mid-to-high-range Core 2 Quads. The Phenom II also enhanced its predecessor's memory controller, allowing it to use DDR3 in a new native socket AM3, while maintaining backward compatibility with AM2+, the socket used for the Phenom, and allowing the use of the DDR2 memory that was used with the platform.
In April 2010, AMD released a new Phenom II Hexa-core (6-core) processor codenamed "Thuban". This was a totally new die based on the hexa-core "Istanbul" Opteron processor. It included AMD's "turbo core" technology, which allows the processor to automatically switch from 6 cores to 3 faster cores when more pure speed is needed.
The Magny-Cours and Lisbon server parts were released in 2010. The Magny-Cours part came in 8- to 12-core versions and the Lisbon part in 4- and 6-core versions. Magny-Cours focused on raw performance, while Lisbon focused on performance per watt. Magny-Cours is an MCM (multi-chip module) containing two hexa-core "Istanbul" Opteron dies; it used the new Socket G34 for dual- and quad-socket systems and was marketed as the Opteron 61xx series. Lisbon used Socket C32, certified for single- or dual-socket use, and was marketed as the Opteron 41xx series. Both were built on a 45 nm SOI process.
Fusion becomes the AMD APU
Following AMD's 2006 acquisition of Canadian graphics company ATI Technologies, an initiative codenamed Fusion was announced to integrate a CPU and GPU together on some of AMD's microprocessors, including a built-in PCI Express link to accommodate separate PCI Express peripherals, eliminating the northbridge chip from the motherboard. The initiative intended to move some of the processing originally done on the CPU (e.g. floating-point unit operations) to the GPU, which is better optimized for some calculations. The Fusion was later renamed the AMD APU (Accelerated Processing Unit).
Llano, AMD's first APU built for laptops, was the second APU released and was targeted at the mainstream market. It incorporated a CPU and GPU on the same die, as well as northbridge functions, and used "Socket FM1" with DDR3 memory. The CPU part of the processor was based on the Phenom II "Deneb" processor. AMD suffered an unexpected decrease in revenue based on production problems for the Llano. AMD APUs are now commonly used in laptops running Windows 7 and Windows 8. These include AMD's budget APUs, the E1 and E2, and the mainstream Vision A-series (the "A" standing for "accelerated"), which competes with Intel's Core i series. The A-series ranges from the lower-performance A4 chipset to the A6, A8, and A10. These all incorporate next-generation Radeon graphics, with the A4 utilizing the base Radeon HD chip and the rest using a Radeon R4 graphics card, with the exception of the highest-model A10 (A10-7300), which uses an R6 graphics card.
New microarchitectures
High-power, high-performance Bulldozer cores
Bulldozer was AMD's microarchitecture codename for server and desktop AMD FX processors, first released on October 12, 2011. This Family 15h microarchitecture was the successor to the Family 10h (K10) microarchitecture design. Bulldozer was a clean-sheet design, not a development of earlier processors. The core was specifically aimed at computing products with TDPs of 10 to 125 W. AMD claimed dramatic performance-per-watt efficiency improvements in high-performance computing (HPC) applications with Bulldozer cores. While hopes were high that Bulldozer would make AMD performance-competitive with Intel once more, most benchmarks were disappointing. In some cases the new Bulldozer products were slower than the K10 models they were built to replace.
The Piledriver microarchitecture was the 2012 successor to Bulldozer, increasing clock speeds and performance relative to its predecessor. Piledriver would be released in AMD FX, APU, and Opteron product lines. Piledriver was subsequently followed by the Steamroller microarchitecture in 2013. Used exclusively in AMD's APUs, Steamroller focused on greater parallelism.
In 2015, the Excavator microarchitecture replaced Piledriver. Expected to be the last microarchitecture of the Bulldozer series, Excavator focused on improved power efficiency.
Low-power Cat cores
The Bobcat microarchitecture was revealed during a speech from AMD executive vice-president Henri Richard at Computex 2007 and was put into production during the first quarter of 2011. Based on the difficulty of competing in the x86 market with a single core optimized for the 10–100 W range, AMD had developed a simpler core with a target range of 1–10 watts. In addition, it was believed that the core could migrate into the hand-held space if power consumption could be reduced to less than 1 W.
Jaguar is a microarchitecture codename for Bobcat's successor, released in 2013, that is used in various APUs from AMD aimed at the low-power/low-cost market. Jaguar and its derivatives would go on to be used in the custom APUs of the PlayStation 4, Xbox One, PlayStation 4 Pro, Xbox One S, and Xbox One X. Jaguar was later followed by the Puma microarchitecture in 2014.
ARM architecture-based designs
In 2012, AMD announced it was working on ARM products, both as a semi-custom product and server product. The initial server product was announced as the Opteron A1100 in 2014, an 8-core Cortex-A57 based ARMv8-A SoC, and was expected to be followed by an APU incorporating a Graphics Core Next GPU. However, the Opteron A1100 was not released until 2016, with the delay attributed to adding software support. The A1100 was also criticized for not having support from major vendors upon its release.
In 2014, AMD also announced the K12 custom core for release in 2016. While being ARMv8-A instruction set architecture compliant, the K12 was expected to be entirely custom-designed, targeting the server, embedded, and semi-custom markets. While ARM architecture development continued, products based on K12 were subsequently delayed with no release planned. Development of AMD's x86-based Zen microarchitecture was preferred.
Zen-based CPUs and APUs
Zen is a new architecture for x86-64 based Ryzen series CPUs and APUs, introduced in 2017 by AMD and built from the ground up by a team led by Jim Keller, beginning with his arrival in 2012 and taping out before his departure in September 2015. One of AMD's primary goals with Zen was an IPC increase of at least 40%; in February 2017, AMD announced that it had actually achieved a 52% increase. Processors made on the Zen architecture are built on the 14 nm FinFET node and have a renewed focus on single-core performance and HSA compatibility. Previous processors from AMD were either built on the 32 nm process ("Bulldozer" and "Piledriver" CPUs) or the 28 nm process ("Steamroller" and "Excavator" APUs). Because of this, Zen is much more energy efficient. The Zen architecture is the first to encompass CPUs and APUs from AMD built for a single socket (Socket AM4). Also new for this architecture is the implementation of simultaneous multithreading (SMT), something Intel has had for years on some of its processors with its proprietary Hyper-Threading implementation of SMT. This is a departure from the "Clustered MultiThreading" design introduced with the Bulldozer architecture. Zen also has support for DDR4 memory.

AMD released the Zen-based high-end Ryzen 7 "Summit Ridge" series CPUs on March 2, 2017, mid-range Ryzen 5 series CPUs on April 11, 2017, and entry-level Ryzen 3 series CPUs on July 27, 2017. AMD later released the Epyc line of Zen-derived server processors for 1P and 2P systems. In October 2017, AMD released Zen-based APUs as Ryzen Mobile, incorporating Vega graphics cores. In January 2018, AMD announced its plans for the second generation of Ryzen. AMD launched CPUs with the 12 nm Zen+ microarchitecture in April 2018, following up with the 7 nm Zen 2 microarchitecture in June 2019, including an update to the Epyc line with new processors using the Zen 2 microarchitecture in August 2019, with Zen 3 slated for release in Q3 2020. As of 2019, AMD's Ryzen processors were reported to outsell Intel's consumer desktop processors.

At CES 2020, AMD announced its Ryzen Mobile 4000 series, described as the first 7 nm x86 mobile processor, the first 7 nm 8-core (also 16-thread) high-performance mobile processor, and the first 8-core (also 16-thread) processor for ultrathin laptops; this generation is still based on the Zen 2 architecture. In October 2020, AMD announced new processors based on the Zen 3 architecture. On PassMark's single-thread performance test, the Ryzen 5 5600X bested all other CPUs besides the Ryzen 9 5950X. In August 2022, AMD announced its initial lineup of CPUs based on the new Zen 4 architecture.
The Steam Deck, PlayStation 5, Xbox Series X and Series S all use chips based on the Zen 2 microarchitecture, with proprietary tweaks and different configurations in each system's implementation than AMD sells in its own commercially available APUs.
Graphics products and GPUs
ATI prior to AMD acquisition
Radeon within AMD
In 2008, the ATI division of AMD released the TeraScale microarchitecture, implementing a unified shader model. This design replaced the fixed-function hardware of earlier graphics cards with multipurpose, programmable shaders. Initially released as part of the GPU for the Xbox 360, this technology would go on to be used in Radeon-branded HD 2000 parts. Three generations of TeraScale would be designed and used in parts from 2008 to 2014.
Combined GPU and CPU divisions
In a 2009 restructuring, AMD merged the CPU and GPU divisions to support the company's APUs, which fused both graphics and general-purpose processing. In 2011, AMD released the successor to TeraScale, Graphics Core Next (GCN). This new microarchitecture emphasized GPGPU compute capability in addition to graphics processing, with a particular aim of supporting heterogeneous computing on AMD's APUs. GCN's reduced instruction set ISA allowed for significantly increased compute capability over TeraScale's very long instruction word ISA. Since GCN's introduction with the HD 7970, five generations of the GCN architecture have been produced through at least 2017.
Radeon Technologies Group
In September 2015, AMD separated the graphics technology division of the company into an independent internal unit called the Radeon Technologies Group (RTG) headed by Raja Koduri. This gave the graphics division of AMD autonomy in product design and marketing. The RTG then went on to create and release the Polaris and Vega microarchitectures released in 2016 and 2017, respectively. In particular the Vega, or fifth generation GCN, microarchitecture includes a number of major revisions to improve performance and compute capabilities.
In November 2017, Raja Koduri left RTG, and CEO and President Lisa Su took his position. In January 2018, it was reported that two industry veterans had joined RTG: Mike Rayfield as senior vice president and general manager of RTG, and David Wang as senior vice president of engineering for RTG. In January 2020, AMD announced that its second-generation RDNA graphics architecture was in development, with the aim of competing with the Nvidia RTX graphics products for performance leadership. In October 2020, AMD announced its new RX 6000 series GPUs, its first high-end products based on RDNA 2 and capable of handling ray tracing natively, aiming to challenge Nvidia's RTX 3000 GPUs.
Semi-custom and game console products
In 2012, AMD's then CEO Rory Read began a program to offer semi-custom designs. Rather than AMD simply designing and offering a single product, potential customers could work with AMD to design a custom chip based on AMD's intellectual property. Customers pay a non-recurring engineering fee for design and development, and a purchase price for the resulting semi-custom products. In particular, AMD noted their unique position of offering both x86 and graphics intellectual property. These semi-custom designs would have design wins as the APUs in the PlayStation 4 and Xbox One and the subsequent PlayStation 4 Pro, Xbox One S, Xbox One X, Xbox Series X/S, and PlayStation 5. Financially, these semi-custom products would represent a majority of the company's revenue in 2016. In November 2017, AMD and Intel announced that Intel would market a product combining in a single package an Intel Core CPU, a semi-custom AMD Radeon GPU, and HBM2 memory.
Other hardware
AMD motherboard chipsets
Before the launch of Athlon 64 processors in 2003, AMD designed chipsets for its processors spanning the K6 and K7 processor generations. The chipsets include the AMD-640, AMD-751, and the AMD-761 chipsets. The situation changed in 2003 with the release of Athlon 64 processors, when AMD chose not to further design its own chipsets for its desktop processors and instead opened the desktop platform to allow other firms to design chipsets. This was the "Open Platform Management Architecture", with ATI, VIA and SiS developing their own chipsets for Athlon 64 processors and later Athlon 64 X2 and Athlon 64 FX processors, including the Quad FX platform chipset from Nvidia.
The initiative went further with the release of Opteron server processors, as AMD stopped designing server chipsets in 2004 after releasing the AMD-8111 chipset, and again opened the server platform for other firms to develop chipsets for Opteron processors. Nvidia and Broadcom subsequently became the sole firms designing server chipsets for Opteron processors.
As the company completed the acquisition of ATI Technologies in 2006, the firm gained the ATI chipset design team, which had previously designed the Radeon Xpress 200 and the Radeon Xpress 3200 chipsets. AMD then renamed the chipsets for AMD processors under AMD branding (for instance, the CrossFire Xpress 3200 chipset was renamed the AMD 580X CrossFire chipset). In February 2007, AMD announced the first AMD-branded chipset since 2004 with the release of the AMD 690G chipset (previously under the development codename RS690), targeted at mainstream IGP computing. It was the industry's first to implement an HDMI 1.2 port on motherboards, and it shipped more than a million units. While ATI had aimed at releasing an Intel IGP chipset, the plan was scrapped and the inventories of the Radeon Xpress 1250 (codenamed RS600, sold under the ATI brand) were sold to two OEMs, Abit and ASRock. Although AMD stated the firm would still produce Intel chipsets, Intel had not granted the license of the FSB to ATI.
On November 15, 2007, AMD announced a new chipset series portfolio, the AMD 7-Series chipsets, covering from the enthusiast multi-graphics segment to the value IGP segment, to replace the AMD 480/570/580 chipsets and AMD 690 series chipsets, marking AMD's first enthusiast multi-graphics chipset. Discrete graphics chipsets were launched on November 15, 2007, as part of the codenamed Spider desktop platform, and IGP chipsets were launched at a later time in spring 2008 as part of the codenamed Cartwheel platform.
AMD returned to the server chipset market with the AMD 800S series server chipsets. The series includes support for up to six SATA 6.0 Gbit/s ports, the C6 power state (which is featured in Fusion processors), and AHCI 1.2 with SATA FIS-based switching support. This chipset family supports Phenom processors and the Quad FX enthusiast platform (890FX), as well as IGP variants (890GX).
With the advent of AMD's APUs in 2011, traditional northbridge features such as the connection to graphics and the PCI Express controller were incorporated into the APU die. Accordingly, APUs were connected to a single chip chipset, renamed the Fusion Controller Hub (FCH), which primarily provided southbridge functionality.
AMD released new chipsets in 2017 to support the release of their new Ryzen products. As the Zen microarchitecture already includes much of the northbridge connectivity, the AM4-based chipsets primarily varied in the number of additional PCI Express lanes, USB connections, and SATA connections available. These AM4 chipsets were designed in conjunction with ASMedia.
Embedded products
Embedded CPUs
In the early 1990s, AMD began marketing a series of embedded system-on-a-chip (SoC) products called AMD Élan, starting with the SC300 and SC310. Both combine a 32-bit, low-voltage Am386SX CPU running at 25 MHz or 33 MHz with a memory controller, PC/AT peripheral controllers, a real-time clock, PLL clock generators and an ISA bus interface. The SC300 additionally integrates two PC Card slots and a CGA-compatible LCD controller. They were followed in 1996 by the SC4xx types, which supported the VESA Local Bus and used the Am486 at clock speeds of up to 100 MHz; an SC450 running at 33 MHz was used, for example, in the Nokia 9000 Communicator. The SC520, announced in 1999, used an Am586 at 100 MHz or 133 MHz and supported SDRAM and PCI; it was the latest member of the series.
In February 2002, AMD acquired Alchemy Semiconductor for its Alchemy line of MIPS processors for the hand-held and portable media player markets. On June 13, 2006, AMD officially announced that the line was to be transferred to Raza Microelectronics, Inc., a designer of MIPS processors for embedded applications.
In August 2003, AMD also purchased the Geode business (originally the Cyrix MediaGX) from National Semiconductor to augment its existing line of embedded x86 processor products. During the second quarter of 2004, it launched new low-power Geode NX processors based on the K7 Thoroughbred architecture, available both in fanless versions and in a fan-cooled version with a TDP of 25 W. This technology is used in a variety of embedded systems (casino slot machines and customer kiosks, for instance), several UMPC designs in Asian markets, and the OLPC XO-1 computer, an inexpensive laptop intended to be distributed to children in developing countries around the world. The Geode LX processor was announced in 2005 and was said to remain available through 2015.
AMD has also introduced 64-bit processors into its embedded product line, starting with the AMD Opteron processor. Leveraging the high throughput enabled by HyperTransport and the Direct Connect Architecture, these server-class processors have been targeted at high-end telecom and storage applications. In 2007, AMD added the AMD Athlon, AMD Turion, and Mobile AMD Sempron processors to its embedded product line. Leveraging the same 64-bit instruction set and Direct Connect Architecture as the AMD Opteron, but at lower power levels, these processors were well suited to a variety of traditional embedded applications. Throughout 2007 and into 2008, AMD continued to add both single-core Mobile AMD Sempron and AMD Athlon processors and dual-core AMD Athlon X2 and AMD Turion processors to its embedded product line, and now offers embedded 64-bit solutions ranging from 8 W TDP Mobile AMD Sempron and AMD Athlon processors for fanless designs up to multi-processor systems based on multi-core AMD Opteron processors, all supporting longer-than-standard availability.
The ATI acquisition in 2006 included the Imageon and Xilleon product lines. In late 2008, the entire handheld division was sold off to Qualcomm, which has since produced the Adreno series. Also in 2008, the Xilleon division was sold to Broadcom.
In April 2007, AMD announced the release of the M690T integrated graphics chipset for embedded designs. This enabled AMD to offer complete processor and chipset solutions targeted at embedded applications requiring high-performance 3D and video such as emerging digital signage, kiosk, and Point of Sale applications. The M690T was followed by the M690E specifically for embedded applications which removed the TV output, which required Macrovision licensing for OEMs, and enabled native support for dual TMDS outputs, enabling dual independent DVI interfaces.
In January 2011, AMD announced the AMD Embedded G-Series Accelerated Processing Unit. This was the first APU for embedded applications. These were followed by updates in 2013 and 2016.
In May 2012, AMD announced the AMD Embedded R-Series Accelerated Processing Unit. This family of products incorporates the Bulldozer CPU architecture and discrete-class Radeon HD 7000G Series graphics. It was followed by a system-on-a-chip (SoC) version in 2015, which offered a faster CPU and faster graphics, with support for DDR4 SDRAM memory.
Embedded graphics
AMD builds graphic processors for use in embedded systems. They can be found in anything from casinos to healthcare, with a large portion of products being used in industrial machines. These products include a complete graphics processing device in a compact multi-chip module including RAM and the GPU. ATI began offering embedded GPUs with the E2400 in 2008. Since that time AMD has released regular updates to their embedded GPU lineup in 2009, 2011, 2015, and 2016; reflecting improvements in their GPU technology.
Current product lines
CPU and APU products
AMD's portfolio of CPUs and APUs
Athlon – brand of entry level CPUs (Excavator) and APUs (Ryzen)
A-series – Excavator-class consumer desktop and laptop APUs
G-series – Excavator- and Jaguar-class low-power embedded APUs
Ryzen – brand of consumer CPUs and APUs
Ryzen Threadripper – brand of prosumer/professional CPUs
R-series – Excavator class high-performance embedded APUs
Epyc – brand of server CPUs
Opteron – brand of microserver APUs
Graphics products
AMD's portfolio of dedicated graphics processors
Radeon – brand for consumer line of graphics cards; the brand name originated with ATI.
Mobility Radeon offers power-optimized versions of Radeon graphics chips for use in laptops.
Radeon Pro – Workstation graphics card brand. Successor to the FirePro brand.
Radeon Instinct – brand of server and workstation targeted machine learning and GPGPU products
Radeon-branded products
RAM
In 2011, AMD began selling Radeon-branded DDR3 SDRAM to support the higher bandwidth needs of AMD's APUs. While the RAM is sold by AMD, it is manufactured by Patriot Memory and VisionTek. This was later followed by higher speeds of gaming-oriented DDR3 memory in 2013. Radeon-branded DDR4 SDRAM memory was released in 2015, despite no AMD CPUs or APUs supporting DDR4 at the time. AMD noted in 2017 that these products are "mostly distributed in Eastern Europe" and that it continues to be active in the business.
Solid-state drives
AMD announced in 2014 it would sell Radeon branded solid-state drives manufactured by OCZ with capacities up to 480 GB and using the SATA interface.
Technologies
CPU hardware
Technologies found in AMD CPU/APU and other products include:
HyperTransport – a high-bandwidth, low-latency system bus used in AMD's CPU and APU products
Infinity Fabric – a derivative of HyperTransport used as the communication bus in AMD's Zen microarchitecture
Graphics hardware
Technologies found in AMD GPU products include:
AMD Eyefinity – facilitates multi-monitor setup of up to 6 monitors per graphics card
AMD FreeSync – display synchronization based on the VESA Adaptive Sync standard
AMD TrueAudio – acceleration of audio calculations
AMD XConnect – allows the use of External GPU enclosures through Thunderbolt 3
AMD CrossFire – multi-GPU technology allowing the simultaneous use of multiple GPUs
Unified Video Decoder (UVD) – acceleration of video decompression (decoding)
Video Coding Engine (VCE) – acceleration of video compression (encoding)
Software
AMD has made considerable efforts towards opening its software tools above the firmware level in the past decade.
In the following, software not expressly stated to be free can be assumed to be proprietary.
Distribution
AMD Radeon Software is the default channel for official software distribution from AMD. It includes both free and proprietary software components, and supports both Microsoft Windows and Linux.
Software by type
CPU
AOCC is AMD's optimizing proprietary C/C++ compiler based on LLVM and available for Linux.
AMDuProf is AMD's CPU performance and power profiling tool suite, available for Linux and Windows.
AMD has also taken an active part in developing coreboot, an open-source project aimed at replacing the proprietary BIOS firmware. This cooperation ceased in 2013, but AMD has indicated recently that it is considering releasing source code so that Ryzen can be compatible with coreboot in the future.
GPU
Most notable public AMD software is on the GPU side.
AMD has opened both its graphic and compute stacks:
GPUOpen is AMD's graphics stack, which includes for example FidelityFX Super Resolution.
ROCm (Radeon Open Compute platform) is AMD's compute stack for machine learning and high-performance computing, based on the LLVM compiler technologies. Under the ROCm project, AMDgpu is AMD's open source device driver supporting the GCN and following architectures, available for Linux. This latter driver component is used both by the graphics and compute stacks.
Misc
AMD conducts open research on heterogeneous computing.
Other AMD software includes the AMD Core Math Library, and open-source software including the AMD Performance Library.
AMD contributes to open source projects, including working with Sun Microsystems to enhance OpenSolaris and Sun xVM on the AMD platform. AMD also maintains its own Open64 compiler distribution and contributes its changes back to the community.
In 2008, AMD released the low-level programming specifications for its GPUs, and works with the X.Org Foundation to develop drivers for AMD graphics cards.
Extensions for software parallelism (xSP), aimed at speeding up programs to enable multi-threaded and multi-core processing, were announced at the 2007 Technology Analyst Day. One of the initiatives, discussed since August 2007, is Light Weight Profiling (LWP), which provides an internal hardware monitor with runtimes to observe information about the executing process and to help redesign software for multi-core and multi-threaded optimization. Another is SSE5, an extension of the Streaming SIMD Extensions (SSE) instruction set.
SIMFIRE (codename) – an interoperability testing tool for the Desktop and mobile Architecture for System Hardware (DASH) open architecture.
Production and fabrication
Previously, AMD produced its chips at company-owned semiconductor foundries. AMD pursued a strategy of collaboration with other semiconductor manufacturers, IBM and Motorola, to co-develop production technologies. AMD's founder Jerry Sanders termed this the "Virtual Gorilla" strategy, intended to compete with Intel's significantly greater investments in fabrication.
In 2008, AMD spun off its chip foundries into an independent company named GlobalFoundries. This breakup of the company was attributed to the increasing costs of each process node. The Emirate of Abu Dhabi purchased the newly created company through its subsidiary Advanced Technology Investment Company (ATIC), purchasing the final stake from AMD in 2009.
With the spin-off of its foundries, AMD became a fabless semiconductor manufacturer, designing products to be produced at for-hire foundries. Part of the GlobalFoundries spin-off included an agreement with AMD to produce some number of products at GlobalFoundries. Both prior to and after the spin-off, AMD has pursued production with other foundries, including TSMC and Samsung. It has been argued that this reduces risk for AMD by decreasing its dependence on any one foundry, which has caused issues in the past.
In 2018, AMD started shifting the production of their CPUs and GPUs to TSMC, following GlobalFoundries' announcement that they were halting development of their 7 nm process. AMD revised their wafer purchase requirement with GlobalFoundries in 2019, allowing AMD to freely choose foundries for 7 nm nodes and below, while maintaining purchase agreements for 12 nm and above through 2021.
Corporate affairs
Partnerships
AMD uses strategic industry partnerships to further its business interests as well as to rival Intel's dominance and resources:
A partnership between AMD and Alpha Processor Inc. developed HyperTransport, a point-to-point interconnect standard which was turned over to an industry standards body for finalization. It is now used in modern motherboards that are compatible with AMD processors.
AMD also formed a strategic partnership with IBM, under which AMD gained silicon on insulator (SOI) manufacturing technology, and detailed advice on 90 nm implementation. AMD announced that the partnership would extend to 2011 for 32 nm and 22 nm fabrication-related technologies.
To facilitate processor distribution and sales, AMD is loosely partnered with end-user companies, such as HP, Dell, Asus, Acer, and Microsoft.
In 1993, AMD established a 50–50 partnership with Fujitsu called FASL, which was merged into a new company called FASL LLC in 2003. The joint venture went public under the name Spansion and ticker symbol SPSN in December 2005, with AMD's ownership stake dropping to 37%. AMD no longer directly participates in the Flash memory device market, as AMD entered into a non-competition agreement on December 21, 2005, with Fujitsu and Spansion, pursuant to which it agreed not to directly or indirectly engage in a business that manufactures or supplies standalone semiconductor devices (including single-chip, multiple-chip or system devices) containing only Flash memory.
On May 18, 2006, Dell announced that it would roll out new servers based on AMD's Opteron chips by year's end, thus ending an exclusive relationship with Intel. In September 2006, Dell began offering AMD Athlon X2 chips in their desktop lineup.
In June 2011, HP announced new business and consumer notebooks equipped with the latest versions of AMD accelerated processing units (APUs). AMD will power HP's Intel-based business notebooks as well.
In the spring of 2013, AMD announced that it would be powering all three major next-generation consoles. The Xbox One and Sony PlayStation 4 are both powered by a custom-built AMD APU, and the Nintendo Wii U is powered by an AMD GPU. According to AMD, having their processors in all three of these consoles will greatly assist developers with cross-platform development to competing consoles and PCs as well as increased support for their products across the board.
AMD has entered into an agreement with Hindustan Semiconductor Manufacturing Corporation (HSMC) for the production of AMD products in India.
AMD is a founding member of the HSA Foundation which aims to ease the use of a Heterogeneous System Architecture. A Heterogeneous System Architecture is intended to use both central processing units and graphics processors to complete computational tasks.
AMD announced in 2016 that it was creating a joint venture to produce x86 server chips for the Chinese market.
On May 7, 2019, it was reported that the U.S. Department of Energy, Oak Ridge National Laboratory, and Cray Inc., are working in collaboration with AMD to develop the Frontier exascale supercomputer. Featuring the AMD Epyc CPUs and Radeon GPUs, the supercomputer is set to produce more than 1.5 exaflops (peak double-precision) in computing performance. It is expected to debut sometime in 2021.
On March 5, 2020, it was announced that the U.S. Department of Energy, Lawrence Livermore National Laboratory, and HPE are working in collaboration with AMD to develop the El Capitan exascale supercomputer. Featuring the AMD Epyc CPUs and Radeon GPUs, the supercomputer is set to produce more than 2 exaflops (peak double-precision) in computing performance. It is expected to debut in 2023.
In the summer of 2020, it was reported that AMD would be powering the next-generation console offerings from Microsoft and Sony.
On November 8, 2021, AMD announced a partnership with Meta to make the chips used in the Metaverse.
In January 2022, AMD partnered with Samsung on a mobile processor to be used in future products. The processor, named the Exynos 2200, uses AMD's RDNA 2 graphics architecture.
Litigation with Intel
AMD has a long history of litigation with former (and current) partner and x86 creator Intel.
In 1986, Intel broke an agreement it had with AMD to allow them to produce Intel's microchips for IBM; AMD filed for arbitration in 1987 and the arbitrator decided in AMD's favor in 1992. Intel disputed this, and the case ended up in the Supreme Court of California. In 1994, that court upheld the arbitrator's decision and awarded damages for breach of contract.
In 1990, Intel brought a copyright infringement action alleging illegal use of its 287 microcode. The case ended in 1994 with a jury finding for AMD and its right to use Intel's microcode in its microprocessors through the 486 generation.
In 1997, Intel filed suit against AMD and Cyrix Corp. for misuse of the term MMX. AMD and Intel settled, with AMD acknowledging MMX as a trademark owned by Intel, and with Intel granting AMD rights to market the AMD K6 MMX processor.
In 2005, following an investigation, the Japan Fair Trade Commission found Intel guilty of a number of violations. On June 27, 2005, AMD won an antitrust suit against Intel in Japan, and on the same day, AMD filed a broad antitrust complaint against Intel in the U.S. Federal District Court in Delaware. The complaint alleges systematic use of secret rebates, special discounts, threats, and other means used by Intel to lock AMD processors out of the global market. Since the start of this action, the court has issued subpoenas to major computer manufacturers including Acer, Dell, Lenovo, HP and Toshiba.
In November 2009, Intel agreed to pay AMD $1.25bn and renew a five-year patent cross-licensing agreement as part of a deal to settle all outstanding legal disputes between them.
Guinness World Record achievement
On August 31, 2011, in Austin, Texas, AMD achieved a Guinness World Record for the "Highest frequency of a computer processor": 8.429 GHz. The company ran an 8-core FX-8150 processor with only one active module (two cores), and cooled with liquid helium. The previous record was 8.308 GHz, with an Intel Celeron 352 (one core).
On November 1, 2011, geek.com reported that Andre Yang, an overclocker from Taiwan, used an FX-8150 to set another record: 8.461 GHz.
On November 19, 2012, Andre Yang used an FX-8350 to set another record: 8.794 GHz.
Acquisitions, mergers and investments
Corporate social responsibility
In its 2012 report on progress relating to conflict minerals, the Enough Project rated AMD the fifth most progressive of 24 consumer electronics companies.
Other initiatives
50x15 – a digital inclusion initiative targeting 50% of the world's population to be connected to the Internet via affordable computers by the year 2015.
The Green Grid – founded by AMD together with other companies, including IBM, Sun and Microsoft, to seek lower power consumption in data centers.
See also
Bill Gaede
List of AMD processors
List of AMD accelerated processing units
List of AMD graphics processing units
List of AMD chipsets
List of ATI chipsets
3DNow!
Cool'n'Quiet
PowerNow!
Notes
References
External links
Autocorrelation, sometimes known as serial correlation in the discrete time case, is the correlation of a signal with a delayed copy of itself as a function of delay. Informally, it is the similarity between observations of a random variable as a function of the time lag between them. The analysis of autocorrelation is a mathematical tool for finding repeating patterns, such as the presence of a periodic signal obscured by noise, or identifying the missing fundamental frequency in a signal implied by its harmonic frequencies. It is often used in signal processing for analyzing functions or series of values, such as time domain signals.
Different fields of study define autocorrelation differently, and not all of these definitions are equivalent. In some fields, the term is used interchangeably with autocovariance.
Unit root processes, trend-stationary processes, autoregressive processes, and moving average processes are specific forms of processes with autocorrelation.
Auto-correlation of stochastic processes
In statistics, the autocorrelation of a real or complex random process is the Pearson correlation between values of the process at different times, as a function of the two times or of the time lag. Let $\left\{X_t\right\}$ be a random process, and $t$ be any point in time ($t$ may be an integer for a discrete-time process or a real number for a continuous-time process). Then $X_t$ is the value (or realization) produced by a given run of the process at time $t$. Suppose that the process has mean $\mu_t$ and variance $\sigma_t^2$ at time $t$, for each $t$. Then the definition of the auto-correlation function between times $t_1$ and $t_2$ is
$$\operatorname{R}_{XX}(t_1, t_2) = \operatorname{E}\left[X_{t_1} \overline{X_{t_2}}\right]$$
where $\operatorname{E}$ is the expected value operator and the bar represents complex conjugation. Note that the expectation may not be well defined.
Subtracting the mean before multiplication yields the auto-covariance function between times $t_1$ and $t_2$:
$$\operatorname{K}_{XX}(t_1, t_2) = \operatorname{E}\left[\left(X_{t_1} - \mu_{t_1}\right)\overline{\left(X_{t_2} - \mu_{t_2}\right)}\right]$$
Note that this expression is not well defined for all time series or processes, because the mean may not exist, or the variance may be zero (for a constant process) or infinite (for processes with distribution lacking well-behaved moments, such as certain types of power law).
Definition for wide-sense stationary stochastic process
If $\left\{X_t\right\}$ is a wide-sense stationary process then the mean $\mu$ and the variance $\sigma^2$ are time-independent, and further the autocovariance function depends only on the lag between $t_1$ and $t_2$: the autocovariance depends only on the time-distance between the pair of values but not on their position in time. This further implies that the autocovariance and auto-correlation can be expressed as a function of the time-lag, and that this would be an even function of the lag $\tau = t_2 - t_1$. This gives the more familiar forms for the auto-correlation function
$$\operatorname{R}_{XX}(\tau) = \operatorname{E}\left[X_{t+\tau} \overline{X_t}\right]$$
and the auto-covariance function:
$$\operatorname{K}_{XX}(\tau) = \operatorname{E}\left[\left(X_{t+\tau} - \mu\right)\overline{\left(X_t - \mu\right)}\right]$$
In particular, note that
$$\operatorname{K}_{XX}(0) = \sigma^2, \qquad \operatorname{R}_{XX}(0) = \operatorname{E}\left[|X_t|^2\right].$$
Normalization
It is common practice in some disciplines (e.g. statistics and time series analysis) to normalize the autocovariance function to get a time-dependent Pearson correlation coefficient. However, in other disciplines (e.g. engineering) the normalization is usually dropped and the terms "autocorrelation" and "autocovariance" are used interchangeably.
The definition of the auto-correlation coefficient of a stochastic process is
$$\rho_{XX}(t_1, t_2) = \frac{\operatorname{K}_{XX}(t_1, t_2)}{\sigma_{t_1}\sigma_{t_2}} = \frac{\operatorname{E}\left[\left(X_{t_1} - \mu_{t_1}\right)\overline{\left(X_{t_2} - \mu_{t_2}\right)}\right]}{\sigma_{t_1}\sigma_{t_2}}.$$
If the function $\rho_{XX}$ is well defined, its value must lie in the range $[-1, 1]$, with 1 indicating perfect correlation and −1 indicating perfect anti-correlation.
For a wide-sense stationary (WSS) process, the definition is
$$\rho_{XX}(\tau) = \frac{\operatorname{K}_{XX}(\tau)}{\sigma^2} = \frac{\operatorname{E}\left[\left(X_{t+\tau} - \mu\right)\overline{\left(X_t - \mu\right)}\right]}{\sigma^2}.$$
The normalization is important both because the interpretation of the autocorrelation as a correlation provides a scale-free measure of the strength of statistical dependence, and because the normalization has an effect on the statistical properties of the estimated autocorrelations.
Properties
Symmetry property
The fact that the auto-correlation function $\operatorname{R}_{XX}$ is an even function can be stated as
$$\operatorname{R}_{XX}(t_1, t_2) = \overline{\operatorname{R}_{XX}(t_2, t_1)}$$
respectively for a WSS process:
$$\operatorname{R}_{XX}(\tau) = \overline{\operatorname{R}_{XX}(-\tau)}.$$
Maximum at zero
For a WSS process:
$$\left|\operatorname{R}_{XX}(\tau)\right| \leq \operatorname{R}_{XX}(0)$$
Notice that $\operatorname{R}_{XX}(0)$ is always real.
Cauchy–Schwarz inequality
The Cauchy–Schwarz inequality, for stochastic processes:
$$\left|\operatorname{R}_{XX}(t_1, t_2)\right|^2 \leq \operatorname{E}\left[|X_{t_1}|^2\right] \operatorname{E}\left[|X_{t_2}|^2\right]$$
Autocorrelation of white noise
The autocorrelation of a continuous-time white noise signal will have a strong peak (represented by a Dirac delta function) at $\tau = 0$ and will be exactly $0$ for all other $\tau$.
Wiener–Khinchin theorem
The Wiener–Khinchin theorem relates the autocorrelation function $\operatorname{R}_{XX}$ to the power spectral density $S_{XX}$ via the Fourier transform:
$$\operatorname{R}_{XX}(\tau) = \int_{-\infty}^{\infty} S_{XX}(f)\, e^{i 2\pi f \tau} \, df$$
$$S_{XX}(f) = \int_{-\infty}^{\infty} \operatorname{R}_{XX}(\tau)\, e^{-i 2\pi f \tau} \, d\tau$$
For real-valued functions, the symmetric autocorrelation function has a real symmetric transform, so the Wiener–Khinchin theorem can be re-expressed in terms of real cosines only:
$$\operatorname{R}_{XX}(\tau) = \int_{-\infty}^{\infty} S_{XX}(f) \cos(2\pi f \tau) \, df$$
$$S_{XX}(f) = \int_{-\infty}^{\infty} \operatorname{R}_{XX}(\tau) \cos(2\pi f \tau) \, d\tau$$
Auto-correlation of random vectors
The (potentially time-dependent) auto-correlation matrix (also called second moment) of a (potentially time-dependent) random vector $\mathbf{X} = (X_1, \ldots, X_n)^{\mathrm T}$ is an $n \times n$ matrix containing as elements the autocorrelations of all pairs of elements of the random vector $\mathbf{X}$. The autocorrelation matrix is used in various digital signal processing algorithms.
For a random vector $\mathbf{X} = (X_1, \ldots, X_n)^{\mathrm T}$ containing random elements whose expected value and variance exist, the auto-correlation matrix is defined by
$$\mathbf{R}_{\mathbf{X}\mathbf{X}} \triangleq \operatorname{E}\left[\mathbf{X}\mathbf{X}^{\mathrm T}\right]$$
where ${}^{\mathrm T}$ denotes transposition, giving a matrix of dimensions $n \times n$.
Written component-wise:
$$\mathbf{R}_{\mathbf{X}\mathbf{X}} = \begin{bmatrix} \operatorname{E}[X_1 X_1] & \operatorname{E}[X_1 X_2] & \cdots & \operatorname{E}[X_1 X_n] \\ \operatorname{E}[X_2 X_1] & \operatorname{E}[X_2 X_2] & \cdots & \operatorname{E}[X_2 X_n] \\ \vdots & \vdots & \ddots & \vdots \\ \operatorname{E}[X_n X_1] & \operatorname{E}[X_n X_2] & \cdots & \operatorname{E}[X_n X_n] \end{bmatrix}$$
If $\mathbf{Z}$ is a complex random vector, the autocorrelation matrix is instead defined by
$$\mathbf{R}_{\mathbf{Z}\mathbf{Z}} \triangleq \operatorname{E}\left[\mathbf{Z}\mathbf{Z}^{\mathrm H}\right].$$
Here ${}^{\mathrm H}$ denotes the Hermitian transpose.
For example, if $\mathbf{X} = \left(X_1, X_2, X_3\right)^{\mathrm T}$ is a random vector, then $\mathbf{R}_{\mathbf{X}\mathbf{X}}$ is a $3 \times 3$ matrix whose $(i,j)$-th entry is $\operatorname{E}\left[X_i X_j\right]$.
Properties of the autocorrelation matrix
The autocorrelation matrix is a Hermitian matrix for complex random vectors and a symmetric matrix for real random vectors.
The autocorrelation matrix is a positive semidefinite matrix, i.e. $\mathbf{a}^{\mathrm T} \mathbf{R}_{\mathbf{X}\mathbf{X}} \mathbf{a} \ge 0$ for all $\mathbf{a} \in \mathbb{R}^n$ for a real random vector, and respectively $\mathbf{a}^{\mathrm H} \mathbf{R}_{\mathbf{Z}\mathbf{Z}} \mathbf{a} \ge 0$ for all $\mathbf{a} \in \mathbb{C}^n$ in the case of a complex random vector.
All eigenvalues of the autocorrelation matrix are real and non-negative.
The auto-covariance matrix is related to the autocorrelation matrix as follows:
$$\operatorname{K}_{\mathbf{X}\mathbf{X}} = \operatorname{E}\left[\left(\mathbf{X} - \operatorname{E}[\mathbf{X}]\right)\left(\mathbf{X} - \operatorname{E}[\mathbf{X}]\right)^{\mathrm T}\right] = \mathbf{R}_{\mathbf{X}\mathbf{X}} - \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{X}]^{\mathrm T}$$
Respectively for complex random vectors:
$$\operatorname{K}_{\mathbf{Z}\mathbf{Z}} = \operatorname{E}\left[\left(\mathbf{Z} - \operatorname{E}[\mathbf{Z}]\right)\left(\mathbf{Z} - \operatorname{E}[\mathbf{Z}]\right)^{\mathrm H}\right] = \mathbf{R}_{\mathbf{Z}\mathbf{Z}} - \operatorname{E}[\mathbf{Z}]\operatorname{E}[\mathbf{Z}]^{\mathrm H}$$
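As an illustration only (not part of the source article), the following numpy sketch estimates the autocorrelation matrix of a real random vector from samples and checks the symmetry and positive-semidefiniteness properties listed above; the sample size and the generating process are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw samples of a 3-element real random vector X (arbitrary example process).
n_samples, n_dim = 10_000, 3
A = rng.normal(size=(n_dim, n_dim))            # mixing matrix, introduces correlation
X = rng.normal(size=(n_samples, n_dim)) @ A.T  # each row is one realization of X

# Sample estimate of the autocorrelation matrix R_XX = E[X X^T].
R = (X.T @ X) / n_samples

# Properties: symmetric (real case) with real, non-negative eigenvalues.
print(np.allclose(R, R.T))                      # True
print(np.all(np.linalg.eigvalsh(R) >= -1e-12))  # True
```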
Auto-correlation of deterministic signals
In signal processing, the above definition is often used without the normalization, that is, without subtracting the mean and dividing by the variance. When the autocorrelation function is normalized by mean and variance, it is sometimes referred to as the autocorrelation coefficient or autocovariance function.
Auto-correlation of continuous-time signal
Given a signal $f(t)$, the continuous autocorrelation $R_{ff}(\tau)$ is most often defined as the continuous cross-correlation integral of $f(t)$ with itself, at lag $\tau$:
$$R_{ff}(\tau) = \int_{-\infty}^{\infty} f(t+\tau)\overline{f(t)} \, dt$$
where $\overline{f(t)}$ represents the complex conjugate of $f(t)$. Note that the parameter $t$ in the integral is a dummy variable and is only necessary to calculate the integral. It has no specific meaning.
Auto-correlation of discrete-time signal
The discrete autocorrelation $R$ at lag $\ell$ for a discrete-time signal $y(n)$ is
$$R_{yy}(\ell) = \sum_{n \in \mathbb{Z}} y(n)\,\overline{y(n-\ell)}$$
The above definitions work for signals that are square integrable, or square summable, that is, of finite energy. Signals that "last forever" are treated instead as random processes, in which case different definitions are needed, based on expected values. For wide-sense-stationary random processes, the autocorrelations are defined as
$$R_{ff}(\tau) = \operatorname{E}\left[f(t+\tau)\overline{f(t)}\right]$$
$$R_{yy}(\ell) = \operatorname{E}\left[y(n)\,\overline{y(n-\ell)}\right]$$
For processes that are not stationary, these will also be functions of $t$, or $n$.
For processes that are also ergodic, the expectation can be replaced by the limit of a time average. The autocorrelation of an ergodic process is sometimes defined as or equated to
$$R_{ff}(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_0^T f(t+\tau)\overline{f(t)} \, dt$$
$$R_{yy}(\ell) = \lim_{N \to \infty} \frac{1}{N} \sum_{n=0}^{N-1} y(n)\,\overline{y(n-\ell)}$$
These definitions have the advantage that they give sensible well-defined single-parameter results for periodic functions, even when those functions are not the output of stationary ergodic processes.
Alternatively, signals that last forever can be treated by a short-time autocorrelation function analysis, using finite time integrals. (See short-time Fourier transform for a related process.)
Definition for periodic signals
If $f$ is a continuous periodic function of period $T$, the integration from $-\infty$ to $\infty$ is replaced by integration over any interval $[t_0, t_0 + T]$ of length $T$:
$$R_{ff}(\tau) \triangleq \int_{t_0}^{t_0+T} f(t+\tau)\overline{f(t)} \, dt$$
which is equivalent to
$$R_{ff}(\tau) \triangleq \int_{t_0}^{t_0+T} f(t)\overline{f(t-\tau)} \, dt$$
Properties
In the following, we will describe properties of one-dimensional autocorrelations only, since most properties are easily transferred from the one-dimensional case to the multi-dimensional cases. These properties hold for wide-sense stationary processes.
A fundamental property of the autocorrelation is its symmetry in the lag, which is easy to prove from the definition. In the continuous case,
the autocorrelation is an even function, $R_{ff}(-\tau) = R_{ff}(\tau)$, when $f$ is a real function, and
the autocorrelation is a Hermitian function, $R_{ff}(-\tau) = R_{ff}^*(\tau)$, when $f$ is a complex function.
The continuous autocorrelation function reaches its peak at the origin, where it takes a real value, i.e. for any delay $\tau$, $\left|R_{ff}(\tau)\right| \leq R_{ff}(0)$. This is a consequence of the rearrangement inequality. The same result holds in the discrete case.
The autocorrelation of a periodic function is, itself, periodic with the same period.
The autocorrelation of the sum of two completely uncorrelated functions (the cross-correlation is zero for all $\tau$) is the sum of the autocorrelations of each function separately.
Since autocorrelation is a specific type of cross-correlation, it maintains all the properties of cross-correlation.
By using the symbol $*$ to represent convolution, and letting $g_{-1}$ be a function which manipulates the function $f$ and is defined as $g_{-1}(f)(t) = f(-t)$, the definition for $R_{ff}(\tau)$ may be written as:
$$R_{ff}(\tau) = \left(f * g_{-1}\left(\overline{f}\right)\right)(\tau)$$
Multi-dimensional autocorrelation
Multi-dimensional autocorrelation is defined similarly. For example, in three dimensions the autocorrelation of a square-summable discrete signal would be
$$R(j, k, \ell) = \sum_{n, q, r} x_{n, q, r}\,\overline{x}_{n-j, q-k, r-\ell}$$
When mean values are subtracted from signals before computing an autocorrelation function, the resulting function is usually called an auto-covariance function.
Efficient computation
For data expressed as a discrete sequence, it is frequently necessary to compute the autocorrelation with high computational efficiency. A brute force method based on the signal processing definition can be used when the signal size is small. For example, to calculate the autocorrelation of the real signal sequence $x = (2, 3, -1)$ (i.e. $x_0 = 2$, $x_1 = 3$, $x_2 = -1$, and $x_i = 0$ for all other values of $i$) by hand, we first recognize that the definition just given is the same as the "usual" multiplication, but with right shifts, where each vertical addition gives the autocorrelation for particular lag values:
        2    3   -1
  ×     2    3   -1
  ------------------
       -2   -3    1
            6    9   -3
                 4    6   -2
  --------------------------
       -2    3   14    3   -2
Thus the required autocorrelation sequence is $R_{xx} = (-2, 3, 14, 3, -2)$, where $R_{xx}(0) = 14$, $R_{xx}(-1) = R_{xx}(1) = 3$, and $R_{xx}(-2) = R_{xx}(2) = -2$, the autocorrelation for other lag values being zero. In this calculation we do not perform the carry-over operation during addition as is usual in normal multiplication. Note that we can halve the number of operations required by exploiting the inherent symmetry of the autocorrelation. If the signal happens to be periodic, i.e. $x = (\ldots, 2, 3, -1, 2, 3, -1, \ldots)$, then we get a circular autocorrelation (similar to circular convolution) where the left and right tails of the previous autocorrelation sequence will overlap and give $R_{xx} = (\ldots, 14, 1, 1, 14, 1, 1, \ldots)$, which has the same period as the signal sequence. The procedure can be regarded as an application of the convolution property of the Z-transform of a discrete signal.
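The same hand calculation can be reproduced programmatically. The following is a minimal brute-force sketch (an illustration, not part of the source article), assuming the short example sequence above and the finite-energy definition of the discrete autocorrelation.

```python
def autocorr_brute_force(x):
    """Two-sided autocorrelation of a finite real sequence, O(n^2)."""
    n = len(x)
    # Lags run from -(n - 1) to n - 1; values outside the sequence count as zero.
    return [sum(x[i] * x[i - lag] for i in range(n) if 0 <= i - lag < n)
            for lag in range(-(n - 1), n)]

print(autocorr_brute_force([2, 3, -1]))  # [-2, 3, 14, 3, -2]
```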
While the brute force algorithm is order $n^2$, several efficient algorithms exist which can compute the autocorrelation in order $n \log(n)$. For example, the Wiener–Khinchin theorem allows computing the autocorrelation from the raw data $X(t)$ with two fast Fourier transforms (FFT):
$$F_R(f) = \operatorname{FFT}[X(t)]$$
$$S(f) = F_R(f) F_R^*(f)$$
$$R(\tau) = \operatorname{IFFT}[S(f)]$$
where IFFT denotes the inverse fast Fourier transform. The asterisk denotes complex conjugate.
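A hedged numpy sketch of this FFT route (illustrative, not from the source article): zero-padding the data to at least 2n − 1 points avoids circular wrap-around, so the result matches the linear autocorrelation computed above.

```python
import numpy as np

def autocorr_fft(x):
    """Linear autocorrelation of a real sequence via the Wiener-Khinchin route."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    nfft = 2 * n - 1                       # pad so the circular result equals the linear one
    F = np.fft.fft(x, nfft)                # FFT of the raw data
    S = F * np.conj(F)                     # power spectrum S(f) = F(f) F*(f)
    r = np.fft.ifft(S).real                # IFFT gives the autocorrelation
    return np.concatenate((r[-(n - 1):], r[:n]))  # reorder lags to -(n-1) .. (n-1)

print(np.round(autocorr_fft([2, 3, -1])))  # [-2.  3. 14.  3. -2.]
```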
Alternatively, a multiple-$\tau$ correlation can be performed by using brute force calculation for low $\tau$ values, and then progressively binning the data with a logarithmic density to compute higher $\tau$ values, resulting in the same $n \log(n)$ efficiency, but with lower memory requirements.
Estimation
For a discrete process with known mean $\mu$ and variance $\sigma^2$ for which we observe $n$ observations $\{X_1, X_2, \ldots, X_n\}$, an estimate of the autocorrelation coefficient may be obtained as
$$\hat{R}(k) = \frac{1}{(n-k)\sigma^2} \sum_{t=1}^{n-k} \left(X_t - \mu\right)\left(X_{t+k} - \mu\right)$$
for any positive integer $k < n$. When the true mean $\mu$ and variance $\sigma^2$ are known, this estimate is unbiased. If the true mean and variance of the process are not known there are several possibilities:
If $\mu$ and $\sigma^2$ are replaced by the standard formulae for sample mean and sample variance, then this is a biased estimate.
A periodogram-based estimate replaces $n-k$ in the above formula with $n$. This estimate is always biased; however, it usually has a smaller mean squared error.
Other possibilities derive from treating the two portions of data $\{X_1, X_2, \ldots, X_{n-k}\}$ and $\{X_{k+1}, X_{k+2}, \ldots, X_n\}$ separately and calculating separate sample means and/or sample variances for use in defining the estimate.
The advantage of estimates of the last type is that the set of estimated autocorrelations, as a function of $k$, then form a function which is a valid autocorrelation in the sense that it is possible to define a theoretical process having exactly that autocorrelation. Other estimates can suffer from the problem that, if they are used to calculate the variance of a linear combination of the $X$'s, the variance calculated may turn out to be negative.
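As an illustration of the estimators discussed above (not part of the source article), the following numpy sketch computes the periodogram-style estimate, in which the sample mean and sample variance replace the true ones and the divisor is n rather than n − k; the simulated AR(1) series is an arbitrary example.

```python
import numpy as np

def acf_estimate(x, max_lag):
    """Periodogram-style (biased) sample autocorrelation coefficients for lags 0..max_lag."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()                      # sample mean in place of the true mean
    var = np.mean(xc ** 2)                 # sample variance in place of the true variance
    return np.array([np.sum(xc[:n - k] * xc[k:]) / (n * var) for k in range(max_lag + 1)])

# Example: an AR(1)-like series should show geometrically decaying autocorrelation.
rng = np.random.default_rng(1)
e = rng.normal(size=5000)
x = np.zeros(5000)
for t in range(1, 5000):
    x[t] = 0.7 * x[t - 1] + e[t]
print(np.round(acf_estimate(x, 4), 2))     # roughly [1.0, 0.7, 0.49, 0.34, 0.24]
```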
Regression analysis
In regression analysis using time series data, autocorrelation in a variable of interest is typically modeled either with an autoregressive model (AR), a moving average model (MA), their combination as an autoregressive-moving-average model (ARMA), or an extension of the latter called an autoregressive integrated moving average model (ARIMA). With multiple interrelated data series, vector autoregression (VAR) or its extensions are used.
In ordinary least squares (OLS), the adequacy of a model specification can be checked in part by establishing whether there is autocorrelation of the regression residuals. Problematic autocorrelation of the errors, which themselves are unobserved, can generally be detected because it produces autocorrelation in the observable residuals. (Errors are also known as "error terms" in econometrics.) Autocorrelation of the errors violates the ordinary least squares assumption that the error terms are uncorrelated, meaning that the Gauss Markov theorem does not apply, and that OLS estimators are no longer the Best Linear Unbiased Estimators (BLUE). While it does not bias the OLS coefficient estimates, the standard errors tend to be underestimated (and the t-scores overestimated) when the autocorrelations of the errors at low lags are positive.
The traditional test for the presence of first-order autocorrelation is the Durbin–Watson statistic or, if the explanatory variables include a lagged dependent variable, Durbin's h statistic. The Durbin–Watson statistic can, however, be linearly mapped to the Pearson correlation between values and their lags. A more flexible test, covering autocorrelation of higher orders and applicable whether or not the regressors include lags of the dependent variable, is the Breusch–Godfrey test. This involves an auxiliary regression, wherein the residuals obtained from estimating the model of interest are regressed on (a) the original regressors and (b) k lags of the residuals, where k is the order of the test. The simplest version of the test statistic from this auxiliary regression is TR², where T is the sample size and R² is the coefficient of determination. Under the null hypothesis of no autocorrelation, this statistic is asymptotically distributed as $\chi^2$ with k degrees of freedom.
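The auxiliary regression can be sketched in a few lines of numpy least squares. This is an illustrative simplification (missing lagged residuals are padded with zeros, and it is not a substitute for a statistics package); the function name and the simulated data are hypothetical.

```python
import numpy as np

def breusch_godfrey_TR2(y, X, k):
    """TR^2 statistic: regress OLS residuals on the regressors and k lags of the residuals."""
    T = len(y)
    X1 = np.column_stack([np.ones(T), X])                    # regressors with intercept
    resid = y - X1 @ np.linalg.lstsq(X1, y, rcond=None)[0]

    # Auxiliary regressors: original X plus k lagged residuals (zero-padded at the start).
    lags = np.column_stack([np.r_[np.zeros(j), resid[:-j]] for j in range(1, k + 1)])
    Z = np.column_stack([X1, lags])
    fitted = Z @ np.linalg.lstsq(Z, resid, rcond=None)[0]
    R2 = 1 - np.sum((resid - fitted) ** 2) / np.sum((resid - resid.mean()) ** 2)
    return T * R2                                            # ~ chi-squared with k df under H0

# Hypothetical example data with no autocorrelation in the errors.
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 1))
y = 1.0 + 2.0 * X[:, 0] + rng.normal(size=500)
print(breusch_godfrey_TR2(y, X, k=2))                        # typically small under H0
```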
Responses to nonzero autocorrelation include generalized least squares and the Newey–West HAC estimator (Heteroskedasticity and Autocorrelation Consistent).
In the estimation of a moving average model (MA), the autocorrelation function is used to determine the appropriate number of lagged error terms to be included. This is based on the fact that for an MA process of order q, we have $\rho(k) \neq 0$ for $k \leq q$, and $\rho(k) = 0$ for $k > q$.
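The lag-q cutoff can be checked numerically. The following sketch (illustrative assumptions: an MA(2) process with arbitrary coefficients) shows the sample autocorrelation dropping to near zero beyond lag 2.

```python
import numpy as np

# Simulate an MA(2) process x_t = e_t + 0.6 e_(t-1) + 0.3 e_(t-2); coefficients are arbitrary.
rng = np.random.default_rng(3)
e = rng.normal(size=20_000)
x = e[2:] + 0.6 * e[1:-1] + 0.3 * e[:-2]

# Sample autocorrelation coefficients for lags 0..5.
xc = x - x.mean()
acf = np.array([np.sum(xc[:len(xc) - k] * xc[k:]) for k in range(6)])
acf /= acf[0]
print(np.round(acf, 2))  # nonzero at lags 1 and 2, near zero for lags greater than 2
```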
Applications
Autocorrelation analysis is used heavily in fluorescence correlation spectroscopy to provide quantitative insight into molecular-level diffusion and chemical reactions.
Another application of autocorrelation is the measurement of optical spectra and the measurement of very-short-duration light pulses produced by lasers, both using optical autocorrelators.
Autocorrelation is used to analyze dynamic light scattering data, which notably enables determination of the particle size distributions of nanometer-sized particles or micelles suspended in a fluid. A laser shining into the mixture produces a speckle pattern that results from the motion of the particles. Autocorrelation of the signal can be analyzed in terms of the diffusion of the particles. From this, knowing the viscosity of the fluid, the sizes of the particles can be calculated.
Utilized in the GPS system to correct for the propagation delay, or time shift, between the point of time at the transmission of the carrier signal at the satellites, and the point of time at the receiver on the ground. This is done by the receiver generating a replica signal of the 1,023-bit C/A (Coarse/Acquisition) code, and generating lines of code chips [-1,1] in packets of ten at a time, or 10,230 chips (1,023 × 10), shifting slightly as it goes along in order to accommodate for the doppler shift in the incoming satellite signal, until the receiver replica signal and the satellite signal codes match up.
The small-angle X-ray scattering intensity of a nanostructured system is the Fourier transform of the spatial autocorrelation function of the electron density.
In surface science and scanning probe microscopy, autocorrelation is used to establish a link between surface morphology and functional characteristics.
In optics, normalized autocorrelations and cross-correlations give the degree of coherence of an electromagnetic field.
In signal processing, autocorrelation can give information about repeating events like musical beats (for example, to determine tempo) or pulsar frequencies, though it cannot tell the position in time of the beat. It can also be used to estimate the pitch of a musical tone, as illustrated in the sketch following this list.
In music recording, autocorrelation is used as a pitch detection algorithm prior to vocal processing, as a distortion effect or to eliminate undesired mistakes and inaccuracies.
Autocorrelation in space rather than time, via the Patterson function, is used by X-ray diffractionists to help recover the "Fourier phase information" on atom positions not available through diffraction alone.
In statistics, spatial autocorrelation between sample locations also helps one estimate mean value uncertainties when sampling a heterogeneous population.
The SEQUEST algorithm for analyzing mass spectra makes use of autocorrelation in conjunction with cross-correlation to score the similarity of an observed spectrum to an idealized spectrum representing a peptide.
In astrophysics, autocorrelation is used to study and characterize the spatial distribution of galaxies in the universe and in multi-wavelength observations of low mass X-ray binaries.
In panel data, spatial autocorrelation refers to correlation of a variable with itself through space.
In analysis of Markov chain Monte Carlo data, autocorrelation must be taken into account for correct error determination.
In geosciences (specifically in geophysics) it can be used to compute an autocorrelation seismic attribute, out of a 3D seismic survey of the underground.
In medical ultrasound imaging, autocorrelation is used to visualize blood flow.
In intertemporal portfolio choice, the presence or absence of autocorrelation in an asset's rate of return can affect the optimal portion of the portfolio to hold in that asset.
Autocorrelation has been used to accurately measure power system frequency in numerical relays.
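As referenced in the signal-processing item above, here is a minimal sketch (illustrative, not from the source article) of period estimation from the largest non-zero-lag autocorrelation peak; the sample rate, tone frequency, and noise level are arbitrary assumptions.

```python
import numpy as np

fs = 8000                                    # sample rate in Hz (assumed)
f0 = 220.0                                   # true tone frequency in Hz (assumed)
t = np.arange(0, 0.5, 1 / fs)
rng = np.random.default_rng(4)
x = np.sin(2 * np.pi * f0 * t) + 0.3 * rng.normal(size=t.size)

# Full autocorrelation; keep non-negative lags only.
r = np.correlate(x, x, mode="full")[x.size - 1:]

# Skip the zero-lag peak, then take the strongest remaining peak as one period of the signal.
min_lag = int(fs / 1000)                     # ignore implausibly short periods (> 1 kHz pitch)
peak_lag = min_lag + np.argmax(r[min_lag:])
print(fs / peak_lag)                         # approximately 220 Hz
```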
Serial dependence
Serial dependence is closely linked to the notion of autocorrelation, but represents a distinct concept (see Correlation and dependence). In particular, it is possible to have serial dependence but no (linear) correlation. In some fields however, the two terms are used as synonyms.
A time series of a random variable has serial dependence if the value at some time $t$ in the series is statistically dependent on the value at another time $s$. A series is serially independent if there is no dependence between any pair.
If a time series $\{X_t\}$ is stationary, then statistical dependence between the pair $(X_t, X_s)$ would imply that there is statistical dependence between all pairs of values at the same lag $\tau = s - t$.
See also
Autocorrelation matrix
Autocorrelation of a formal word
Autocorrelation technique
Autocorrelator
Cochrane–Orcutt estimation (transformation for autocorrelated error terms)
Correlation function
Correlogram
Cross-correlation
CUSUM
Fluorescence correlation spectroscopy
Optical autocorrelation
Partial autocorrelation function
Phylogenetic autocorrelation (Galton's problem)
Pitch detection algorithm
Prais–Winsten transformation
Scaled correlation
Triple correlation
Unbiased estimation of standard deviation
References
Further reading
Mojtaba Soltanalian, and Petre Stoica. "Computational design of sequences with good correlation properties." IEEE Transactions on Signal Processing, 60.5 (2012): 2180–2193.
Solomon W. Golomb, and Guang Gong. Signal design for good correlation: for wireless communication, cryptography, and radar. Cambridge University Press, 2005.
Klapetek, Petr (2018). Quantitative Data Processing in Scanning Probe Microscopy: SPM Applications for Nanometrology (Second ed.). Elsevier. pp. 108–112 .
The Bulgarian Army is the military of Bulgaria. The commander-in-chief is the president of Bulgaria. The Ministry of Defense is responsible for political leadership, while overall military command is in the hands of the Defense Staff, headed by the Chief of the Defense. There are three main branches of the Bulgarian military, named literally the Land Forces, the Air Forces and the Naval Forces (the term "Bulgarian Army" refers to all of them together).
Throughout history, the Army has played a major role in defending the country's sovereignty. Only several years after its inception in 1878, Bulgaria became a regional military power and was involved in several major wars – Serbo-Bulgarian War (1885), First Balkan War (1912–13), Second Balkan War (1913), First World War (1915–1918) and Second World War (1941–1945), during which the Army gained considerable combat experience. During the Cold War, the People's Republic of Bulgaria maintained one of the largest militaries in the Warsaw Pact, numbering an estimated 152,000 troops in 1988. Since the Fall of Communism, the political leadership has decided to pursue a pro-NATO policy, thus reducing military personnel and weaponry. Bulgaria joined the North Atlantic Treaty Organization on 29 March 2004.
The patron saint of the Bulgarian Army is St. George. The Armed Forces Day or St. George's Day (6 May) is an official holiday in Bulgaria.
History of the Bulgarian Army
Medieval Period
The modern Bulgarian military dates back to 1878. On 22 July 1878 (10 July O.S.) a total of 12 battalions of opalchentsi, who had participated in the Liberation war, formed the Bulgarian armed forces. According to the Tarnovo Constitution, all men between 21 and 40 years of age were eligible for military service. In 1883 the military was reorganised into four infantry brigades (in Sofia, Pleven, Ruse and Shumen) and one cavalry brigade.
Serbo-Bulgarian war
The Serbo-Bulgarian War was the first armed conflict after Bulgaria's liberation. It was a result of the unification with Eastern Rumelia, which happened on 6 September 1885. The unification was not completely recognised, however, and one of the countries that refused to recognise the act was the Kingdom of Serbia. The Austro-Hungarian Empire had been expanding its influence in the Balkans and was particularly opposed. Serbia also feared this would diminish its dominance in the region. In addition, Serbian ruler Milan Obrenović IV was annoyed that Serbian opposition leaders like Nikola Pašić, who had escaped persecution after the Timok Rebellion, had found asylum in Bulgaria. Lured by Austria-Hungary's promises of territorial gains from Bulgaria (in return for concessions in the western Balkans), Milan IV declared war on Bulgaria on 14 November 1885.
Serbia's military strategy relied largely on surprise, as Bulgaria had moved most of its troops near the border with the Ottoman Empire in the southeast. As it happened, the Ottomans did not intervene, and the Serbian army's advance was stopped after the Battle of Slivnitsa. The main body of the Bulgarian army travelled from the Ottoman border in the southeast to the Serbian border in the northwest to defend the capital, Sofia. After the defensive battles at Slivnitsa and Vidin, Bulgaria began an offensive that took the city of Pirot. At this point the Austro-Hungarian Empire stepped in, threatening to join the war on Serbia's side if Bulgarian troops did not retreat. Fighting lasted for only 14 days, from 14–28 November. A peace treaty was signed in Bucharest on 19 February 1886. No territorial changes were made to either country, but Bulgarian unification was recognised by the Great Powers.
First Balkan War
Instability in the Balkan region in the early 1900s quickly became a precondition for a new war. Serbia's aspirations towards Bosnia and Herzegovina were thwarted by the Austrian annexation of the province in October 1908, so the Serbs focused their attention onto Kosovo, and to the south for expansion. Greek officers, revolting in August 1909, had secured the appointment of a progressive government under Eleftherios Venizelos, which they hoped would resolve the Cretan issue in Greece's favor and reverse their defeat of 1897 by the Ottomans. Bulgaria, which had secured Ottoman recognition of its independence in April 1909 and enjoyed the friendship of Russia, also looked to districts of Ottoman Thrace and Macedonia for expansion.
In March 1910 an Albanian insurrection broke out in Kosovo. In August Montenegro followed Bulgaria's precedent by becoming a kingdom. In 1911 Italy launched an invasion of Tripolitania, which was quickly followed by the occupation of the Dodecanese Islands. The Italians' decisive military victories over the Ottoman Empire greatly influenced the Balkan states to prepare for war against Turkey. Thus, in the spring of 1912 consultations among the various Christian Balkan nations resulted in a network of military alliances that became known as the Balkan League. The Great Powers, most notably France and Austria-Hungary, reacted to this diplomatic sensation by trying to dissuade the League from going to war, but failed.
In late September both the League and the Ottoman Empire mobilised their armies. Montenegro was the first to declare war, on 25 September (O.S.)/ 8 October. The other three states, after issuing an impossible ultimatum to the Porte on 13 October, declared war on Turkey on 17 October. The Balkan League relied on 700,000 troops, 370,000 of whom were Bulgarians. Bulgaria, often dubbed "the Prussia of the Balkans", was militarily the most powerful of the four states, with a large, well-trained and well-equipped army. The peacetime army of 60,000 troops was expanded during the war to 370,000, with almost 600,000 men mobilized in total out of a population of 4,300,000. The Bulgarian field army consisted of nine infantry divisions, one cavalry division and 1,116 artillery units. Commander-in-Chief was Tsar Ferdinand, while the actual command was in the hands of his deputy, Gen. Mikhail Savov. The Bulgarians also possessed a small navy of six torpedo boats, which were restricted to operations along the country's Black Sea coast.
Bulgaria's war aims were focused on Thrace and Macedonia. For the latter, Bulgaria had a secret agreement with Serbia to divide it between them, signed on 13 March 1912 during the negotiations that led to the establishment of the Balkan League. However, it was not a secret that Bulgaria's target was the fulfillment of the never-materialized Treaty of San Stefano, signed after the Russo-Turkish War, 1877–78. They deployed their main force in Thrace, forming three armies. The First Army, under Gen. Vasil Kutinchev with three infantry divisions, was deployed to the south of Yambol, with direction of operations along the Tundzha River. The Second Army, under Gen. Nikola Ivanov with two infantry divisions and one infantry brigade, was deployed west of the First and was assigned to capture the strong fortress of Adrianople (now Edirne). According to the plans, the Third Army, under Gen. Radko Dimitriev, was deployed east of and behind the First and was covered by the cavalry division hiding it from the Turkish view. The Third Army had three infantry divisions and was assigned to cross the Stranja mountain and to take the fortress of Lozengrad (Kirk Kilisse). The 2nd and 7th divisions were assigned independent roles, operating in western Thrace and eastern Macedonia, respectively.
The first great battles were at the Adrianople–Kirk Kilisse defensive line, where the Bulgarian 1st and 3rd Armies (together 110,000 men) defeated the Ottoman East Army (130,000 men) near Gechkenli, Seliolu and Petra. The fortress of Adrianople was besieged and Kirk Kilisse was taken without resistance under the pressure of the Bulgarian Third Army. The initial Bulgarian attack by First and Third Army defeated the Turkish forces, numbering some 130,000, and reached the Sea of Marmara. However, the Turks, with the aid of fresh reinforcements from the Asian provinces, established their third and strongest defensive position at the Chataldja Line, across the peninsula where Constantinople is located. New Turkish forces landed at Bulair and Şarköy, but after heavy fighting they were crushed by the newly formed 4th Bulgarian Army under the command of Gen Stiliyan Kovachev. The offensive at Chataldja failed, too. On 11 March the final Bulgarian assault on Adrianople began. Under the command of Gen. Georgi Vazov the Bulgarians, reinforced with two Serb divisions, conquered the "untakeable" city. On 17/30 May a peace treaty was signed between Turkey and the Balkan Alliance. The First Balkan War, which lasted from October 1912-May 1913, strengthened Bulgaria's position as a regional military power, significantly reduced Ottoman influence over the Balkans and resulted in the formation of an independent Albanian state.
Second Balkan War
The peace settlement of the First Balkan War proved unsatisfactory for both Serbia and Bulgaria. Serbia refused to cede a part of the territories in Macedonia, which it occupied and promised to give to Bulgaria according to a secret agreement. Serbia, on its side, was not satisfied with the independence of Albania and sought a secret alliance with Greece. Armed skirmishes between Serbian and Bulgarian troops occurred.
On 16 June 1913, just a few months after the end of the first war, the Bulgarian government ordered an attack on Serbian and Greek positions in Macedonia, without declaring war. Almost all of Bulgaria's 500,000-man standing army was positioned against these two countries, on two fronts—western and southern—while the borders with Romania and the Ottoman Empire were left almost unguarded. Montenegro sent a 12,000-strong force to assist the Serbs. Exhausted from the previous war, which took the highest toll on Bulgaria, the Bulgarian army soon turned to the defensive. Romania attacked from the north and northeast and the Ottoman Empire also intervened in Thrace. Allied numerical superiority was almost 2:1. After a month and two days of fighting, the war ended as a moral disaster for Bulgaria, and at the same time its economy was ruined and its military demoralised.
First World War
The Kingdom of Bulgaria participated in World War I on the side of the Central Powers between 15 October 1915, when the country declared war on Serbia, and 29 September 1918, when the Armistice of Thessalonica was signed. In the aftermath of the Balkan Wars, Bulgarian opinion turned against Russia and the western powers, whom the Bulgarians felt had done nothing to help them. The government of Vasil Radoslavov aligned the country with Germany and Austria-Hungary, even though this meant also becoming an ally of the Ottomans, Bulgaria's traditional enemy. However, Bulgaria now had no claims against the Ottomans, whereas Serbia, Greece and Romania (allies of Britain and France) were all in possession of lands perceived in Bulgaria as its own.
In 1915 Germany promised to restore the boundaries according to the Treaty of San Stefano and Bulgaria, which had the largest army in the Balkans, declared war on Serbia in October of that year. In the First World War Bulgaria decisively asserted its military capabilities. The second Battle of Doiran, with Gen. Vladimir Vazov as commander, inflicted a heavy blow on the numerically superior British army, which suffered 12,000 casualties against 2,000 from the opposite side. One year later, during the third battle of Doiran, the United Kingdom, supported by Greece, once again suffered a humiliating defeat, losing 3,155 men against just about 500 on the Bulgarian side. The reputation of the French army also suffered badly. The Battle of the Red Wall was marked by the total defeat of the French forces, with 5,700 out of 6,000 men killed. The 261 Frenchmen who survived were captured by Bulgarian soldiers.
Despite the outstanding victories, Germany was near defeat, which meant that Bulgaria would be left without its most powerful ally. The Russian Revolution of February 1917 had a great effect in Bulgaria, spreading antiwar and anti-monarchist sentiment among the troops and in the cities. In June Radoslavov's government resigned. In 1919 Bulgaria officially left the war with the Treaty of Neuilly-sur-Seine.
The army between the World Wars
The Treaty of Neuilly-sur-Seine proved to be a severe blow to Bulgaria's military. According to the treaty, the country had no right to organize a conscription-based military. The professional army was to be no more than 20,000 men, including 10,000 internal forces and 3,000 border guards. Equipping the army with tanks, submarines, bombers and heavy artillery was strictly prohibited, although Bulgaria managed to get around some of these prohibitions. Nevertheless, on the eve of World War II the Bulgarian army was still well-trained and well-equipped; it had, in fact, already been expanded in 1935.
World War II
The government of the Kingdom of Bulgaria under Prime Minister Bogdan Filov declared a position of neutrality upon the outbreak of World War II. Bulgaria was determined to observe it until the end of the war but it hoped for bloodless territorial gains, especially in the lands with a significant Bulgarian population occupied by neighbouring countries after the Second Balkan War and World War I. However, it was clear that the central geopolitical position of Bulgaria in the Balkans would inevitably lead to strong external pressure by both World War II factions. Turkey had a non-aggression pact with Bulgaria. On 7 September 1940 Bulgaria succeeded in negotiating a recovery of Southern Dobruja with the Treaty of Craiova (see Second Vienna Award). Southern Dobruja had been part of Romania since 1913. This recovery of territory reinforced hopes for resolving other territorial problems without direct involvement in the war. The country joined the Axis Powers in 1941, when German troops preparing to invade Yugoslavia and Greece reached the Bulgarian borders and demanded permission to pass through its territory.
On 1 March 1941, Bulgaria signed the Tripartite Pact and officially joined the Axis bloc. After a short period of inaction, the army launched an operation against Yugoslavia and Greece. The goal of reaching the shores of the Aegean Sea and completely occupying the region of Macedonia was successful. Even though Bulgaria did not send any troops to support the German invasion of the Soviet Union, its navy was involved in a number of skirmishes with the Soviet Black Sea Fleet, which attacked Bulgarian shipping. Besides this, Bulgarian armed forces garrisoned in the Balkans battled various resistance groups. The Bulgarian government declared a token war on the United Kingdom and the United States near the end of 1941, an act that resulted in the bombing of Sofia and other Bulgarian cities by Allied aircraft.
Some communist activists managed to begin a guerrilla movement, headed by the underground Bulgarian Communist Party. A resistance movement called Otechestven front (Fatherland Front, Bulgarian: Отечествен фронт) was set up in August 1942 by the Communist Party, the Zveno movement and a number of other parties to oppose the elected government, after a number of Allied victories indicated that the Axis might lose the war. In 1943 Tsar Boris III died suddenly. In the summer of 1944, after having crushed the Nazi defense around Iaşi and Chişinău, the Soviet Army was approaching the Balkans and Bulgaria. On 23 August 1944 Romania quit the Axis Powers, declared war on Germany and allowed Soviet forces to cross its territory to reach Bulgaria. On 26 August 1944 the Fatherland Front decided to incite an armed rebellion against the government, which led to the appointment of a new government on 2 September. The Fatherland Front withheld its support from this government, since it was composed of pro-Nazi elements making a desperate attempt to hold on to power. On 5 September 1944 the Soviet Union declared war on and invaded Bulgaria. On 8 September 1944 the Bulgarian army joined the Soviet Union in its war against Germany.
Cold War era
As the Red Army invaded Bulgaria in 1944 and installed a communist government, the armed forces were rapidly forced to reorganise following the Soviet model, and were renamed the Bulgarian People's Army (Balgarska Narodna Armiya, BNA). Moscow quickly supplied Bulgaria with T-34-85 tanks, SU-100 self-propelled guns, Il-2 attack planes and other new combat machinery. As a Soviet satellite, Bulgaria was part of the Eastern Bloc and became a founding member of the Warsaw Pact. By this time the army had expanded to over 200,000 men, with hundreds of thousands more in reserve. Military service was obligatory. A special defensive line, known as the Krali Marko defensive line, was constructed along the entire border with Turkey. It was heavily fortified with concrete walls and turrets of T-34, Panzer III and Panzer IV tanks.
The army was involved in a number of border skirmishes from 1948 to 1952, repulsing several Greek attacks, and took part in the suppression of the Prague Spring. In the meantime, during the rule of Todor Zhivkov, a significant military-industrial complex was established, capable of producing armored vehicles, self-propelled artillery, small arms and ammunition, as well as aircraft engines and spare parts. Bulgaria provided weapons and military expertise to Algeria, Yemen, Libya, Iraq, Nicaragua, Egypt and Syria. Some military and medical aid was also supplied to North Korea and North Vietnam in the 1950s and 1960s. During the 1970s the Air Force was at the apogee of its power, possessing at least 500 modern combat aircraft in its inventory. Training in the Bulgarian People's Army was exhaustive even by Soviet standards; however, it was never seen as a major force within the Warsaw Pact. In 1989, when the Cold War was coming to its end, the army (the combined ground, air and naval forces) numbered about 120,000 men, most of them conscripts. There were, however, a number of services which, while falling outside Ministry of Defense jurisdiction in peacetime, were considered part of the armed forces. Foremost among these were the Labour Troops (construction forces); the People's Militia (the country's police force, under the Ministry of the Interior, which was itself a militarized structure) and, more importantly, the ministry's Interior Troops; the Border Troops, which in different periods fell under either Ministry of Defense or Ministry of the Interior control; the Civil Defense Service; and the Signals Troops (government communications) and the Transport Troops (mostly railway infrastructure maintenance), two separate services under the Postal and Communications Committee (a ministry). The combined strength of the Bulgarian People's Army and all these services reached well over 325,000 troops.
From 1990
With the collapse of the Warsaw Pact and the end of the Cold War, Bulgaria could no longer support a vast military. A rapid reduction in personnel and active equipment was to be carried out in parallel with a general re-alignment of strategic interests. In 1990, Bulgaria had a total of more than 2,400 tanks, 2,000 armored vehicles, 2,500 large caliber artillery systems, 300 fighter and bomber aircraft, 100 trainer aircraft, more than 40 combat and 40 transport helicopters, 4 submarines, 6 fast missile craft, 2 frigates, 5 corvettes, 6 torpedo boats, 9 patrol craft, 30 minesweepers and 21 transport vessels. Due to the economic crisis that affected most former Eastern bloc countries, a steady reform in the military could not be carried out; much of the equipment fell into disrepair and some of it was smuggled and sold to the international black market. Inadequate payments, fuel and spare part shortages and the disbandment of many capable units led to an overall drop in combat readiness, morale and discipline.
After partially recovering from the 1990s crisis, the Bulgarian military became a part of NATO. Even before that, Bulgaria sent a total of 485 soldiers to Iraq (2003–2008) as a participant in the Iraq War, and maintained a 608-strong force in Afghanistan as part of ISAF. Bulgaria had a significant missile arsenal, including 67 SCUD-B, 50 FROG-7 and 24 SS-23 ballistic missiles. In 2002, Bulgaria disbanded the Rocket Forces despite nationwide protests, and it has since disbanded its submarine component. Under the reform plans, Bulgaria was to have 27,000 standing troops by 2014, consisting of 14,310 troops in the land forces, 6,750 in the air force, 3,510 in the navy and 2,420 in the joint command. In 2018, the Bulgarian Armed Forces numbered around 33,150 soldiers, 73 aircraft, 2,234 vehicles including 531 tanks, and 29 naval assets.
Organization
Defence Staff
The Bulgarian Armed Forces are headquartered in Sofia, where most of the Defence Staff is based. Until recently the supreme military institution was the General Staff and the most senior military officer was known as the Chief of the General Staff. After the latest military reform was implemented, the General Staff became a department within the Ministry of Defence, and its name was changed to reflect the new arrangement: the former General Staff became the Defence Staff, and the supreme military commander became the Chief of Defence. Currently headed by Chief of Defence Admiral Emil Eftimov, the Defence Staff is responsible for the operational command of the Bulgarian Army and its three major branches. His deputies are Vice Admiral Petar Petrov, General Atanas Zaprianov and General Dimitar Zekhtinov.
Supreme officer rank assignments in the Bulgarian Army and other militarised services
Established by Executive Order of the President № 85 / 28.02.2012, most recent amendment published in the State Gazette Issue 96 from December 2, 2022:
Ministry of Defence
Chief of Defence – General / Admiral
Deputy Chief of Defence – Lieutenant-General / Vice-Admiral
Deputy Chief of Defence – Lieutenant-General / Vice-Admiral (until October 1, 2014 Major-General / Rear-Admiral)
Defense Staff
Director of the Defence Staff – Major-General / Rear-Admiral (established on May 6, 2018, the de-facto Chief of Staff of the BAF)
Director, "Operations and Training" Directorate – Brigade General / Flotilla Admiral
Director, "Logistics" Directorate – Brigade General / Flotilla Admiral
Director, "Strategical Planning" Directorate – Brigade General / Flotilla Admiral
Director, "Communication and Information Systems" Directorate – Brigade General / Flotilla Admiral
Director, "Defence Policy and Planning" Directorate (established on January 1, 2019) – Brigade General / Flotilla Admiral
Joint Forces Command
Commander, Joint Forces Command – Major-General / Rear-Admiral (until August 31, 2021 Lieutenant-General / Vice-Admiral)
Deputy Commander, Joint Forces Command – Brigade General / Flotilla Admiral (until August 31, 2021 Major-General / Rear-Admiral)
Chief of Staff, Joint Forces Command – Brigade General / Flotilla Admiral
Land Forces
Commander, Land Forces – Major-General
Deputy Commander, Land Forces – Brigade General
Chief of Staff, Land Forces – Brigade General
Commander, 2nd Mechanised Brigade – Brigade General
Commander, 61st Mechanised Brigade – Brigade General
Air Forces
Commander, Air Forces – Major-General
Deputy Commander, Air Forces – Brigade General
Commander, 3rd Air Base – Brigade General
Commander, 24th Air Base – Brigade General
Navy
Commander, Naval Forces – Rear-Admiral
Deputy Commander, Naval Forces – Flotilla Admiral
Commander, Combat and Support Ships Flotilla – Flotilla Admiral
Joint Special Forces Command
Commander, Joint Special Forces Command – Major-General
Logistics Support Command (established on September 1, 2021)
Commander, Logistics Support Command – Brigade General
Communications and Information Support and Cyber-Defence Command (established on September 1, 2021 on the basis of the Stationary Communications and Information System)
Commander, Communications and Information Support and Cyber-Defence Command – Brigade General
Military Police Service, directly subordinated to the Minister of Defense
Director, Military Police Service – Brigade General / Flotilla Admiral
Military Intelligence Service, directly subordinated to the Minister of Defense
Director, Military Intelligence Service – Brigade General / Flotilla Admiral or civil servant equal in rank
Military education institutions, directly subordinated to the Minister of Defense
Chief of the "Georgi Stoykov Rakovski" Military Academy – Major-General / Rear-Admiral
Chief of the Military Medical Academy and the Armed Forces Medical Service – Major-General / Rear-Admiral
Chief of the "Vasil Levski" National Military University – Brigade General
Chief of the "Georgi Benkovski" Higher Air Force School (re-established on January 1, 2020) – Brigade General
Chief of the "Nikola Yonkov Vaptsarov" Higher Naval School – Flotilla Admiral
Other positions at the Ministry of Defense
Military Advisor on Military Security Matters to the Supreme Commander-in-Chief, the President of the Republic of Bulgaria – Major-General / Rear-Admiral
Military Representative of the Chief of Defense at the NATO Military Committee and at the EU Military Committee – Lieutenant-General / Vice-Admiral
Director of the Cooperation and Regional Security Directorate at the NATO Military Committee – Major-General / Rear-Admiral
National Military Representative at the NATO Supreme Headquarters Allied Powers Europe – Major-General / Rear-Admiral
Deputy Commander of the NATO Rapid Deployable Corps – Greece (Thessaloniki) – Major-General / Rear-Admiral
Deputy Chief of Staff for Operations, Multinational Corps Southeast – Sibiu, Romania – Brigade General
In addition to the aforementioned positions, there are general rank positions in the National Intelligence Service and the National Close Protection Service (the bodyguard service to high-ranking officials and visiting dignitaries). These two services are considered part of the Armed Forces of the Republic of Bulgaria, but are directly subordinated to the President of Bulgaria and fall out of the jurisdiction of the Ministry of Defense.
National Intelligence Service
With the transformation of the National Intelligence Service into the State Agency for Intelligence the positions of Director, National Intelligence Service (Major-General / Rear-Admiral) and Deputy Director, National Intelligence Service (Brigade General / Flotilla Admiral) were stricken from the list of supreme officer assignments through Executive Order of the President №58/22.03.2016. The newly established positions are the civilian assignments of Chairman and Deputy-Chairman of the State Agency for Intelligence.
National Close Protection Service
Director, National Close Protection Service - Major-General / Rear-Admiral
Deputy Director, National Close Protection Service - Brigade General / Flotilla Admiral
With the establishment of the State Agency for National Security - SANS (Bulgarian: Darzhavna Agentsiya za Natsionalna Sigurnost - DANS, Държавна агенция за национална сигурност - ДАНС) part of the military security personnel came under its authority. Before that the security aspects of the armed forces were handled by a unified organisation under the General Staff - the "Military Service of Security and Military Police". After the formation of SANS the service was split, with the military counter-intelligence personnel entering the newly formed structure and the military police personnel staying under Ministry of Defense subordination. While technically civilian servants not part of the armed forces, the military counter-intelligence personnel of the State Agency of National Security retain their military ranks.
Ministry of Defence
The organisation of the Ministry of Defence includes:
Minister of Defence
3 Deputy-Ministers of Defence
Political Cabinet
Permanent Secretary of Defence (the highest-ranking civil servant of the Ministry)
Inspectorate
General Administration
"Administration and Information Support" Directorate
"Public Relations and Protocol" Directorate
"Finances" Directorate
Specialised Administration
"Defence Infrastructure" Main Directorate
"Defence Policy and Planning" Directorate
"Planing, Programming and Budgeting" Directorate
"Defence Legal Activities" Directorate
"Defence Human Resources Management" Directorate
"Defence Public Orders" Directorate
"Armament Policy" Directorate
"Social Policy and Military-Patriotic Upbringing" Directorate
"Security of Information" Directorate
"Internal Audit" Directorate
"Financial Control and Check of Material Accountability" Unit
Civil servant in charge of personal data protection
Chief of Defence (the highest-ranking officer, the only four-star rank on active duty)
Deputy-Chief of Defence (Lieutenant-General / Vice-Admiral)
Deputy-Chief of Defence (Lieutenant-General / Vice-Admiral)
Director of the Defence Staff (Major-General / Rear-Admiral, the Defence Staff is the successor of the General Staff and thus the Director is the Chief of Staff of the Bulgarian Army)
"Operations and Training" Directorate
"Logistics" Directorate
"Strategical Planning" Directorate
"Communication and Information Systems" Directorate
"Defence Policy and Planning" Directorate
Command Sergeant-Major of the Bulgarian Army
Structures directly subordinated to the Ministry of Defence
Structures directly subordinated to the Ministry of Defence include:
Defence Intelligence Service, Sofia (commanded by a Major-General/ Rear-Admiral)
Director
Directorate
Information Division
Analysis Division
Resources Supply Division
Military Police Service, Sofia (commanded by a Brigade General / Flotilla Admiral)
Military Police Command
Military Police Operational Company (MRAV Sand Cat)
Regional Military Police Service Sofia
Regional Military Police Service Plovdiv
Regional Military Police Service Pleven
Regional Military Police Service Varna
Regional Military Police Service Sliven
Military Police Service Logistics and Training Centre, Sofia
Military Geographical Service
MGS Headquarters
Geographical Information Support Centre
Geodesic Observatory (GPS Observatory)
Military Geographical Centre
Information Security Unit
Financial Comptroller
National Guards Unit, Sofia (commanded by a Colonel)
Headquarters
1st Guards Battalion
2nd Mixed Guards Battalion
National Guards Unit Representative Military Band
Armed Forces Representative Dance Company
Guardsmen Training Centre
Logistics Support Company
Military Medical Academy, Sofia (commanded by a Major-General / Rear-Admiral)
Chief of the MMA, Chief of the MATH - Sofia and General Surgeon of the Bulgarian Armed Forces
Deputy Chief for Diagnostics and Medical Treatment Activities
Deputy Chief for Education and Scientific Activities
Deputy Chief for Medical Support of Military Units and Overseas Military Missions
Multiprofile Active Treatment Hospital - Sofia
Multiprofile Active Treatment Hospital (informally known as the Naval Hospital)- Varna
Multiprofile Active Treatment Hospital - Plovdiv
Multiprofile Active Treatment Hospital - Sliven
Multiprofile Active Treatment Hospital - Pleven
Follow-up Long-term Treatment and Rehabilitation Hospital "Saint George the Victorious" - Pomorie
Follow-up Long-term Treatment and Rehabilitation Hospital "Caleroya" - Hisar
Follow-up Long-term Treatment and Rehabilitation Hospital - Bankya
Military Medical Quick Reaction Force (expeditionary disaster and crisis relief unit)
Psychological Health and Prevention Centre
Scientific and Application Centre for Military Medical Expertise and Aviation and Seaborne Medicine
Scientific and Application Centre for Military Epidemiology and Hygiene
Military Academy "Georgi Stoykov Rakovski", Sofia (commanded by a Major-General / Rear-Admiral)
Command
Commandant of the Military Academy
Deputy Chief for Study and Scientific Activities
Deputy Chief for Administrative Activities and Logistics
Administrative Units
Personnel and Administrative Support Department
Logistics Department
Study and Scientific Activities Department
Financial Department
Library and Publishing Activities Sector
Public Relations, International Activities and Protocol Sector
Training Units
National Security and Defence College
Command Staff College
Peacekeeping Operations and Computer Simulations Sector
Foreign Languages Studies Department
Perspective Defence Research Institute
National Military University "Vasil Levski", Veliko Tarnovo (commanded by a Brigade General)
Combined Arms Education Department, Veliko Tarnovo
Artillery and Communication Systems Education Department, Shumen
NCO School, Veliko Tarnovo
Foreign Languages and Computer Systems Education Department, Shumen
Higher Air Force School "Georgi Benkovski", Dolna Mitropoliya (commanded by a Brigade General, temporarily a faculty of the NMU, reinstated on January 20, 2020)
Higher Naval School "Nikola Yonkov Vaptsarov", Varna (commanded by a Flotilla Admiral)
Chief of the Higher Naval Officer School
Deputy Chief for Administration and Logistics
Deputy Chief for Studies and Science Activities
Navigation Department
Engineering Department
Post-Graduate Qualification Department
Professional Petty Officers College
Defence Institute "Prof. Tsvetan Lazarov", Sofia
The Defence Institute is the research and development administration of the MoD. It includes the:
Administration and Financial Management Department
Military Standardisation, Quality and Certification Department
Armament, Equipment and Materials Development Department
Armament, Equipment and Materials Testing and Control Department
C4I Systems Development Department
Central Artillery Technical Evaluation Proving Ground, Stara Zagora
Central Office of Military District, Sofia
Commandment Service of the Ministry of Defence, Sofia
The Commandment Service is an institution in charge of real estate management, transportation, library services, documentation publishing and communications support for the central administration of the MoD, transportation support to the immediate MoD personnel, classified information, cryptographic and perimeter security for the MoD administration buildings.
Director
Deputy Director
Chief Legal Advisor
Financial Comptroller
Administrative Department
Financial Department
Business Department
Transportation Support Department
Support Department
CIS Support Department
Technical Centre for Armed Forces Information Security
Executive Agency for the Military Clubs and Recreational Activities, Sofia
National Museum of Military History, Sofia
Joint Forces Command
The Joint Operational Command (Съвместно оперативно командване (СОК)) was established on October 15, 2004 with its headquarters in Sofia. The country became a member of NATO in the same year, and this reorganisation was carried out to align the Bulgarian Armed Forces with NATO practices. The planning and execution of military operations were transferred from the respective armed service commands to a joint organisation.
In 2010 the Ministry of Defence completed a thorough study of defence policy and issued a White Book, or White Paper on Defence, calling for a major overhaul of the structure of the defence forces. On July 1, 2011 the Joint Operational Command was reorganised into the Joint Forces Command (Съвместно командване на силите (СКС)). According to the document, the military of the Republic of Bulgaria should include two mechanized brigades, four regiments (logistics, artillery, engineering, special operations) and four battalions (reconnaissance, mechanized, NBC, psychological operations) in the Land Forces; two air bases, a SAM air defence base and an Air Force training base in the Air Force; and one naval base consisting of two home ports in the Navy. There are seven brigade-level formations: the two mechanised brigades and the special forces brigade of the army, the two air bases of the air force, the naval base and the logistical brigade of the Joint Forces Command.
On September 1, 2021 the Joint Forces Command was reorganised again in accordance with the Development Plan for the Armed Forces until 2026 (План за развитие на Въоръжените сили до 2026 г.), put into effect by Resolution of the Government № 183/07.05.2021. The logistics brigade and the movement control units of the JFC formed the Logistics Support Command. Since then the Joint Forces Command has had seven units directly subordinated to it:
Military Command Centre
Operational Intelligence Information Center
Centre for Radiological, Chemical, Biological and Ecological Environment Monitoring and Control
Mobile Communication and Information System
Operational Archive of the Bulgarian Army
Joint Forces Training Range "Novo Selo"
National Military Study Complex "Charalitsa"
Support and Maintenance Group of the JFC
With the introduction of the new force structure of the Bulgarian Armed Forces, the commands of the three armed services of the Bulgarian Army – the Land, Air and Naval Forces – are responsible for the generation of combat-ready forces, which are transferred under the operational command and control of the JFC.
Land Forces Command
Naval Forces Command
Air Forces Command
Under the previous structure they were subordinated to the JFC.
The logistics units of the JFC were re-arranged into the newly-formed Logistical Support Command (Командване за логистична поддръжка (КЛП)):
Logistical Support Command, Sofia
Logistics Brigade
Brigade Headquarters
1st Transport Battalion, Sofia
2nd Transport Battalion, Burgas
Central Supply Base, Negushevo
repair and maintenance bases
depots, storage facilities and technical inspection units
Movement Control Headquarters
The previous 62nd Signals Brigade at Gorna Malina was responsible for maintaining the higher military communication lines. In addition to the functions of the Signals Regiment in the Sofia suburb of Suhodol, the brigade had at least three dispersed signals regiments for government communications, such as the 75th Signals Regiment (Lovech), the 65th Signals Regiment (Nova Zagora) and at least one further signals regiment, whose designation is not publicly known, in the Rila-Pirin mountain massif. The modern successors of the 62nd Signals Brigade are the Stationary Communication and Information System (Стационарна Комуникационна Информационна Система (СКИС)) of the Defence Staff (which also fulfils SIGINT and cyber-defence tasks alongside its strategic communications mission) and the Mobile Communication and Information System (Мобилна Комуникационна Информационна Система (МКИС)) of the Joint Forces Command.
On September 1, 2021 the Stationary Communications and Information System, which was directly subordinated to the Minister of Defence, became the Communications and Information Support and Cyber-Defence Command (Командване за комуникационно-информационна поддръжка и киберотбрана (ККИПКО)).
Communications and Information Support and Cyber-Defence Command, Sofia
Communications and Information Centre
Government Communications Support Centre,
Operational Centres
Engineering and CIS recovery Centre
Stationary Communications Network
Joint Special Operations Command
The 68th Special Forces Brigade was removed from the Land Forces' order of battle (ORBAT) on 1 February 2017, de facto becoming the country's fourth combat service. Unlike Bulgaria's Land, Air and Naval Forces, however, it fell outside the Joint Forces Command structure, having been assigned directly under the authority of the Chief of Defence. The brigade was transformed into the JSOC with effect from November 1, 2019, and its commander, Brigade General Yavor Mateev, was promoted to Major General as the chief of the new command.
Joint Special Operations Command, Plovdiv
Command Staff and Command Battalion
68th Special Forces Group (designated in honour of the former 68th Training Para-Recon Base, Plovdiv)
86th Special Forces Group (designated in honour of the former 86th Training Para-Recon Base, Musachevo)
1st Special Forces Group (listed on the official JSOC website, missing on the MoD website, status uncertain)
3rd Special Forces Group
Training and Combat Support Center
Logistics Support Battalion
Medical Point
Personnel and education
Bulgaria's total military personnel as of 2014 is 37,100, of which 30,400 (80.1%) are active military personnel and 8,100 (11.9%) are civilian personnel. The Land Forces are the largest branch, with at least 18,000 men serving there. In terms of percentage, 53% of all Army personnel are in the Land Forces, 25% are in the Air Force, 13% are in the Navy and 9% are in the Joint Forces Command. Annual spending per soldier amounts to 30,000 leva (~ 15,000 euro) and is scheduled to increase to 43,600 leva by 2014.
Unlike many former Soviet-bloc militaries, the Bulgarian armed forces do not commonly suffer from discipline and morale problems. During the Communist era, members of the army enjoyed extensive social privileges. After the fall of Communism and Bulgaria's transition to a market economy, wages fell severely. For almost a decade social benefits were virtually non-existent, and some of them have only recently been restored. Nikolai Tsonev, defence minister in the 2005–2009 cabinet, took steps to provide members of the military and their families with certain privileges in terms of healthcare and education, and to improve living conditions.
Military education in Bulgaria is provided in military universities and academies. Due to cuts in spending and manpower some universities have been disbanded and their campuses were included as faculties of other, larger educational entities. The largest institutions of military education in Bulgaria are:
Vasil Levski National Military University
Rakovski Defence and Staff College
Nikola Vaptsarov Naval Academy
Military Medical Academy – a mixed military academy/hospital institution
Training
The Land Forces practice extensive year-round military training in various conditions. Cooperative drills with the United States are very common, the most recent series of which was conducted in 2008. Bulgaria's most recent full-scale exercise simulating a foreign invasion was carried out in 2009. It was conducted at the Koren range and included some 1,700 personnel with tanks, ATGMs, attack aircraft, anti-aircraft guns and armored vehicles. The combat skills of individual soldiers are regarded as being on a very high level, on par with troops of the U.S. Army.
Until recent years the Air Force suffered somewhat from fuel shortages, a problem which was overcome in 2008. Fighter pilots fly year-round, but gunship pilots do not fly often because the planned modernization of the Mi-24 gunships has yet to be carried out. Due to financial difficulties fighter pilots log 60 hours of flying time per year, only a third of the national norm of 180 hours.
The Navy also has some fuel shortage problems, but military training is still effective. The most recent overseas operation of the Navy was along the coast of Libya as part of Operation Unified Protector.
Budget
After the collapse of the Warsaw Pact, Bulgaria lost the ability to acquire cheap fuel and spares for its military. A large portion of its nearly 2,000 T-55 tanks fell into disrepair, and eventually almost all of them were scrapped or sold to other countries. In the early 1990s the budget was so small that regular personnel received only token payments. Many educated and well-trained officers lost the opportunity to train younger soldiers, as the necessary equipment and facilities lacked adequate funding. Military spending increased gradually, especially in the last ten years. As of 2005, the budget was no more than $400 million, while military spending for 2009 amounted to more than $1.3 billion – more than a threefold increase in four years. Despite this growth, the military still does not receive sufficient funds for modernisation. An example of poor spending priorities is the large-scale purchase of transport aircraft while the Air Force has a severe need for new fighters (the MiG-29s, even though modernised, are nearing their operational limits). The planned procurement of 2–4 Gowind-class corvettes has been cancelled. As of 2009, military spending was about 1.98% of GDP. In 2010 the budget was reduced to only 1.3% of GDP due to the international financial crisis.
Land Forces
The Land Forces are functionally divided into Deployable and Reserve Forces. Their main functions include deterrence, defence, peace support and crisis management, humanitarian and rescue missions, as well as social functions within Bulgarian society. Active troops in the land forces number about 18,000 men, and reserve troops number about 13,000.
The equipment of the land forces is impressive in terms of numbers, but most of it is nonoperational and scheduled to be scrapped or refurbished and exported to other nations. Bulgaria has a military stockpile of about 5,000,000 small arms, models ranging from World War II-era MP 40 machine pistols to modern Steyr AUG, AK-74, HK MP5, HK416 and AR-M12F assault rifles.
National guard unit
The National Guard of Bulgaria, founded in 1879, is the successor to the personal guards of Knyaz Alexander I. On 12 July of that year, the guards escorted the Bulgarian knyaz for the first time; today the official holiday of the National Guard is celebrated on 12 July. Throughout the years the structure of the guards has evolved, going from convoy to squadron, to regiment and, subsequent to 1942, to division. Today it includes military units for army salute and wind orchestra duties.
In 2001, the National Guard unit was designated an official military unit of the Bulgarian army and one of the symbols of state authority, along with the flag, the coat of arms and the national anthem. It is a formation directly subordinate to the Minister of Defence; while legally part of the armed forces, it is entirely independent of the Defence Staff.
Statistics and equipment
Note: This table shows the combined active and reserve forces. Most major equipment types are listed here.
In 2019, some but not all of the T-72 main battle tanks remaining after the scrapping of older equipment were sent for mechanical servicing for the first time in years.
Most of the equipment that should be battle-ready is in dire condition – old, rusty or non-functional. The rest, about 50,000 tons of materiel sold as scrap, can be found in scrap depots near the railroad in Sofia, including battle tanks, artillery and other Soviet-era combat equipment.
Navy
The Navy has traditionally been the smallest component of the Bulgarian military. Established almost simultaneously with the Ground forces in 1879, initially it consisted of a small fleet of boats on the Danube river. Bulgaria has a coastline of about 354 kilometres – thus, naval warfare is not considered a priority.
After the downturn in 1990, the Navy was largely overlooked and received almost no funding. No modernisation projects were carried out until 2005, when a Wielingen-class frigate (F912 Wandelaar) was acquired from Belgium. By 2009, Bulgaria had acquired two more frigates of the same class. The first of them was renamed 41 Drazki and took part in several operations and exercises, most notably the UNIFIL maritime patrol along the coast of Lebanon in 2006 and Operation Active Endeavour. It also participated in the enforcement of the naval blockade against Muammar Gaddafi's regime off the coast of Libya from 2011 until 2012.
The equipment is typical for a small navy, consisting mostly of light multi-purpose vessels – four frigates, three corvettes, five minesweepers, three fast missile craft and two landing ships. Other equipment includes a coastal defence missile battalion armed with locally modified P-15 Termit missiles, a coastal artillery battery, a naval helicopter airbase and a marine special forces unit.
The Bulgarian Navy is centered in two main bases – in Varna and in Burgas.
Air Force
In the past decade Bulgaria has been actively trying to restructure its armed forces as a whole, and much attention has been given to keeping its aging Russian-made aircraft operational. The attack and air-defence branches of the Bulgarian Air Force currently rely mainly on MiG-29s and Su-25s. About 15 MiG-29 fighters have been modernised in order to meet NATO standards; the first aircraft arrived on 29 November 2007 and final delivery was due in March 2009. In 2006 the Bulgarian government signed a contract with Alenia Aeronautica for the delivery of five C-27J Spartan transport aircraft to replace the Soviet-made An-24 and An-26, although the contract was later changed to only three aircraft. Modern EU-made transport helicopters were purchased in 2005, and a total of 12 Eurocopter Cougars have been delivered (eight transport and four CSAR). Three Eurocopter AS565 Panther helicopters were delivered for the Bulgarian Navy in 2016.
Branches of the Air Force include: fighter aviation, assault aviation, reconnaissance aviation and transport aviation, air defence troops, radio-technical troops, communications troops, radio-technical support troops, logistics and medical troops.
The Bulgarian Ministry of Defense has announced plans to withdraw and replace the MiG-29 fighters with new F-16V Fighting Falcon by 2025–2026.
Aircraft inventory
With the exception of the Navy's small helicopter fleet, the Air Forces are responsible for all military aircraft in Bulgaria. The Air Forces' inventory numbers fewer than 50 aircraft, including combat jets and helicopters. Aircraft of western origin have only begun to enter the fleet and account for a small share of the total in service; much of the remaining inventory is old, inactive or unusable.
Bulgarian-American cooperation
The Bulgarian-American Joint Military Facilities were established by a Defence Cooperation Agreement signed by the United States and Bulgaria in April 2006. Under the agreement, U.S. forces can conduct training at several bases in the country, which remain under Bulgarian command and under the Bulgarian flag. Under the agreement, no more than 2,500 U.S. military personnel can be located at the joint military facilities.
Foreign Policy magazine lists Bezmer Air Base as one of the six most important overseas facilities used by the USAF.
Deployments
Both during Communist rule and after, Bulgaria has deployed troops with different tasks in various countries. The table below lists Bulgarian military deployments in foreign countries. Active missions are shown in bold.
See also
Defense industry of Bulgaria
Bulgaria and weapons of mass destruction
Medieval Bulgarian Army
References
Sources
Бяла книга на Въоръжените сили (White Paper of the Armed Forces), Ministry of Defence of Bulgaria, 2011.
Wikisource:Great Battles of Bulgaria
Bibliography
External links
Ministry of Defence of Bulgaria
Equipment holdings in 1996
https://web.archive.org/web/20110528070137/http://www.wikileaks.ch/cable/2007/10/07SOFIA1271.html – U.S. Embassy Sofia views via United States diplomatic cables leak on appropriate future equipment purchases, 2007
http://www.mediafire.com/download/heyrxhrnpqx06mz/Bulgarian_Military.docx and http://www.mediafire.com/download/ba571l7jiid2tf8/Bulgarian+Military.pdf – Word and PDF files containing the Bulgarian military's equipment list and specific details
Military of Bulgaria
Permanent Structured Cooperation
Nokia Bell Labs, originally named Bell Telephone Laboratories (1925–1984), then AT&T Bell Laboratories (1984–1996) and Bell Labs Innovations (1996–2007), is an American industrial research and scientific development company owned by the Finnish company Nokia. It is headquartered in Murray Hill, New Jersey, and operates a global network of laboratories.
Researchers working at Bell Laboratories are credited with the development of radio astronomy, the transistor, the laser, the photovoltaic cell, the charge-coupled device (CCD), information theory, the Unix operating system, and the programming languages B, C, C++, S, SNOBOL, AWK, AMPL, and others. Ten Nobel Prizes have been awarded for work completed at Bell Laboratories.
Bell Labs had its origin in the complex corporate organization of the Bell System telephone conglomerate. The laboratory began in the late 19th century as the Western Electric Engineering Department, located at 463 West Street in New York City. After years of conducting research and development under Western Electric, a Bell subsidiary, the Engineering Department was reformed into Bell Telephone Laboratories in 1925 and placed under the shared ownership of Western Electric and the American Telephone and Telegraph Company (AT&T). In the 1960s, laboratory and company headquarters were moved to New Jersey. Nokia acquired Bell Labs in 2016.
Origin and historical locations
Bell's personal research after the telephone
In 1880, when the French government awarded Alexander Graham Bell the Volta Prize of 50,000 francs for the invention of the telephone (equivalent to about US$10,000 at the time), he used the award to fund the Volta Laboratory (also known as the "Alexander Graham Bell Laboratory") in Washington, D.C., in collaboration with Sumner Tainter and Bell's cousin Chichester Bell. The laboratory was variously known as the Volta Bureau, the Bell Carriage House, the Bell Laboratory and the Volta Laboratory.
It focused on the analysis, recording, and transmission of sound. Bell used his considerable profits from the laboratory for further research and education advancing the diffusion of knowledge relating to the deaf. This resulted in the founding of the Volta Bureau (c. 1887) at the Washington, D.C. home of his father, linguist Alexander Melville Bell. The carriage house there, at 1527 35th Street N.W., became their headquarters in 1889.
In 1893, Bell constructed a new building close by at 1537 35th Street N.W., specifically to house the lab. This building was declared a National Historic Landmark in 1972.
After the invention of the telephone, Bell maintained a relatively distant role with the Bell System as a whole, but continued to pursue his own personal research interests.
Early antecedent
The Bell Patent Association was formed by Alexander Graham Bell, Thomas Sanders, and Gardiner Hubbard when filing the first patents for the telephone in 1876.
Bell Telephone Company, the first telephone company, was formed a year later. It later became a part of the American Bell Telephone Company.
In 1884, the American Bell Telephone Company created the Mechanical Department from the Electrical and Patent Department formed a year earlier.
American Telephone & Telegraph Company (AT&T) and its own subsidiary company, took control of American Bell and the Bell System by 1889.
American Bell held a controlling interest in Western Electric (which was the manufacturing arm of the business) whereas AT&T was doing research into the service providers.
Formal organization and location changes
In 1896, Western Electric bought property at 463 West Street to centralize the manufacturers and engineers which had been supplying AT&T with such technology as telephones, telephone exchange switches and transmission equipment.
On January 1, 1925, Bell Telephone Laboratories, Inc. was organized to consolidate the development and research activities in the communication field and allied sciences for the Bell System. Ownership was evenly shared between Western Electric and AT&T. The new company had 3600 engineers, scientists, and support staff. Its 400,000 square foot space was expanded with a new building occupying about one quarter of a city block.
The first chairman of the board of directors was John J. Carty, the vice-president of AT&T, and the first president was Frank B. Jewett, also a board member, who stayed there until 1940. The operations were directed by E. B. Craft, executive vice-president, and formerly chief engineer at Western Electric.
By the early 1940s, Bell Labs engineers and scientists had begun to move to other locations away from the congestion and environmental distractions of New York City, and in 1967 Bell Laboratories headquarters was officially relocated to Murray Hill, New Jersey.
Among the later Bell Laboratories locations in New Jersey were Holmdel, Crawford Hill, the Deal Test Site, Freehold, Lincroft, Long Branch, Middletown, Neptune, Princeton, Piscataway, Red Bank, Chester, and Whippany. Of these, Murray Hill and Crawford Hill remain in existence (the Piscataway and Red Bank locations were transferred to and are now operated by Telcordia Technologies and the Whippany site was purchased by Bayer).
The largest grouping of people in the company was in Illinois, at Naperville-Lisle, in the Chicago area, which had the largest concentration of employees (about 11,000) prior to 2001. There also were groups of employees in Indianapolis, Indiana; Columbus, Ohio; North Andover, Massachusetts; Allentown, Pennsylvania; Reading, Pennsylvania; and Breinigsville, Pennsylvania; Burlington, North Carolina (1950s–1970s, moved to Greensboro 1980s) and Westminster, Colorado. Since 2001, many of the former locations have been scaled down or closed.
Bell's Holmdel research and development lab, a 1.9 million-square-foot structure set on 473 acres, was closed in 2007. The mirrored-glass building was designed by Eero Saarinen. In August 2013, Somerset Development bought the building, intending to redevelop it into a mixed commercial and residential project. A 2012 article expressed doubt about the success of the newly named Bell Works site, but several large tenants had announced plans to move in through 2016 and 2017.
Building Complex Location (code) information, past and present
Chester (CH) - North Road, Chester, NJ (began 1930, outdoor test site for small size telephone pole preservation, timber-related equipment, cable laying mechanism for the first undersea voice cable, research for loop transmission, Lucent donated land for park)
Crawford Hill (HOH) - Crawfords Corner Road, Holmdel, NJ (built 1930s, currently as exhibit and building sold, horn antenna used for "Big Bang" theory)
Red Hill (HR) - located at exit 109 on the Garden State Parkway (480 Red Hill Rd, Middletown, NJ), the building that formerly housed hundreds of Bell Labs researchers is now in use by Memorial Sloan Kettering
Holmdel (HO) - 101 Crawfords Corner, Holmdel, NJ (built 1959–1962, older structures in the 1920s, currently as private building called Bell Works, discovered extraterrestrial radio emissions, undersea cable research, satellite transmissions systems Telstar 3 and 4); provided office space for ~3000 workers in the 1980s; prized glass building with hollow interior designed by Eero Saarinen; a 3-legged white water tower built to resemble a transistor marks the long entrance drive to this facility.
Indian Hill (IH) - 2000 Naperville Road, Naperville, IL (built 1966, currently Nokia, developed switching technology and systems)
Murray Hill (MH) - 600 Mountain Ave, Murray Hill, NJ (built 1941–1945, currently Nokia, developed transistor, UNIX operating system and C programming language, anechoic chamber, several building sections demolished)
Short Hills (HL) - 101-103 JFK Parkway, Short Hills, NJ (housed various departments such as Accounts Payable, IT Purchasing, HR Personnel, Payroll, Telecom, the Government group and the Unix Administration Systems Computer Center. The buildings still exist, though without the overhead walkway that once connected them, and are now occupied by two different companies, one in banking and one in business analytics.)
Summit (SF) - 190 River Road, Summit, NJ (the building was part of UNIX Software Operations and became UNIX System Laboratories, Inc. In December 1991, USL combined with Novell. The location is now occupied by a banking company.)
West St ( ) - 463 West Street, New York, NY (built 1898, 1925 until December 1966 as Bell Labs headquarters, experimental talking movies, wave nature of matter, radar)
Whippany (WH) - 67 Whippany Road, Whippany, NJ (built 1920s, demolished and portion building as Bayer, performed military research and development, research and development in radar, in guidance for the Nike missile, and in underwater sound, Telstar 1, wireless technologies)
List of Bell Labs (1974)
Bell Labs' 1974 corporate directory listed 22 labs in the United States, located in:
Allentown - Allentown, PA
Atlanta - Norcross, GA
Centennial Park - Piscataway, NJ
Chester - Chester, NJ
Columbus - Columbus, OH
Crawford Hill - Holmdel, NJ
Denver - Denver, CO
Grand Forks-MSR - Cavalier, ND [Missile Site Radar (MSR) Site]
Grand Forks-PAR - Cavalier, ND [Perimeter Acquisition Radar (PAR) Site]
Guilford Center - Greensboro, NC
Holmdel - Holmdel, NJ
Indianapolis - Indianapolis, IN
Indian Hill - Naperville, IL
Kwajalein - San Francisco, CA
Madison - Madison, NJ
Merrimack Valley - North Andover, MA
Murray Hill - Murray Hill, NJ
Raritan River Center - Piscataway, NJ
Reading - Reading, PA
Union - Union, NJ
Warren Service Center - Warren, NJ
Whippany - Whippany, NJ
Discoveries and developments
Bell Laboratories was, and is, regarded by many as the premier research facility of its type, developing a wide range of revolutionary technologies, including radio astronomy, the transistor, the laser, information theory, the operating system Unix, the programming languages C and C++, solar cells, the charge-coupled device (CCD), and many other optical, wireless, and wired communications technologies and systems.
1920s
In 1924, Bell Labs physicist Walter A. Shewhart proposed the control chart as a method to determine when a process was in a state of statistical control. Shewhart's methods were the basis for statistical process control (SPC): the use of statistically based tools and techniques to manage and improve processes. This was the origin of the modern quality control movement, including Six Sigma.
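For illustration, the core idea of a Shewhart chart can be sketched in a few lines of Python. The measurements, the k = 3 multiplier and the use of the overall sample standard deviation (rather than the range-based estimators of standard SPC practice) are simplifying assumptions, not Shewhart's original procedure:

import statistics

def control_limits(measurements, k=3.0):
    """Return (lcl, center, ucl) for a simple Shewhart-style individuals chart.

    Simplified illustration: process sigma is estimated with the overall
    sample standard deviation instead of a moving-range estimator.
    """
    center = statistics.mean(measurements)
    sigma = statistics.stdev(measurements)
    return center - k * sigma, center, center + k * sigma

# Hypothetical measurements of a repeated process output.
data = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 10.0, 9.7, 10.3, 12.5]
lcl, center, ucl = control_limits(data[:-1])  # limits from the stable history
for i, x in enumerate(data):
    flag = "in control" if lcl <= x <= ucl else "out of control"
    print(f"sample {i}: {x:.1f} ({flag})")

The final point falls outside the 3-sigma limits and is flagged, which is exactly the signal a control chart uses to distinguish assignable causes from ordinary process variation.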
In 1926, the laboratories invented an early synchronous-sound motion picture system, in competition with Fox Movietone and DeForest Phonofilm.
In 1927, a Bell team headed by Herbert E. Ives successfully transmitted long-distance 128-line television images of Secretary of Commerce Herbert Hoover from Washington to New York. In 1928 the thermal noise in a resistor was first measured by John B. Johnson, for which Harry Nyquist provided the theoretical analysis; this is now termed Johnson noise. During the 1920s, the one-time pad cipher was invented by Gilbert Vernam and Joseph Mauborgne at the laboratories. Bell Labs' Claude Shannon later proved that it is unbreakable.
In 1928, Harold Black invented the negative feedback system commonly used in amplifiers. Later, Harry Nyquist analyzed Black's design rule for negative feedback. This work was published in 1932 and became known as the Nyquist criterion.
1930s
In 1931, a foundation for radio astronomy was laid by Karl Jansky during his work investigating the origins of static on long-distance shortwave communications. He discovered that radio waves were being emitted from the center of the galaxy.
In 1931 and 1932, the labs made experimental high fidelity, long playing, and even stereophonic recordings of the Philadelphia Orchestra, conducted by Leopold Stokowski.
In 1933, stereo signals were transmitted live from Philadelphia to Washington, D.C.
In 1937, the vocoder, an electronic speech compression device, or codec, and the Voder, the first electronic speech synthesizer, were developed and demonstrated by Homer Dudley, the Voder being demonstrated at the 1939 New York World's Fair. Bell researcher Clinton Davisson shared the Nobel Prize in Physics with George Paget Thomson for the discovery of electron diffraction, which helped lay the foundation for solid-state electronics.
1940s
In the early 1940s, the photovoltaic cell was developed by Russell Ohl. In 1943, Bell developed SIGSALY, the first digital scrambled speech transmission system, used by the Allies in World War II. The British wartime codebreaker Alan Turing visited the labs at this time, working on speech encryption and meeting Claude Shannon.
Bell Labs' Quality Assurance Department gave the world such statisticians as Walter A. Shewhart, W. Edwards Deming, Harold F. Dodge, George D. Edwards, Harry Romig, R. L. Jones, Paul Olmstead, E.G.D. Paterson, and Mary N. Torrey. During World War II, the Emergency Technical Committee – Quality Control, drawn mainly from Bell Labs' statisticians, was instrumental in advancing Army and Navy ammunition acceptance and material sampling procedures.
In 1947, the transistor, arguably the most important invention to emerge from Bell Laboratories, was invented by John Bardeen, Walter Houser Brattain, and William Bradford Shockley (who subsequently shared the Nobel Prize in Physics in 1956). Also in 1947, Richard Hamming invented Hamming codes for error detection and correction. For patent reasons, the result was not published until 1950.
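As a rough illustration of how such a code works (a generic Hamming(7,4) sketch rather than Hamming's original notation), the following Python adds three parity bits to four data bits and corrects any single flipped bit:

def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit Hamming(7,4) codeword.

    Codeword positions (1-indexed): p1 p2 d1 p3 d2 d3 d4, even parity.
    """
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Locate and fix a single flipped bit; return (corrected codeword, data bits)."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # parity check over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # parity check over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # parity check over positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3  # 0 means no error, else the 1-indexed error position
    if syndrome:
        c[syndrome - 1] ^= 1
    return c, [c[2], c[4], c[5], c[6]]

codeword = hamming74_encode([1, 0, 1, 1])
received = codeword.copy()
received[5] ^= 1                      # simulate a single-bit transmission error
fixed, data = hamming74_correct(received)
print(fixed == codeword, data)        # True [1, 0, 1, 1]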
In 1948, "A Mathematical Theory of Communication", one of the founding works in information theory, was published by Claude Shannon in the Bell System Technical Journal. It built in part on earlier work in the field by Bell researchers Harry Nyquist and Ralph Hartley, but it greatly extended these. Bell Labs also introduced a series of increasingly complex calculators through the decade. Shannon was also the founder of modern cryptography with his 1949 paper Communication Theory of Secrecy Systems.
Calculators
Model I: A complex number calculator, completed in 1939 and put into operation in 1940, for doing calculations of complex numbers.
Model II: Relay Computer / Relay Interpolator, September 1943, for interpolating data points of flight profiles (needed for performance testing of a gun director). This model introduced error detection (self checking).
Model III: Ballistic Computer, June 1944, for calculations of ballistic trajectories
Model IV: Error Detector Mark II, March 1945, improved ballistic computer
Model V: General-purpose electromechanical computer, of which two were built, July 1946 and February 1947
Model VI: 1949, an enhanced Model V
1950s
The 1950s also saw developments based upon information theory. The central development was binary code systems. Efforts concentrated on the prime mission of supporting the Bell System with engineering advances, including the N-carrier system, TD microwave radio relay, direct distance dialing, E-repeater, wire spring relay, and the Number Five Crossbar Switching System.
In 1952, William Gardner Pfann revealed the method of zone melting, which enabled semiconductor purification and level doping.
In 1953, Maurice Karnaugh developed the Karnaugh map, used for simplifying Boolean algebra expressions.
In 1954, the first modern solar cell was invented at Bell Laboratories.
In 1956 TAT-1, the first transatlantic communications cable to carry telephone conversations, was laid between Scotland and Newfoundland in a joint effort by AT&T, Bell Laboratories, and British and Canadian telephone companies.
In 1957, Max Mathews created MUSIC, one of the first computer programs to play electronic music. Robert C. Prim and Joseph Kruskal developed new greedy algorithms that revolutionized computer network design.
In 1958, a technical paper by Arthur Schawlow and Charles Hard Townes first described the laser.
In 1959, Mohamed M. Atalla and Dawon Kahng invented the metal-oxide semiconductor field-effect transistor (MOSFET). The MOSFET has achieved electronic hegemony and sustains the large-scale integration (LSI) of circuits underlying today's information society.
1960s
On October 1, 1960, the Kwajalein Field Station was announced as a location for the NIKE-ZEUS test program. R. W. Benfer, the program's first director, arrived shortly afterwards, on October 5. Bell Labs designed many of the major system elements and conducted fundamental investigations of phase-controlled scanning antenna arrays.
In December 1960, Ali Javan, a PhD physicist from the University of Tehran, Iran, with the help of Rolf Seebach and his associates William Bennett and Donald Herriott, successfully operated the first gas laser, the first continuous-light laser, which operated with unprecedented frequency accuracy and color purity.
In 1962, the electret microphone was invented by Gerhard M. Sessler and James E. West. Also in 1962, John R. Pierce's vision of communications satellites was realized by the launch of Telstar.
On July 10, 1962, the Telstar communications satellite, designed and built by Bell Laboratories, was launched into orbit by NASA. The first worldwide television broadcast took place on July 23, 1962, featuring a press conference by President Kennedy.
In the spring of 1964, Bell Laboratories planned an electronic switching systems center near Naperville, Illinois. The building, completed in 1966, was named Indian Hill; it housed development work transferred from the former electronic switching organization at Holmdel and the Systems Equipment Engineering organization, together with engineers from the Western Electric Hawthorne Works. About 1,200 people were scheduled to work there when it was completed in 1966, and staffing peaked at 11,000 before the Lucent Technologies downsizing of October 2001.
In 1964, the carbon dioxide laser was invented by Kumar Patel and the discovery/operation of the Nd:YAG laser was demonstrated by J.E. Geusic et al. Experiments by Myriam Sarachik provided the first data that confirmed the Kondo effect. The research of Philip W. Anderson into electronic structure of magnetic and disordered systems led to improved understanding of metals and insulators for which he was awarded the Nobel Prize for Physics in 1977.
In 1965, Penzias and Wilson discovered the cosmic microwave background, for which they were awarded the Nobel Prize in Physics in 1978.
Frank W. Sinden, Edward E. Zajac, Ken Knowlton, and A. Michael Noll made computer-animated movies during the early to mid-1960s. Ken Knowlton invented the computer animation language BEFLIX. The first digital computer art was created in 1962 by Noll.
In 1966, orthogonal frequency-division multiplexing (OFDM), a key technology in wireless services, was developed and patented by R. W. Chang.
In December 1966, the New York City site was sold and became the Westbeth Artists Community complex.
In 1968, molecular beam epitaxy was developed by J.R. Arthur and A.Y. Cho; molecular beam epitaxy allows semiconductor chips and laser matrices to be manufactured one atomic layer at a time.
In 1969, Dennis Ritchie and Ken Thompson created the computer operating system UNIX for the support of telecommunication switching systems as well as general-purpose computing. Also, in 1969, the charge-coupled device (CCD) was invented by Willard Boyle and George E. Smith, for which they were awarded the Nobel Prize in Physics in 2009.
From 1969 to 1971, Aaron Marcus, the first graphic designer involved with computer graphics, researched, designed, and programmed a prototype interactive page-layout system for the Picturephone.
1970s
The 1970s and 1980s saw more and more computer-related inventions at the Bell Laboratories as part of the personal computing revolution.
In the 1970s, major central office technology evolved from crossbar, electromechanical relay-based technology and discrete transistor logic to stored program-controlled switching systems built on Bell Labs-developed thick-film hybrid and transistor–transistor logic (TTL): the 1A/#4 TOLL Electronic Switching Systems (ESS) and 2A local central offices produced at the Bell Labs Naperville and Western Electric Lisle, Illinois facilities. This technology evolution dramatically reduced floor space needs. The new ESS also came with its own diagnostic software, so that only a switchman and several frame technicians were needed to maintain it.
About 1970, the coax-22 cable was developed by Bell Labs. This coaxial cable with 22 strands allowed a total capacity of 132,000 telephone calls. Previously, a 12-strand coaxial cable was used for L-carrier systems. Both of these types of cables were manufactured at Western Electric's Baltimore Works facility on machines designed by a Western Electric senior development engineer.
In 1970, A. Michael Noll invented a tactile, force-feedback system, coupled with interactive stereoscopic computer display.
In 1971, an improved task priority system for computerized telephone exchange switching systems for telephone traffic was invented by Erna Schneider Hoover, who received one of the first software patents for it.
In 1972, Dennis Ritchie developed the compiled programming language C as a replacement for the interpreted language B; C was then used to rewrite UNIX. Also, the language AWK was designed and implemented by Alfred Aho, Peter Weinberger, and Brian Kernighan of Bell Laboratories. Also in 1972, Marc Rochkind invented the Source Code Control System.
In 1976, optical fiber systems were first tested in Georgia.
Production of Bell Labs' first internally designed microprocessor, the BELLMAC-8, began in 1977. In 1980, the labs demonstrated the first single-chip 32-bit microprocessor, the BELLMAC-32A, which went into production in 1982.
In 1978, the proprietary operating system Oryx/Pecos was developed from scratch by Bell Labs in order to run AT&T's large-scale PBX switching equipment. It was first used with AT&T's flagship System 75 and, until recently, was used in all variations up through and including Definity G3 (Generic 3) switches, now manufactured by the AT&T/Lucent Technologies spin-off Avaya.
1980s
During the 1980s, the operating system Plan 9 from Bell Labs was developed, extending the UNIX model. Also, the Radiodrum, an electronic music instrument played in three spatial dimensions, was invented.
In 1980, the TDMA digital cellular telephone technology was patented.
The Bell Labs Fellows Award was launched in 1982 to recognize and honor scientists and engineers who have made outstanding and sustained R&D contributions at AT&T. As of the 2021 inductees, only 336 people have received the honor.
Ken Thompson and Dennis Ritchie were also Bell Labs Fellows for 1982. Ritchie had started at Bell Labs in 1967, in the Computer Systems Research department; Thompson had started in 1966. Both co-inventors of the UNIX operating system and the C language were later awarded the 2011 Japan Prize for Information and Communications.
In 1982, the fractional quantum Hall effect was discovered by Horst Störmer and former Bell Laboratories researchers Robert B. Laughlin and Daniel C. Tsui; they consequently won the Nobel Prize in Physics in 1998 for the discovery.
In 1984, the first photoconductive antennas for picosecond electromagnetic radiation were demonstrated by Auston and others. This type of antenna became an important component in terahertz time-domain spectroscopy. In 1984, Karmarkar's algorithm for linear programming was developed by mathematician Narendra Karmarkar. Also in 1984, a divestiture agreement signed in 1982 with the U.S. federal government forced the breakup of AT&T, and Bellcore (now iconectiv) was split off from Bell Laboratories to provide the same R&D functions for the newly created local exchange carriers. AT&T was also limited to using the Bell trademark only in association with Bell Laboratories. Bell Telephone Laboratories, Inc. became a wholly owned company of the new AT&T Technologies unit, the former Western Electric. The 5ESS Switch was developed during this transition.
In February 1985, Bell Labs became the first corporation to be awarded the National Medal of Technology, cited "for contribution over decades to modern communication systems".
In 1985, laser cooling was used to slow and manipulate atoms by Steven Chu and his team. Also in 1985, the modeling language AMPL (A Mathematical Programming Language) was developed by Robert Fourer, David M. Gay, and Brian Kernighan at Bell Laboratories.
In 1985, the programming language C++ had its first commercial release. Bjarne Stroustrup started developing C++ at Bell Laboratories in 1979 as an extension to the original C language.
Arthur Ashkin invented optical tweezers, which grab particles, atoms, viruses, and living cells with their laser beam "fingers". A major breakthrough came in 1987, when Ashkin used the tweezers to capture living bacteria without harming them. He immediately began studying biological systems using the optical tweezers, which are now widely used to investigate the machinery of life. He was awarded the Nobel Prize in Physics (2018) for his work involving optical tweezers and their application to biological systems.
In the mid-1980s, the transmission departments of Bell Labs developed highly reliable long-haul fiber-optic communications systems based on SONET, along with network operations techniques, that enabled very high volume, near-instantaneous communications across the North American continent. Fail-safe and disaster-related traffic management operations systems enhanced the usefulness of the fiber optics. There was a synergy between the land-based and sea-based fiber-optic systems, although they were developed by different divisions within the company. These systems are still in use throughout the U.S. today.
Charles A. Burrus became a Bell Labs Fellow in 1988 for his work as a member of technical staff. Prior to this, he had been awarded the AT&T Bell Laboratories Distinguished Technical Staff Award in 1982. Burrus started in 1955 at the Holmdel Bell Labs location and retired in 1996, consulting for Lucent Technologies until 2002.
In 1988, TAT-8 became the first transatlantic fiber-optic cable. Bell Labs in Freehold, NJ developed the 1.3-micron fiber, cable, splicing, laser detector, and 280 Mbit/s repeater for 40,000 telephone-call capacity.
In the late 1980s, realizing that voiceband modems were approaching the Shannon limit on bit rate, Richard D. Gitlin, Jean-Jacques Werner, and their colleagues pioneered a major breakthrough by inventing DSL (Digital Subscriber Line) and creating the technology that enabled megabit transmission on installed copper telephone lines, thus facilitating the broadband era.
1990s
Bell Labs' John Mayo received the National Medal of Technology in 1990.
In May 1990, Ronald Snare was named AT&T Bell Laboratories Fellow, for “Singular contributions to the development of the common-channel signaling network and the signal transfer points globally.” This system began service in the United States in 1978.
In the early 1990s, approaches to increase modem speeds to 56K were explored at Bell Labs, and early patents were filed in 1992 by Ender Ayanoglu, Nuri R. Dagdeviren and their colleagues.
In 1992, the scientist W. Lincoln Hawkins received the National Medal of Technology for work done at Bell Labs.
In 1992, Jack Salz, Jack Winters, and Richard D. Gitlin provided the foundational technology to demonstrate that adaptive antenna arrays at the transmitter and receiver can substantially increase both the reliability (via diversity) and the capacity (via spatial multiplexing) of wireless systems without expanding the bandwidth. Subsequently, the BLAST system proposed by Gerard Foschini and colleagues dramatically expanded the capacity of wireless systems. This technology, known today as MIMO (Multiple Input Multiple Output), was a significant factor in the standardization, commercialization, performance improvement, and growth of cellular and wireless LAN systems.
In 1993, Amos Joel received the National Medal of Technology.
Two AT&T Bell Labs scientists, Joel Engel and Richard Frenkiel, were honored with the National Medal of Technology, in 1994.
In 1994, the quantum cascade laser was invented by Federico Capasso, Alfred Cho, Jerome Faist and their collaborators. Also in 1994, Peter Shor devised his quantum factorization algorithm.
In 1996, SCALPEL electron lithography, which prints features atoms wide on microchips, was invented by Lloyd Harriott and his team. The operating system Inferno, an update of Plan 9, was created by Dennis Ritchie with others, using the then-new concurrent programming language Limbo. A high performance database engine (Dali) was developed which became DataBlitz in its product form.
In 1996, AT&T spun off Bell Laboratories, along with most of its equipment manufacturing business, into a new company named Lucent Technologies. AT&T retained a small number of researchers who made up the staff of the newly created AT&T Labs.
In 1996, Lucy Sanders became the third woman to receive the Bell Labs Fellow award, for her work in creating a RISC chip that allowed more phone calls to be handled with software and hardware on a single server. She had started in 1977 and was one of the few women engineers at Bell Labs.
In 1997, the smallest then-practical transistor (60 nanometers, 182 atoms wide) was built. In 1998, the first optical router was invented.
Rudolph Kazarinov and Federico Capasso received the optoelectronics Rank Prize on December 8, 1998.
In December 1998, Ritchie and Thompson were also named honorees of the National Medal of Technology for work done at Bell Labs before the Lucent Technologies era. The award was presented by U.S. President Bill Clinton at a White House ceremony in 1999.
2000s
2000 was an active year for the Laboratories, in which DNA machine prototypes were developed; a progressive geometry compression algorithm made widespread 3-D communication practical; the first electrically powered organic laser was invented; a large-scale map of cosmic dark matter was compiled; and F-15, an organic material that makes plastic transistors possible, was invented.
In 2002, physicist Jan Hendrik Schön was fired after his work was found to contain fraudulent data. It was the first known case of fraud at Bell Labs.
In 2003, the New Jersey Institute of Technology Biomedical Engineering Laboratory was created at Murray Hill, New Jersey.
In 2004, Lucent Technologies awarded two women the prestigious Bell Labs Fellow Award. Magaly Spector, a director in the INS/Network Systems Group, was cited for "sustained and exceptional scientific and technological contributions in solid-state physics, III-V material for semiconductor lasers, Gallium Arsenide integrated circuits, and the quality and reliability of products used in high speed optical transport systems for next generation high bandwidth communication." Eve Varma, a technical manager in the MNS/Network Systems Group, was cited for "sustained contributions to digital and optical networking, including architecture, synchronization, restoration, standards, operations and control."
In 2005, Jeong H. Kim, former President of Lucent's Optical Network Group, returned from academia to become the President of Bell Laboratories.
In April 2006, Bell Laboratories' parent company, Lucent Technologies, signed a merger agreement with Alcatel. On December 1, 2006, the merged company, Alcatel-Lucent, began operations. This deal raised concerns in the United States, where Bell Laboratories works on defense contracts. A separate company, LGS Innovations, with an American board was set up to manage Bell Laboratories' and Lucent's sensitive U.S. government contracts. In March 2019, LGS Innovations was purchased by CACI.
In December 2007, it was announced that the former Lucent Bell Laboratories and the former Alcatel Research and Innovation would be merged into one organization under the name of Bell Laboratories. This was the first period of growth after many years during which Bell Laboratories had progressively lost staff to layoffs and spin-offs.
In February 2008, Alcatel-Lucent continued the Bell Laboratories tradition of honoring its outstanding technical contributors. Martin J. Glapa, a former chief technical officer of Lucent's Cable Communications Business Unit and Director of Advanced Technologies, was presented with the 2006 Bell Labs Fellow Award by Alcatel-Lucent Bell Labs President Jeong H. Kim, cited for work in network architecture, network planning, and professional services, with a particular focus on cable TV systems and broadband services that produced "significant resulting Alcatel-Lucent commercial successes." Glapa is a patent holder, co-wrote the 2004 technical paper "Optimal Availability & Security For Voice Over Cable Networks", and co-authored the 2008 paper "Impact of bandwidth demand growth on HFC networks", published by IEEE.
As of July 2008, however, only four scientists remained in physics research, according to a report by the scientific journal Nature.
On August 28, 2008, Alcatel-Lucent announced it was pulling out of basic science, material physics, and semiconductor research, and that it would instead focus on more immediately marketable areas, including networking, high-speed electronics, wireless networks, nanotechnology, and software.
In 2009, Willard Boyle and George Smith were awarded the Nobel Prize in Physics for the invention and development of the charge-coupled device (CCD).
Rob Soni was named an Alcatel-Lucent Bell Labs Fellow in 2009, cited for work in winning North American customers' wireless business and for helping to define 4G wireless networks with transformative system architectures.
2010s
Gee Rittenhouse, former Head of Research, returned from his position as chief operating officer of Alcatel-Lucent's Software, Services, and Solutions business in February 2013, to become the 12th President of Bell Labs.
On November 4, 2013, Alcatel-Lucent announced the appointment of Marcus Weldon as President of Bell Labs. His stated charter was to return Bell Labs to the forefront of innovation in Information and communications technology by focusing on solving the key industry challenges, as was the case in the great Bell Labs innovation eras in the past.
In July 2014, Bell Labs announced it had broken "the broadband Internet speed record" with a new technology dubbed XG-FAST that promises 10 gigabits per second transmission speeds.
In 2014, Eric Betzig shared the Nobel Prize in Chemistry for his work in super-resolved fluorescence microscopy which he began pursuing while at Bell Labs in the Semiconductor Physics Research Department.
On April 15, 2015, Nokia agreed to acquire Alcatel-Lucent, Bell Labs' parent company, in a share exchange worth $16.6 billion. Their first day of combined operations was January 14, 2016.
In September 2016, Nokia Bell Labs, along with Technische Universität Berlin, Deutsche Telekom T-Labs and the Technical University of Munich achieved a data rate of one terabit per second by improving transmission capacity and spectral efficiency in an optical communications field trial with a new modulation technique.
Antero Taivalsaari became a Bell Labs Fellow in 2016.
In 2017, Dragan Samardzija was named a Bell Labs Fellow.
In 2018, Arthur Ashkin shared the Nobel Prize in Physics for his work on "the optical tweezers and their application to biological systems" which was developed at Bell Labs in the 1980s.
2020s
In 2020, Alfred Aho and Jeffrey Ullman shared the Turing Award for their work on compilers, starting with their tenure at Bell Labs during 1967–69.
On November 16, 2021, Nokia held the 2021 Bell Labs Fellows award ceremony at the Nokia Batvik Mansion in Finland, inducting six new members (Igor Curcio, Matthew Andrews, Bjorn Jelonnek, Ed Harstead, Gino Dion, and Esa Tiirola).
In December 2021, Nokia's Chief Strategy and Technology Officer reorganized Bell Labs into two separate functional organizations: Bell Labs Core Research and Bell Labs Solutions Research. Bell Labs Core Research is charged with creating disruptive technologies on a ten-year horizon, while Bell Labs Solutions Research pursues shorter-term solutions that can provide growth opportunities for Nokia.
The Nokia 2022 Bell Labs Fellows were recognized on November 29, 2022, in a New Jersey ceremony. Five researchers were inducted, bringing the total to 341 recipients since the award's inception by AT&T Bell Labs in 1982. One inductee was from New Jersey, two were from Cambridge, UK, and two were from Finland, representing the Espoo and Tampere locations.
Nobel Prize, Turing Award, IEEE Medal of Honor
Ten Nobel Prizes have been awarded for work completed at Bell Laboratories.
1937: Clinton J. Davisson shared the Nobel Prize in Physics for demonstrating the wave nature of matter.
1956: John Bardeen, Walter H. Brattain, and William Shockley received the Nobel Prize in Physics for inventing the first transistors.
1977: Philip W. Anderson shared the Nobel Prize in Physics for developing an improved understanding of the electronic structure of glass and magnetic materials.
1978: Arno A. Penzias and Robert W. Wilson shared the Nobel Prize in Physics. Penzias and Wilson were cited for discovering the cosmic microwave background radiation, a nearly uniform glow that fills the universe in the microwave band of the radio spectrum.
1997: Steven Chu shared the Nobel Prize in Physics for developing methods to cool and trap atoms with laser light.
1998: Horst Störmer, Robert Laughlin, and Daniel Tsui, were awarded the Nobel Prize in Physics for discovering and explaining the fractional quantum Hall effect.
2009: Willard S. Boyle and George E. Smith shared the Nobel Prize in Physics with Charles K. Kao. Boyle and Smith were cited for inventing charge-coupled device (CCD) semiconductor imaging sensors.
2014: Eric Betzig shared the Nobel Prize in Chemistry for his work in super-resolved fluorescence microscopy which he began pursuing while at Bell Labs.
2018: Arthur Ashkin shared the Nobel Prize in Physics for his work on "the optical tweezers and their application to biological systems" which was developed at Bell Labs.
2023: Louis Brus shared the Nobel Prize in Chemistry for his work in "the discovery and synthesis of quantum dots" which he began at Bell Labs.
The Turing Award has been won five times by Bell Labs researchers.
1968: Richard Hamming for his work on numerical methods, automatic coding systems, and error-detecting and error-correcting codes.
1983: Ken Thompson and Dennis Ritchie for their work on operating system theory, and for developing Unix.
1986: Robert Tarjan with John Hopcroft, for fundamental achievements in the design and analysis of algorithms and data structures.
2018: Yann LeCun and Yoshua Bengio shared the Turing Award with Geoffrey Hinton for their work in Deep Learning.
2020: Alfred Aho and Jeffrey Ullman shared the Turing Award for their work on Compilers.
First awarded in 1917, the IEEE Medal of Honor is the highest form of recognition by the Institute of Electrical and Electronics Engineers. The IEEE Medal of Honor has been won 22 times by Bell Labs researchers.
1926 Greenleaf Whittier Pickard For his contributions to crystal detectors, coil antennas, wave propagation, and atmospheric disturbances.
1936 G. A. Campbell For his contributions to the theory of electrical networks.
1940 Lloyd Espenschied For his accomplishments as an engineer, as an inventor, as a pioneer in the development of radio telephony, and for his effective contributions to the progress of international radio coordination.
1946 Ralph Hartley For his early work on oscillating circuits employing triode tubes and likewise for his early recognition and clear exposition of the fundamental relationship between the total amount of information which may be transmitted over a transmission system of limited band-width and the time required.
1949 Ralph Brown For his extensive contributions to the field of radio and for his leadership in Institute affairs
1955 Harald T. Friis For his outstanding technical contributions in the expansion of the useful spectrum of radio frequencies, and for the inspiration and leadership he has given to young engineers.
1960 Harry Nyquist For fundamental contributions to a quantitative understanding of thermal noise, data transmission and negative feedback.
1963 George C. Southworth (with John H. Hammond, Jr.) For pioneering contributions to microwave radio physics, to radio astronomy, and to waveguide transmission.
1966 Claude Shannon For his development of a mathematical theory of communication which unified and significantly advanced the state of the art.
1967 Charles H. Townes For his significant contributions in the field of quantum electronics which have led to the maser and the laser.
1971 John Bardeen For his profound contributions to the understanding of the conductivity of solids, to the invention of the transistor, and to the microscopic theory of superconductivity
1973 Rudolf Kompfner For a major contribution to world-wide communication through the conception of the traveling wave tube embodying a new principle of amplification.
1975 John R. Pierce For his pioneering concrete proposals and the realization of satellite communication experiments, and for contributions in theory and design of traveling wave tubes and in electron beam optics essential to this success.
1977 H. Earle Vaughan For his vision, technical contributions and leadership in the development of the first high-capacity pulse-code-modulation time-division telephone switching system.
1980 William Shockley For the invention of the junction transistor, the analog and the junction field-effect transistor, and the theory underlying their operation.
1981 Sidney Darlington For fundamental contributions to filtering and signal processing leading to chirp radar.
1982 John Wilder Tukey For his contributions to the spectral analysis of random processes and the fast Fourier transform algorithm.
1989 C. Kumar N. Patel For fundamental contributions to quantum electronics, including the carbon dioxide laser and the spin-flip Raman laser.
1992 Amos E. Joel Jr. For fundamental contributions to and leadership in telecommunications switching systems.
1994 Alfred Y. Cho For seminal contributions to the development of molecular beam epitaxy.
2001 Herwig Kogelnik For fundamental contributions to the science and technology of lasers and optoelectronics, and for leadership in research and development of photonics and lightwave communication systems.
2005 James L. Flanagan For sustained leadership and outstanding contributions in speech technology.
Emmy Awards, Grammy Award, and Academy Award
The Emmy Award has been won five times by Bell Labs: once under Lucent Technologies, once under Alcatel-Lucent, and three times under Nokia.
1997: Primetime Engineering Emmy Award for "work on digital television as part of the HDTV Grand Alliance."
2013: Technology and Engineering Emmy for its "Pioneering Work in Implementation and Deployment of Network DVR"
2016: Technology & Engineering Emmy Award for the pioneering invention and deployment of fiber-optic cable.
2020: Technology & Engineering Emmy Award for the CCD (charge-coupled device), which was crucial to the development of television by allowing images to be captured digitally for recording and transmission.
2021: Technology & Engineering Emmy Award for the "ISO Base Media File Format standardization, in which our multimedia research unit has played a major role."
The invention of fiber optics and the research on digital television and the media file format were carried out under former AT&T Bell Labs ownership.
The Grammy Award has been won once by Bell Labs under Alcatel-Lucent.
2006: Technical GRAMMY® Award for outstanding technical contributions to the recording field.
The Academy Award has been won once by E. C. Wente and Bell Labs.
1937: Scientific or Technical Award (Class II) for their multi-cellular high-frequency horn and receiver.
Publications
The American Telephone and Telegraph Company, Western Electric, and other Bell System companies issued numerous publications, such as local house organs, for corporate distribution, for the scientific and industry communities, and for the general public, including telephone subscribers.
The Bell Laboratories Record was a principal house organ, featuring general interest content such as corporate news, support staff profiles and events, reports of facilities upgrades, but also articles of research and development results written for technical or non-technical audiences. The publication commenced in 1925 with the founding of the laboratories.
A prominent journal for the focused dissemination of original or reprinted scientific research by Bell Labs engineers and scientists was the Bell System Technical Journal, started in 1922 by the AT&T Information Department. Bell researchers also published widely in industry journals.
Some of these articles were reprinted by the Bell System as Monographs, consecutively issued starting in 1920. These reprints, numbering over 5000, comprise a catalog of Bell research over the decades. Research in the Monographs is aided by access to associated indexes, for monographs 1–1199, 1200-2850 (1958), 2851-4050 (1962), and 4051-4650 (1964).
Essentially all of the landmark work done by Bell Labs is memorialized in one or more corresponding monographs. Examples include:
Monograph 1598 - Shannon, A Mathematical Theory of Communication, 1948 (reprinted from BSTJ).
Monograph 1659 - Bardeen and Brattain, Physical Principles Involved in Transistor Action, 1949 (reprinted from BSTJ).
Monograph 1757 - Hamming, Error Detecting and Error Correcting Codes, 1950 (reprinted from BSTJ).
Monograph 3289 - Pierce, Transoceanic Communications by Means of Satellite, 1959 (reprinted from Proc. I.R.E.).
Monograph 3345 - Schawlow & Townes, Infrared and Optical Masers, 1958 (reprinted from Physical Review).
Presidents
Notable alumni
Programs
On May 20, 2014, Bell Labs announced the Bell Labs Prize, a competition for innovators to offer proposals in information and communication technologies, with cash awards of up to $100,000 for the grand prize.
Bell Labs Technology Showcase
The Murray Hill campus features an exhibit, the Bell Labs Technology Showcase, showcasing the technological discoveries and developments at Bell Labs. The exhibit is located just off the main lobby and is open to the public.
See also
Bell Labs Holmdel Complex
Bell Labs Technical Journal—Published scientific journal of Bell Laboratories (1996–present)
Bell Labs Record
Industrial laboratory
George Stibitz—Bell Laboratories engineer—"father of the modern digital computer"
History of mobile phones—Bell Laboratories conception and development of cellular phones
High speed photography & Wollensak—Fastax high speed (rotating prism) cameras developed by Bell Labs
Knolls Atomic Power Laboratory
Simplified Message Desk Interface
Sound film—Westrex sound system for cinema films developed by Bell Labs
TWX Magazine—A short-lived trade periodical published by Bell Laboratories (1944–1952)
Experiments in Art and Technology—A collaboration between artists and Bell Labs engineers & scientists to create new forms of art
References
Further reading
Martin, Douglas. Ian M. Ross, a President at Bell Labs, Dies at 85, The New York Times, March 16, 2013, p. A23
Gleick, James. The Information: A History, a Theory, a Flood. Vintage Books, 2012, 544 pages.
External links
Bell Works, the re-imagining of the historic former Bell Labs building in Holmdel, New Jersey
Timeline of discoveries as of 2006 (Nokia Bell Labs Timeline)
Bell Labs' Murray Hill anechoic chamber
Bell Laboratories and the Development of Electrical Recording
History of Bell Telephone Laboratories, Inc. (from Bell System Memorial)
Bell Communications Around the Globe, public art sculpture, Los Angeles, California
The Idea Factory, a video interview with Jon Gertner, author of "The Idea Factory: Bell Labs and the Great Age of American Innovation", by Dave Iverson of KQED-FM Public Radio, San Francisco
Bluetooth is a short-range wireless technology standard that is used for exchanging data between fixed and mobile devices over short distances and for building personal area networks (PANs). In the most widely used mode, transmission power is limited to 2.5 milliwatts, giving it a very short range of up to 10 metres (33 ft). It employs UHF radio waves in the ISM bands, from 2.402 GHz to 2.48 GHz. It is mainly used as an alternative to wire connections, to exchange files between nearby portable devices and to connect cell phones and music players with wireless headphones.
Bluetooth is managed by the Bluetooth Special Interest Group (SIG), which has more than 35,000 member companies in the areas of telecommunication, computing, networking, and consumer electronics. The IEEE standardized Bluetooth as IEEE 802.15.1, but no longer maintains the standard. The Bluetooth SIG oversees development of the specification, manages the qualification program, and protects the trademarks. A manufacturer must meet Bluetooth SIG standards to market a product as a Bluetooth device. A network of patents applies to the technology, which are licensed to individual qualifying devices. 4.7 billion Bluetooth integrated circuit chips are shipped annually.
Etymology
The name "Bluetooth" was proposed in 1997 by Jim Kardach of Intel, one of the founders of the Bluetooth SIG. The name was inspired by a conversation with Sven Mattisson who related Scandinavian history through tales from Frans G. Bengtsson's The Long Ships, a historical novel about Vikings and the 10th-century Danish king Harald Bluetooth. Upon discovering a picture of the runestone of Harald Bluetooth in the book A History of the Vikings by Gwyn Jones, Kardach proposed Bluetooth as the codename for the short-range wireless program which is now called Bluetooth.
According to Bluetooth's official website,
Bluetooth is the Anglicised version of the Scandinavian Blåtand/Blåtann (or in Old Norse blátǫnn). It was the epithet of King Harald Bluetooth, who united the disparate Danish tribes into a single kingdom; Kardach chose the name to imply that Bluetooth similarly unites communication protocols.
The Bluetooth logo is a bind rune merging the Younger Futhark runes (ᚼ, Hagall) and (ᛒ, Bjarkan), Harald's initials.
History
The development of the "short-link" radio technology, later named Bluetooth, was initiated in 1989 by Nils Rydbeck, CTO at Ericsson Mobile in Lund, Sweden. The purpose was to develop wireless headsets, according to two inventions by Johan Ullman. Nils Rydbeck tasked Tord Wingren with specification and Dutchman Jaap Haartsen and Sven Mattisson with development. Both were working for Ericsson in Lund. Principal design and development began in 1994, and by 1997 the team had a workable solution. From 1997 Örjan Johansson became the project leader and propelled the technology and standardization.
In 1997, Adalio Sanchez, then head of IBM ThinkPad product R&D, approached Nils Rydbeck about collaborating on integrating a mobile phone into a ThinkPad notebook. The two assigned engineers from Ericsson and IBM studied the idea. The conclusion was that power consumption on cellphone technology at that time was too high to allow viable integration into a notebook and still achieve adequate battery life. Instead, the two companies agreed to integrate Ericsson's short-link technology on both a ThinkPad notebook and an Ericsson phone to accomplish the goal.
Since neither IBM ThinkPad notebooks nor Ericsson phones were the market share leaders in their respective markets at that time, Adalio Sanchez and Nils Rydbeck agreed to make the short-link technology an open industry standard to permit each player maximum market access. Ericsson contributed the short-link radio technology, and IBM contributed patents around the logical layer. Adalio Sanchez of IBM then recruited Stephen Nachtsheim of Intel to join and then Intel also recruited Toshiba and Nokia. In May 1998, the Bluetooth SIG was launched with IBM and Ericsson as the founding signatories and a total of five members: Ericsson, Intel, Nokia, Toshiba, and IBM.
The first Bluetooth device was revealed in 1999. It was a hands-free mobile headset that earned the "Best of show Technology Award" at COMDEX. The first Bluetooth mobile phone was the Ericsson T36, but it was the revised Ericsson model T39 that actually made it to store shelves in 2001. In parallel, IBM introduced the IBM ThinkPad A30 in October 2001 which was the first notebook with integrated Bluetooth.
Bluetooth's early incorporation into consumer electronics products continued at Vosi Technologies in Costa Mesa, California, initially overseen by founding members Bejan Amini and Tom Davidson. Vosi Technologies had been created by real estate developer Ivano Stegmenga, with United States Patent 608507, for communication between a cellular phone and a vehicle's audio system. At the time, Sony/Ericsson had only a minor market share in the cellular phone market, which was dominated in the US by Nokia and Motorola. Due to ongoing negotiations for an intended licensing agreement with Motorola beginning in the late 1990s, Vosi could not publicly disclose the intention, integration, and initial development of other enabled devices which were to be the first "Smart Home" internet connected devices.
Vosi needed a means for the system to communicate without a wired connection from the vehicle to the other devices in the network. Bluetooth was chosen, since Wi-Fi was not yet readily available or supported in the public market. Vosi had begun to develop the Vosi Cello integrated vehicular system and some other internet connected devices, one of which was intended to be a table-top device named the Vosi Symphony, networked with Bluetooth. Through the negotiations with Motorola, Vosi introduced and disclosed its intent to integrate Bluetooth in its devices. In the early 2000s a legal battle ensued between Vosi and Motorola, which indefinitely suspended release of the devices. Later, Motorola implemented it in their devices which initiated the significant propagation of Bluetooth in the public market due to its large market share at the time.
In 2012, Jaap Haartsen was nominated by the European Patent Office for the European Inventor Award.
Implementation
Bluetooth operates at frequencies between 2.402 and 2.480 GHz, or 2.400 and 2.4835 GHz, including guard bands 2 MHz wide at the bottom end and 3.5 MHz wide at the top. This is in the globally unlicensed (but not unregulated) industrial, scientific and medical (ISM) 2.4 GHz short-range radio frequency band. Bluetooth uses a radio technology called frequency-hopping spread spectrum. Bluetooth divides transmitted data into packets, and transmits each packet on one of 79 designated Bluetooth channels. Each channel has a bandwidth of 1 MHz. It usually performs 1600 hops per second, with adaptive frequency-hopping (AFH) enabled. Bluetooth Low Energy uses 2 MHz spacing, which accommodates 40 channels.
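As a rough illustration of this channel plan, the following Python sketch (illustrative only; the real hop sequence is derived from the master's clock and device address, which is not modeled here) lists the 79 Classic channel center frequencies and picks pseudo-random hops among them:

import random

# 79 Bluetooth Classic channels, each 1 MHz wide, starting at 2402 MHz.
CHANNELS_MHZ = [2402 + k for k in range(79)]  # 2402 ... 2480 MHz

def toy_hop_sequence(n_hops, seed=0):
    # Toy stand-in for the real pseudo-random hop selection algorithm.
    rng = random.Random(seed)
    return [rng.choice(CHANNELS_MHZ) for _ in range(n_hops)]

# At roughly 1600 hops per second, one second of traffic visits 1600 channels.
print(len(CHANNELS_MHZ), toy_hop_sequence(5))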
Originally, Gaussian frequency-shift keying (GFSK) modulation was the only modulation scheme available. Since the introduction of Bluetooth 2.0+EDR, π/4-DQPSK (differential quadrature phase-shift keying) and 8-DPSK modulation may also be used between compatible devices. Devices functioning with GFSK are said to be operating in basic rate (BR) mode, where an instantaneous bit rate of 1 Mbit/s is possible. The term Enhanced Data Rate (EDR) is used to describe the π/4-DQPSK (EDR2) and 8-DPSK (EDR3) schemes, giving 2 and 3 Mbit/s respectively. The combination of these (BR and EDR) modes in Bluetooth radio technology is classified as a BR/EDR radio.
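These gross air rates follow directly from the common 1 Msymbol/s symbol rate and the number of bits each modulation carries per symbol; a minimal sketch:

SYMBOL_RATE = 1_000_000  # symbols per second (1 Msym/s)

BITS_PER_SYMBOL = {
    "GFSK (BR)": 1,          # basic rate, 1 Mbit/s
    "pi/4-DQPSK (EDR2)": 2,  # 2 Mbit/s
    "8-DPSK (EDR3)": 3,      # 3 Mbit/s
}

for scheme, bits in BITS_PER_SYMBOL.items():
    print(scheme, SYMBOL_RATE * bits / 1e6, "Mbit/s gross")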
In 2019, Apple published an extension called HDR which supports data rates of 4 (HDR4) and 8 (HDR8) Mbit/s using π/4-DQPSK modulation on 4 MHz channels with forward error correction (FEC).
Bluetooth is a packet-based protocol with a master/slave architecture. One master may communicate with up to seven slaves in a piconet. All devices within a given piconet use the clock provided by the master as the base for packet exchange. The master clock ticks with a period of 312.5 μs; two clock ticks make up a slot of 625 μs, and two slots make up a slot pair of 1250 μs. In the simple case of single-slot packets, the master transmits in even slots and receives in odd slots. The slave, conversely, receives in even slots and transmits in odd slots. Packets may be 1, 3, or 5 slots long, but in all cases, the master's transmission begins in even slots and the slave's in odd slots.
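A short sketch of this slot arithmetic (single-slot packets only; multi-slot packets and Bluetooth Low Energy timing are not modeled):

CLOCK_TICK_US = 312.5        # master clock period in microseconds
SLOT_US = 2 * CLOCK_TICK_US  # 625 us per slot
SLOT_PAIR_US = 2 * SLOT_US   # 1250 us per slot pair

def who_transmits(slot_index):
    # For single-slot packets, the master transmits in even slots, the slave in odd slots.
    return "master" if slot_index % 2 == 0 else "slave"

for slot in range(4):
    print(f"slot {slot} starts at {slot * SLOT_US:.1f} us: {who_transmits(slot)} transmits")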
The above excludes Bluetooth Low Energy, introduced in the 4.0 specification, which uses the same spectrum but somewhat differently.
Communication and connection
A master BR/EDR Bluetooth device can communicate with a maximum of seven devices in a piconet (an ad hoc computer network using Bluetooth technology), though not all devices reach this maximum. The devices can switch roles, by agreement, and the slave can become the master (for example, a headset initiating a connection to a phone necessarily begins as master—as an initiator of the connection—but may subsequently operate as the slave).
The Bluetooth Core Specification provides for the connection of two or more piconets to form a scatternet, in which certain devices simultaneously play the master/leader role in one piconet and the slave role in another.
At any given time, data can be transferred between the master and one other device (except for the little-used broadcast mode). The master chooses which slave device to address; typically, it switches rapidly from one device to another in a round-robin fashion. Since it is the master that chooses which slave to address, whereas a slave is (in theory) supposed to listen in each receive slot, being a master is a lighter burden than being a slave. Being a master of seven slaves is possible; being a slave of more than one master is possible. The specification is vague as to required behavior in scatternets.
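The round-robin polling described above can be pictured with a short sketch (the device names are illustrative; a real master also weighs traffic priorities and multi-slot packets):

from itertools import cycle, islice

slaves = ["headset", "keyboard", "mouse"]  # up to seven active slaves per piconet

# The master addresses one slave per master-to-slave slot, cycling through them.
polling_order = list(islice(cycle(slaves), 9))
print(polling_order)  # ['headset', 'keyboard', 'mouse', 'headset', ...]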
Uses
Bluetooth is a standard wire-replacement communications protocol primarily designed for low power consumption, with a short range based on low-cost transceiver microchips in each device. Because the devices use a radio (broadcast) communications system, they do not have to be in visual line of sight of each other; however, a quasi optical wireless path must be viable.
Bluetooth Classes and power use
Historically, the Bluetooth range was defined by the radio class, with a lower class (and higher output power) having larger range. The actual range achieved by a given link will depend on the qualities of the devices at both ends of the link, as well as the air and obstacles in between. The primary hardware attributes affecting range are the data rate, protocol (Bluetooth Classic or Bluetooth Low Energy), the transmitter power, the receiver sensitivity, and the gain of both antennas.
The effective range varies depending on propagation conditions, material coverage, production sample variations, antenna configurations and battery conditions. Most Bluetooth applications are for indoor conditions, where attenuation of walls and signal fading due to signal reflections make the range far lower than specified line-of-sight ranges of the Bluetooth products.
Most Bluetooth applications are battery-powered Class 2 devices, with little difference in range whether the other end of the link is a Class 1 or Class 2 device, as the lower-powered device tends to set the range limit. In some cases the effective range of the data link can be extended when a Class 2 device is connecting to a Class 1 transceiver with both higher sensitivity and transmission power than a typical Class 2 device. Mostly, however, Class 1 devices have a similar sensitivity to Class 2 devices. Connecting two Class 1 devices with both high sensitivity and high power can allow ranges far in excess of the typical 100 m, depending on the throughput required by the application. Some such devices allow open-field ranges of up to 1 km and beyond between two similar devices without exceeding legal emission limits.
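A rough feel for how transmitter power, receiver sensitivity, and antenna gain combine to set the range can be had from a free-space link-budget sketch using the Friis path-loss formula (the power and sensitivity figures below are illustrative, and indoor walls and fading reduce real ranges well below these free-space estimates):

import math

def free_space_range_m(tx_dbm, rx_sens_dbm, ant_gain_dbi=0.0, freq_hz=2.44e9):
    # Largest distance at which free-space path loss still leaves the received
    # power above the receiver sensitivity (idealized Friis model).
    link_budget_db = tx_dbm + 2 * ant_gain_dbi - rx_sens_dbm
    # FSPL(dB) = 20*log10(d) + 20*log10(f) - 147.55, with d in metres and f in Hz
    fspl_const = 20 * math.log10(freq_hz) - 147.55
    return 10 ** ((link_budget_db - fspl_const) / 20)

# Illustrative figures: a Class 2 radio (+4 dBm) and a Class 1 radio (+20 dBm)
# into the same -90 dBm receiver.
print(round(free_space_range_m(4, -90)), "m")   # several hundred metres in free space
print(round(free_space_range_m(20, -90)), "m")  # roughly six times farther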
Bluetooth profile
To use Bluetooth wireless technology, a device must be able to interpret certain Bluetooth profiles.
For example,
The Headset Profile (HSP) connects headphones and earbuds to a cell phone or laptop.
The Health Device Profile (HDP) can connect a cell phone to a digital thermometer, or a heart rate detector.
The Video Distribution Profile (VDP) sends a video stream from a video camera to a TV screen or a recording device.
Profiles are definitions of possible applications and specify general behaviors that Bluetooth-enabled devices use to communicate with other Bluetooth devices. These profiles include settings to parameterize and to control the communication from the start. Adherence to profiles saves the time for transmitting the parameters anew before the bi-directional link becomes effective. There are a wide range of Bluetooth profiles that describe many different types of applications or use cases for devices.
List of applications
Wireless control and communication between a mobile phone and a handsfree headset. This was one of the earliest applications to become popular.
Wireless control of audio and communication functions between a mobile phone and a Bluetooth compatible car stereo system (and sometimes between the SIM card and the car phone).
Wireless communication between a smartphone and a smart lock for unlocking doors.
Wireless control of and communication with iOS and Android device phones, tablets and portable wireless speakers.
Wireless Bluetooth headset and intercom. Idiomatically, a headset is sometimes called "a Bluetooth".
Wireless streaming of audio to headphones with or without communication capabilities.
Wireless streaming of data collected by Bluetooth-enabled fitness devices to phone or PC.
Wireless networking between PCs in a confined space and where little bandwidth is required.
Wireless communication with PC input and output devices, the most common being the mouse, keyboard and printer.
Transfer of files, contact details, calendar appointments, and reminders between devices with OBEX and sharing directories via FTP.
Triggering the camera shutter of a smartphone using a Bluetooth controlled selfie stick.
Replacement of previous wired RS-232 serial communications in test equipment, GPS receivers, medical equipment, bar code scanners, and traffic control devices.
For controls where infrared was often used.
For low bandwidth applications where higher USB bandwidth is not required and cable-free connection desired.
Sending small advertisements from Bluetooth-enabled advertising hoardings to other, discoverable, Bluetooth devices.
Wireless bridge between two Industrial Ethernet (e.g., PROFINET) networks.
Game consoles have been using Bluetooth as a wireless communications protocol for peripherals since the seventh generation, including Nintendo's Wii and Sony's PlayStation 3 which use Bluetooth for their respective controllers.
Dial-up internet access on personal computers or PDAs using a data-capable mobile phone as a wireless modem.
Short-range transmission of health sensor data from medical devices to mobile phone, set-top box or dedicated telehealth devices.
Allowing a DECT phone to ring and answer calls on behalf of a nearby mobile phone.
Real-time location systems (RTLS) are used to track and identify the location of objects in real time using "Nodes" or "tags" attached to, or embedded in, the objects tracked, and "Readers" that receive and process the wireless signals from these tags to determine their locations.
Personal security application on mobile phones for prevention of theft or loss of items. The protected item has a Bluetooth marker (e.g., a tag) that is in constant communication with the phone. If the connection is broken (the marker is out of range of the phone) then an alarm is raised. This can also be used as a man overboard alarm.
Calgary, Alberta, Canada's Roads Traffic division uses data collected from travelers' Bluetooth devices to predict travel times and road congestion for motorists.
Wireless transmission of audio (a more reliable alternative to FM transmitters)
Live video streaming to the visual cortical implant device by Nabeel Fattah at Newcastle University in 2017.
Connection of motion controllers to a PC when using VR headsets
Bluetooth vs Wi-Fi (IEEE 802.11)
Bluetooth and Wi-Fi (Wi-Fi is the brand name for products using IEEE 802.11 standards) have some similar applications: setting up networks, printing, or transferring files. Wi-Fi is intended as a replacement for high-speed cabling for general local area network access in work areas or home. This category of applications is sometimes called wireless local area networks (WLAN). Bluetooth was intended for portable equipment and its applications. The category of applications is outlined as the wireless personal area network (WPAN). Bluetooth is a replacement for cabling in various personally carried applications in any setting and also works for fixed location applications such as smart energy functionality in the home (thermostats, etc.).
Wi-Fi and Bluetooth are to some extent complementary in their applications and usage. Wi-Fi is usually access point-centered, with an asymmetrical client-server connection with all traffic routed through the access point, while Bluetooth is usually symmetrical, between two Bluetooth devices. Bluetooth serves well in simple applications where two devices need to connect with a minimal configuration like a button press, as in headsets and speakers.
Devices
Bluetooth exists in numerous products such as telephones, speakers, tablets, media players, robotics systems, laptops, and game console equipment as well as some high definition headsets, modems, hearing aids and even watches. Given the variety of devices which use Bluetooth, coupled with the contemporary deprecation of headphone jacks by Apple, Google, and other companies, and the lack of regulation by the FCC, the technology is prone to interference. Nonetheless, Bluetooth is useful when transferring information between two or more devices that are near each other in low-bandwidth situations. Bluetooth is commonly used to transfer sound data with telephones (i.e., with a Bluetooth headset) or byte data with hand-held computers (transferring files).
Bluetooth protocols simplify the discovery and setup of services between devices. Bluetooth devices can advertise all of the services they provide. This makes using services easier, because more of the security, network address and permission configuration can be automated than with many other network types.
Computer requirements
A personal computer that does not have embedded Bluetooth can use a Bluetooth adapter that enables the PC to communicate with Bluetooth devices. While some desktop computers and most recent laptops come with a built-in Bluetooth radio, others require an external adapter, typically in the form of a small USB "dongle."
Unlike its predecessor, IrDA, which requires a separate adapter for each device, Bluetooth lets multiple devices communicate with a computer over a single adapter.
Operating system implementation
For Microsoft platforms, Windows XP Service Pack 2 and SP3 releases work natively with Bluetooth v1.1, v2.0 and v2.0+EDR. Previous versions required users to install their Bluetooth adapter's own drivers, which were not directly supported by Microsoft. Microsoft's own Bluetooth dongles (packaged with their Bluetooth computer devices) have no external drivers and thus require at least Windows XP Service Pack 2. Windows Vista RTM/SP1 with the Feature Pack for Wireless or Windows Vista SP2 work with Bluetooth v2.1+EDR. Windows 7 works with Bluetooth v2.1+EDR and Extended Inquiry Response (EIR).
The Windows XP and Windows Vista/Windows 7 Bluetooth stacks support the following Bluetooth profiles natively: PAN, SPP, DUN, HID, HCRP. The Windows XP stack can be replaced by a third party stack that supports more profiles or newer Bluetooth versions. The Windows Vista/Windows 7 Bluetooth stack supports vendor-supplied additional profiles without requiring that the Microsoft stack be replaced. Windows 8 and later support Bluetooth Low Energy (BLE). It is generally recommended to install the latest vendor driver and its associated stack to be able to use the Bluetooth device at its fullest extent.
Apple products have worked with Bluetooth since Mac OS X v10.2, which was released in 2002.
Linux has two popular Bluetooth stacks, BlueZ and Fluoride. The BlueZ stack is included with most Linux kernels and was originally developed by Qualcomm. Fluoride, earlier known as Bluedroid, is included in Android OS and was originally developed by Broadcom.
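On a Linux system running BlueZ, the kernel exposes Bluetooth sockets directly to user space; the following minimal RFCOMM client sketch illustrates this (the peer address and channel are placeholders, and Python's AF_BLUETOOTH support is available only on Linux builds):

import socket

PEER_ADDR = "00:11:22:33:44:55"  # placeholder address of an already-paired device
CHANNEL = 1                      # placeholder RFCOMM channel

# AF_BLUETOOTH / BTPROTO_RFCOMM are provided by CPython on Linux with BlueZ.
sock = socket.socket(socket.AF_BLUETOOTH, socket.SOCK_STREAM, socket.BTPROTO_RFCOMM)
try:
    sock.connect((PEER_ADDR, CHANNEL))
    sock.send(b"hello over RFCOMM\n")
finally:
    sock.close()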
There is also Affix stack, developed by Nokia. It was once popular, but has not been updated since 2005.
FreeBSD has included Bluetooth since its v5.0 release, implemented through netgraph.
NetBSD has included Bluetooth since its v4.0 release. Its Bluetooth stack was ported to OpenBSD as well; however, OpenBSD later removed it as unmaintained.
DragonFly BSD has had NetBSD's Bluetooth implementation since 1.11 (2008). A netgraph-based implementation from FreeBSD has also been available in the tree, possibly disabled until 2014-11-15, and may require more work.
Specifications and features
The specifications were formalized by the Bluetooth Special Interest Group (SIG) and formally announced on 20 May 1998. In 2014 it had a membership of over 30,000 companies worldwide. It was established by Ericsson, IBM, Intel, Nokia and Toshiba, and later joined by many other companies.
All versions of the Bluetooth standards support backwards compatibility. That lets the latest standard cover all older versions.
The Bluetooth Core Specification Working Group (CSWG) produces mainly 4 kinds of specifications:
The Bluetooth Core Specification, release cycle is typically a few years in between
Core Specification Addendum (CSA), release cycle can be as tight as a few times per year
Core Specification Supplements (CSS), can be released very quickly
Errata (Available with a user account: Errata login)
Bluetooth 1.0 and 1.0B
Products were not interoperable
Anonymity was not possible, preventing certain services from using Bluetooth environments
Bluetooth 1.1
Ratified as IEEE Standard 802.15.1–2002
Many errors found in the v1.0B specifications were fixed.
Added possibility of non-encrypted channels.
Received signal strength indicator (RSSI).
Bluetooth 1.2
Major enhancements include:
Faster connection and discovery
Adaptive frequency-hopping spread spectrum (AFH), which improves resistance to radio frequency interference by avoiding the use of crowded frequencies in the hopping sequence.
Higher transmission speeds in practice than in v1.1, up to 721 kbit/s.
Extended Synchronous Connections (eSCO), which improve voice quality of audio links by allowing retransmissions of corrupted packets, and may optionally increase audio latency to provide better concurrent data transfer.
Host Controller Interface (HCI) operation with three-wire UART.
Ratified as IEEE Standard 802.15.1–2005
Introduced flow control and retransmission modes for L2CAP.
Bluetooth 2.0 + EDR
This version of the Bluetooth Core Specification was released before 2005. The main difference is the introduction of an Enhanced Data Rate (EDR) for faster data transfer. The bit rate of EDR is 3 Mbit/s, although the maximum data transfer rate (allowing for inter-packet time and acknowledgements) is 2.1 Mbit/s. EDR uses a combination of GFSK and phase-shift keying modulation (PSK) with two variants, π/4-DQPSK and 8-DPSK. EDR can provide a lower power consumption through a reduced duty cycle.
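The gap between the 3 Mbit/s signalling rate and the roughly 2.1 Mbit/s usable rate comes from slot overhead: even the largest EDR packet occupies five slots and must be followed by a return slot for the acknowledgement. A back-of-the-envelope sketch, assuming the largest EDR packet type (3-DH5, with a 1021-byte maximum payload):

SLOT_US = 625               # Bluetooth slot length in microseconds
PAYLOAD_BYTES = 1021        # assumed maximum 3-DH5 payload
SLOTS_PER_EXCHANGE = 5 + 1  # five slots of data plus one return slot

exchange_s = SLOTS_PER_EXCHANGE * SLOT_US * 1e-6
throughput_mbit = PAYLOAD_BYTES * 8 / exchange_s / 1e6
print(round(throughput_mbit, 2), "Mbit/s")  # about 2.2 Mbit/s, close to the quoted 2.1 Mbit/s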
The specification is published as Bluetooth v2.0 + EDR, which implies that EDR is an optional feature. Aside from EDR, the v2.0 specification contains other minor improvements, and products may claim compliance to "Bluetooth v2.0" without supporting the higher data rate. At least one commercial device states "Bluetooth v2.0 without EDR" on its data sheet.
Bluetooth 2.1 + EDR
Bluetooth Core Specification version 2.1 + EDR was adopted by the Bluetooth SIG on 26 July 2007.
The headline feature of v2.1 is secure simple pairing (SSP): this improves the pairing experience for Bluetooth devices, while increasing the use and strength of security.
Version 2.1 allows various other improvements, including extended inquiry response (EIR), which provides more information during the inquiry procedure to allow better filtering of devices before connection; and sniff subrating, which reduces the power consumption in low-power mode.
Bluetooth 3.0 + HS
Version 3.0 + HS of the Bluetooth Core Specification was adopted by the Bluetooth SIG on 21 April 2009. Bluetooth v3.0 + HS provides theoretical data transfer speeds of up to 24 Mbit/s, though not over the Bluetooth link itself. Instead, the Bluetooth link is used for negotiation and establishment, and the high data rate traffic is carried over a colocated 802.11 link.
The main new feature is AMP (Alternative MAC/PHY), the addition of 802.11 as a high-speed transport. The high-speed part of the specification is not mandatory, and hence only devices that display the "+HS" logo actually support Bluetooth over 802.11 high-speed data transfer. A Bluetooth v3.0 device without the "+HS" suffix is only required to support features introduced in Core Specification version 3.0 or the earlier Core Specification Addendum 1.
L2CAP Enhanced modes Enhanced Retransmission Mode (ERTM) implements reliable L2CAP channel, while Streaming Mode (SM) implements unreliable channel with no retransmission or flow control. Introduced in Core Specification Addendum 1.
Alternative MAC/PHY Enables the use of alternative MAC and PHYs for transporting Bluetooth profile data. The Bluetooth radio is still used for device discovery, initial connection and profile configuration. However, when large quantities of data must be sent, the high-speed alternative MAC PHY 802.11 (typically associated with Wi-Fi) transports the data. This means that Bluetooth uses proven low power connection models when the system is idle, and the faster radio when it must send large quantities of data. AMP links require enhanced L2CAP modes.
Unicast Connectionless Data Permits sending service data without establishing an explicit L2CAP channel. It is intended for use by applications that require low latency between user action and reconnection/transmission of data. This is only appropriate for small amounts of data.
Enhanced Power Control Updates the power control feature to remove the open loop power control, and also to clarify ambiguities in power control introduced by the new modulation schemes added for EDR. Enhanced power control removes the ambiguities by specifying the behavior that is expected. The feature also adds closed loop power control, meaning RSSI filtering can start as the response is received. Additionally, a "go straight to maximum power" request has been introduced. This is expected to deal with the headset link loss issue typically observed when a user puts their phone into a pocket on the opposite side to the headset.
Ultra-wideband
The high-speed (AMP) feature of Bluetooth v3.0 was originally intended for UWB, but the WiMedia Alliance, the body responsible for the flavor of UWB intended for Bluetooth, announced in March 2009 that it was disbanding, and ultimately UWB was omitted from the Core v3.0 specification.
On 16 March 2009, the WiMedia Alliance announced it was entering into technology transfer agreements for the WiMedia Ultra-wideband (UWB) specifications. WiMedia has transferred all current and future specifications, including work on future high-speed and power-optimized implementations, to the Bluetooth Special Interest Group (SIG), Wireless USB Promoter Group and the USB Implementers Forum. After successful completion of the technology transfer, marketing, and related administrative items, the WiMedia Alliance ceased operations.
In October 2009, the Bluetooth Special Interest Group suspended development of UWB as part of the alternative MAC/PHY, Bluetooth v3.0 + HS solution. A small, but significant, number of former WiMedia members had not and would not sign up to the necessary agreements for the IP transfer. As of 2009, the Bluetooth SIG was in the process of evaluating other options for its longer term roadmap.
Bluetooth 4.0
The Bluetooth SIG completed the Bluetooth Core Specification version 4.0 (called Bluetooth Smart), which was adopted in 2010. It includes Classic Bluetooth, Bluetooth high speed and Bluetooth Low Energy (BLE) protocols. Bluetooth high speed is based on Wi-Fi, and Classic Bluetooth consists of legacy Bluetooth protocols.
Bluetooth Low Energy, previously known as Wibree, is a subset of Bluetooth v4.0 with an entirely new protocol stack for rapid build-up of simple links. As an alternative to the Bluetooth standard protocols that were introduced in Bluetooth v1.0 to v3.0, it is aimed at very low power applications powered by a coin cell. Chip designs allow for two types of implementation: dual-mode and single-mode, alongside enhanced versions of earlier chips. The provisional names Wibree and Bluetooth ULP (Ultra Low Power) were abandoned and the BLE name was used for a while. In late 2011, new logos "Bluetooth Smart Ready" for hosts and "Bluetooth Smart" for sensors were introduced as the general-public face of BLE.
Compared to Classic Bluetooth, Bluetooth Low Energy is intended to provide considerably reduced power consumption and cost while maintaining a similar communication range. In terms of lengthening the battery life of Bluetooth devices, BLE represents a significant progression.
In a single-mode implementation, only the low energy protocol stack is implemented. Dialog Semiconductor, STMicroelectronics, AMICCOM, CSR, Nordic Semiconductor and Texas Instruments have released single mode Bluetooth Low Energy solutions.
In a dual-mode implementation, Bluetooth Smart functionality is integrated into an existing Classic Bluetooth controller. Semiconductor companies including Qualcomm-Atheros, CSR, Broadcom and Texas Instruments have announced the availability of chips meeting the standard. The compliant architecture shares all of Classic Bluetooth's existing radio and functionality, resulting in a negligible cost increase compared to Classic Bluetooth.
Cost-reduced single-mode chips, which enable highly integrated and compact devices, feature a lightweight Link Layer providing ultra-low power idle mode operation, simple device discovery, and reliable point-to-multipoint data transfer with advanced power-save and secure encrypted connections at the lowest possible cost.
General improvements in version 4.0 include the changes necessary to facilitate BLE modes, as well as the Generic Attribute Profile (GATT) and Security Manager (SM) services with AES encryption.
Core Specification Addendum 2 was unveiled in December 2011; it contains improvements to the audio Host Controller Interface and to the High Speed (802.11) Protocol Adaptation Layer.
Core Specification Addendum 3 revision 2 has an adoption date of 24 July 2012.
Core Specification Addendum 4 has an adoption date of 12 February 2013.
Bluetooth 4.1
The Bluetooth SIG announced formal adoption of the Bluetooth v4.1 specification on 4 December 2013. This specification is an incremental software update to Bluetooth Specification v4.0, and not a hardware update. The update incorporates Bluetooth Core Specification Addenda (CSA 1, 2, 3 and 4) and adds new features that improve consumer usability. These include increased co-existence support for LTE and bulk data exchange rates, and they aid developer innovation by allowing devices to support multiple roles simultaneously.
New features of this specification include:
Mobile wireless service coexistence signaling
Train nudging and generalized interlaced scanning
Low Duty Cycle Directed Advertising
L2CAP connection-oriented and dedicated channels with credit-based flow control
Dual Mode and Topology
LE Link Layer Topology
802.11n PAL
Audio architecture updates for Wide Band Speech
Fast data advertising interval
Limited discovery time
Note that some features were already available in a Core Specification Addendum (CSA) before the release of v4.1.
Bluetooth 4.2
Released on 2 December 2014, it introduces features for the Internet of things.
The major areas of improvement are:
Low Energy Secure Connection with Data Packet Length Extension
Link Layer Privacy with Extended Scanner Filter Policies
Internet Protocol Support Profile (IPSP), making Bluetooth Smart devices IPv6-ready to support the connected home
Older Bluetooth hardware may receive 4.2 features such as Data Packet Length Extension and improved privacy via firmware updates.
Bluetooth 5
The Bluetooth SIG released Bluetooth 5 on 6 December 2016. Its new features are mainly focused on new Internet of Things technology. Sony was the first to announce Bluetooth 5.0 support with its Xperia XZ Premium in February 2017 during the Mobile World Congress 2017. The Samsung Galaxy S8 launched with Bluetooth 5 support in April 2017. In September 2017, the iPhone 8, 8 Plus and iPhone X launched with Bluetooth 5 support as well. Apple also integrated Bluetooth 5 in its new HomePod offering released on 9 February 2018. Marketing drops the point number, so that it is just "Bluetooth 5" (unlike Bluetooth 4.0); the change was made for the sake of "Simplifying our marketing, communicating user benefits more effectively and making it easier to signal significant technology updates to the market."
Bluetooth 5 provides, for BLE, options that can double the speed (2 Mbit/s burst) at the expense of range, or provide up to four times the range at the expense of data rate. The increase in transmissions could be important for Internet of Things devices, where many nodes connect throughout a whole house. Bluetooth 5 increases capacity of connectionless services such as location-relevant navigation of low-energy Bluetooth connections.
The major areas of improvement are:
Slot Availability Mask (SAM)
2 Mbit/s PHY for LE
LE Long Range
High Duty Cycle Non-Connectable Advertising
LE Advertising Extensions
LE Channel Selection Algorithm #2
Features Added in CSA5 – Integrated in v5.0:
Higher Output Power
The following features were removed in this version of the specification:
Park State
Bluetooth 5.1
The Bluetooth SIG presented Bluetooth 5.1 on 21 January 2019.
The major areas of improvement are:
Angle of Arrival (AoA) and Angle of Departure (AoD) which are used for locating and tracking of devices
Advertising Channel Index
GATT caching
Minor Enhancements batch 1:
HCI support for debug keys in LE Secure Connections
Sleep clock accuracy update mechanism
ADI field in scan response data
Interaction between QoS and Flow Specification
Block Host channel classification for secondary advertising
Allow the SID to appear in scan response reports
Specify the behavior when rules are violated
Periodic Advertising Sync Transfer
Features Added in Core Specification Addendum (CSA) 6 – Integrated in v5.1:
Models
Mesh-based model hierarchy
The following features were removed in this version of the specification:
Unit keys
Bluetooth 5.2
On 31 December 2019, the Bluetooth SIG published the Bluetooth Core Specification Version 5.2. The new specification adds new features:
Enhanced Attribute Protocol (EATT), an improved version of the Attribute Protocol (ATT)
LE Power Control
LE Isochronous Channels
LE Audio, which is built on top of the new 5.2 features. BT LE Audio was announced in January 2020 at CES by the Bluetooth SIG. Compared to regular Bluetooth Audio, Bluetooth Low Energy Audio makes lower battery consumption possible and creates a standardized way of transmitting audio over BT LE. Bluetooth LE Audio also allows one-to-many and many-to-one transmission, allowing multiple receivers from one source or one receiver for multiple sources, known as Auracast. It uses a new LC3 codec. BLE Audio also adds support for hearing aids. On 12 July 2022, the Bluetooth SIG announced the completion of Bluetooth LE Audio. The standard claims a lower minimum latency of 20–30 ms, versus 100–200 ms for Bluetooth Classic audio. At IFA in August 2023, Samsung announced support for Auracast through a software update for its Galaxy Buds2 Pro and two of its TVs; in October 2023, users started receiving the update for the earbuds.
Bluetooth 5.3
The Bluetooth SIG published the Bluetooth Core Specification Version 5.3 on 13 July 2021. The feature enhancements of Bluetooth 5.3 are:
Connection Subrating
Periodic Advertisement Interval
Channel Classification Enhancement
Encryption key size control enhancements
The following features were removed in this version of the specification:
Alternate MAC and PHY (AMP) Extension
Bluetooth 5.4
The Bluetooth SIG released the Bluetooth Core Specification Version 5.4 on 7 February 2023. This new version adds the following features:
Periodic Advertising with Responses (PAwR)
Encrypted Advertising Data
LE Security Levels Characteristic
Advertising Coding Selection
Technical information
Architecture
Software
To extend the compatibility of Bluetooth devices, devices that adhere to the standard use an interface called the Host Controller Interface (HCI) between the host and the controller.
High-level protocols such as SDP (the protocol used to find other Bluetooth devices within communication range, also responsible for detecting the functions of devices in range), RFCOMM (the protocol used to emulate serial port connections) and TCS (the telephony control protocol) interact with the baseband controller through the Logical Link Control and Adaptation Protocol (L2CAP). The L2CAP protocol is responsible for the segmentation and reassembly of packets.
Hardware
Logically, the hardware that makes up a Bluetooth device consists of two parts, which may or may not be physically separate: a radio device, responsible for modulating and transmitting the signal, and a digital controller. The digital controller is likely a CPU, one of whose functions is to run a Link Controller and to interface with the host device; some functions may be delegated to hardware. The Link Controller is responsible for the processing of the baseband and the management of ARQ and physical-layer FEC protocols. In addition, it handles the transfer functions (both asynchronous and synchronous), audio coding (e.g. the SBC codec) and data encryption. The CPU of the device is responsible for handling the host device's Bluetooth-related instructions, in order to simplify its operation. To do this, the CPU runs software called the Link Manager that has the function of communicating with other devices through the LMP protocol.
A Bluetooth device is a short-range wireless device. Bluetooth devices are fabricated on RF CMOS integrated circuit (RF circuit) chips.
Bluetooth protocol stack
Bluetooth is defined as a layered protocol architecture consisting of core protocols, cable replacement protocols, telephony control protocols, and adopted protocols. Mandatory protocols for all Bluetooth stacks are LMP, L2CAP and SDP. In addition, devices that communicate with Bluetooth almost universally can use the HCI and RFCOMM protocols.
Link Manager
The Link Manager (LM) is the system that manages establishing the connection between devices. It is responsible for the establishment, authentication and configuration of the link. The Link Manager locates other link managers and communicates with them via the Link Manager Protocol (LMP). To perform its function as a service provider, the LM uses the services included in the Link Controller (LC).
The Link Manager Protocol basically consists of several PDUs (Protocol Data Units) that are sent from one device to another. The following is a list of supported services:
Transmission and reception of data
Name request
Request of the link addresses
Establishment of the connection
Authentication
Negotiation of link mode and connection establishment
Host Controller Interface
The Host Controller Interface provides a command interface between the controller and the host.
Logical Link Control and Adaptation Protocol
The Logical Link Control and Adaptation Protocol (L2CAP) is used to multiplex multiple logical connections between two devices using different higher level protocols.
Provides segmentation and reassembly of on-air packets.
In Basic mode, L2CAP provides packets with a payload configurable up to 64 kB, with 672 bytes as the default MTU, and 48 bytes as the minimum mandatory supported MTU.
In Retransmission and Flow Control modes, L2CAP can be configured either for isochronous data or reliable data per channel by performing retransmissions and CRC checks.
Bluetooth Core Specification Addendum 1 adds two additional L2CAP modes to the core specification. These modes effectively deprecate original Retransmission and Flow Control modes:
Enhanced Retransmission Mode (ERTM) This mode is an improved version of the original retransmission mode. This mode provides a reliable L2CAP channel.
Streaming Mode (SM) This is a very simple mode, with no retransmission or flow control. This mode provides an unreliable L2CAP channel.
Reliability in any of these modes is optionally and/or additionally guaranteed by the lower layer Bluetooth BDR/EDR air interface by configuring the number of retransmissions and flush timeout (time after which the radio flushes packets). In-order sequencing is guaranteed by the lower layer.
Only L2CAP channels configured in ERTM or SM may be operated over AMP logical links.
Service Discovery Protocol
The Service Discovery Protocol (SDP) allows a device to discover services offered by other devices, and their associated parameters. For example, when a mobile phone is used with a Bluetooth headset, the phone uses SDP to determine which Bluetooth profiles the headset can use (Headset Profile, Hands-Free Profile (HFP), Advanced Audio Distribution Profile (A2DP), etc.) and the protocol multiplexer settings needed for the phone to connect to the headset using each of them. Each service is identified by a Universally Unique Identifier (UUID), with official services (Bluetooth profiles) assigned a short-form UUID (16 bits rather than the full 128).
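As a minimal illustrative sketch, a short-form UUID expands to the full 128-bit form by inserting the 16-bit value into the Bluetooth Base UUID, 00000000-0000-1000-8000-00805F9B34FB; the 0x110B value below is the assigned identifier for the A2DP audio sink service class, used here only as an example.

```python
# Minimal sketch: expanding a 16-bit short-form UUID into the 128-bit form
# by inserting it into the Bluetooth Base UUID.
import uuid

BASE_UUID = uuid.UUID("00000000-0000-1000-8000-00805F9B34FB")

def expand_uuid16(short_uuid: int) -> uuid.UUID:
    # The 16-bit value occupies bits 96..111 of the 128-bit UUID.
    return uuid.UUID(int=BASE_UUID.int + (short_uuid << 96))

print(expand_uuid16(0x110B))  # 0000110b-0000-1000-8000-00805f9b34fb (AudioSink)
```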
Radio Frequency Communications
Radio Frequency Communications (RFCOMM) is a cable replacement protocol used for generating a virtual serial data stream. RFCOMM provides for binary data transport and emulates EIA-232 (formerly RS-232) control signals over the Bluetooth baseband layer, i.e., it is a serial port emulation.
RFCOMM provides a simple, reliable, data stream to the user, similar to TCP. It is used directly by many telephony related profiles as a carrier for AT commands, as well as being a transport layer for OBEX over Bluetooth.
Many Bluetooth applications use RFCOMM because of its widespread support and publicly available API on most operating systems. Additionally, applications that used a serial port to communicate can be quickly ported to use RFCOMM.
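As a hedged sketch of such usage, the snippet below opens an RFCOMM connection with Python's standard socket module; this assumes a Linux host with BlueZ and a CPython build with AF_BLUETOOTH support, and the device address and channel are placeholders rather than values from this article.

```python
# Sketch only: an RFCOMM "virtual serial port" client using Python's standard
# socket module (Linux/BlueZ builds of CPython).
import socket

REMOTE_ADDR = "00:11:22:33:44:55"   # placeholder device address
CHANNEL = 1                          # placeholder RFCOMM channel (normally found via SDP)

with socket.socket(socket.AF_BLUETOOTH, socket.SOCK_STREAM,
                   socket.BTPROTO_RFCOMM) as sock:
    sock.connect((REMOTE_ADDR, CHANNEL))  # open the emulated serial link
    sock.sendall(b"AT\r")                 # e.g. an AT command, as carried by telephony profiles
    print(sock.recv(128))
```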
Bluetooth Network Encapsulation Protocol
The Bluetooth Network Encapsulation Protocol (BNEP) is used for transferring another protocol stack's data via an L2CAP channel.
Its main purpose is the transmission of IP packets in the Personal Area Networking Profile.
BNEP performs a similar function to SNAP in Wireless LAN.
Audio/Video Control Transport Protocol
The Audio/Video Control Transport Protocol (AVCTP) is used by the remote control profile to transfer AV/C commands over an L2CAP channel. The music control buttons on a stereo headset use this protocol to control the music player.
Audio/Video Distribution Transport Protocol
The Audio/Video Distribution Transport Protocol (AVDTP) is used by the Advanced Audio Distribution Profile (A2DP) to stream music to stereo headsets over an L2CAP channel; it is also used by the Video Distribution Profile for video transport over Bluetooth.
Telephony Control Protocol
The Telephony Control Protocol– Binary (TCS BIN) is the bit-oriented protocol that defines the call control signaling for the establishment of voice and data calls between Bluetooth devices. Additionally, "TCS BIN defines mobility management procedures for handling groups of Bluetooth TCS devices."
TCS-BIN is only used by the cordless telephony profile, which failed to attract implementers. As such it is only of historical interest.
Adopted protocols
Adopted protocols are defined by other standards-making organizations and incorporated into Bluetooth's protocol stack, allowing Bluetooth to create protocols only when necessary. The adopted protocols include:
Point-to-Point Protocol (PPP) Internet standard protocol for transporting IP datagrams over a point-to-point link.
TCP/IP/UDP Foundation Protocols for TCP/IP protocol suite
Object Exchange Protocol (OBEX) Session-layer protocol for the exchange of objects, providing a model for object and operation representation
Wireless Application Environment/Wireless Application Protocol (WAE/WAP) WAE specifies an application framework for wireless devices and WAP is an open standard to provide mobile users access to telephony and information services.
Baseband error correction
Depending on packet type, individual packets may be protected by error correction, either 1/3 rate forward error correction (FEC) or 2/3 rate. In addition, packets with CRC will be retransmitted until acknowledged by automatic repeat request (ARQ).
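For illustration, the 1/3-rate code is commonly described as a simple threefold bit-repetition code decoded by majority vote; the sketch below mirrors that idea generically and is not an implementation of the Bluetooth baseband itself.

```python
# Generic sketch of 1/3-rate repetition FEC with majority-vote decoding.
def encode_1_3(bits):
    return [b for b in bits for _ in range(3)]         # send each bit three times

def decode_1_3(coded):
    triples = (coded[i:i + 3] for i in range(0, len(coded), 3))
    return [1 if sum(t) >= 2 else 0 for t in triples]  # majority vote per triple

coded = encode_1_3([1, 0, 1, 1])
coded[4] ^= 1                                          # flip one bit "in transit"
assert decode_1_3(coded) == [1, 0, 1, 1]               # the single error is corrected
```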
Setting up connections
Any Bluetooth device in discoverable mode transmits the following information on demand:
Device name
Device class
List of services
Technical information (for example: device features, manufacturer, Bluetooth specification used, clock offset)
Any device may perform an inquiry to find other devices to connect to, and any device can be configured to respond to such inquiries. However, if the device trying to connect knows the address of the device, it always responds to direct connection requests and transmits the information shown in the list above if requested. Use of a device's services may require pairing or acceptance by its owner, but the connection itself can be initiated by any device and held until it goes out of range. Some devices can be connected to only one device at a time, and connecting to them prevents them from connecting to other devices and appearing in inquiries until they disconnect from the other device.
Every device has a unique 48-bit address. However, these addresses are generally not shown in inquiries. Instead, friendly Bluetooth names are used, which can be set by the user. This name appears when another user scans for devices and in lists of paired devices.
Most cellular phones have the Bluetooth name set to the manufacturer and model of the phone by default. Most cellular phones and laptops show only the Bluetooth names and special programs are required to get additional information about remote devices. This can be confusing as, for example, there could be several cellular phones in range named T610 (see Bluejacking).
Pairing and bonding
Motivation
Many services offered over Bluetooth can expose private data or let a connecting party control the Bluetooth device. Security reasons make it necessary to recognize specific devices, and thus enable control over which devices can connect to a given Bluetooth device. At the same time, it is useful for Bluetooth devices to be able to establish a connection without user intervention (for example, as soon as in range).
To resolve this conflict, Bluetooth uses a process called bonding, and a bond is generated through a process called pairing. The pairing process is triggered either by a specific request from a user to generate a bond (for example, the user explicitly requests to "Add a Bluetooth device"), or it is triggered automatically when connecting to a service where (for the first time) the identity of a device is required for security purposes. These two cases are referred to as dedicated bonding and general bonding respectively.
Pairing often involves some level of user interaction. This user interaction confirms the identity of the devices. When pairing completes, a bond forms between the two devices, enabling those two devices to connect in the future without repeating the pairing process to confirm device identities. When desired, the user can remove the bonding relationship.
Implementation
During pairing, the two devices establish a relationship by creating a shared secret known as a link key. If both devices store the same link key, they are said to be paired or bonded. A device that wants to communicate only with a bonded device can cryptographically authenticate the identity of the other device, ensuring it is the same device it previously paired with. Once a link key is generated, an authenticated ACL link between the devices may be encrypted to protect exchanged data against eavesdropping. Users can delete link keys from either device, which removes the bond between the devices—so it is possible for one device to have a stored link key for a device it is no longer paired with.
Bluetooth services generally require either encryption or authentication and as such require pairing before they let a remote device connect. Some services, such as the Object Push Profile, elect not to explicitly require authentication or encryption so that pairing does not interfere with the user experience associated with the service use-cases.
Pairing mechanisms
Pairing mechanisms changed significantly with the introduction of Secure Simple Pairing in Bluetooth v2.1. The following summarizes the pairing mechanisms:
Legacy pairing: This is the only method available in Bluetooth v2.0 and before. Each device must enter a PIN code; pairing is only successful if both devices enter the same PIN code. Any 16-byte UTF-8 string may be used as a PIN code; however, not all devices may be capable of entering all possible PIN codes.
Limited input devices: The obvious example of this class of device is a Bluetooth hands-free headset, which generally has few inputs. These devices usually have a fixed PIN, for example "0000" or "1234", that is hard-coded into the device.
Numeric input devices: Mobile phones are classic examples of these devices. They allow a user to enter a numeric value up to 16 digits in length.
Alpha-numeric input devices: PCs and smartphones are examples of these devices. They allow a user to enter full UTF-8 text as a PIN code. If pairing with a less capable device the user must be aware of the input limitations on the other device; there is no mechanism available for a capable device to determine how it should limit the available input a user may use.
Secure Simple Pairing (SSP): This is required by Bluetooth v2.1, although a Bluetooth v2.1 device may only use legacy pairing to interoperate with a v2.0 or earlier device. Secure Simple Pairing uses a form of public-key cryptography, and some types can help protect against man in the middle, or MITM attacks. SSP has the following authentication mechanisms:
Just works: As the name implies, this method just works, with no user interaction. However, a device may prompt the user to confirm the pairing process. This method is typically used by headsets with minimal IO capabilities, and is more secure than the fixed PIN mechanism this limited set of devices uses for legacy pairing. This method provides no man-in-the-middle (MITM) protection.
Numeric comparison: If both devices have a display, and at least one can accept a binary yes/no user input, they may use Numeric Comparison. This method displays a 6-digit numeric code on each device. The user should compare the numbers to ensure they are identical. If the comparison succeeds, the user(s) should confirm pairing on the device(s) that can accept an input. This method provides MITM protection, assuming the user confirms on both devices and actually performs the comparison properly.
Passkey Entry: This method may be used between a device with a display and a device with numeric keypad entry (such as a keyboard), or two devices with numeric keypad entry. In the first case, the display presents a 6-digit numeric code to the user, who then enters the code on the keypad. In the second case, the user of each device enters the same 6-digit number. Both of these cases provide MITM protection.
Out of band (OOB): This method uses an external means of communication, such as near-field communication (NFC) to exchange some information used in the pairing process. Pairing is completed using the Bluetooth radio, but requires information from the OOB mechanism. This provides only the level of MITM protection that is present in the OOB mechanism.
SSP is considered simple for the following reasons:
In most cases, it does not require a user to generate a passkey.
For use cases not requiring MITM protection, user interaction can be eliminated.
For numeric comparison, MITM protection can be achieved with a simple equality comparison by the user.
Using OOB with NFC enables pairing when devices simply get close, rather than requiring a lengthy discovery process.
Security concerns
Prior to Bluetooth v2.1, encryption is not required and can be turned off at any time. Moreover, the encryption key is only good for approximately 23.5 hours; using a single encryption key longer than this time allows simple XOR attacks to retrieve the encryption key.
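The weakness behind such key-reuse attacks is generic to stream ciphers rather than specific to Bluetooth's E0; the following sketch illustrates the principle: XORing two ciphertexts produced with the same keystream cancels the key material and leaks the XOR of the plaintexts. The messages and key below are arbitrary illustrative values.

```python
# Generic illustration of keystream reuse (not Bluetooth E0 itself).
import os

keystream = os.urandom(16)                       # key material reused for two messages
p1, p2 = b"transfer $100.00", b"transfer $999.99"
c1 = bytes(a ^ b for a, b in zip(p1, keystream))
c2 = bytes(a ^ b for a, b in zip(p2, keystream))

leak = bytes(a ^ b for a, b in zip(c1, c2))      # equals p1 XOR p2; no key needed
assert leak == bytes(a ^ b for a, b in zip(p1, p2))
```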
Turning off encryption is required for several normal operations, so it is problematic to detect if encryption is disabled for a valid reason or a security attack.
Bluetooth v2.1 addresses this in the following ways:
Encryption is required for all non-SDP (Service Discovery Protocol) connections
A new Encryption Pause and Resume feature is used for all normal operations that require that encryption be disabled. This makes it easier to distinguish normal operation from security attacks.
The encryption key must be refreshed before it expires.
Link keys may be stored on the device file system, not on the Bluetooth chip itself. Many Bluetooth chip manufacturers let link keys be stored on the device—however, if the device is removable, this means that the link key moves with the device.
Security
Overview
Bluetooth implements confidentiality, authentication and key derivation with custom algorithms based on the SAFER+ block cipher. Bluetooth key generation is generally based on a Bluetooth PIN, which must be entered into both devices. This procedure might be modified if one of the devices has a fixed PIN (e.g., for headsets or similar devices with a restricted user interface). During pairing, an initialization key or master key is generated, using the E22 algorithm.
The E0 stream cipher is used for encrypting packets, granting confidentiality, and is based on a shared cryptographic secret, namely a previously generated link key or master key. Those keys, used for subsequent encryption of data sent via the air interface, rely on the Bluetooth PIN, which has been entered into one or both devices.
An overview of Bluetooth vulnerabilities exploits was published in 2007 by Andreas Becker.
In September 2008, the National Institute of Standards and Technology (NIST) published a Guide to Bluetooth Security as a reference for organizations. It describes Bluetooth security capabilities and how to secure Bluetooth technologies effectively. While Bluetooth has its benefits, it is susceptible to denial-of-service attacks, eavesdropping, man-in-the-middle attacks, message modification, and resource misappropriation. Users and organizations must evaluate their acceptable level of risk and incorporate security into the lifecycle of Bluetooth devices. To help mitigate risks, included in the NIST document are security checklists with guidelines and recommendations for creating and maintaining secure Bluetooth piconets, headsets, and smart card readers.
Bluetooth v2.1 – finalized in 2007 with consumer devices first appearing in 2009 – makes significant changes to Bluetooth's security, including pairing. See the pairing mechanisms section for more about these changes.
The most common cyberattacks on Bluetooth devices
When a Bluetooth device transmits unwanted spam and phishing messages to another Bluetooth device, it is known as bluejacking. Bluesnarfing is a malicious hack that uses a Bluetooth connection to steal information from one's device. Bluesmacking is a denial of service (DoS) attack that attempts to overload one's device and shut it down. Bluebugging is a sort of attack in which a cybercriminal uses a hidden Bluetooth connection to acquire backdoor access to one's device. Car whispering is a Bluetooth security flaw that affects Bluetooth-enabled car radios.
Cybersecurity compliance of Bluetooth devices
RED
The Radio Equipment Directive 2014/53/EU (RED) regulates radio equipment's electromagnetic compatibility, safety, health, and radio spectrum efficiency. The Directive's Article 3(3) adds further essential requirements for certain categories of radio equipment, including cybersecurity; radio equipment marketed in the EU must meet the cybersecurity criteria in Article 3(3) of the RED.
Consumer IoT devices
ETSI EN 303 645 prepares the devices to guard against the most prevalent cybersecurity risks and prevent large-scale assaults on connected devices. It lays the groundwork for IoT certification. It has 13 cybersecurity categories and data protection regulations. In addition to device security requirements, it offers advice for managing security risks, including identification, assessment, deployment of controls, and continuing monitoring.
Bluejacking
Bluejacking is the sending of either a picture or a message from one user to an unsuspecting user through Bluetooth wireless technology. Common applications include short messages, e.g., "You've just been bluejacked!" Bluejacking does not involve the removal or alteration of any data from the device.
Some form of DoS is also possible, even in modern devices, by sending unsolicited pairing requests in rapid succession; this becomes disruptive because most systems display a full screen notification for every connection request, interrupting every other activity, especially on less powerful devices.
History of security concerns
2001–2004
In 2001, Jakobsson and Wetzel from Bell Laboratories discovered flaws in the Bluetooth pairing protocol and also pointed to vulnerabilities in the encryption scheme. In 2003, Ben and Adam Laurie from A.L. Digital Ltd. discovered that serious flaws in some poor implementations of Bluetooth security may lead to disclosure of personal data. In a subsequent experiment, Martin Herfurt from the trifinite.group was able to do a field-trial at the CeBIT fairgrounds, showing the importance of the problem to the world. A new attack called BlueBug was used for this experiment. In 2004 the first purported virus using Bluetooth to spread itself among mobile phones appeared on the Symbian OS.
The virus was first described by Kaspersky Lab and requires users to confirm the installation of unknown software before it can propagate. The virus was written as a proof-of-concept by a group of virus writers known as "29A" and sent to anti-virus groups. Thus, it should be regarded as a potential (but not real) security threat to Bluetooth technology or Symbian OS, since the virus has never spread outside of this system. In August 2004, a world-record-setting experiment (see also Bluetooth sniping) showed that the range of Class 2 Bluetooth radios could be extended far beyond the nominal specification with directional antennas and signal amplifiers.
This poses a potential security threat because it enables attackers to access vulnerable Bluetooth devices from a distance beyond expectation. The attacker must also be able to receive information from the victim to set up a connection. No attack can be made against a Bluetooth device unless the attacker knows its Bluetooth address and which channels to transmit on, although these can be deduced within a few minutes if the device is in use.
2005
In January 2005, a mobile malware worm known as Lasco surfaced. The worm began targeting mobile phones running Symbian OS (Series 60 platform), using Bluetooth-enabled devices to replicate itself and spread to other devices. The worm is self-installing and begins once the mobile user approves the transfer of the file (Velasco.sis) from another device. Once installed, the worm begins looking for other Bluetooth-enabled devices to infect. Additionally, the worm infects other .SIS files on the device, allowing replication to another device through the use of removable media (Secure Digital, CompactFlash, etc.). The worm can render the mobile device unstable.
In April 2005, Cambridge University security researchers published results of their actual implementation of passive attacks against the PIN-based pairing between commercial Bluetooth devices. They confirmed that attacks are practicably fast, and the Bluetooth symmetric key establishment method is vulnerable. To rectify this vulnerability, they designed an implementation that showed that stronger, asymmetric key establishment is feasible for certain classes of devices, such as mobile phones.
In June 2005, Yaniv Shaked and Avishai Wool published a paper describing both passive and active methods for obtaining the PIN for a Bluetooth link. The passive attack allows a suitably equipped attacker to eavesdrop on communications and spoof if the attacker was present at the time of initial pairing. The active method makes use of a specially constructed message that must be inserted at a specific point in the protocol, to make the master and slave repeat the pairing process. After that, the first method can be used to crack the PIN. This attack's major weakness is that it requires the user of the devices under attack to re-enter the PIN during the attack when the device prompts them to. Also, this active attack probably requires custom hardware, since most commercially available Bluetooth devices are not capable of the timing necessary.
In August 2005, police in Cambridgeshire, England, issued warnings about thieves using Bluetooth-enabled phones to track other devices left in cars. Police advised users to ensure that any mobile networking connections are deactivated if laptops and other devices are left in this way.
2006
In April 2006, researchers from Secure Network and F-Secure published a report that warns of the large number of devices left in a visible state, and issued statistics on the spread of various Bluetooth services and the ease of spread of an eventual Bluetooth worm.
In October 2006, at the Luxembourgish Hack.lu Security Conference, Kevin Finistere and Thierry Zoller demonstrated and released a remote root shell via Bluetooth on Mac OS X v10.3.9 and v10.4. They also demonstrated the first Bluetooth PIN and link-key cracker, which is based on the research of Wool and Shaked.
2017
In April 2017, security researchers at Armis discovered multiple exploits in the Bluetooth software in various platforms, including Microsoft Windows, Linux, Apple iOS, and Google Android. These vulnerabilities are collectively called "BlueBorne". The exploits allow an attacker to connect to devices or systems without authentication and can give them "virtually full control over the device". Armis contacted Google, Microsoft, Apple, Samsung and Linux developers allowing them to patch their software before the coordinated announcement of the vulnerabilities on 12 September 2017.
2018
In July 2018, Lior Neumann and Eli Biham, researchers at the Technion – Israel Institute of Technology identified a security vulnerability in the latest Bluetooth pairing procedures: Secure Simple Pairing and LE Secure Connections.
Also, in October 2018, Karim Lounis, a network security researcher at Queen's University, identified a security vulnerability, called CDV (Connection Dumping Vulnerability), on various Bluetooth devices that allows an attacker to tear down an existing Bluetooth connection and cause the deauthentication and disconnection of the involved devices. The researcher demonstrated the attack on various devices of different categories and from different manufacturers.
2019
In August 2019, security researchers at the Singapore University of Technology and Design, Helmholtz Center for Information Security, and University of Oxford discovered a vulnerability, called KNOB (Key Negotiation Of Bluetooth) in the key negotiation that would "brute force the negotiated encryption keys, decrypt the eavesdropped ciphertext, and inject valid encrypted messages (in real-time)".
Google released an Android security patch on 5 August 2019, which removed this vulnerability.
Health concerns
Bluetooth uses the radio frequency spectrum in the 2.402 GHz to 2.480 GHz range, which is non-ionizing radiation, of similar bandwidth to that used by wireless and mobile phones. No specific harm has been demonstrated, even though wireless transmission has been included by IARC in the possible carcinogen list. Maximum power output from a Bluetooth radio is 100 mW for class 1, 2.5 mW for class 2, and 1 mW for class 3 devices. Even the maximum power output of class 1 is a lower level than the lowest-powered mobile phones. UMTS and W-CDMA output 250 mW, GSM1800/1900 outputs 1000 mW, and GSM850/900 outputs 2000 mW.
Award programs
The Bluetooth Innovation World Cup, a marketing initiative of the Bluetooth Special Interest Group (SIG), was an international competition that encouraged the development of innovations for applications leveraging Bluetooth technology in sports, fitness and health care products. The competition aimed to stimulate new markets.
The Bluetooth Innovation World Cup morphed into the Bluetooth Breakthrough Awards in 2013. Bluetooth SIG subsequently launched the Imagine Blue Award in 2016 at Bluetooth World. The Bluetooth Breakthrough Awards program highlights the most innovative products and applications available today, prototypes coming soon, and student-led projects in the making.
See also
ANT+
Bluetooth stack – building blocks that make up the various implementations of the Bluetooth protocol
List of Bluetooth profiles – features used within the Bluetooth stack
Bluesniping
BlueSoleil – proprietary Bluetooth driver
Bluetooth Low Energy beacons (AltBeacon, iBeacon, Eddystone)
Bluetooth mesh networking
Continua Health Alliance
DASH7
Headset (audio)
Wi-Fi hotspot
Java APIs for Bluetooth
Key finder
Li-Fi
List of Bluetooth protocols
MyriaNed
Near-field communication
NearLink
RuBee – secure wireless protocol alternative
Tethering
Thread (network protocol)
Wi-Fi HaLow
Zigbee – low-power lightweight wireless protocol in the ISM band based on IEEE 802.15.4
Notes
References
External links
Specifications at Bluetooth SIG
Bluetooth
Mobile computers
Networking standards
Wireless communication systems
Telecommunications-related introductions in 1989
Swedish inventions |
The bilinear transform (also known as Tustin's method, after Arnold Tustin) is used in digital signal processing and discrete-time control theory to transform continuous-time system representations to discrete-time and vice versa.
The bilinear transform is a special case of a conformal mapping (namely, a Möbius transformation), often used to convert a transfer function H_a(s) of a linear, time-invariant (LTI) filter in the continuous-time domain (often called an analog filter) to a transfer function H_d(z) of a linear, shift-invariant filter in the discrete-time domain (often called a digital filter although there are analog filters constructed with switched capacitors that are discrete-time filters). It maps positions on the imaginary axis, Re(s) = 0, in the s-plane to the unit circle, |z| = 1, in the z-plane. Other bilinear transforms can be used to warp the frequency response of any discrete-time linear system (for example to approximate the non-linear frequency resolution of the human auditory system) and are implementable in the discrete domain by replacing a system's unit delays with first order all-pass filters.
The transform preserves stability and maps every point of the frequency response of the continuous-time filter, to a corresponding point in the frequency response of the discrete-time filter, although to a somewhat different frequency, as shown in the Frequency warping section below. This means that for every feature that one sees in the frequency response of the analog filter, there is a corresponding feature, with identical gain and phase shift, in the frequency response of the digital filter but, perhaps, at a somewhat different frequency. This is barely noticeable at low frequencies but is quite evident at frequencies close to the Nyquist frequency.
Discrete-time approximation
The bilinear transform is a first-order Padé approximant of the natural logarithm function that is an exact mapping of the z-plane to the s-plane. When the Laplace transform is performed on a discrete-time signal (with each element of the discrete-time sequence attached to a correspondingly delayed unit impulse), the result is precisely the Z transform of the discrete-time sequence with the substitution of

z = e^{sT} \approx \frac{1 + sT/2}{1 - sT/2}
where T is the numerical integration step size of the trapezoidal rule used in the bilinear transform derivation; or, in other words, the sampling period. The above bilinear approximation can be solved for sT, or a similar approximation for s = (1/T) \ln(z) can be performed.
The inverse of this mapping (and its first-order bilinear approximation) is

s = \frac{1}{T} \ln(z) \approx \frac{2}{T} \cdot \frac{z - 1}{z + 1} = \frac{2}{T} \cdot \frac{1 - z^{-1}}{1 + z^{-1}}
The bilinear transform essentially uses this first order approximation and substitutes into the continuous-time transfer function, H_a(s),

s \leftarrow \frac{2}{T} \cdot \frac{z - 1}{z + 1}

That is,

H_d(z) = H_a(s) \Big|_{s = \frac{2}{T} \frac{z - 1}{z + 1}} = H_a\!\left(\frac{2}{T} \cdot \frac{z - 1}{z + 1}\right)
Stability and minimum-phase property preserved
A continuous-time causal filter is stable if the poles of its transfer function fall in the left half of the complex s-plane. A discrete-time causal filter is stable if the poles of its transfer function fall inside the unit circle in the complex z-plane. The bilinear transform maps the left half of the complex s-plane to the interior of the unit circle in the z-plane. Thus, filters designed in the continuous-time domain that are stable are converted to filters in the discrete-time domain that preserve that stability.
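This mapping can be checked numerically. The sketch below, assuming an arbitrary sampling period T = 1 chosen only for illustration, applies z = (1 + sT/2)/(1 - sT/2) to a few left-half-plane poles and confirms that each image lies inside the unit circle.

```python
# Numeric check that left-half-plane poles map inside the unit circle (T = 1 assumed).
T = 1.0

def s_to_z(s):
    return (1 + s * T / 2) / (1 - s * T / 2)    # first-order bilinear mapping

for s in (-0.5, -2 + 3j, -0.01 + 10j):          # stable continuous-time poles
    assert abs(s_to_z(s)) < 1                   # stable discrete-time poles

print(abs(s_to_z(0.5 + 1j)))                    # a right-half-plane pole maps outside (> 1)
```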
Likewise, a continuous-time filter is minimum-phase if the zeros of its transfer function fall in the left half of the complex s-plane. A discrete-time filter is minimum-phase if the zeros of its transfer function fall inside the unit circle in the complex z-plane. Then the same mapping property assures that continuous-time filters that are minimum-phase are converted to discrete-time filters that preserve that property of being minimum-phase.
Transformation of a General LTI System
A general LTI system has the transfer function

H_a(s) = \frac{b_0 + b_1 s + b_2 s^2 + \cdots + b_Q s^Q}{a_0 + a_1 s + a_2 s^2 + \cdots + a_P s^P}
The order of the transfer function is the greater of P and Q (in practice this is most likely P, as the transfer function must be proper for the system to be stable). Applying the bilinear transform

s = K \frac{z - 1}{z + 1},

where K is defined as either 2/T or, if frequency warping is being used, \omega_0 / \tan(\omega_0 T / 2), gives

H_d(z) = \frac{b_0 + b_1 K \frac{z - 1}{z + 1} + b_2 \left(K \frac{z - 1}{z + 1}\right)^2 + \cdots + b_Q \left(K \frac{z - 1}{z + 1}\right)^Q}{a_0 + a_1 K \frac{z - 1}{z + 1} + a_2 \left(K \frac{z - 1}{z + 1}\right)^2 + \cdots + a_P \left(K \frac{z - 1}{z + 1}\right)^P}
Multiplying the numerator and denominator by the largest power of (z + 1) present, (z + 1)^N with N = \max(P, Q), gives

H_d(z) = \frac{\sum_{k=0}^{Q} b_k K^k (z - 1)^k (z + 1)^{N - k}}{\sum_{k=0}^{P} a_k K^k (z - 1)^k (z + 1)^{N - k}}

It can be seen here that after the transformation, the degree of the numerator and denominator are both N.
Consider then the pole-zero form of the continuous-time transfer function

H_a(s) = k \, \frac{(s - \xi_1)(s - \xi_2) \cdots (s - \xi_Q)}{(s - p_1)(s - p_2) \cdots (s - p_P)}
The roots of the numerator and denominator polynomials, \xi_i and p_i, are the zeros and poles of the system. The bilinear transform is a one-to-one mapping, hence these can be transformed to the z-domain using

z = \frac{K + s}{K - s},

yielding some of the discretized transfer function's zeros and poles,

\xi'_i = \frac{K + \xi_i}{K - \xi_i} \quad \text{and} \quad p'_i = \frac{K + p_i}{K - p_i}
As described above, the degree of the numerator and denominator are now both N; in other words, there is now an equal number of zeros and poles. The multiplication by (z + 1)^N means the additional zeros or poles are located at

z = -1
Given the full set of zeros and poles, the z-domain transfer function is then

H_d(z) = k \, \frac{\prod_{i=1}^{Q} (K - \xi_i)}{\prod_{i=1}^{P} (K - p_i)} \cdot \frac{(z + 1)^{N - Q} \prod_{i=1}^{Q} (z - \xi'_i)}{(z + 1)^{N - P} \prod_{i=1}^{P} (z - p'_i)}
Example
As an example take a simple low-pass RC filter. This continuous-time filter has a transfer function

H_a(s) = \frac{1}{1 + RC\, s}
If we wish to implement this filter as a digital filter, we can apply the bilinear transform by substituting s = \frac{2}{T} \frac{1 - z^{-1}}{1 + z^{-1}} into the formula above; after some reworking, we get the following filter representation:
H_d(z) = H_a\!\left(\frac{2}{T} \frac{1 - z^{-1}}{1 + z^{-1}}\right)

= \frac{1}{1 + RC \cdot \frac{2}{T} \frac{1 - z^{-1}}{1 + z^{-1}}}

= \frac{T (1 + z^{-1})}{T (1 + z^{-1}) + 2 RC (1 - z^{-1})}

= \frac{T + T z^{-1}}{(T + 2 RC) + (T - 2 RC) z^{-1}}
The coefficients of the denominator are the 'feed-backward' coefficients and the coefficients of the numerator are the 'feed-forward' coefficients used to implement a real-time digital filter.
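The same discretization can be reproduced with standard tooling. The sketch below uses SciPy's scipy.signal.bilinear, which applies the substitution s = 2 f_s (z - 1)/(z + 1); the time constant and sampling rate are example values chosen here, not taken from the text.

```python
# Sketch: discretizing the RC low-pass H_a(s) = 1/(RC*s + 1) with SciPy's bilinear().
import numpy as np
from scipy.signal import bilinear, freqz

RC, fs = 1.0e-3, 48_000.0                # example 1 ms time constant, 48 kHz sampling
b_analog, a_analog = [1.0], [RC, 1.0]    # analog numerator / denominator in powers of s
b_digital, a_digital = bilinear(b_analog, a_analog, fs)

print(b_digital, a_digital)              # feed-forward / feed-backward coefficients
w, h = freqz(b_digital, a_digital, worN=8)
print(abs(h[0]))                         # unity gain at DC, as expected
```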
Transformation for a general first-order continuous-time filter
It is possible to relate the coefficients of a continuous-time, analog filter with those of a similar discrete-time digital filter created through the bilinear transform process. Transforming a general, first-order continuous-time filter with the given transfer function

H_a(s) = \frac{b_0 s + b_1}{a_0 s + a_1}
using the bilinear transform (without prewarping any frequency specification) requires the substitution of

s \leftarrow K \frac{1 - z^{-1}}{1 + z^{-1}}

where

K = \frac{2}{T}.
However, if the frequency warping compensation as described below is used in the bilinear transform, so that both analog and digital filter gain and phase agree at frequency \omega_0, then

K = \frac{\omega_0}{\tan\!\left(\frac{\omega_0 T}{2}\right)}.
This results in a discrete-time digital filter with coefficients expressed in terms of the coefficients of the original continuous time filter:

H_d(z) = \frac{(b_0 K + b_1) + (b_1 - b_0 K) z^{-1}}{(a_0 K + a_1) + (a_1 - a_0 K) z^{-1}}
Normally the constant term in the denominator must be normalized to 1 before deriving the corresponding difference equation. This results in

H_d(z) = \frac{\frac{b_0 K + b_1}{a_0 K + a_1} + \frac{b_1 - b_0 K}{a_0 K + a_1} z^{-1}}{1 + \frac{a_1 - a_0 K}{a_0 K + a_1} z^{-1}}
The difference equation (using the Direct form I) is

y[n] = \frac{(b_0 K + b_1)\, x[n] + (b_1 - b_0 K)\, x[n-1] - (a_1 - a_0 K)\, y[n-1]}{a_0 K + a_1}
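A direct implementation of this update is straightforward; the sketch below runs the first-order Direct Form I recurrence with example coefficients (the numeric values are chosen here for illustration and are not from the text).

```python
# Sketch: first-order Direct Form I recurrence derived above.
def first_order_df1(x, a0, a1, b0, b1, K):
    d = a0 * K + a1                      # normalizing constant of the denominator
    y, x_prev, y_prev = [], 0.0, 0.0
    for xn in x:
        yn = ((b0 * K + b1) * xn + (b1 - b0 * K) * x_prev
              - (a1 - a0 * K) * y_prev) / d
        y.append(yn)
        x_prev, y_prev = xn, yn
    return y

# Example: RC low-pass H_a(s) = 1/(RC*s + 1) with RC = 1 ms, T = 1/48000 s, K = 2/T.
print(first_order_df1([1.0, 0.0, 0.0], a0=1e-3, a1=1.0, b0=0.0, b1=1.0, K=2 * 48_000))
```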
General second-order biquad transformation
A similar process can be used for a general second-order filter with the given transfer function

H_a(s) = \frac{b_0 s^2 + b_1 s + b_2}{a_0 s^2 + a_1 s + a_2}
This results in a discrete-time digital biquad filter with coefficients expressed in terms of the coefficients of the original continuous time filter:

H_d(z) = \frac{(b_0 K^2 + b_1 K + b_2) + (2 b_2 - 2 b_0 K^2) z^{-1} + (b_0 K^2 - b_1 K + b_2) z^{-2}}{(a_0 K^2 + a_1 K + a_2) + (2 a_2 - 2 a_0 K^2) z^{-1} + (a_0 K^2 - a_1 K + a_2) z^{-2}}
Again, the constant term in the denominator is generally normalized to 1 before deriving the corresponding difference equation; dividing every coefficient by (a_0 K^2 + a_1 K + a_2) results in

H_d(z) = \frac{b'_0 + b'_1 z^{-1} + b'_2 z^{-2}}{1 + a'_1 z^{-1} + a'_2 z^{-2}}
The difference equation (using the Direct form I) is

y[n] = b'_0\, x[n] + b'_1\, x[n-1] + b'_2\, x[n-2] - a'_1\, y[n-1] - a'_2\, y[n-2]
Frequency warping
To determine the frequency response of a continuous-time filter, the transfer function H_a(s) is evaluated at s = j\omega_a, which is on the imaginary axis. Likewise, to determine the frequency response of a discrete-time filter, the transfer function H_d(z) is evaluated at z = e^{j\omega_d T}, which is on the unit circle, |z| = 1. The bilinear transform maps the imaginary axis of the s-plane (the domain of H_a(j\omega_a)) to the unit circle of the z-plane, |z| = 1 (the domain of H_d(e^{j\omega_d T})), but it is not the same mapping z = e^{sT} that also maps the imaginary axis to the unit circle. When an actual frequency \omega_d is input to the discrete-time filter designed by use of the bilinear transform, it is desired to know at what frequency, \omega_a, of the continuous-time filter this \omega_d is mapped to.
H_d(e^{j\omega_d T}) = H_a\!\left(\frac{2}{T} \frac{e^{j\omega_d T} - 1}{e^{j\omega_d T} + 1}\right)

= H_a\!\left(\frac{2}{T} \cdot \frac{e^{j\omega_d T/2} \left(e^{j\omega_d T/2} - e^{-j\omega_d T/2}\right)}{e^{j\omega_d T/2} \left(e^{j\omega_d T/2} + e^{-j\omega_d T/2}\right)}\right)

= H_a\!\left(\frac{2}{T} \cdot \frac{2j \sin(\omega_d T/2)}{2 \cos(\omega_d T/2)}\right)

= H_a\!\left(j \frac{2}{T} \tan\!\left(\frac{\omega_d T}{2}\right)\right)
This shows that every point on the unit circle in the discrete-time filter z-plane, z = e^{j\omega_d T}, is mapped to a point on the imaginary axis of the continuous-time filter s-plane, s = j\omega_a. That is, the discrete-time to continuous-time frequency mapping of the bilinear transform is

\omega_a = \frac{2}{T} \tan\!\left(\frac{\omega_d T}{2}\right)
and the inverse mapping is

\omega_d = \frac{2}{T} \arctan\!\left(\frac{\omega_a T}{2}\right)
The discrete-time filter behaves at frequency \omega_d the same way that the continuous-time filter behaves at frequency (2/T) \tan(\omega_d T / 2). Specifically, the gain and phase shift that the discrete-time filter has at frequency \omega_d is the same gain and phase shift that the continuous-time filter has at frequency (2/T) \tan(\omega_d T / 2). This means that every feature, every "bump" that is visible in the frequency response of the continuous-time filter is also visible in the discrete-time filter, but at a different frequency. For low frequencies (that is, when \omega_d \ll 2/T or \omega_a \ll 2/T), the features are mapped to a slightly different frequency; \omega_a \approx \omega_d.
One can see that the entire continuous frequency range

-\infty < \omega_a < +\infty

is mapped onto the fundamental frequency interval

-\frac{\pi}{T} < \omega_d < +\frac{\pi}{T}
The continuous-time filter frequency \omega_a = 0 corresponds to the discrete-time filter frequency \omega_d = 0, and the continuous-time filter frequency \omega_a = \pm\infty corresponds to the discrete-time filter frequency \omega_d = \pm\pi/T.
One can also see that there is a nonlinear relationship between \omega_a and \omega_d. This effect of the bilinear transform is called frequency warping. The continuous-time filter can be designed to compensate for this frequency warping by setting \omega_a = \frac{2}{T} \tan\!\left(\frac{\omega_d T}{2}\right) for every frequency specification that the designer has control over (such as corner frequency or center frequency). This is called pre-warping the filter design.
It is possible, however, to compensate for the frequency warping by pre-warping a frequency specification \omega_0 (usually a resonant frequency or the frequency of the most significant feature of the frequency response) of the continuous-time system. These pre-warped specifications may then be used in the bilinear transform to obtain the desired discrete-time system. When designing a digital filter as an approximation of a continuous time filter, the frequency response (both amplitude and phase) of the digital filter can be made to match the frequency response of the continuous filter at a specified frequency \omega_0, as well as matching at DC, if the following transform is substituted into the continuous filter transfer function:

s \leftarrow \frac{\omega_0}{\tan\!\left(\frac{\omega_0 T}{2}\right)} \cdot \frac{z - 1}{z + 1}

This is a modified version of Tustin's transform shown above.
However, note that this transform becomes the original transform

s \leftarrow \frac{2}{T} \cdot \frac{z - 1}{z + 1}

as \omega_0 \to 0.
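As a short illustration of pre-warping (assuming an example 48 kHz sampling rate and a 10 kHz corner frequency, both chosen here rather than taken from the text), the analog corner is warped with \omega_a = (2/T) \tan(\omega_d T / 2) before the ordinary bilinear transform, so the resulting digital filter sits at exactly -3 dB at the requested digital frequency.

```python
# Sketch: pre-warping the corner frequency before applying SciPy's bilinear transform.
import numpy as np
from scipy.signal import bilinear, freqz

fs = 48_000.0
T = 1.0 / fs
f_c = 10_000.0                                  # desired digital corner frequency, Hz
w_d = 2 * np.pi * f_c                           # rad/s
w_a = (2 / T) * np.tan(w_d * T / 2)             # pre-warped analog corner, rad/s

b_d, a_d = bilinear([w_a], [1.0, w_a], fs)      # analog prototype H_a(s) = w_a / (s + w_a)
w, h = freqz(b_d, a_d, worN=[w_d * T])          # evaluate at the digital corner (rad/sample)
print(abs(h[0]))                                # ~0.707, i.e. -3 dB exactly at f_c
```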
The main advantage of the warping phenomenon is the absence of aliasing distortion of the frequency response characteristic, such as observed with Impulse invariance.
See also
Impulse invariance
Matched Z-transform method
References
External links
MIT OpenCourseWare Signal Processing: Continuous to Discrete Filter Design
Lecture Notes on Discrete Equivalents
The Art of VA Filter Design
Digital signal processing
Transforms
Control theory |
The economy of the Cayman Islands, a British overseas territory located in the western Caribbean Sea, is mainly fueled by the tourism sector and by the financial services sector, together representing 50–60 percent of the country's gross domestic product (GDP). The Cayman Islands Investment Bureau, a government agency, has been established with the mandate of promoting investment and economic development in the territory. Because of the territory's strong economy and its popularity as a banking destination for wealthy individuals and businesses, it is often dubbed the "financial capital" of the Caribbean.
The emergence of what is now considered the Cayman Islands' "twin pillars of economic development" (tourism and international finance) started in the 1950s with the introduction of modern transportation and telecommunications.
History
From the earliest settlement of the Cayman Islands, economic activity was hindered by isolation and a limited natural resource base. The harvesting of sea turtles to resupply passing sailing ships was the first major economic activity on the islands, but local stocks were depleted by the 1790s. Agriculture, while sufficient to support the small early settler population, has always been limited by the scarcity of arable land. Fishing, shipbuilding, and cotton production boosted the economy during the early days of settlement. In addition, settlers scavenged shipwreck remains from the surrounding coral reefs.
The boom in the Cayman Islands' international finance industry can also be at least partly attributed to the British overseas territory having no direct taxation. A popular legend attributes the tax-free status to the heroic acts of the inhabitants during a maritime tragedy in 1794, often referred to as "Wreck of the Ten Sails". The wreck involved nine British merchant vessels and their naval escort, the frigate HMS Convert, that ran aground on the reefs off Grand Cayman. Due to the rescue efforts by the Caymanians using canoes, the loss of life was limited to eight. However, records from the colonial era indicate that Cayman Islands, then a dependency of Jamaica, was not tax-exempt during the period that followed. In 1803, the inhabitants signed a petition addressed to the Jamaican governor asking him to grant them a tax exemption from the "Transient Tax on Wreck Goods".
Sir Vassel Johnson, the second Caymanian to be knighted, was a pioneer of Cayman's financial services industry. Former Cayman Islands Governor Stuart Jack said: 'As one of the architects of modern Cayman, especially the financial industry, Sir Vassel guided the steady growth of these Islands as the first financial secretary. His remarkable vision set the foundation for the prosperity and economic stability of these islands. Without his input, Cayman might well have remained the islands that time forgot.'
International finance
The Cayman Islands' tax-free status has attracted numerous banks and other companies to its shores. More than 92,000 companies were registered in the Cayman Islands as of 2014, including almost 600 banks and trust companies, with banking assets exceeding $500 billion. Numerous large corporations are based in the Cayman Islands, including, for example, Semiconductor Manufacturing International Corporation (SMIC). The Cayman Islands Stock Exchange was opened in 1997.
Financial services industry
The Cayman Islands is a major international financial centre. The largest sectors are "banking, hedge fund formation and investment, structured finance and securitisation, captive insurance, and general corporate activities". Regulation and supervision of the financial services industry is the responsibility of the Cayman Islands Monetary Authority (CIMA). Sir Vassel Johnson was a pioneer of Cayman's financial services industry.
Sir Vassel, who became the only Caymanian ever knighted in 1994, served as the Cayman Islands financial secretary from 1965 through 1982 and then as an Executive Council member from 1984 through 1988. In his government roles, Sir Vassel was a driving force in shaping the Cayman Islands financial services industry.
The Cayman Islands is the fifth-largest banking centre in the world, with $1.5 trillion in banking liabilities. In March 2017 there were 158 banks, 11 of which were licensed to conduct banking activities with domestic (Cayman-based) and international clients, and the remaining 147 were licensed to operate on an international basis with only limited domestic activity. Financial services generated KYD$1.2 billion of GDP in 2007 (55% of the total economy), 36% of all employment and 40% of all government revenue. In 2010, the country ranked fifth internationally in terms of value of liabilities booked and sixth in terms of assets booked. It has branches of 40 of the world's 50 largest banks. The Cayman Islands is the second-largest captive domicile (Bermuda is the largest) in the world with more than 700 captives, writing more than US$7.7 billion of premiums and with US$36.8 billion of assets under management.
There are a number of service providers. These include global financial institutions such as HSBC, Deutsche Bank, UBS, and Goldman Sachs; over 80 administrators; leading accountancy practices (including the Big Four auditors); and offshore law practices such as Maples & Calder. They also include wealth-management providers, such as Rothschild, offering private banking and financial advice.
Since the introduction of the Mutual Funds Law in 1993, which has been copied by jurisdictions around the world, the Cayman Islands has grown to be the world's leading offshore hedge fund jurisdiction. In June 2008, it passed 10,000 hedge fund registrations, and over the year ending June 2008 CIMA reported a net growth rate of 12% for hedge funds.
Starting in the mid-late 1990s, offshore financial centres, such as the Cayman Islands, came under increasing pressure from the OECD for their allegedly harmful tax regimes, where the OECD wished to prevent low-tax regimes from having an advantage in the global marketplace. The OECD threatened to place the Cayman Islands and other financial centres on a "black list" and impose sanctions against them. However, the Cayman Islands successfully avoided being placed on the OECD black list in 2000 by committing to regulatory reform to improve transparency and begin information exchange with OECD member countries about their citizens.
In 2004, under pressure from the UK, the Cayman Islands agreed in principle to implement the European Union Savings Directive (EUSD), but only after securing some important benefits for the financial services industry in the Cayman Islands. As the Cayman Islands is not subject to EU laws, the implementation of the EUSD is by way of bilateral agreements between each EU member state and the Cayman Islands. The government of the Cayman Islands agreed on a model agreement, which set out how the EUSD would be implemented with the Cayman Islands.
A report published by the International Monetary Fund (IMF), in March 2005, assessing supervision and regulation in the Cayman Islands' banking, insurance and securities industries, as well as its money laundering regime, recognised the jurisdiction's comprehensive regulatory and compliance frameworks. "An extensive program of legislative, rule and guideline development has introduced an increasingly effective system of regulation, both formalizing earlier practices and introducing enhanced procedures", noted IMF assessors. The report further stated that "the supervisory system benefits from a well-developed banking infrastructure with an internationally experienced and qualified workforce as well as experienced lawyers, accountants and auditors", adding that, "the overall compliance culture within Cayman is very strong, including the compliance culture related to AML (anti-money laundering) obligations".
On 4 May 2009, the United States President, Barack Obama, declared his intention to curb the use of financial centres by multinational corporations. In his speech, he singled out the Cayman Islands as a tax shelter. The next day, the Cayman Islands Financial Services Association submitted an open letter to the president detailing the Cayman Islands' role in international finance and its value to the US financial system.
The Cayman Islands was ranked as the world's second most significant tax haven on the Tax Justice Network's "Financial Secrecy Index" from 2011, scoring slightly higher than Luxembourg and falling behind only Switzerland. In 2013, the Cayman Islands was ranked by the Financial Secrecy Index as the fourth safest tax haven in the world, behind Hong Kong but ahead of Singapore. In the first conviction of a non-Swiss financial institution for US tax evasion conspiracy, two Cayman Islands financial institutions pleaded guilty in Manhattan Federal Court in 2016 to conspiring to hide more than $130 million in Cayman Islands bank accounts. The companies admitted to helping US clients hide assets in offshore accounts, and agreed to produce account files of non-compliant US taxpayers.
Foreign Account Tax Compliance Act
On 30 June 2014, the tax jurisdiction of the Cayman Islands was deemed to have an inter-governmental agreement (IGA) with the United States of America with respect to the "Foreign Account Tax Compliance Act" of the United States of America.
The Model 1 Agreement recognizes:
The Tax Information Exchange Agreement (TIEA) between the United States of America and the Cayman Islands, which was signed in London, United Kingdom on 29 November 2013 (page 1, clause 2 of the FATCA Agreement).
The Government of the United Kingdom of Great Britain and Northern Ireland provided a copy of the Letter of Entrustment, which was sent to the Government of the Cayman Islands, to the Government of the United States of America "via diplomatic note of October 16, 2013".
The Letter of Entrustment, dated 20 October 2013, by which the Government of the United Kingdom authorized the Government of the Cayman Islands to sign an agreement on information exchange to facilitate the implementation of the Foreign Account Tax Compliance Act (page 1, clause 10).
On 26 March 2017, the US Treasury site disclosed that the Model 1 agreement and related agreement were "In Force" on 1 July 2014.
Sanctions and Anti-Money Laundering Act
Under the UK Sanctions and Anti-Money Laundering Act of 2018, beneficial ownership of companies in British overseas territories such as the Cayman Islands must be publicly registered for disclosure by 31 December 2020. The Government of the Cayman Islands plans to challenge this law, arguing that it violates the Constitutional sovereignty granted to the islands. The British National Crime Agency said in September 2018 that the authorities in the Cayman Islands were not supplying information about the beneficial ownership of firms registered in the Cayman Islands.
Tourism
Tourism is also a mainstay, accounting for about 70% of GDP and 75% of foreign currency earnings. The tourist industry is aimed at the luxury market and caters mainly to visitors from North America. Unspoiled beaches, duty-free shopping, scuba diving, and deep-sea fishing draw almost a million visitors to the islands each year. Due to the well-developed tourist industry, many citizens work in service jobs in that sector.
Diversification
The Cayman Islands is seeking to diversify beyond its two traditional industries, and invest in health care and technology. Health City Cayman Islands, opened in 2014, is a medical tourism hospital in East End, led by surgeon Devi Shetty. Cayman Enterprise City is a special economic zone that was opened in 2011 for technology, finance, and education investment. Cayman Sea Salt (producing gourmet sea salt) and Cayman Logwood products are now made in the Cayman Islands.
Standard of living
Because the islands cannot produce enough goods to support the population, about 90% of their food and consumer goods must be imported. In addition, the islands have few natural fresh water resources; desalination of sea water is used to address this shortfall.
Despite those challenges, the Caymanians enjoy one of the highest outputs per capita and one of the highest standards of living in the world.
Education is compulsory to the age of 16 and is free to all Caymanian children. Most schools follow the British educational system. Ten primary schools, one special-education school and two high schools (junior high and senior high) are operated by the government, along with eight private high schools. In addition, there is a law school, a university college and a medical school.
Poverty relief is provided by the Needs Assessment Unit, a government agency established by the Poor Persons (Relief) Law in January 1964.
References
See also
Economy of the Caribbean
Cayman Islands dollar
Cayman Islands Monetary Authority
Cayman Islands Stock Exchange
Central banks and currencies of the Caribbean
List of countries by credit rating
List of Commonwealth of Nations countries by GDP
List of Latin American and Caribbean countries by GDP growth
List of Latin American and Caribbean countries by GDP (nominal)
List of Latin American and Caribbean countries by GDP (PPP)
List of countries by tax revenue as percentage of GDP
List of countries by future gross government debt
List of countries by leading trade partners
The R5000 is a 64-bit, bi-endian, two-issue, in-order superscalar microprocessor that implements the MIPS IV instruction set architecture (ISA). It was developed by Quantum Effect Design (QED) in 1996. The project was funded by MIPS Technologies, Inc. (MTI), which was also the licensor; MTI licensed the design to Integrated Device Technology (IDT), NEC, NKK, and Toshiba. The R5000 succeeded the QED R4600 and R4700 as their flagship high-end embedded microprocessor. IDT marketed its version of the R5000 as the 79RV5000, NEC as the VR5000, NKK as the NR5000, and Toshiba as the TX5000. The R5000 was sold to PMC-Sierra when that company acquired QED. Derivatives of the R5000 remain in production for embedded systems.
Users
Users of the R5000 in workstation and server computers were Silicon Graphics, Inc. (SGI) and Siemens-Nixdorf. SGI used the R5000 in their O2 and Indy low-end workstations. The R5000 was also used in embedded systems such as network routers and high-end printers. The R5000 also found its way into the arcade gaming industry: R5000-powered mainboards were used by Atari and Midway. Initially the Cobalt Qube and Cobalt RaQ used derivative models, the RM5230 and RM5231. The Qube 2700 used the RM5230 microprocessor, whereas the Qube 2 used the RM5231. The original RaQ systems were equipped with RM5230 or RM5231 CPUs, but later models used AMD K6-2 chips and eventually Intel Pentium III CPUs for the final models.
History
The original roadmap called for 200 MHz operation in early 1996 and 250 MHz in late 1996, to be succeeded in 1997 by the R5000A. The R5000 was introduced in January 1996 and failed to achieve 200 MHz, topping out at 180 MHz. When positioned as a low-end workstation microprocessor, the R5000 competed with the IBM and Motorola PowerPC 604, the HP PA-7300LC and the Intel Pentium Pro.
Description
The R5000 is a two-way superscalar design that executes instructions in order. It can simultaneously issue one integer and one floating-point instruction. It has one simple pipeline for integer instructions and another for floating-point instructions, saving transistors and die area to reduce cost. The R5000 does not perform dynamic branch prediction, also for cost reasons. Instead it uses a static approach, relying on the hints encoded by the compiler in the branch-likely instructions first introduced in the MIPS II architecture to determine whether a branch is likely to be taken.
The R5000 had large L1 caches, a distinct characteristic of QED, whose designers favored simple designs with large caches. The R5000 had two L1 caches, one for instructions and the other for data. Both have a capacity of 32 KB. The caches are two-way set-associative, have a 32-byte line size, and are virtually indexed, physically tagged. Instructions are predecoded as they enter the instruction cache by appending four bits to each instruction. These four bits specify whether instructions can be issued together and which execution unit they are executed by. This assisted superscalar instruction issue by moving some of the dependency and conflict checking out of the critical path.
The integer unit executes most instructions with a one cycle latency and throughput except for multiply and divide. 32-bit multiplies have a five-cycle latency and a four-cycle throughput. 64-bit multiplies have an extra four cycles of latency and half the throughput. Divides have a 36-cycle latency and throughput for 32-bit integers, and for 64-bit integers, they are increased to 68 cycles.
The floating-point unit (FPU) was a fast single-precision (32-bit) design, for reduced cost and to benefit SGI, whose mid-range 3D graphics workstations relied mostly on single-precision math for 3D graphics applications. It was fully pipelined, which made it significantly better than that of the R4700. The R5000 implements the multiply-add instruction of the MIPS IV ISA. Single-precision adds, multiplies and multiply-adds have a four-cycle latency and a one cycle throughput. Single-precision divides have a 21-cycle latency and a 19-cycle throughput, while square roots have a 26-cycle latency and a 38-cycle throughput. Division and square-root was not pipelined. Instructions that operate on double precision numbers have a significantly higher latency and lower throughput except for add, which has identical latency and throughput with single-precision add. Multiply and multiply-add have a five-cycle latency and a two-cycle throughput. Divide has a 36-cycle latency and a 34-cycle throughput. Square root has a 68-cycle latency and a 66-cycle throughput.
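The latency and repeat-rate figures above can be folded into a rough, illustrative timing model. The sketch below (Python, hypothetical code, not from any R5000 documentation) assumes N independent operations of a single kind issued back to back, so the first result appears after the latency and one further result completes every repeat interval; real code with data dependencies would behave differently.

```python
# Rough pipeline-timing model built from the latency / repeat-rate figures
# quoted above (in cycles). Illustrative only; assumes N independent
# operations of one kind issued back to back.
R5000_TIMING = {
    "int_mul32": (5, 4),    # (latency, repeat rate)
    "int_mul64": (9, 8),    # "extra four cycles of latency and half the throughput"
    "int_div32": (36, 36),
    "fp_add_sp": (4, 1),
    "fp_madd_sp": (4, 1),
    "fp_div_sp": (21, 19),
    "fp_mul_dp": (5, 2),
}

def cycles(op: str, n: int) -> int:
    """Cycles to finish n independent ops: first result after 'latency'
    cycles, then one result every 'repeat' cycles."""
    latency, repeat = R5000_TIMING[op]
    return latency + (n - 1) * repeat

print(cycles("fp_madd_sp", 100))  # 103 cycles for 100 single-precision multiply-adds
print(cycles("fp_mul_dp", 100))   # 203 cycles for the double-precision multiplies
```

Under this simplified model, the fully pipelined single-precision unit sustains one result per cycle, while the half-rate double-precision multiply roughly doubles the time for long independent sequences.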
The R5000 had an integrated L2 cache controller that supported capacities of 512 KB, 1 MB and 2 MB. The L2 cache shares the SysAD bus with the external interface. The cache was built with custom synchronous SRAMs (SSRAMs). The microprocessor uses the SysAD bus that is also used by several other MIPS microprocessors. The bus is multiplexed (address and data share the same set of wires) and can operate at clock frequencies up to 100 MHz. The initial R5000 did not support multiprocessing, but the package reserved eight pins for the future addition of this feature.
QED was a fabless company and did not fabricate their own designs. The R5000 was fabricated by IDT, NEC and NKK. All three companies fabricated the R5000 in a 0.35 μm complementary metal–oxide–semiconductor (CMOS) process, but with different process features. IDT fabricated the R5000 in a process with two levels of polysilicon and three levels of aluminium interconnect. The two levels of polysilicon enabled IDT to use a four-transistor SRAM cell, resulting in a transistor count of 3.6 million and a die that measured 8.7 mm by 9.7 mm (84.39 mm2). NEC and NKK fabricated the R5000 in a process with one level of polysilicon and three levels of aluminium interconnect. Without an extra level of polysilicon, both companies had to use a six-transistor SRAM cell, resulting in a transistor count of 5.0 million and a larger die with an area of around 87 mm2. Die sizes in the range of 80 to 90 mm2 were claimed by MTI. 0.8 million of the transistors in both versions were for logic, and the remainder contained in the caches. It was packaged in a 272-ball plastic ball grid array (BGA) or 272-pin plastic pin grid array (PGA). It was not pin-compatible with any previous MIPS microprocessor.
Derivatives
In the late 1990s, Quantum Effect Design acquired a license to manufacture and sell MIPS microprocessors from MTI and became a microprocessor vendor, changing its name to Quantum Effect Devices to reflect its new business model. The company's first products were members of the RM52xx family, which initially consisted of two models, the RM5230 and RM5260. These were announced on 24 March 1997. The RM5230 was initially available at 100 and 133 MHz, and the RM5260 at 133 and 150 MHz. On 29 September 1997, new 150 and 175 MHz RM5230s were introduced, as were 175 and 200 MHz RM5260s.
Both the RM5230 and RM5260 are derivatives of the R5000 and differ in the size of their primary caches (16 KB each instead of 32 KB), the width of their system interfaces (the RM5230 has a 32-bit 67 MHz SysAD bus, and the RM5260 a 64-bit 75 MHz SysAD bus), and the addition of multiply-add and three-operand multiply instructions for digital signal processing applications. These microprocessors were fabricated by the Taiwan Semiconductor Manufacturing Company (TSMC) in its 0.35 μm process with three levels of interconnect. They were packaged by Amkor Technology in its Power-Quad 4 packages, the RM5230 in a 128-pin version, and the RM5260 in a 208-pin version.
The RM52xx family was later joined by the RM5270, which was announced at the Embedded Systems Conference on 29 September 1997. Intended for high-end embedded applications, the RM5270 was available at 150 and 200 MHz. Improvements were the addition of an on-chip secondary cache controller that supported up to 2 MB of cache. The SysAD bus is 64 bits wide and can operate at 100 MHz. It was packaged in a 304-pin Super-BGA (SBGA) that was pin-compatible with the RM7000 and was offered as a migration path to the RM7000.
On 20 July 1998, the RM52x1 family was announced. The family consisted of the RM5231, RM5261, and RM5271. These microprocessors were derivatives of the corresponding devices from the RM52x0 family fabricated in a 0.25 μm process with four levels of metal. The RM5231 was initially available at 150, 200, and 250 MHz; whereas the RM5261 and RM5271 were available at 250 and 266 MHz. On 6 July 1999, a 300 MHz RM5271 was introduced, priced at US$140 in quantities of 10,000. The RM52x1 improved upon the previous family with larger 32 KB primary caches and a faster SysAD bus that supported clock rates up to 125 MHz.
After QED was acquired by PMC-Sierra, the RM52xx and RM52x1 families were continued as PMC-Sierra products. PMC-Sierra introduced two RM52x1 derivatives, the RM5231A and RM5261A, on 4 April 2001. These microprocessors were fabricated by TSMC in its 0.18 μm process and differ from the previous devices by featuring higher clock rates and lower power consumption. The RM5231A was available at clock rates of 250 to 350 MHz, and the RM5261A from 250 to 400 MHz.
The R5900, used in Sony's PlayStation 2 as the core of the Emotion Engine, is a modified version of the R5000 with a customized instruction/data cache arrangement and Sony's proprietary set of 107 vector SIMD multimedia instructions (MMI). Its custom FPU is not IEEE 754 compliant, unlike the FPUs used by the R5000. It also has a second MIPS core which acted as a sync controller for specialized vector coprocessors, important for 3D math, which at the time was principally computed on the CPU.
References
Computergram (8 January 1996). "MIPS Ready With R5000 Successor to the 4600/4700". Computer Business Review.
Gwennap, Linley (22 January 1996). "R5000 Improves FP for MIPS Midrange". Microprocessor Report, 10 (1).
Halfhill, Tom R. (April 1996). "R5000 Cuts 3-D Cost". Byte.
Halfhill, Tom R. (May 1996). "Mips R5000: Fast, Affordable 3-D". Byte, 161–162.
MIPS Technologies, Inc. MIPS R5000 Microprocessor Technical Backgrounder.
PMC-Sierra, Inc. (4 April 2001). "PMC-Sierra Ships Third Generation R5200A MIPS Microprocessors". Press release.
Quantum Effect Devices (24 March 1997). "QED Introduces RM52xx Microprocessor Family". Press release.
Quantum Effect Devices (29 September 1997). "QED Introduces RM5270 Superscalar 64-bit Microprocessor". Press release.
Quantum Effect Devices (20 July 1998). "QED Introduces The RM52x1 Microprocessor Family". Press release.
Quantum Effect Devices (6 July 1999). "QED's RM5271 Available Immediately at 300MHz". Press release.
MIPS implementations
Quantum Effect Devices microprocessors
Superscalar microprocessors
64-bit microprocessors
The École nationale supérieure d'informatique et de mathématiques appliquées, or Ensimag, is a prestigious French Grande École located in Grenoble, France. Ensimag is part of the Institut polytechnique de Grenoble (Grenoble INP). The school is one of the top French engineering institutions and specializes in computer science, applied mathematics and telecommunications.
In the fields of computer science and applied mathematics, Ensimag ranks first in France, as measured by the rank of its admitted students in the national entrance examinations, by the companies hiring its graduates, and by rankings published in specialized media.
Students are usually admitted to Ensimag competitively following two years of undergraduate studies in classes préparatoires aux grandes écoles. Studies at Ensimag are of three years' duration and lead to the French degree of "Diplôme National d'Ingénieur" (equivalent to a master's degree).
Grenoble, in the French Alps, has long been a pioneer of high-tech engineering education in France. The first French school of electrical engineering was created in Grenoble in 1900 (one of the first in the world, after MIT). In 1960 the eminent French mathematician Jean Kuntzmann founded Ensimag. Since then it has become the highest-ranking French engineering school in computer science and applied mathematics.
About 250 students graduate from the school each year across its different degree programmes, and the school counts more than 5,500 alumni worldwide.
Ensimag Graduate specializations
Ensimag's curriculum offers a variety of compulsory and elective advanced courses, making up specific profiles.
Most of the common core courses are taught in the first year and the first semester of the second year, allowing students to acquire the basics in applied mathematics and informatics.
Students then choose a graduate specialization.
Financial Engineering
Financial Mathematics
Mathematics and Informatics for Finance
Computer Systems for Finance
Software and Systems Engineering
Architecture of Complex Systems
Security
Information Systems
Mathematical modeling, Vision, Graphics and Simulation
Modeling, Calculus, Simulation
Images, Virtual Reality and Multimedia
Decision-making
Bio-informatics
Embedded Systems and Connected Devices
Software, hardware and systems for embedded and intelligent applications
High level modeling, virtual prototyping and validation of complex systems
Control theory and informatics
Architecture and telecommunication services
Networks transmission systems
International master’s programmes (Courses in English)
Master of Science in Informatics at Grenoble
Offered since September 2008 as a joint degree programme with Université Joseph Fourier, this is a highly competitive, two-year graduate programme offering training in the areas of:
Distributed Embedded Mobile and Interactive Systems
Graphics, Vision and Robotics
AI and the Web
Security and Cryptology of Information Systems. This program is common between the Grenoble INP and the Université Grenoble Alpes.
Website: http://mosig.imag.fr/
Master in Communication Systems Engineering
Offered jointly by Ensimag and Politecnico di Torino (Italy)
This course aims to train engineers to specialize in the design and management of communication systems, ranging from simple point-to-point transmissions to diversified telecommunications networks.
A four-semester course:
First and second semesters taught at Politecnico di Torino
Third semester taught at Grenoble INP
Fourth semester: Master's Thesis
Website: http://cse.ensimag.fr
Research at Ensimag
Ensimag students can perform research work as part of their curriculum in second year, as well as a second-year internship and their end of studies project in a research laboratory. 15% of Ensimag graduates choose to pursue a Ph.D.
Junior enterprise: Nsigma
Nsigma was founded on November 17, 1980 as a voluntary association under the name ENSIGMA PROGRAMMATION. The association obtained the Junior-Entreprise® label in 1981 and has renewed it every year since. The junior enterprise took advantage of the Ensimag reform in 2008 to update its status and name. Now called Nsigma, it operates as an information technology service provider.
Website: http://nsigma.fr/
External links
(fr) The official Ensimag website
(en) The official Ensimag website
References
Informatique et de mathématiques appliquées de Grenoble
Grenoble Tech Ensimag
Universities and colleges established in 1960
1960 establishments in France
Richard Blahut (born June 9, 1937), former chair of the Electrical and Computer Engineering Department at the University of Illinois at Urbana–Champaign, is best known for his work in information theory (e.g. the Blahut–Arimoto algorithm used in rate–distortion theory). He received his PhD in Electrical Engineering from Cornell University in 1972.
Blahut was elected a member of the National Academy of Engineering in 1990 for pioneering work in coherent emitter signal processing and for contributions to information theory and error control codes.
Academic life
Blahut taught at Cornell from 1973 to 1994. He has taught at Princeton University, the Swiss Federal Institute of Technology, the NATO Advanced Study Institute, and has also been a Consulting Professor at the South China University of Technology. He is also the Henryk Magnuski Professor of Electrical and Computer Engineering and is affiliated with the Coordinated Science Laboratory.
Awards and recognition
IEEE Claude E. Shannon Award, 2005
IEEE Third Millennium Medal
TBP Daniel C. Drucker Eminent Faculty Award 2000
IEEE Alexander Graham Bell Medal 1998, for "contributions to error-control coding, particularly by combining algebraic coding theory and digital transform techniques."
National Academy of Engineering 1990
Japan Society for the Promotion of Science Fellowship 1982
Fellow of Institute of Electrical and Electronics Engineers, 1981, for the development of passive surveillance systems and for contributions to information theory and error control codes.
Fellow of IBM Corporation, 1980
IBM Corporate Recognition Award 1979
IBM Outstanding Innovation Award 1978
IBM Outstanding Contribution Award 1976
IBM Resident Study Program 1969–1971
IBM Outstanding Contribution Award 1968
Books
Lightwave Communications, with George C. Papen (Cambridge University Press, 2019)
Cryptography and Secure Communication, (Cambridge University Press, 2014)
Modem Theory: An Introduction to Telecommunications, (Cambridge University Press, 2010)
Fast Algorithms for Signal Processing, (Cambridge University Press, 2010)
Algebraic Codes on Lines, Planes, and Curves: An Engineering Approach, (Cambridge University Press, 2008)
Theory of Remote Image Formation, (Cambridge University Press, 2004)
Algebraic Codes for Data Transmission, (Cambridge University Press, 2003)
Algebraic Methods for Signal Processing and Communications Coding, (Springer-Verlag, 1992)
Digital Transmission of Information, (Addison–Wesley Press, 1990)
Fast Algorithms for Digital Signal Processing, (Addison–Wesley Press, 1985)
Theory and Practice of Error Control Codes, (Addison–Wesley Press, 1983)
See also
IEEE Biography
ECE @ UIUC
References
External links
Living people
1937 births
Members of the United States National Academy of Engineering
Fellows of the American Association for the Advancement of Science
University of Illinois Urbana-Champaign faculty
Cornell University faculty
Cornell University College of Engineering alumni
American electrical engineers
Fellow Members of the IEEE
American telecommunications engineers
Multiple sequence alignment (MSA) may refer to the process or the result of sequence alignment of three or more biological sequences, generally protein, DNA, or RNA. In many cases, the input set of query sequences is assumed to have an evolutionary relationship by which they share a linkage and are descended from a common ancestor. From the resulting MSA, sequence homology can be inferred and phylogenetic analysis can be conducted to assess the sequences' shared evolutionary origins. Visual depictions of the alignment illustrate mutation events such as point mutations (single amino acid or nucleotide changes) that appear as differing characters in a single alignment column, and insertion or deletion mutations (indels or gaps) that appear as hyphens in one or more of the sequences in the alignment. Multiple sequence alignment is often used to assess sequence conservation of protein domains, tertiary and secondary structures, and even individual amino acids or nucleotides.
Computational algorithms are used to produce and analyse the MSAs due to the difficulty and intractability of manually processing the sequences given their biologically-relevant length. MSAs require more sophisticated methodologies than pairwise alignment because they are more computationally complex. Most multiple sequence alignment programs use heuristic methods rather than global optimization because identifying the optimal alignment between more than a few sequences of moderate length is prohibitively computationally expensive. On the other hand, heuristic methods generally fail to give guarantees on the solution quality, with heuristic solutions shown to be often far below the optimal solution on benchmark instances.
Problem statement
Given $n$ sequences $S_1, S_2, \ldots, S_n$, similar to the form below:

$$S := \begin{cases} S_1 = (S_{11}, S_{12}, \ldots, S_{1 m_1}) \\ S_2 = (S_{21}, S_{22}, \ldots, S_{2 m_2}) \\ \vdots \\ S_n = (S_{n1}, S_{n2}, \ldots, S_{n m_n}) \end{cases}$$

A multiple sequence alignment is taken of this set of sequences $S$ by inserting any amount of gaps needed into each of the sequences $S_i$ of $S$ until the modified sequences, $S_i'$, all conform to length $L \geq \max\{m_i \mid i = 1, \ldots, n\}$ and no column in the sequences of $S'$ consists only of gaps. The mathematical form of an MSA of the above sequence set is shown below:

$$S' := \begin{cases} S_1' = (S_{11}', S_{12}', \ldots, S_{1L}') \\ S_2' = (S_{21}', S_{22}', \ldots, S_{2L}') \\ \vdots \\ S_n' = (S_{n1}', S_{n2}', \ldots, S_{nL}') \end{cases}$$

To return from each particular sequence $S_i'$ to $S_i$, remove all gaps.
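As a concrete illustration of this definition, the short sketch below (Python, with a made-up three-sequence example) checks the three conditions: all gapped rows share the same length L, no column consists only of gaps, and removing the gaps recovers the original sequences.

```python
# Toy illustration of the formal definition above (hypothetical data).
original = ["ACGT", "AGT", "ACT"]
aligned = ["ACGT",
           "A-GT",
           "AC-T"]   # gaps inserted so all rows share length L = 4

L = len(aligned[0])
assert all(len(row) == L for row in aligned)                  # common length L
assert all(any(row[j] != "-" for row in aligned)              # no all-gap column
           for j in range(L))
assert [row.replace("-", "") for row in aligned] == original  # gaps removed -> original

print("valid MSA of length", L)
```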
Graphing approach
A general approach when calculating multiple sequence alignments is to use graphs to identify all of the different alignments.
When finding alignments via graphs, a complete alignment is created in a weighted graph that contains a set of vertices and a set of edges. Each of the graph's edges has a weight based on a certain heuristic that helps to score each alignment or subset of the original graph.
Tracing alignments
When determining the best suited alignments for each MSA, a trace is usually generated. A trace is a set of realized, or corresponding and aligned, vertices that has a specific weight based on the edges that are selected between corresponding vertices. When choosing traces for a set of sequences it is necessary to choose a trace with a maximum weight to get the best alignment of the sequences.
Alignment methods
There are various alignment methods used within multiple sequence to maximize scores and correctness of alignments. Each is usually based on a certain heuristic with an insight into the evolutionary process. Most try to replicate evolution to get the most realistic alignment possible to best predict relations between sequences.
Dynamic programming
A direct method for producing an MSA uses the dynamic programming technique to identify the globally optimal alignment solution. For proteins, this method usually involves two sets of parameters: a gap penalty and a substitution matrix assigning scores or probabilities to the alignment of each possible pair of amino acids based on the similarity of the amino acids' chemical properties and the evolutionary probability of the mutation. For nucleotide sequences, a similar gap penalty is used, but a much simpler substitution matrix, wherein only identical matches and mismatches are considered, is typical. The scores in the substitution matrix may be either all positive or a mix of positive and negative in the case of a global alignment, but must include both positive and negative values in the case of a local alignment.
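For two sequences this dynamic programme is the classical Needleman–Wunsch recurrence; the n-sequence MSA problem generalizes the same table to n dimensions. The sketch below is a minimal pairwise version with a linear gap penalty and an identity-based match/mismatch score; the scoring values are illustrative, not taken from any standard substitution matrix.

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    """Global pairwise alignment score by dynamic programming
    (two-sequence special case of the n-dimensional MSA table)."""
    n, m = len(a), len(b)
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap
    for j in range(1, m + 1):
        F[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + s,   # align a[i] with b[j]
                          F[i - 1][j] + gap,     # gap in b
                          F[i][j - 1] + gap)     # gap in a
    return F[n][m]

print(needleman_wunsch("ACGT", "AGT"))  # 1 for this toy pair (three matches, one gap)
```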
For n individual sequences, the naive method requires constructing the n-dimensional equivalent of the matrix formed in standard pairwise sequence alignment. The search space thus increases exponentially with increasing n and is also strongly dependent on sequence length. Expressed with the big O notation commonly used to measure computational complexity, a naive MSA takes O(Length^Nseqs) time to produce. To find the global optimum for n sequences this way has been shown to be an NP-complete problem. In 1989, based on the Carrillo–Lipman algorithm, Altschul introduced a practical method that uses pairwise alignments to constrain the n-dimensional search space. In this approach, pairwise dynamic programming alignments are performed on each pair of sequences in the query set, and only the space near the n-dimensional intersection of these alignments is searched for the n-way alignment. The MSA program optimizes the sum of all of the pairs of characters at each position in the alignment (the so-called sum-of-pairs score) and has been implemented in a software program for constructing multiple sequence alignments. In 2019, Hosseininasab and van Hoeve showed that by using decision diagrams, MSA may be modeled in polynomial space complexity.
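The sum-of-pairs objective mentioned above can be written down directly: score every unordered pair of characters in every column and add the results. A minimal sketch follows, with illustrative scores and the common convention that gap–gap pairs contribute nothing.

```python
from itertools import combinations

def sp_score(alignment, match=1, mismatch=-1, gap=-2):
    """Sum-of-pairs score of an MSA given as equal-length strings."""
    total = 0
    for column in zip(*alignment):
        for x, y in combinations(column, 2):
            if x == "-" and y == "-":
                continue            # gap-gap pairs conventionally score 0
            elif x == "-" or y == "-":
                total += gap
            else:
                total += match if x == y else mismatch
    return total

print(sp_score(["ACGT", "A-GT", "AC-T"]))  # 0 for this toy alignment
```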
Progressive alignment construction
The most widely used approach to multiple sequence alignments uses a heuristic search known as progressive technique (also known as the hierarchical or tree method) developed by Da-Fei Feng and Doolittle in 1987. Progressive alignment builds up a final MSA by combining pairwise alignments beginning with the most similar pair and progressing to the most distantly related. All progressive alignment methods require two stages: a first stage in which the relationships between the sequences are represented as a tree, called a guide tree, and a second step in which the MSA is built by adding the sequences sequentially to the growing MSA according to the guide tree. The initial guide tree is determined by an efficient clustering method such as neighbor-joining or UPGMA, and may use distances based on the number of identical two-letter sub-sequences (as in FASTA rather than a dynamic programming alignment).
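The guide-tree stage can be sketched with standard clustering tools. The example below is illustrative only: it assumes SciPy is available, uses a crude fractional-mismatch distance on equal-length toy sequences rather than an alignment-derived distance, and builds a UPGMA-style tree with average-linkage clustering; the resulting merge order is the order in which a progressive aligner would combine sequences and profiles. The profile-merging stage itself is omitted.

```python
# Guide-tree stage only (the second stage, profile merging, is omitted).
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

seqs = ["ACGTACGT", "ACGTACGA", "ACGAACGA", "TCGTTCGT"]  # hypothetical, equal length

def distance(a, b):
    """Crude distance: fraction of mismatching positions."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

n = len(seqs)
D = [[distance(seqs[i], seqs[j]) for j in range(n)] for i in range(n)]
tree = linkage(squareform(D), method="average")  # average linkage = UPGMA-like

# Each row of 'tree': (cluster_i, cluster_j, distance, size) -- the merge
# order a progressive aligner would follow, most similar pair first.
print(tree)
```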
Progressive alignments are not guaranteed to be globally optimal. The primary problem is that when errors are made at any stage in growing the MSA, these errors are then propagated through to the final result. Performance is also particularly bad when all of the sequences in the set are rather distantly related. Most modern progressive methods modify their scoring function with a secondary weighting function that assigns scaling factors to individual members of the query set in a nonlinear fashion based on their phylogenetic distance from their nearest neighbors. This corrects for non-random selection of the sequences given to the alignment program.
Progressive alignment methods are efficient enough to implement on a large scale for many (100s to 1000s) sequences. Progressive alignment services are commonly available on publicly accessible web servers so users need not locally install the applications of interest. The most popular progressive alignment method has been the Clustal family, especially the weighted variant ClustalW, to which access is provided by a large number of web portals including GenomeNet, EBI, and EMBNet. Different portals or implementations can vary in user interface and make different parameters accessible to the user. ClustalW is used extensively for phylogenetic tree construction, in spite of the author's explicit warnings that unedited alignments should not be used in such studies, and as input for protein structure prediction by homology modeling. The current version of the Clustal family is ClustalW2. EMBL-EBI announced that ClustalW2 would be retired in August 2015. They recommend Clustal Omega, which is based on seeded guide trees and HMM profile-profile techniques, for protein alignments. They offer different MSA tools for progressive DNA alignments. One of them is MAFFT (Multiple Alignment using Fast Fourier Transform).
Another common progressive alignment method called T-Coffee is slower than Clustal and its derivatives but generally produces more accurate alignments for distantly related sequence sets. T-Coffee calculates pairwise alignments by combining the direct alignment of the pair with indirect alignments that aligns each sequence of the pair to a third sequence. It uses the output from Clustal as well as another local alignment program LALIGN, which finds multiple regions of local alignment between two sequences. The resulting alignment and phylogenetic tree are used as a guide to produce new and more accurate weighting factors.
Because progressive methods are heuristics that are not guaranteed to converge to a global optimum, alignment quality can be difficult to evaluate and their true biological significance can be obscure. A semi-progressive method that improves alignment quality and does not use a lossy heuristic while still running in polynomial time has been implemented in the program PSAlign.
Iterative methods
A set of methods to produce MSAs while reducing the errors inherent in progressive methods are classified as "iterative" because they work similarly to progressive methods but repeatedly realign the initial sequences as well as adding new sequences to the growing MSA. One reason progressive methods are so strongly dependent on a high-quality initial alignment is the fact that these alignments are always incorporated into the final result — that is, once a sequence has been aligned into the MSA, its alignment is not considered further. This approximation improves efficiency at the cost of accuracy. By contrast, iterative methods can return to previously calculated pairwise alignments or sub-MSAs incorporating subsets of the query sequence as a means of optimizing a general objective function such as finding a high-quality alignment score.
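A minimal sketch of the leave-one-out refinement idea follows; it reuses the sp_score sum-of-pairs scorer sketched earlier, and realign_to_profile is a deliberately trivial placeholder (a real implementation would realign the removed row against the profile of the remaining rows).

```python
import random

def remove_sequence(msa, i):
    """Split the alignment into (remaining rows, removed row)."""
    return msa[:i] + msa[i + 1:], msa[i]

def realign_to_profile(profile, row):
    """Placeholder: a real aligner would realign 'row' against the profile
    of the remaining rows; here the row is simply re-appended unchanged."""
    return profile + [row]

def iterative_refinement(msa, rounds=50, seed=0):
    """Leave-one-out refinement: pull one row out, realign it against the
    rest, and keep the change only if the sum-of-pairs score improves."""
    rng = random.Random(seed)
    best, best_score = msa, sp_score(msa)   # sp_score: sum-of-pairs scorer above
    for _ in range(rounds):
        i = rng.randrange(len(best))
        profile, row = remove_sequence(best, i)
        candidate = realign_to_profile(profile, row)
        score = sp_score(candidate)
        if score > best_score:              # greedy accept
            best, best_score = candidate, score
    return best
```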
A variety of subtly different iteration methods have been implemented and made available in software packages; reviews and comparisons have been useful but generally refrain from choosing a "best" technique. The software package PRRN/PRRP uses a hill-climbing algorithm to optimize its MSA alignment score and iteratively corrects both alignment weights and locally divergent or "gappy" regions of the growing MSA. PRRP performs best when refining an alignment previously constructed by a faster method.
Another iterative program, DIALIGN, takes an unusual approach of focusing narrowly on local alignments between sub-segments or sequence motifs without introducing a gap penalty. The alignment of individual motifs is then achieved with a matrix representation similar to a dot-matrix plot in a pairwise alignment. An alternative method that uses fast local alignments as anchor points or "seeds" for a slower global-alignment procedure is implemented in the CHAOS/DIALIGN suite.
A third popular iteration-based method called MUSCLE (multiple sequence alignment by log-expectation) improves on progressive methods with a more accurate distance measure to assess the relatedness of two sequences. The distance measure is updated between iteration stages (although, in its original form, MUSCLE contained only 2-3 iterations depending on whether refinement was enabled).
Consensus methods
Consensus methods attempt to find the optimal multiple sequence alignment given multiple different alignments of the same set of sequences. There are two commonly used consensus methods, M-COFFEE and MergeAlign. M-COFFEE uses multiple sequence alignments generated by seven different methods to generate consensus alignments. MergeAlign is capable of generating consensus alignments from any number of input alignments generated using different models of sequence evolution or different methods of multiple sequence alignment. The default option for MergeAlign is to infer a consensus alignment using alignments generated using 91 different models of protein sequence evolution.
Hidden Markov models
Hidden Markov models are probabilistic models that can assign likelihoods to all possible combinations of gaps, matches, and mismatches to determine the most likely MSA or set of possible MSAs. HMMs can produce a single highest-scoring output but can also generate a family of possible alignments that can then be evaluated for biological significance. HMMs can produce both global and local alignments. Although HMM-based methods have been developed relatively recently, they offer significant improvements in computational speed, especially for sequences that contain overlapping regions.
Typical HMM-based methods work by representing an MSA as a form of directed acyclic graph known as a partial-order graph, which consists of a series of nodes representing possible entries in the columns of an MSA. In this representation a column that is absolutely conserved (that is, that all the sequences in the MSA share a particular character at a particular position) is coded as a single node with as many outgoing connections as there are possible characters in the next column of the alignment. In the terms of a typical hidden Markov model, the observed states are the individual alignment columns and the "hidden" states represent the presumed ancestral sequence from which the sequences in the query set are hypothesized to have descended. An efficient search variant of the dynamic programming method, known as the Viterbi algorithm, is generally used to successively align the growing MSA to the next sequence in the query set to produce a new MSA. This is distinct from progressive alignment methods because the alignment of prior sequences is updated at each new sequence addition. However, like progressive methods, this technique can be influenced by the order in which the sequences in the query set are integrated into the alignment, especially when the sequences are distantly related.
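The Viterbi recursion itself is independent of the profile-HMM details. The sketch below runs it on a deliberately tiny, hypothetical two-state model (not a real profile HMM) to show the max-product dynamic programme that HMM-based aligners build on; all probabilities are made up for illustration.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most probable hidden-state path for an observation sequence."""
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][r][0] * trans_p[r][s] * emit_p[s][obs[t]], r)
                for r in states)
            V[t][s] = (prob, prev)
    # Backtrack from the best final state.
    state = max(states, key=lambda s: V[-1][s][0])
    path = [state]
    for t in range(len(obs) - 1, 0, -1):
        state = V[t][state][1]
        path.append(state)
    return list(reversed(path))

# Hypothetical two-state toy model: "conserved" vs "variable" columns.
states = ["conserved", "variable"]
start = {"conserved": 0.6, "variable": 0.4}
trans = {"conserved": {"conserved": 0.8, "variable": 0.2},
         "variable":  {"conserved": 0.3, "variable": 0.7}}
emit = {"conserved": {"A": 0.7, "C": 0.1, "G": 0.1, "T": 0.1},
        "variable":  {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25}}
print(viterbi("AAGT", states, start, trans, emit))
```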
Several software programs are available in which variants of HMM-based methods have been implemented and which are noted for their scalability and efficiency, although properly using an HMM method is more complex than using more common progressive methods. The simplest is POA (Partial-Order Alignment); a similar but more generalized method is implemented in the packages SAM (Sequence Alignment and Modeling System) and HMMER.
SAM has been used as a source of alignments for protein structure prediction to participate in the CASP structure prediction experiment and to develop a database of predicted proteins in the yeast species S. cerevisiae. HHsearch is a software package for the detection of remotely related protein sequences based on the pairwise comparison of HMMs. A server running HHsearch (HHpred) was by far the fastest of the 10 best automatic structure prediction servers in the CASP7 and CASP8 structure prediction competitions.
Phylogeny-aware methods
Most multiple sequence alignment methods try to minimize the number of insertions/deletions (gaps) and, as a consequence, produce compact alignments. This causes several problems if the sequences to be aligned contain non-homologous regions, or if gaps are informative in a phylogeny analysis. These problems are common in newly produced sequences that are poorly annotated and may contain frame-shifts, wrong domains or non-homologous spliced exons. The first such method was developed in 2005 by Löytynoja and Goldman. The same authors released a software package called PRANK in 2008. PRANK improves alignments when insertions are present. Nevertheless, it runs slowly compared to progressive and/or iterative methods which have been developed for several years.
In 2012, two new phylogeny-aware tools appeared. One, called PAGAN, was developed by the same team as PRANK. The other is ProGraphMSA, developed by Szalkowski. Both software packages were developed independently but share common features, notably the use of graph algorithms to improve the recognition of non-homologous regions, and code improvements that make both tools faster than PRANK.
Motif finding
Motif finding, also known as profile analysis, is a method of locating sequence motifs in global MSAs that is both a means of producing a better MSA and a means of producing a scoring matrix for use in searching other sequences for similar motifs. A variety of methods for isolating the motifs have been developed, but all are based on identifying short highly conserved patterns within the larger alignment and constructing a matrix similar to a substitution matrix that reflects the amino acid or nucleotide composition of each position in the putative motif. The alignment can then be refined using these matrices. In standard profile analysis, the matrix includes entries for each possible character as well as entries for gaps. Alternatively, statistical pattern-finding algorithms can identify motifs as a precursor to an MSA rather than as a derivation. In many cases when the query set contains only a small number of sequences or contains only highly related sequences, pseudocounts are added to normalize the distribution reflected in the scoring matrix. In particular, this corrects zero-probability entries in the matrix to values that are small but nonzero.
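A minimal sketch of the profile-matrix construction described above: count residues per column over a set of hypothetical aligned motif instances, add a pseudocount so no entry is zero, and convert the frequencies to log-odds scores against a uniform background.

```python
import math

motif_instances = ["TATAAT", "TATGAT", "TACAAT", "TATAAT"]  # hypothetical aligned motifs
alphabet = "ACGT"
pseudocount = 1.0
background = 0.25  # uniform background frequency

length = len(motif_instances[0])
pssm = []
for j in range(length):
    column = [m[j] for m in motif_instances]
    total = len(column) + pseudocount * len(alphabet)
    scores = {}
    for a in alphabet:
        freq = (column.count(a) + pseudocount) / total
        scores[a] = math.log2(freq / background)   # log-odds vs background
    pssm.append(scores)

def score_window(window):
    """Log-odds score of a candidate site against the profile."""
    return sum(pssm[j][c] for j, c in enumerate(window))

print(round(score_window("TATAAT"), 2))  # about 7.29 with these toy counts
```

The pseudocount is what keeps unseen residues from receiving a probability of zero, as described in the paragraph above.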
Blocks analysis is a method of motif finding that restricts motifs to ungapped regions in the alignment. Blocks can be generated from an MSA or they can be extracted from unaligned sequences using a precalculated set of common motifs previously generated from known gene families. Block scoring generally relies on the spacing of high-frequency characters rather than on the calculation of an explicit substitution matrix. The BLOCKS server provides an interactive method to locate such motifs in unaligned sequences.
Statistical pattern-matching has been implemented using both the expectation-maximization algorithm and the Gibbs sampler. One of the most common motif-finding tools, known as MEME, uses expectation maximization and hidden Markov methods to generate motifs that are then used as search tools by its companion MAST in the combined suite MEME/MAST.
Non-coding multiple sequence alignment
Non-coding DNA regions, especially transcription factor binding sites (TFBSs), are rather more conserved and not necessarily evolutionarily related, and may have converged from non-common ancestors. Thus, the assumptions used to align protein sequences and DNA coding regions are inherently different from those that hold for TFBS sequences. Although it is meaningful to align DNA coding regions for homologous sequences using mutation operators, alignment of binding site sequences for the same transcription factor cannot rely on evolutionarily related mutation operations. Similarly, the evolutionary operator of point mutations can be used to define an edit distance for coding sequences, but this has little meaning for TFBS sequences because any sequence variation has to maintain a certain level of specificity for the binding site to function. This becomes especially important when trying to align known TFBS sequences to build supervised models to predict unknown locations of the same TFBS. Hence, multiple sequence alignment methods need to adjust the underlying evolutionary hypothesis and the operators used, as in the published work incorporating neighbouring base thermodynamic information to align binding sites by searching for the lowest thermodynamic alignment that conserves the specificity of the binding site, EDNA.
Optimization
Genetic algorithms and simulated annealing
Standard optimization techniques in computer science — both of which were inspired by, but do not directly reproduce, physical processes — have also been used in an attempt to more efficiently produce quality MSAs. One such technique, genetic algorithms, has been used for MSA production in an attempt to broadly simulate the hypothesized evolutionary process that gave rise to the divergence in the query set. The method works by breaking a series of possible MSAs into fragments and repeatedly rearranging those fragments with the introduction of gaps at varying positions. A general objective function is optimized during the simulation, most generally the "sum of pairs" maximization function introduced in dynamic programming-based MSA methods. A technique for protein sequences has been implemented in the software program SAGA (Sequence Alignment by Genetic Algorithm) and its equivalent in RNA is called RAGA.
In the technique of simulated annealing, an existing MSA produced by another method is refined by a series of rearrangements designed to find better regions of alignment space than the one the input alignment already occupies. Like the genetic algorithm method, simulated annealing maximizes an objective function like the sum-of-pairs function. Simulated annealing uses a metaphorical "temperature factor" that determines the rate at which rearrangements proceed and the likelihood of each rearrangement; typical usage alternates periods of high rearrangement rates with relatively low likelihood (to explore more distant regions of alignment space) with periods of lower rates and higher likelihoods to more thoroughly explore local minima near the newly "colonized" regions. This approach has been implemented in the program MSASA (Multiple Sequence Alignment by Simulated Annealing).
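A minimal sketch of the annealing loop follows. It reuses the sp_score sum-of-pairs scorer sketched earlier as the objective; the rearrangement move is a toy gap swap, not the block-shift moves used by real implementations, and the cooling schedule and parameters are arbitrary.

```python
import math
import random

def random_rearrangement(msa, rng):
    """Toy move: in one random row, swap a gap with a neighbouring character.
    Real annealing implementations use richer block-shift moves."""
    rows = list(msa)
    i = rng.randrange(len(rows))
    row = list(rows[i])
    gaps = [j for j, c in enumerate(row) if c == "-"]
    if gaps:
        j = rng.choice(gaps)
        k = j - 1 if j > 0 else j + 1
        row[j], row[k] = row[k], row[j]
        rows[i] = "".join(row)
    return rows

def anneal_msa(msa, steps=10000, t_start=5.0, t_end=0.05, seed=0):
    """Simulated-annealing refinement of an existing alignment, using the
    sum-of-pairs scorer sketched earlier as the objective."""
    rng = random.Random(seed)
    current, current_score = msa, sp_score(msa)
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)     # geometric cooling
        candidate = random_rearrangement(current, rng)
        delta = sp_score(candidate) - current_score
        if delta >= 0 or rng.random() < math.exp(delta / t):  # Metropolis rule
            current, current_score = candidate, current_score + delta
    return current
```

The temperature controls how readily score-worsening rearrangements are accepted, which is what lets the search escape the local minima mentioned above.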
Mathematical programming and exact solution algorithms
Mathematical programming and in particular Mixed integer programming models are another approach to solve MSA problems. The advantage of such optimization models is that they can be used to find the optimal MSA solution more efficiently compared to the traditional DP approach. This is due in part, to the applicability of decomposition techniques for mathematical programs, where the MSA model is decomposed into smaller parts and iteratively solved until the optimal solution is found. Example algorithms used to solve mixed integer programming models of MSA include branch and price and Benders decomposition. Although exact approaches are computationally slow compared to heuristic algorithms for MSA, they are guaranteed to reach the optimal solution eventually, even for large-size problems.
Simulated quantum computing
In January 2017, D-Wave Systems announced that its qbsolv open-source quantum computing software had been successfully used to find a faster solution to the MSA problem.
Alignment visualization and quality control
The necessary use of heuristics for multiple alignment means that for an arbitrary set of proteins, there is always a good chance that an alignment will contain errors. For example, an evaluation of several leading alignment programs using the BAliBase benchmark found that at least 24% of all pairs of aligned amino acids were incorrectly aligned. These errors can arise because of unique insertions into one or more regions of sequences, or through some more complex evolutionary process leading to proteins that do not align easily by sequence alone. As the number of sequences and their divergence increase, many more errors will be made simply because of the heuristic nature of MSA algorithms. Multiple sequence alignment viewers enable alignments to be visually reviewed, often by inspecting the quality of alignment for annotated functional sites on two or more sequences. Many also enable the alignment to be edited to correct these (usually minor) errors, in order to obtain an optimal 'curated' alignment suitable for use in phylogenetic analysis or comparative modeling.
However, as the number of sequences increases, and especially in genome-wide studies that involve many MSAs, it is impossible to manually curate all alignments. Furthermore, manual curation is subjective. And finally, even the best expert cannot confidently align the more ambiguous cases of highly diverged sequences. In such cases it is common practice to use automatic procedures to exclude unreliably aligned regions from the MSA. For the purpose of phylogeny reconstruction (see below) the Gblocks program is widely used to remove alignment blocks suspected of low quality, according to various cutoffs on the number of gapped sequences in alignment columns. However, these criteria may excessively filter out regions with insertion/deletion events that may still be aligned reliably, and these regions might be desirable for other purposes such as detection of positive selection. A few alignment algorithms output site-specific scores that allow the selection of high-confidence regions. Such a service was first offered by the SOAP program, which tests the robustness of each column to perturbation in the parameters of the popular alignment program CLUSTALW. The T-Coffee program uses a library of alignments in the construction of the final MSA, and its output MSA is colored according to confidence scores that reflect the agreement between different alignments in the library regarding each aligned residue. Its extension, TCS (Transitive Consistency Score), uses T-Coffee libraries of pairwise alignments to evaluate any third-party MSA. Pairwise projections can be produced using fast or slow methods, thus allowing a trade-off between speed and accuracy. Another alignment program that can output an MSA with confidence scores is FSA, which uses a statistical model that allows calculation of the uncertainty in the alignment. The HoT (Heads-Or-Tails) score can be used as a measure of site-specific alignment uncertainty due to the existence of multiple co-optimal solutions. The GUIDANCE program calculates a similar site-specific confidence measure based on the robustness of the alignment to uncertainty in the guide tree that is used in progressive alignment programs. An alternative, more statistically justified approach to assess alignment uncertainty is the use of probabilistic evolutionary models for joint estimation of phylogeny and alignment. A Bayesian approach allows calculation of posterior probabilities of estimated phylogeny and alignment, which is a measure of the confidence in these estimates. In this case, a posterior probability can be calculated for each site in the alignment. Such an approach was implemented in the program BAli-Phy.
There are free programs available for visualization of multiple sequence alignments, for example Jalview and UGENE.
Phylogenetic use
Multiple sequence alignments can be used to create a phylogenetic tree. This is made possible by two reasons. The first is because functional domains that are known in annotated sequences can be used for alignment in non-annotated sequences. The other is that conserved regions known to be functionally important can be found. This makes it possible for multiple sequence alignments to be used to analyze and find evolutionary relationships through homology between sequences. Point mutations and insertion or deletion events (called indels) can be detected.
Multiple sequence alignments can also be used to identify functionally important sites, such as binding sites, active sites, or sites corresponding to other key functions, by locating conserved domains. When looking at multiple sequence alignments, it is useful to consider different aspects of the sequences when comparing sequences. These aspects include identity, similarity, and homology. Identity means that the sequences have identical residues at their respective positions. On the other hand, similarity has to do with the sequences being compared having similar residues quantitatively. For example, in terms of nucleotide sequences, pyrimidines are considered similar to each other, as are purines. Similarity ultimately leads to homology, in that the more similar sequences are, the closer they are to being homologous. This similarity in sequences can then go on to help find common ancestry.
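As a small illustration of identity in this sense, the sketch below computes the percent identity of two rows of an alignment, counting only columns where neither row has a gap (one common convention; others exist).

```python
def percent_identity(row_a, row_b):
    """Percent identity over ungapped columns of two aligned rows."""
    pairs = [(x, y) for x, y in zip(row_a, row_b) if x != "-" and y != "-"]
    if not pairs:
        return 0.0
    matches = sum(x == y for x, y in pairs)
    return 100.0 * matches / len(pairs)

print(percent_identity("AC-GT", "ACTGT"))  # 100.0: all compared columns match
```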
See also
Alignment-free sequence analysis
Cladistics
Generalized tree alignment
Multiple sequence alignment viewers
PANDIT, a biological database covering protein domains
Phylogenetics
Sequence alignment software
Structural alignment
References
Survey articles
External links
ExPASy sequence alignment tools
Archived Multiple Alignment Resource Page — from the Virtual School of Natural Sciences
Tools for Multiple Alignments — from Pôle Bioinformatique Lyonnais
An entry point to clustal servers and information
An entry point to the main T-Coffee servers
An entry point to the main MergeAlign server and information
European Bioinformatics Institute servers:
ClustalW2 — general purpose multiple sequence alignment program for DNA or proteins.
Muscle — MUltiple Sequence Comparison by Log-Expectation
T-coffee — multiple sequence alignment.
MAFFT — Multiple Alignment using Fast Fourier Transform
KALIGN — a fast and accurate multiple sequence alignment algorithm.
Lecture notes, tutorials, and courses
Multiple sequence alignment lectures — from the Max Planck Institute for Molecular Genetics
Lecture Notes and practical exercises on multiple sequence alignments at the EMBL
Molecular Bioinformatics Lecture Notes
Molecular Evolution and Bioinformatics Lecture Notes
Bioinformatics
Computational phylogenetics
Markov models |
Major General Trudy H. Clark, USAF (retired) is a former Deputy Director of the Defense Threat Reduction Agency (DTRA) in Fort Belvoir, Virginia in the United States.
Military career
Clark received her commission in 1973 as a distinguished graduate of Officer Training School. From her commissioning until 1984, she served as chief of communications branches at various Air Force bases in the United States and in Turkey and South Korea. In July 1984, she was assigned as commander of the 1880th Information Systems Squadron at the Tonopah Test Range, Nev. She then completed Air Command and Staff College before taking her assignment as chief of Tactical Command and Control Communication Systems, and later executive officer for the Deputy Director of Programs and Evaluation at U.S. Air Force Headquarters, Washington, D.C., in 1989. While in Washington, she was assigned as commander of the Staff Support Unit, and presidential communications officer in the White House Communications Agency. In 1992, she left Washington to attend Armed Forces Staff College and the Air War College.
After completing the Air War College, she was assigned to Travis Air Force Base, Calif., where she served as commander of the 60th Communications Group, chief of the Communications Division and commander of the 615th Air Mobility Communications Squadron. In 1995, Clark was named commander of the 17th Support Group, Goodfellow Air Force Base, Texas, before returning to Washington, D.C., as executive officer to the Air Force Chief of Staff. In 1997, she was named commandant of the Squadron Officer School at Maxwell Air Force Base in Alabama. She served there for two years before being named director for Command, Control, Communications and Computer Systems at U.S. Strategic Command, Offutt Air Force Base, Neb.
Prior to assuming her duties at the Defense Threat Reduction Agency, Clark was the deputy chief information officer (CIO), Headquarters U.S. Air Force, Washington, D.C. There, she assisted the CIO in leading the Air Force in creating and enforcing information technology standards, in promoting and shaping effective strategic and operational Information Technology (IT) planning processes, and in acquiring IT systems. The general worked with other Air Force leaders to ensure that IT processes were efficient and effective in meeting the needs of the Air Force.
Clark was promoted to major general on March 1, 2003. Major General Clark retired from the USAF on December 1, 2006.
Education
1972 Bachelor of arts degree in sociology, with honors, University of Maryland where she became a member of Pi Beta Phi
1980 Distinguished graduate, Squadron Officer School, Maxwell AFB, Alabama
1987 Master of science degree in guidance and counseling, Troy State University, Montgomery, Alabama
1987 Air Command and Staff College, Maxwell AFB, Alabama
1992 Armed Forces Staff College, Norfolk, Virginia
1993 Air War College, Maxwell AFB, Alabama
2001 Senior Information Warfare Applications Course, Maxwell AFB, Alabama
2002 National Security Leadership Course, Syracuse University, Syracuse, New York
2003 National Security Decision-Making Seminar, School of Advanced International Studies, Johns Hopkins University, Washington, D.C.
2004 U.S. - Russia Executive Security Program, John F. Kennedy School of Government, Harvard University, Cambridge, Mass.
Assignments
September 1973 - July 1974, student, Communications-Electronics Officer School, Keesler AFB, Mississippi
July 1974 - September 1976, Chief of Telephone Installations, 392nd Communications Group, Vandenberg Air Force Base, Calif.
September 1976 - January 1979, Chief, Programs Management Division, 2006th Communications Group, Incirlik Air Base, Turkey
January 1979 - July 1981, Chief, Communication Branch, Joint Studies Group, later, Chief of Threats Analysis, 4440th Tactical Fighter Training Group, Red Flag, Nellis AFB, Nev.
July 1981 - August 1982, Chief, Facilities Operation Branch, 2146th Communications Group, Osan Air Base, South Korea
August 1982 - July 1984, Chief, Telecommunications Division and Executive Officer, Headquarters Tactical Communications Division, Langley AFB, Va.
July 1984 - July 1986, Commander, 1880th Information Systems Squadron, Tonopah Test Range, Nev.
August 1986 - June 1987, student, Air Command and Staff College, Maxwell AFB, Ala.
June 1987 - August 1989, Chief of Tactical Command and Control Communication Systems, Directorate of Programs and Evaluation, later, Executive Officer for the Deputy Director of Programs and Evaluation, Headquarters U.S. Air Force, Washington, D.C.
August 1989 - April 1992, Commander, Staff Support Unit, and Presidential Communications Officer, White House Communications Agency, Washington, D.C.
April 1992 - July 1992, student, Armed Forces Staff College, Norfolk, Va.
August 1992 - June 1993, student, Air War College, Maxwell AFB, Ala.
June 1993 - July 1994, Commander, 60th Communications Group, and Chief, Communications Division, Headquarters 15th Air Force, Travis AFB, Calif.
July 1994 - April 1995, Chief, Communications Division, Headquarters 15th Air Force, and Commander, 615th Air Mobility Communications Squadron, Travis AFB, Calif.
April 1995 - June 1996, Commander, 17th Support Group, Goodfellow Air Force Base, Texas
June 1996 - November 1997, Executive Officer to the Air Force Chief of Staff, Headquarters U.S. Air Force, Washington, D.C.
November 1997 - August 1999, Commandant, Squadron Officer School, Maxwell AFB, Ala.
August 1999 - September 2001, Director for Command, Control, Communications and Computer Systems, U.S. Strategic Command, Offutt AFB, Neb.
September 2001 - May 2003, Deputy Chief Information Officer, Headquarters U.S. Air Force, Washington, D.C.
June 2003 - December 2006, Deputy Director, Defense Threat Reduction Agency, Fort Belvoir, Va.
Awards and decorations
Effective dates of promotion
References
DTRA Official Biography
Air Force Official Biography
Living people
University of Maryland, College Park alumni
Troy University alumni
Syracuse University alumni
Johns Hopkins University alumni
Harvard Kennedy School alumni
Recipients of the Air Force Distinguished Service Medal
Recipients of the Legion of Merit
Female generals of the United States Air Force
1951 births
Recipients of the Defense Superior Service Medal
21st-century American women |
The optical transfer function (OTF) of an optical system such as a camera, microscope, human eye, or projector specifies how different spatial frequencies are captured or transmitted. It is used by optical engineers to describe how the optics project light from the object or scene onto a photographic film, detector array, retina, screen, or simply the next item in the optical transmission chain. A variant, the modulation transfer function (MTF), neglects phase effects, but is equivalent to the OTF in many situations.
Either transfer function specifies the response to a periodic sine-wave pattern passing through the lens system, as a function of its spatial frequency or period, and its orientation. Formally, the OTF is defined as the Fourier transform of the point spread function (PSF, that is, the impulse response of the optics, the image of a point source). As a Fourier transform, the OTF is complex-valued; but it will be real-valued in the common case of a PSF that is symmetric about its center. The MTF is formally defined as the magnitude (absolute value) of the complex OTF.
The image on the right shows the optical transfer functions for two different optical systems in panels (a) and (d). The former corresponds to the ideal, diffraction-limited, imaging system with a circular pupil. Its transfer function decreases gradually with spatial frequency until it reaches the diffraction limit, in this case at 500 cycles per millimeter or a period of 2 μm. Since periodic features as small as this period are captured by this imaging system, it could be said that its resolution is 2 μm. Panel (d) shows an optical system that is out of focus. This leads to a sharp reduction in contrast compared to the diffraction-limited imaging system. It can be seen that the contrast is zero around 250 cycles/mm, or periods of 4 μm. This explains why the images for the out-of-focus system (e,f) are more blurry than those of the diffraction-limited system (b,c). Note that although the out-of-focus system has very low contrast at spatial frequencies around 250 cycles/mm, the contrast at spatial frequencies near the diffraction limit of 500 cycles/mm is diffraction-limited. Close observation of the image in panel (f) shows that the image of the dense spokes near the center of the spoke target is relatively sharp.
Definition and related concepts
Since the optical transfer function (OTF) is defined as the Fourier transform of the point-spread function (PSF), it is generally speaking a complex-valued function of spatial frequency. The projection of a specific periodic pattern is represented by a complex number with absolute value and complex argument proportional to the relative contrast and translation of the projected pattern, respectively.
Often the contrast reduction is of most interest and the translation of the pattern can be ignored. The relative contrast is given by the absolute value of the optical transfer function, a function commonly referred to as the modulation transfer function (MTF). Its values indicate how much of the object's contrast is captured in the image as a function of spatial frequency. The MTF tends to decrease with increasing spatial frequency from 1 to 0 (at the diffraction limit); however, the function is often not monotonic. On the other hand, when also the pattern translation is important, the complex argument of the optical transfer function can be depicted as a second real-valued function, commonly referred to as the phase transfer function (PhTF). The complex-valued optical transfer function can be seen as a combination of these two real-valued functions:
\mathrm{OTF}(\nu) = \mathrm{MTF}(\nu)\, e^{i\,\mathrm{PhTF}(\nu)}
where
\mathrm{MTF}(\nu) = \left| \mathrm{OTF}(\nu) \right|, \qquad \mathrm{PhTF}(\nu) = \arg\!\left( \mathrm{OTF}(\nu) \right),
and \arg(\cdot) represents the complex argument function, while \nu is the spatial frequency of the periodic pattern. In general \nu is a vector with a spatial frequency for each dimension, i.e. it indicates also the direction of the periodic pattern.
The impulse response of a well-focused optical system is a three-dimensional intensity distribution with a maximum at the focal plane, and could thus be measured by recording a stack of images while displacing the detector axially. By consequence, the three-dimensional optical transfer function can be defined as the three-dimensional Fourier transform of the impulse response. Although typically only a one-dimensional, or sometimes a two-dimensional section is used, the three-dimensional optical transfer function can improve the understanding of microscopes such as the structured illumination microscope.
True to the definition of transfer function, \mathrm{OTF}(0) should indicate the fraction of light that was detected from the point source object. However, typically the contrast relative to the total amount of detected light is most important. It is thus common practice to normalize the optical transfer function to the detected intensity, hence \mathrm{OTF}(0) = 1.
Generally, the optical transfer function depends on factors such as the spectrum and polarization of the emitted light and the position of the point source. E.g. the image contrast and resolution are typically optimal at the center of the image, and deteriorate toward the edges of the field-of-view. When significant variation occurs, the optical transfer function may be calculated for a set of representative positions or colors.
Sometimes it is more practical to define the transfer functions based on a binary black-white stripe pattern. The transfer function for an equal-width black-white periodic pattern is referred to as the contrast transfer function (CTF).
Examples
The OTF of an ideal lens system
A perfect lens system will provide a high contrast projection without shifting the periodic pattern, hence the optical transfer function is identical to the modulation transfer function. Typically the contrast will reduce gradually towards zero at a point defined by the resolution of the optics. For example, a perfect, non-aberrated, f/4 optical imaging system used at the visible wavelength of 500 nm would have the optical transfer function depicted in the right hand figure.
It can be read from the plot that the contrast gradually reduces and reaches zero at the spatial frequency of 500 cycles per millimeter; in other words, the optical resolution of the image projection is 1/500 of a millimeter, or 2 micrometers. Correspondingly, for this particular imaging device, the spokes become more and more blurred towards the center until they merge into a gray, unresolved, disc. Note that sometimes the optical transfer function is given in units of the object or sample space, observation angle, film width, or normalized to the theoretical maximum. Conversion between the two is typically a matter of a multiplication or division. E.g. a microscope typically magnifies everything 10 to 100-fold, and a reflex camera will generally demagnify objects at a distance of 5 meters by a factor of 100 to 200.
The resolution of a digital imaging device is not only limited by the optics, but also by the number of pixels, more in particular by their separation distance. As explained by the Nyquist–Shannon sampling theorem, to match the optical resolution of the given example, the pixels of each color channel should be separated by 1 micrometer, half the period of 500 cycles per millimeter. A higher number of pixels on the same sensor size will not allow the resolution of finer detail. On the other hand, when the pixel spacing is larger than 1 micrometer, the resolution will be limited by the separation between pixels; moreover, aliasing may lead to a further reduction of the image fidelity.
OTF of an imperfect lens system
An imperfect, aberrated imaging system could possess the optical transfer function depicted in the following figure.
As with the ideal lens system, the contrast reaches zero at the spatial frequency of 500 cycles per millimeter. However, at lower spatial frequencies the contrast is considerably lower than that of the perfect system in the previous example. In fact, the contrast becomes zero on several occasions even for spatial frequencies lower than 500 cycles per millimeter. This explains the gray circular bands in the spoke image shown in the above figure. In between the gray bands, the spokes appear to invert from black to white and vice versa; this is referred to as contrast inversion, which is directly related to the sign reversal in the real part of the optical transfer function, and manifests itself as a shift by half a period for some periodic patterns.
While it could be argued that the resolution of both the ideal and the imperfect system is 2 μm, or 500 LP/mm, it is clear that the images of the latter example are less sharp. A definition of resolution that is more in line with the perceived quality would instead use the spatial frequency at which the first zero occurs, 10 μm, or 100 LP/mm. Definitions of resolution, even for perfect imaging systems, vary widely. A more complete, unambiguous picture is provided by the optical transfer function.
The OTF of an optical system with a non-rotational symmetric aberration
Optical systems, and in particular optical aberrations, are not always rotationally symmetric. Periodic patterns that have a different orientation can thus be imaged with different contrast even if their periodicity is the same. Optical transfer function or modulation transfer functions are thus generally two-dimensional functions. The following figures show the two-dimensional equivalent of the ideal and the imperfect system discussed earlier, for an optical system with trefoil, a non-rotational-symmetric aberration.
Optical transfer functions are not always real-valued. Periodic patterns can be shifted by any amount, depending on the aberration in the system. This is generally the case with non-rotational-symmetric aberrations. The hue of the colors of the surface plots in the above figure indicates phase. It can be seen that, while for the rotational symmetric aberrations the phase is either 0 or π and thus the transfer function is real valued, for the non-rotational symmetric aberration the transfer function has an imaginary component and the phase varies continuously.
Practical example – high-definition video system
While optical resolution, as commonly used with reference to camera systems, describes only the number of pixels in an image, and hence the potential to show fine detail, the transfer function describes the ability of adjacent pixels to change from black to white in response to patterns of varying spatial frequency, and hence the actual capability to show fine detail, whether with full or reduced contrast. An image reproduced with an optical transfer function that 'rolls off' at high spatial frequencies will appear 'blurred' in everyday language.
Taking the example of a current high definition (HD) video system, with 1920 by 1080 pixels, the Nyquist theorem states that it should be possible, in a perfect system, to resolve fully (with true black to white transitions) a total of 1920 black and white alternating lines combined, otherwise referred to as a spatial frequency of 1920/2=960 line pairs per picture width, or 960 cycles per picture width, (definitions in terms of cycles per unit angle or per mm are also possible but generally less clear when dealing with cameras and more appropriate to telescopes etc.). In practice, this is far from the case, and spatial frequencies that approach the Nyquist rate will generally be reproduced with decreasing amplitude, so that fine detail, though it can be seen, is greatly reduced in contrast. This gives rise to the interesting observation that, for example, a standard definition television picture derived from a film scanner that uses oversampling, as described later, may appear sharper than a high definition picture shot on a camera with a poor modulation transfer function. The two pictures show an interesting difference that is often missed, the former having full contrast on detail up to a certain point but then no really fine detail, while the latter does contain finer detail, but with such reduced contrast as to appear inferior overall.
The three-dimensional optical transfer function
Although one typically thinks of an image as planar, or two-dimensional, the imaging system will produce a three-dimensional intensity distribution in image space that in principle can be measured; for example, a two-dimensional sensor could be translated to capture a three-dimensional intensity distribution. The image of a point source is also a three-dimensional (3D) intensity distribution which can be represented by a 3D point-spread function. As an example, the figure on the right shows the 3D point-spread function in object space of a wide-field microscope (a) alongside that of a confocal microscope (c). Although the same microscope objective with a numerical aperture of 1.49 is used, it is clear that the confocal point spread function is more compact both in the lateral dimensions (x,y) and the axial dimension (z). One could rightly conclude that the resolution of a confocal microscope is superior to that of a wide-field microscope in all three dimensions.
A three-dimensional optical transfer function can be calculated as the three-dimensional Fourier transform of the 3D point-spread function. Its color-coded magnitude is plotted in panels (b) and (d), corresponding to the point-spread functions shown in panels (a) and (c), respectively. The transfer function of the wide-field microscope has a support that is half of that of the confocal microscope in all three-dimensions, confirming the previously noted lower resolution of the wide-field microscope. Note that along the z-axis, for x = y = 0, the transfer function is zero everywhere except at the origin. This missing cone is a well-known problem that prevents optical sectioning using a wide-field microscope.
The two-dimensional optical transfer function at the focal plane can be calculated by integration of the 3D optical transfer function along the z-axis. Although the 3D transfer function of the wide-field microscope (b) is zero on the z-axis for z ≠ 0, its integral, the 2D optical transfer function, reaches a maximum at x = y = 0. This is only possible because the 3D optical transfer function diverges at the origin x = y = z = 0. The function values along the z-axis of the 3D optical transfer function correspond to the Dirac delta function.
Calculation
Most optical design software has functionality to compute the optical or modulation transfer function of a lens design. Ideal systems such as in the examples here are readily calculated numerically using software such as Julia, GNU Octave or Matlab, and in some specific cases even analytically. The optical transfer function can be calculated following two approaches:
as the Fourier transform of the incoherent point spread function, or
as the auto-correlation of the pupil function of the optical system
Mathematically both approaches are equivalent. Numeric calculations are typically most efficiently done via the Fourier transform; however, analytic calculation may be more tractable using the auto-correlation approach.
Example
Ideal lens system with circular aperture
Auto-correlation of the pupil function
Since the optical transfer function is the Fourier transform of the point spread function, and the point spread function is the squared absolute value of the inverse Fourier transform of the pupil function, the optical transfer function can also be calculated directly from the pupil function. From the convolution theorem it can be seen that the optical transfer function is in fact the autocorrelation of the pupil function.
The pupil function of an ideal optical system with a circular aperture is a disk of unit radius. The optical transfer function of such a system can thus be calculated geometrically from the intersecting area between two identical disks at a distance of 2\nu, where \nu is the spatial frequency normalized to the highest transmitted frequency. In general the optical transfer function is normalized to a maximum value of one for \nu = 0, so the resulting area should be divided by \pi.
The intersecting area can be calculated as the sum of the areas of two identical circular segments: \theta/2 - \sin(\theta)/2, where \theta is the circle segment angle. By substituting \nu = \cos(\theta/2), and using the equalities \sin(\theta)/2 = \sin(\theta/2)\cos(\theta/2) and \cos^{-1}(\nu) = \theta/2, the equation for the total area can be rewritten as 2\left(\arccos(\nu) - \nu\sqrt{1 - \nu^2}\right). Hence the normalized optical transfer function is given by:
\mathrm{OTF}(\nu) = \frac{2}{\pi}\left(\arccos(\nu) - \nu\sqrt{1 - \nu^2}\right), \qquad 0 \le \nu \le 1.
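This result is easy to check numerically. The Python sketch below is a minimal, illustrative verification (all names are assumptions): it computes the autocorrelation of a sampled circular pupil with FFTs and compares a cut through the center with the analytic expression above.

import numpy as np

n = 512
x = np.linspace(-2, 2, n)
X, Y = np.meshgrid(x, x)
pupil = (X**2 + Y**2 <= 1.0).astype(float)       # unit-radius circular aperture

# Autocorrelation via the Fourier transform; normalize so that OTF(0) = 1.
acf = np.real(np.fft.ifft2(np.abs(np.fft.fft2(pupil))**2))
acf = np.fft.fftshift(acf) / acf.max()

# Compare a cut through the center with the analytic formula (disk separation 2*nu).
nu = np.linspace(0.0, 0.9, 10)
analytic = (2 / np.pi) * (np.arccos(nu) - nu * np.sqrt(1 - nu**2))
dx = x[1] - x[0]
center = n // 2
numeric = acf[center, center + np.round(2 * nu / dx).astype(int)]
print(np.abs(numeric - analytic).max())          # small discretization error expected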
A more detailed discussion can be found in the references.
Numerical evaluation
The one-dimensional optical transfer function can be calculated as the discrete Fourier transform of the line spread function. This data is graphed against the spatial frequency data. In this case, a sixth order polynomial is fitted to the MTF vs. spatial frequency curve to show the trend. The spatial frequency at which the MTF drops to 50% (the 50% cutoff frequency) is then read from the fitted curve. Thus, the approximate position of best focus of the unit under test is determined from this data.
The Fourier transform of the line spread function (LSF) cannot, in general, be determined analytically from
\mathrm{MTF}(f) = \left| \int \mathrm{LSF}(x)\, e^{-i 2\pi f x}\, dx \right| .
Therefore, the Fourier transform is numerically approximated using the discrete Fourier transform:<ref>Chapra, S.C.; Canale, R.P. (2006). Numerical Methods for Engineers (5th ed.). New York, New York: McGraw-Hill</ref>
\mathrm{MTF}(f_k) = \left| \sum_{n=0}^{N-1} y_n\, e^{-i 2\pi k n / N} \right|
where
\mathrm{MTF}(f_k) = the value of the MTF at the kth spatial-frequency bin
N = number of data points
n = index
y_n = the nth term of the LSF data
x_n = n\,\Delta x = the position of the nth pixel, so that the kth bin corresponds to spatial frequency f_k = k/(N\,\Delta x)
The MTF is then plotted against spatial frequency and all relevant data concerning this test can be determined from that graph.
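A minimal numerical sketch of this procedure is given below, assuming a synthetic Gaussian LSF and illustrative names; it transforms the sampled LSF, normalizes the result, and reads off where the contrast first falls below 50%.

import numpy as np

dx = 1e-3                                   # sample spacing in mm (assumed 1 um pitch)
x = np.arange(-128, 128) * dx
lsf = np.exp(-0.5 * (x / 0.004)**2)         # synthetic Gaussian LSF, sigma = 4 um

mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                               # normalize so MTF(0) = 1
freqs = np.fft.rfftfreq(len(lsf), d=dx)     # cycles per mm

cutoff_idx = np.argmax(mtf < 0.5)           # first bin below 50% contrast
print(f"50% cutoff near {freqs[cutoff_idx]:.0f} cycles/mm")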
The vectorial transfer function
At high numerical apertures such as those found in microscopy, it is important to consider the vectorial nature of the fields that carry light. By decomposing the waves in three independent components corresponding to the Cartesian axes, a point spread function can be calculated for each component and combined into a vectorial point spread function. Similarly, a vectorial optical transfer function can be determined, as shown in the references.
Measurement
The optical transfer function is not only useful for the design of optical systems, it is also valuable to characterize manufactured systems.
Starting from the point spread function
The optical transfer function is defined as the Fourier transform of the impulse response of the optical system, also called the point spread function. The optical transfer function is thus readily obtained by first acquiring the image of a point source, and applying the two-dimensional discrete Fourier transform to the sampled image. Such a point-source can, for example, be a bright light behind a screen with a pin hole, a fluorescent or metallic microsphere, or simply a dot painted on a screen. Calculation of the optical transfer function via the point spread function is versatile as it can fully characterize optics with spatial varying and chromatic aberrations by repeating the procedure for various positions and wavelength spectra of the point source.
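In practice this amounts to little more than a normalized two-dimensional FFT of the recorded point-source image. The sketch below is illustrative only (a synthetic Gaussian stands in for a measured PSF, and all names are assumptions).

import numpy as np

n = 256
y, x = np.mgrid[-n//2:n//2, -n//2:n//2]
psf = np.exp(-(x**2 + y**2) / (2 * 3.0**2))    # stand-in for a measured point-source image

otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))
otf /= otf[n//2, n//2]                          # normalize so OTF(0) = 1
mtf = np.abs(otf)                               # modulation transfer function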
Using extended test objects for spatially invariant optics
When the aberrations can be assumed to be spatially invariant, alternative patterns can be used to determine the optical transfer function such as lines and edges. The corresponding transfer functions are referred to as the line-spread function and the edge-spread function, respectively. Such extended objects illuminate more pixels in the image, and can improve the measurement accuracy due to the larger signal-to-noise ratio. The optical transfer function is in this case calculated as the two-dimensional discrete Fourier transform of the image and divided by that of the extended object. Typically either a line or a black-white edge is used.
The line-spread function
The two-dimensional Fourier transform of a line through the origin is a line orthogonal to it and through the origin. The divisor is thus zero for all but a single dimension; by consequence, the optical transfer function can only be determined for a single dimension using a single line-spread function (LSF). If necessary, the two-dimensional optical transfer function can be determined by repeating the measurement with lines at various angles.
The line spread function can be found using two different methods. It can be found directly from an ideal line approximation provided by a slit test target or it can be derived from the edge spread function, discussed in the next sub section.
Edge-spread function
The two-dimensional Fourier transform of an edge is also only non-zero on a single line, orthogonal to the edge. This function is sometimes referred to as the edge spread function (ESF). However, the values on this line are inversely proportional to the distance from the origin. Although the measurement images obtained with this technique illuminate a large area of the camera, this mainly benefits the accuracy at low spatial frequencies. As with the line spread function, each measurement only determines a single axis of the optical transfer function; repeated measurements are thus necessary if the optical system cannot be assumed to be rotationally symmetric.
As shown in the right hand figure, an operator defines a box area encompassing the edge of a knife-edge test target image back-illuminated by a black body. The box area is defined to be approximately 10% of the total frame area. The image pixel data is translated into a two-dimensional array (pixel intensity and pixel position). The amplitude (pixel intensity) of each line within the array is normalized and averaged. This yields the edge spread function:
\mathrm{ESF}_i = \frac{X_i - \bar{X}}{\sigma}
where
ESF = the output array of normalized pixel intensity data
X = the input array of pixel intensity data
X_i = the ith element of X
\bar{X} = the average value of the pixel intensity data
\sigma = the standard deviation of the pixel intensity data
n = number of pixels used in the average
The line spread function is identical to the first derivative of the edge spread function, which is differentiated using numerical methods. In case it is more practical to measure the edge spread function, one can determine the line spread function as follows:
\mathrm{LSF}(x) = \frac{d}{dx}\,\mathrm{ESF}(x)
Typically the ESF is only known at discrete points, so the LSF is numerically approximated using the finite difference (a short numerical sketch follows the list of symbols below):
\mathrm{LSF}(x_i) \approx \frac{\mathrm{ESF}(x_{i+1}) - \mathrm{ESF}(x_i)}{x_{i+1} - x_i}
where:
i = the index
x_i = position of the ith pixel
\mathrm{ESF}(x_i) = the ESF value at the ith pixel
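The sketch below illustrates this edge-to-line-spread step numerically, using a synthetic, already-normalized edge profile; the pixel pitch and all names are illustrative assumptions.

import numpy as np

x = np.arange(-64, 64) * 2e-3                  # pixel positions in mm (assumed 2 um pitch)
esf = 0.5 * (1 + np.tanh(x / 0.01))            # synthetic, normalized edge profile

lsf = np.diff(esf) / np.diff(x)                # forward-difference derivative
x_mid = 0.5 * (x[:-1] + x[1:])                 # positions associated with each LSF sample
print(x_mid[np.argmax(lsf)])                   # LSF peaks at the edge location (close to 0)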
Using a grid of black and white lines
Although 'sharpness' is often judged on grid patterns of alternate black and white lines, it should strictly be measured using a sine-wave variation from black to white (a blurred version of the usual pattern). Where a square wave pattern is used (simple black and white lines) not only is there more risk of aliasing, but account must be taken of the fact that the fundamental component of a square wave is higher than the amplitude of the square wave itself (the harmonic components reduce the peak amplitude). A square wave test chart will therefore show optimistic results (better resolution of high spatial frequencies than is actually achieved). The square wave result is sometimes referred to as the 'contrast transfer function' (CTF).
Factors affecting MTF in typical camera systems
In practice, many factors result in considerable blurring of a reproduced image, such that patterns with spatial frequency just below the Nyquist rate may not even be visible, and the finest patterns that do appear may be 'washed out' as shades of grey, not black and white. A major factor is usually the impossibility of making the perfect 'brick wall' optical filter (often realized as a 'phase plate' or a lens with specific blurring properties in digital cameras and video camcorders). Such a filter is necessary to reduce aliasing by eliminating spatial frequencies above the Nyquist rate of the display.
Oversampling and downconversion to maintain the optical transfer function
The only way in practice to approach the theoretical sharpness possible in a digital imaging system such as a camera is to use more pixels in the camera sensor than samples in the final image, and 'downconvert' or 'interpolate' using special digital processing which cuts off high frequencies above the Nyquist rate to avoid aliasing whilst maintaining a reasonably flat MTF up to that frequency. This approach was first taken in the 1970s when flying spot scanners, and later CCD line scanners were developed, which sampled more pixels than were needed and then downconverted, which is why movies have always looked sharper on television than other material shot with a video camera. The only theoretically correct way to interpolate or downconvert is by use of a steep low-pass spatial filter, realized by convolution with a two-dimensional sin(x)/x weighting function which requires powerful processing. In practice, various mathematical approximations to this are used to reduce the processing requirement. These approximations are now implemented widely in video editing systems and in image processing programs such as Photoshop.
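As a rough, hedged sketch of the idea in one dimension (a brick-wall frequency-domain low-pass stands in for the sin(x)/x convolution; the function name, decimation factor, and scan-line data are all illustrative):

import numpy as np

def downconvert(row, factor):
    """Low-pass filter a 1-D scan line to its new Nyquist limit, then decimate."""
    spectrum = np.fft.rfft(row)
    keep = len(spectrum) // factor                  # frequencies below the new Nyquist rate
    spectrum[keep:] = 0.0                           # remove everything above it
    filtered = np.fft.irfft(spectrum, n=len(row))
    return filtered[::factor]                       # take every `factor`-th sample

fine = np.sin(2 * np.pi * 0.02 * np.arange(3840))   # oversampled scan line (3840 samples)
coarse = downconvert(fine, 2)                        # 1920-sample line, aliasing suppressed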
Just as standard definition video with a high contrast MTF is only possible with oversampling, so HD television with full theoretical sharpness is only possible by starting with a camera that has a significantly higher resolution, followed by digital filtering. With movies now being shot in 4K and even 8K video for the cinema, we can expect to see the best pictures on HDTV only from movies or material shot at the higher standard. However much we raise the number of pixels used in cameras, this will always remain true in the absence of a perfect optical spatial filter. Similarly, a 5-megapixel image obtained from a 5-megapixel still camera can never be sharper than a 5-megapixel image obtained after down-conversion from an equal quality 10-megapixel still camera. Because of the problem of maintaining a high contrast MTF, broadcasters like the BBC long considered retaining standard definition television but improving its quality by shooting and viewing with many more pixels (though as previously mentioned, such a system, though impressive, does ultimately lack the very fine detail which, though attenuated, enhances the effect of true HD viewing).
Another factor in digital cameras and camcorders is lens resolution. A lens may be said to 'resolve' 1920 horizontal lines, but this does not mean that it does so with full modulation from black to white. The 'modulation transfer function' (just a term for the magnitude of the optical transfer function with phase ignored) gives the true measure of lens performance, and is represented by a graph of amplitude against spatial frequency.
Lens aperture diffraction also limits MTF. Whilst reducing the aperture of a lens usually reduces aberrations and hence improves the flatness of the MTF, there is an optimum aperture for any lens and image sensor size beyond which smaller apertures reduce resolution because of diffraction, which spreads light across the image sensor. This was hardly a problem in the days of plate cameras and even 35 mm film, but has become an insurmountable limitation with the very small format sensors used in some digital cameras and especially video cameras. First generation HD consumer camcorders used 1/4-inch sensors, for which apertures smaller than about f4 begin to limit resolution. Even professional video cameras mostly use 2/3 inch sensors, prohibiting the use of apertures around f16 that would have been considered normal for film formats. Certain cameras (such as the Pentax K10D) feature an "MTF autoexposure" mode, where the choice of aperture is optimized for maximum sharpness. Typically this means somewhere in the middle of the aperture range.
Trend to large-format DSLRs and improved MTF potential
There has recently been a shift towards the use of large image format digital single-lens reflex cameras driven by the need for low-light sensitivity and narrow depth of field effects. This has led to such cameras becoming preferred by some film and television program makers over even professional HD video cameras, because of their 'filmic' potential. In theory, the use of cameras with 16- and 21-megapixel sensors offers the possibility of almost perfect sharpness by downconversion within the camera, with digital filtering to eliminate aliasing. Such cameras produce very impressive results, and appear to be leading the way in video production towards large-format downconversion with digital filtering becoming the standard approach to the realization of a flat MTF with true freedom from aliasing.
Digital inversion of the optical transfer function
Due to optical effects the contrast may be sub-optimal and approaches zero before the Nyquist frequency of the display is reached. The optical contrast reduction can be partially reversed by digitally amplifying spatial frequencies selectively before display or further processing. Although more advanced digital image restoration procedures exist, the Wiener deconvolution algorithm is often used for its simplicity and efficiency. Since this technique multiplies the spatial spectral components of the image, it also amplifies noise and errors due to e.g. aliasing. It is therefore only effective on good quality recordings with a sufficiently high signal-to-noise ratio.
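As an illustration of the idea, here is a minimal frequency-domain Wiener filter, assuming the system OTF is known and sampled on the same FFT grid as the image; the regularization constant k is a hypothetical noise-dependent tuning parameter, and all names are illustrative.

import numpy as np

def wiener_deconvolve(image, otf, k=0.01):
    """Amplify attenuated spatial frequencies: restoration filter G = OTF* / (|OTF|^2 + k)."""
    img_spectrum = np.fft.fft2(image)
    restore_filter = np.conj(otf) / (np.abs(otf)**2 + k)
    return np.real(np.fft.ifft2(img_spectrum * restore_filter))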
Limitations
In general, the point spread function, the image of a point source also depends on factors such as the wavelength (color), and field angle (lateral point source position). When such variation is sufficiently gradual, the optical system could be characterized by a set of optical transfer functions. However, when the image of the point source changes abruptly upon lateral translation, the optical transfer function does not describe the optical system accurately.
See also
Bokeh
Gamma correction
Minimum resolvable contrast
Minimum resolvable temperature difference
Optical resolution
Signal-to-noise ratio
Signal transfer function
Strehl ratio
Transfer function
Wavefront coding
References
External links
"Modulation transfer function", by Glenn D. Boreman on SPIE Optipedia.
"How to Measure MTF and other Properties of Lenses", by Optikos Corporation.
Transfer function |
PEX is cross-linked polyethylene, a form of polyethylene with cross-links.
PEX or Pex may also refer to:
Science and technology
Peer exchange, a method to gather peers for BitTorrent
PHIGS Extension to X, in programming
Pex (software), a unit testing framework for the .NET programming languages
Pex (company), a digital rights technology company
Physical examination, in medicine
Plasma exchange, a form of plasmapheresis
Pseudoexfoliation syndrome, condition related to glaucoma
Parasitic extraction, a tool in IC layout design used to extract parasitic elements and evaluate their effects on the circuit
Other
Palestine Exchange, a stock exchange
People Express Airlines (1980s)
PEX, operators of OPEX stock exchange
Pex, a character in the Doctor Who story Paradise Towers
See also
PXE (disambiguation) |
Microsoft XCPU, codenamed Xenon, is a CPU used in the Xbox 360 game console, designed to be used with ATI's Xenos graphics chip.
The processor was developed by Microsoft and IBM under the IBM chip program codenamed "Waternoose", which was named after the Monsters, Inc. character Henry J. Waternoose III. The development program was originally announced on November 3, 2003.
The processor is based on IBM PowerPC instruction set architecture. It consists of three independent processor cores on a single die. These cores are slightly modified versions of the PPE in the Cell processor used on the PlayStation 3. Each core has two symmetric hardware threads (SMT), for a total of six hardware threads available to games. Each individual core also includes 32 KB of L1 instruction cache and 32 KB of L1 data cache.
The XCPU processors were manufactured at IBM's East Fishkill, New York fabrication plant and Chartered Semiconductor Manufacturing (now part of GlobalFoundries) in Singapore. Chartered reduced the fabrication process in 2007 to 65 nm from 90 nm, thus reducing manufacturing costs for Microsoft.
Specifications
90 nm process, 65 nm process upgrade in 2007 (codenamed "Loki"), 45 nm process since Xbox 360 S model
165 million transistors
Three cores, each two-way SMT-capable and clocked at 3.2 GHz
SIMD: Two VMX128 units with a dedicated (128×128 bit) register file for each core, one for each thread
1 MB L2 cache (lockable by the GPU) running at half-speed (1.6 GHz) with a 256-bit bus
51.2 GB/s of L2 memory bandwidth (256 bit × 1600 MHz)
21.6 GB/s front-side bus (On the CPU side, this interfaces to a 1.35 GHz, 8B wide, FSB dataflow; on the GPU side, it connects to a 16B wide FSB dataflow running at 675 MHz.)
Dot product performance: 9.6 billion per second
In-order instruction execution
768 bits of IBM eFUSE-based OTP memory
ROM (and 64 KB SRAM) storing Microsoft's Secure Bootloader, and encryption hypervisor
Big-endian architecture
XCGPU
The Xbox 360 S introduced the XCGPU, which integrated the Xenon CPU and the Xenos GPU onto the same die, and the eDRAM into the same package. The XCGPU follows the trend started with the integrated EE+GS in PlayStation 2 Slimline, combining CPU, GPU, memory controllers and IO in a single cost-reduced chip. It also contains a "front side bus replacement block" that connects the CPU and GPU internally in exactly the same manner as the front side bus would have done when the CPU and GPU were separate chips, so that the XCGPU doesn't change the hardware characteristics of the Xbox 360.
XCGPU contains 372 million transistors and is manufactured by GlobalFoundries on a 45 nm process. Compared to the original chipset in the Xbox 360 the combined power requirements are reduced by 60% and the physical chip area by 50%.
Gallery
Illustrations of the different generations of processors in Xbox 360 and Xbox 360 S.
References
Xenon hardware overview by Pete Isensee, Development Lead, Xbox Advanced Technology Group, written some time before June 23, 2007
External links
Ars Technica explains the Xenon CPU
Xenon
PowerPC microprocessors
IBM microprocessors |
In the microelectronics industry, a semiconductor fabrication plant (commonly called a fab; sometimes foundry) is a factory for semiconductor device fabrication.
Fabs require many expensive devices to function. Estimates put the cost of building a new fab at over one billion U.S. dollars, with values as high as $3–4 billion not uncommon. TSMC invested $9.3 billion in its Fab15 300 mm wafer manufacturing facility in Taiwan. The same company's estimates suggest that a future fab might cost $20 billion. A foundry model emerged in the 1990s: companies that both designed and manufactured their own chips were known as integrated device manufacturers (IDMs). Companies that farmed out manufacturing of their designs to foundries were termed fabless semiconductor companies. Foundries that did not create their own designs were called pure-play semiconductor foundries.
The central part of a fab is the clean room, an area where the environment is controlled to eliminate all dust, since even a single speck can ruin a microcircuit, which has nanoscale features much smaller than dust particles. The clean room must also be damped against vibration to enable nanometer-scale alignment of machines and must be kept within narrow bands of temperature and humidity. Vibration control may be achieved by using deep piles in the cleanroom's foundation that anchor the cleanroom to the bedrock, careful selection of the construction site, and/or using vibration dampers. Controlling temperature and humidity is critical for minimizing static electricity. Corona discharge sources can also be used to reduce static electricity. Often, a fab will be constructed in the following manner, from top to bottom: the roof, which may contain air handling equipment that draws, purifies and cools outside air; an air plenum for distributing the air to several floor-mounted fan filter units, which are also part of the cleanroom's ceiling; the cleanroom itself, which may or may not have more than one story; a return air plenum; the clean subfab, which may contain support equipment for the machines in the cleanroom such as chemical delivery, purification, recycling and destruction systems; and the ground floor, which may contain electrical equipment. Fabs also often have some office space.
The clean room is where all fabrication takes place and contains the machinery for integrated circuit production such as steppers and/or scanners for photolithography, in addition to etching, cleaning, doping and dicing machines. All these devices are extremely precise and thus extremely expensive. Prices for most common pieces of equipment for the processing of 300 mm wafers range from $700,000 to upwards of $4,000,000 each with a few pieces of equipment reaching as high as $340,000,000 each (e.g. EUV scanners). A typical fab will have several hundred equipment items.
History
Typically an advance in chip-making technology requires a completely new fab to be built. In the past, the equipment to outfit a fab was not very expensive and there were a huge number of smaller fabs producing chips in small quantities. However, the cost of the most up-to-date equipment has since grown to the point where a new fab can cost several billion dollars.
Another side effect of the cost has been the challenge to make use of older fabs. For many companies these older fabs are useful for producing designs for unique markets, such as embedded processors, flash memory, and microcontrollers. However, for companies with more limited product lines, it is often best to either rent out the fab, or close it entirely. This is because the cost of upgrading an existing fab to produce devices requiring newer technology usually exceeds the cost of building a completely new fab.
There has been a trend to produce ever larger wafers, so each process step is being performed on more and more chips at once. The goal is to spread production costs (chemicals, fab time) over a larger number of saleable chips. It is impossible (or at least impracticable) to retrofit machinery to handle larger wafers. This is not to say that foundries using smaller wafers are necessarily obsolete; older foundries can be cheaper to operate, have higher yields for simple chips and still be productive.
The industry was aiming to move from the state-of-the-art wafer size 300 mm (12 in) to 450 mm by 2018. In March 2014, Intel expected 450 mm deployment by 2020. But in 2016, corresponding joint research efforts were stopped.
Additionally, there is a large push to completely automate the production of semiconductor chips from beginning to end. This is often referred to as the "lights-out fab" concept.
The International Sematech Manufacturing Initiative (ISMI), an extension of the US consortium SEMATECH, is sponsoring the "300 mm Prime" initiative. An important goal of this initiative is to enable fabs to produce greater quantities of smaller chips as a response to shorter lifecycles seen in consumer electronics. The logic is that such a fab can produce smaller lots more easily and can efficiently switch its production to supply chips for a variety of new electronic devices. Another important goal is to reduce the waiting time between processing steps.
See also
Foundry model for the business aspects of foundries and fabless companies
Klaiber's law
List of semiconductor fabrication plants
Rock's law
Semiconductor consolidation
Semiconductor device fabrication for the process of manufacturing devices
Notes
References
Handbook of Semiconductor Manufacturing Technology, Second Edition by Robert Doering and Yoshio Nishi (Hardcover – Jul 9, 2007)
Semiconductor Manufacturing Technology by Michael Quirk and Julian Serda (paperback – Nov 19, 2000)
Fundamentals of Semiconductor Manufacturing and Process Control by Gary S. May and Costas J. Spanos (hardcover – May 22, 2006)
The Essential Guide to Semiconductors (Essential Guide Series) by Jim Turley (paperback – Dec 29, 2002)
Semiconductor Manufacturing Handbook (McGraw–Hill Handbooks) by Hwaiyu Geng (hardcover – April 27, 2005)
Further reading
"Chip Makers Watch Their Waste", The Wall Street Journal, July 19, 2007, p.B3
Semiconductor device fabrication
Manufacturing plants |
ARM9 is a group of 32-bit RISC ARM processor cores licensed by ARM Holdings for microcontroller use. The ARM9 core family consists of ARM9TDMI, ARM940T, ARM9E-S, ARM966E-S, ARM920T, ARM922T, ARM946E-S, ARM9EJ-S, ARM926EJ-S, ARM968E-S, ARM996HS. Since ARM9 cores were released from 1998 to 2006, they are no longer recommended for new IC designs; instead, ARM Cortex-A, ARM Cortex-M, and ARM Cortex-R cores are preferred.
Overview
With this design generation, ARM moved from a von Neumann architecture (Princeton architecture) to a (modified; meaning split cache) Harvard architecture with separate instruction and data buses (and caches), significantly increasing its potential speed. Most silicon chips integrating these cores will package them as modified Harvard architecture chips, combining the two address buses on the other side of separated CPU caches and tightly coupled memories.
There are two subfamilies, implementing different ARM architecture versions.
Differences from ARM7 cores
Key improvements over ARM7 cores, enabled by spending more transistors, include:
Decreased heat production and lower overheating risk.
Clock frequency improvements. Shifting from a three-stage pipeline to a five-stage one lets the clock speed be approximately doubled, on the same silicon fabrication process.
Cycle count improvements. Many unmodified ARM7 binaries were measured as taking about 30% fewer cycles to execute on ARM9 cores. Key improvements include:
Faster loads and stores; many instructions now cost just one cycle. This is helped by both the modified Harvard architecture (reducing bus and cache contention) and the new pipeline stages.
Exposing pipeline interlocks, enabling compiler optimizations to reduce blockage between stages.
Additionally, some ARM9 cores incorporate "Enhanced DSP" instructions, such as a multiply-accumulate, to support more efficient implementations of digital signal processing algorithms.
Switching from a von Neumann architecture entailed using a non-unified cache, so that instruction fetches do not evict data (and vice versa). ARM9 cores have separate data and address bus signals, which chip designers use in various ways. In most cases they connect at least part of the address space in von Neumann style, used for both instructions and data, usually to an AHB interconnect connecting to a DRAM interface and an External Bus Interface usable with NOR flash memory. Such hybrids are no longer pure Harvard architecture processors.
ARM license
ARM Holdings neither manufactures nor sells CPU devices based on its own designs, but rather licenses the processor architecture to interested parties. ARM offers a variety of licensing terms, varying in cost and deliverables. To all licensees, ARM provides an integratable hardware description of the ARM core, as well as a complete software development toolset and the right to sell manufactured silicon containing the ARM CPU.
Silicon customization
Integrated device manufacturers (IDM) receive the ARM Processor IP as synthesizable RTL (written in Verilog). In this form, they have the ability to perform architectural level optimizations and extensions. This allows the manufacturer to achieve custom design goals, such as higher clock speed, very low power consumption, instruction set extensions, optimizations for size, debug support, etc. To determine which components have been included in a particular ARM CPU chip, consult the manufacturer datasheet and related documentation.
Cores
The ARM MPCore family of multicore processors support software written using either the asymmetric (AMP) or symmetric (SMP) multiprocessor programming paradigms. For AMP development, each central processing unit within the MPCore may be viewed as an independent processor and as such can follow traditional single processor development strategies.
ARM9TDMI
ARM9TDMI is a successor to the popular ARM7TDMI core, and is also based on the ARMv4T architecture. Cores based on it support both 32-bit ARM and 16-bit Thumb instruction sets and include:
ARM920T with 16 KB each of I/D cache and an MMU
ARM922T with 8 KB each of I/D cache and an MMU
ARM940T with cache and a Memory Protection Unit (MPU)
ARM9E-S and ARM9EJ-S
ARM9E, and its ARM9EJ sibling, implement the basic ARM9TDMI pipeline, but add support for the ARMv5TE architecture, which includes some DSP-esque instruction set extensions. In addition, the multiplier unit width has been doubled, halving the time required for most multiplication operations. They support 32-bit, 16-bit, and sometimes 8-bit instruction sets.
ARM926EJ-S with ARM Jazelle technology, which enables the direct execution of 8-bit Java bytecode in hardware, and an MMU
ARM946
ARM966
ARM968
The TI-Nspire CX (2011) and CX II (2019) graphing calculators use an ARM926EJ-S processor, clocked at 132 and 396 MHz respectively.
Chips
ARM920T
Atmel AT91RM9200
Cirrus Logic EP9315 ARM9 CPU, 200 MHz
NXP i.MX1
Samsung S3C2410, S3C2440, S3C2442, S3C2443
ARM922T
Micrel/Kendin KS8695
NXP LH7A4xx
ARM925T
Texas Instruments OMAP 1510
ARM926EJ-S
ASPEED AST2400
Cypress Semiconductor EZ-USB FX3
Microchip Technology (former Atmel) AT91SAM9260, AT91SAM9G, AT91SAM9M, AT91SAM9N/CN, AT91SAM9R/RL, AT91SAM9X, AT91SAM9XE (see AT91SAM9)
Nintendo Starlet (Wii coprocessor)
Nuvoton NUC900
NXP (former Freescale Semiconductor) i.MX2 Series, (see I.MX), LPC3100 and LPC3200 Series
Samsung S3C2412, S3C2416, S3C2450
STMicroelectronics Nomadik
Texas Instruments OMAP 850, 750, 733, 730, 5912 (also 5948, which is a customer specific version of it, made for Bosch), 1610
Texas Instruments Sitara AM1x, OMAP L137/L138, Davinci DA830/DA850/DM355/DM365
HP iLO 4 baseboard management controller
5V Technologies 5VT1310/1312/1314
STMicroelectronics SPEAr300/600
VIA WonderMedia 8505 and 8650
ARM940T
Conexant CX22490 STB SoC
ARM946E-S
Nintendo NTR-CPU (Nintendo DS CPU), TWL-CPU (Nintendo DSi CPU; same as the DS but clocked at 133 MHz instead of 67 MHz)
NXP Nexperia PNX5230
ARM966E-S
STMicroelectronics STR9
ARM968E-S
NXP Semiconductors LPC2900
Unreferenced ARM9 core
Anyka AK32xx
Atmel AT91CAP9
CSR Quatro 4300
Centrality Atlas III
Digi NS9215, NS9210
HiSilicon Kirin K3V1
Infineon Technologies S-GOLDlite PMB 8875
LeapFrog LF-1000
NXP Semiconductors (former Freescale Semiconductor) i.MX1x
MediaTek MT1000, MT6235-39, MT6268, MT6516
PRAGMATEC RABBITV3 (ARM920T rev 0 (v4l)) used in Karotz)
Qualcomm MSM6xxx
Qualcomm Atheros AR6400
Texas Instruments TMS320DM365/TMS320DM368 ARM9EJ-S
Zilog Encore! 32
Documentation
The amount of documentation for all ARM chips is daunting, especially for newcomers. The documentation for microcontrollers from past decades could easily fit in a single document, but as chips have evolved, so has the documentation grown. The total documentation is especially hard to grasp for all ARM chips since it consists of documents from the IC manufacturer and documents from the CPU core vendor (ARM Holdings).
A typical top-down documentation tree is: high-level marketing slides, datasheet for the exact physical chip, a detailed reference manual that describes common peripherals and other aspects of physical chips within the same series, reference manual for the exact ARM core processor within the chip, reference manual for the ARM architecture of the core which includes detailed description of all instruction sets.
Documentation tree (top to bottom)
IC manufacturer marketing slides.
IC manufacturer datasheets.
IC manufacturer reference manuals.
ARM core reference manuals.
ARM architecture reference manuals.
IC manufacturer has additional documents, including: evaluation board user manuals, application notes, getting started with development software, software library documents, errata, and more.
See also
ARM architecture
List of ARM architectures and cores
JTAG
Interrupt, Interrupt handler
Real-time operating system, Comparison of real-time operating systems
References
External links
ARM9 official documents
Architecture Reference Manual: ARMv4/5/6
Core Reference Manuals: ARM9E-S, ARM9EJ-S,ARM9TDMI,ARM920T,ARM922T,ARM926EJ-S,ARM940T,ARM946E-S,ARM966E-S,ARM968E-S
Coprocessor Reference Manuals: VFP9-S (Floating-Point), MOVE (MPEG4)
Quick Reference Cards
Instructions: Thumb (1), ARM and Thumb-2 (2), Vector Floating Point (3)
Opcodes: Thumb (1, 2), ARM (3, 4), GNU Assembler Directives 5.
ARM processors |
The Wigner distribution function (WDF) is used in signal processing as a transform in time-frequency analysis.
The WDF was first proposed in physics to account for quantum corrections to classical statistical mechanics in 1932 by Eugene Wigner, and it is of importance in quantum mechanics in phase space (see, by way of comparison: Wigner quasi-probability distribution, also called the Wigner function or the Wigner–Ville distribution).
Given the shared algebraic structure between position-momentum and time-frequency conjugate pairs, it also usefully serves in signal processing, as a transform in time-frequency analysis, the subject of this article. Compared to a short-time Fourier transform, such as the Gabor transform, the Wigner distribution function provides the finest joint temporal and frequency resolution that is mathematically possible within the limitations of the uncertainty principle. The downside is the introduction of large cross terms between every pair of signal components and between positive and negative frequencies, which makes the original formulation of the function a poor fit for most analysis applications. Subsequent modifications have been proposed which preserve the sharpness of the Wigner distribution function but largely suppress cross terms.
Mathematical definition
There are several different definitions for the Wigner distribution function. The definition given here is specific to time-frequency analysis. Given the time series x(t), its non-stationary auto-covariance function is given by
C_x(t_1, t_2) = \left\langle \left( x(t_1) - \mu(t_1) \right)\left( x(t_2) - \mu(t_2) \right)^* \right\rangle
where \langle \cdots \rangle denotes the average over all possible realizations of the process and \mu(t) is the mean, which may or may not be a function of time. The Wigner function W_x(t, f) is then given by first expressing the autocorrelation function in terms of the average time t = (t_1 + t_2)/2 and time lag \tau = t_1 - t_2, and then Fourier transforming the lag:
W_x(t, f) = \int_{-\infty}^{\infty} C_x\!\left(t + \tfrac{\tau}{2},\; t - \tfrac{\tau}{2}\right) e^{-i 2\pi \tau f}\, d\tau .
So for a single (mean-zero) time series, the Wigner function is simply given by
W_x(t, f) = \int_{-\infty}^{\infty} x\!\left(t + \tfrac{\tau}{2}\right) x^*\!\left(t - \tfrac{\tau}{2}\right) e^{-i 2\pi \tau f}\, d\tau .
The motivation for the Wigner function is that it reduces to the spectral density function at all times for stationary processes, yet it is fully equivalent to the non-stationary autocorrelation function. Therefore, the Wigner function tells us (roughly) how the spectral density changes in time.
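In the standard convention of time-frequency analysis, these definitions can be written out explicitly as follows; normalization and sign conventions vary between authors, so this is a representative form rather than a transcription of the original displayed formulas.

```latex
% Non-stationary autocovariance of a process x(t) with mean \mu(t)
C_x(t_1, t_2) = \left\langle \bigl(x(t_1) - \mu(t_1)\bigr)\,\bigl(x(t_2) - \mu(t_2)\bigr)^{*} \right\rangle

% Rewrite in terms of the average time t and the lag \tau, then Fourier
% transform the lag to obtain the Wigner function
W_x(t, f) = \int_{-\infty}^{\infty} C_x\!\left(t + \tfrac{\tau}{2},\; t - \tfrac{\tau}{2}\right) e^{-i 2\pi \tau f}\, d\tau

% For a single, mean-zero time series this reduces to
W_x(t, f) = \int_{-\infty}^{\infty} x\!\left(t + \tfrac{\tau}{2}\right) x^{*}\!\left(t - \tfrac{\tau}{2}\right) e^{-i 2\pi \tau f}\, d\tau
```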
Time-frequency analysis example
Here are some examples illustrating how the WDF is used in time-frequency analysis.
Constant input signal
When the input signal is constant, its time-frequency distribution is a horizontal line along the time axis. For example, if x(t) = 1, then
Sinusoidal input signal
When the input signal is a sinusoidal function, its time-frequency distribution is a horizontal line parallel to the time axis, displaced from it by the sinusoidal signal's frequency. For example, if , then
Chirp input signal
When the input signal is a linear chirp function, the instantaneous frequency is a linear function. This means that the time frequency distribution should be a straight line. For example, if
,
then its instantaneous frequency is
and its WDF
Delta input signal
When the input signal is a delta function, since it is only non-zero at t = 0 and contains infinite frequency components, its time-frequency distribution should be a vertical line through the origin. This means that the time-frequency distribution of the delta function should also be a delta function, as evaluating the WDF confirms.
The Wigner distribution function is best suited for time-frequency analysis when the input signal's phase is 2nd order or lower. For those signals, WDF can exactly generate the time frequency distribution of the input signal.
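As a concrete illustration of these examples, the following is a minimal sketch (not part of the original article) of a discrete pseudo Wigner–Ville distribution in Python; the function name and the chirp parameters are chosen purely for illustration.

```python
import numpy as np

def wigner_ville(x):
    """Discrete pseudo Wigner-Ville distribution of a 1-D complex signal.

    Returns an (N, N) array W[k, n]: frequency bin k, time index n.
    Because the lag is sampled at even steps, bin k corresponds roughly
    to normalized frequency k / (2N) cycles per sample.
    """
    x = np.asarray(x, dtype=complex)
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        tau_max = min(n, N - 1 - n)                 # stay inside the signal
        r = np.zeros(N, dtype=complex)
        for m in range(-tau_max, tau_max + 1):      # instantaneous autocorrelation
            r[m % N] = x[n + m] * np.conj(x[n - m])
        W[:, n] = np.fft.fft(r).real                # Fourier transform over the lag
    return W

# A linear chirp: its WDF concentrates along a straight line in the
# time-frequency plane, as described above (instantaneous frequency
# rising linearly with time).
t = np.arange(256)
chirp = np.exp(1j * 2 * np.pi * (0.05 * t + 0.0005 * t ** 2))
W = wigner_ville(chirp)
```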
Boxcar function
the rectangular function ⇒
Cross term property
The Wigner distribution function is not a linear transform. A cross term ("time beats") occurs when there is more than one component in the input signal, analogous in time to frequency beats. In the ancestral physics Wigner quasi-probability distribution, this term has important and useful physics consequences, required for faithful expectation values. By contrast, the short-time Fourier transform does not have this feature. Negative features of the WDF reflect the Gabor limit of the classical signal and are physically unrelated to any possible underlying quantum structure.
The following are some examples that exhibit the cross-term feature of the Wigner distribution function.
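The general mechanism behind such examples is the bilinearity of the transform: for a two-component signal the WDF contains, in addition to the two auto-terms, an oscillating interference term. In standard notation (a generic statement, not one of the article's specific worked examples):

```latex
W_{x_1 + x_2}(t, f) = W_{x_1}(t, f) + W_{x_2}(t, f)
    + 2\,\operatorname{Re}\bigl\{ W_{x_1, x_2}(t, f) \bigr\},
\qquad
W_{x_1, x_2}(t, f) = \int_{-\infty}^{\infty}
    x_1\!\left(t + \tfrac{\tau}{2}\right) x_2^{*}\!\left(t - \tfrac{\tau}{2}\right)
    e^{-i 2\pi \tau f}\, d\tau .
```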
In order to reduce the cross-term difficulty, several approaches have been proposed in the literature, some of them leading to new transforms as the modified Wigner distribution function, the Gabor–Wigner transform, the Choi-Williams distribution function and Cohen's class distribution.
Properties of the Wigner distribution function
The Wigner distribution function has several evident properties listed in the following table.
Projection property
Energy property
Recovery property
Mean condition frequency and mean condition time
Moment properties
Real properties
Region properties
Multiplication theorem
Convolution theorem
Correlation theorem
Time-shifting covariance
Modulation covariance
Scale covariance
Windowed Wigner Distribution Function
When a signal is not time-limited, its Wigner distribution function is hard to implement. Thus, a mask function is added to the integrand, so that only part of the original function needs to be evaluated instead of integrating all the way from negative infinity to positive infinity. The mask function is real and time-limited.
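With the conventions used earlier, one common way to write the masked form is the following, where w(τ) is a real mask that vanishes outside a finite lag interval; the exact placement of the mask may differ from the original article's formula.

```latex
W_x(t, f) = \int_{-\infty}^{\infty} w(\tau)\,
    x\!\left(t + \tfrac{\tau}{2}\right) x^{*}\!\left(t - \tfrac{\tau}{2}\right)
    e^{-i 2\pi \tau f}\, d\tau ,
\qquad w(\tau) = 0 \ \text{for}\ |\tau| > B .
```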
Implementation
According to definition:
Suppose that for for and
We take as example
where is a real function
And then we compare the difference between two conditions.
Ideal:
When the mask function equals 1 everywhere, there is effectively no mask function.
3 Conditions
Then we consider the condition with mask function:
We can see that the masked integrand has value only for lags between –B and B, so applying the mask can remove the cross term of the function. But if x(t) is neither a delta function nor a narrow-band signal, and is instead a signal with wide bandwidth or ripple, the edge of the signal may still lie between –B and B, which still causes the cross-term problem.
for example:
See also
Time-frequency representation
Short-time Fourier transform
Spectrogram
Gabor transform
Autocorrelation
Gabor–Wigner transform
Modified Wigner distribution function
Optical equivalence theorem
Polynomial Wigner–Ville distribution
Cohen's class distribution function
Wigner quasi-probability distribution
Transformation between distributions in time-frequency analysis
Bilinear time–frequency distribution
References
Further reading
J. Ville, 1948. "Théorie et Applications de la Notion de Signal Analytique", Câbles et Transmission, 2, 61–74 .
T. A. C. M. Classen and W. F. G. Mecklenbrauker, 1980. "The Wigner distribution-a tool for time-frequency signal analysis; Part I," Philips J. Res., vol. 35, pp. 217–250.
L. Cohen (1989): Proceedings of the IEEE 77 pp. 941–981, Time-frequency distributions---a review
L. Cohen, Time-Frequency Analysis, Prentice-Hall, New York, 1995.
S. Qian and D. Chen, Joint Time-Frequency Analysis: Methods and Applications, Chap. 5, Prentice Hall, N.J., 1996.
B. Boashash, "Note on the Use of the Wigner Distribution for Time Frequency Signal Analysis", IEEE Transactions on Acoustics, Speech, and Signal Processing, Vol. 36, No. 9, pp. 1518–1521, Sept. 1988. B. Boashash, editor, Time-Frequency Signal Analysis and Processing – A Comprehensive Reference, Elsevier Science, Oxford, 2003.
F. Hlawatsch, G. F. Boudreaux-Bartels: "Linear and quadratic time-frequency signal representation," IEEE Signal Processing Magazine, pp. 21–67, Apr. 1992.
R. L. Allen and D. W. Mills, Signal Analysis: Time, Frequency, Scale, and Structure, Wiley- Interscience, NJ, 2004.
Jian-Jiun Ding, Time frequency analysis and wavelet transform class notes, the Department of Electrical Engineering, National Taiwan University (NTU), Taipei, Taiwan, 2015.
Kakofengitis, D., & Steuernagel, O. (2017). "Wigner's quantum phase space current in weakly anharmonic weakly excited two-state systems" European Physical Journal Plus 14.07.2017
External links
Sonogram Visible Speech Under GPL Licensed Freeware for the visual extraction of the Wigner Distribution.
Signal processing
Transforms
The Jerusalem College of Technology - Lev Academic Center (JCT; ) is a private college in Israel, recognized by the Council for Higher Education, which specializes in providing high-level science and technology education to the Jewish community. More than 2,000 of JCT's 4,700 students are ultra-Orthodox, and the remainder of the students are from diverse segments of Israeli society including Ethiopian-Israelis, national religious and international students.
JCT's main campus ("Lev") is situated in the Givat Mordechai neighborhood of Jerusalem. Other branches are located in the Givat Shaul neighborhood ("Tal Campus") of Jerusalem and Ramat Gan ("Lustig Campus"). JCT offers bachelor's degrees and master's degrees in several fields of study combined with intensive Jewish studies.
History
The college, founded in 1969 by Professor Josef Haim Yakopow and Professor Ze'ev Lev, specializes in high-tech engineering, industrial management and life and health sciences. JCT is particularly known for its electro-optics faculty. The institution is fully accredited by the Council for Higher Education in Israel, the main authority overseeing Israel's academic institutions. Some 5,000 students are currently enrolled in JCT, with a faculty of over 500 professors, instructors and researchers. JCT's goal to bring higher education to under-served communities is most evident in their Program for Students from the Ethiopian Community and Haredi Integration programs.
JCT has separate campuses for men and women in order to allow the Orthodox and Haredi communities, who comprise the majority of its student body and insist on gender-separated classes, to study comfortably.
The college trains 20 percent of Israel's women engineers. One out of every five Israeli women studying for a BSc in computer science and/or software engineering does so at JCT, and 53 percent of the school's computer science students are women—18 percent higher than any other Israeli university.
Branches
The Jerusalem College of Technology comprises the following campuses:
Lev Campus - academic studies combined with yeshiva studies for men. This campus also includes the Naveh program for Haredi men.
Tal Campus - academic studies combined with midrasha (religious) studies for women. This campus also includes the Tvuna program for Haredi and Hassidic women.
Lustig Campus - founded in 1999 and geared toward Haredi women.
Degrees awarded
Bachelor of Science
Electronic Engineering
Applied Physics/Electro-Optical Engineering
Applied Physics/Medical Engineering
Software Engineering
Communication Systems Engineering
Computer science
Bioinformatics
Industrial Engineering and Marketing
Nursing (BSN)
Bachelor of Arts
Accounting & Information systems
Business Administration
Master's Degrees
(M.B.A.) - Business Administration
(M.Sc) - Telecommunications Systems Engineering
(M.Sc) - Physics/Electro-Optical Engineering
(MSN) - Nursing
Special Programs
The Reuven Surkis Program for Students from The Ethiopian Community
JCT was the pioneer among Israel's leading institutions of higher education in advancing the integration of Ethiopian immigrants. The Reuven Surkis Program for Students from The Ethiopian Community consists of a preparatory year program (Mechina) and a full degree program; most of the students studying in the full degree program participated first in the preparatory year program. The Reuven Surkis Program has produced 158 graduates.
Haredi Integration Program
The Center for Advancement of Haredim at JCT encourages Haredi men and women to pursue academic careers and consists, much like the program for the Ethiopian community of a preparatory year program (Mechina) and a full degree program. The Haredi Integration program has graduated thousands. There are currently more than 2,000 Haredi men and women studying towards degrees at JCT. According to Israel's Central Bureau of Statistics, about 50 percent of Haredi men in the country were employed by the end of 2017. JCT's Haredi graduates have attained an 89-percent employment rate, including 77 percent that are employed in their field of choice. Among the 1,000 Israeli Haredim who studied computer science in 2017, two-thirds of them studied at JCT.
International Program
The International Program in English at JCT is a three-year-long program with majors in Computer Science (Full-Time BSC), and Business Administration (Part-Time BA).
Cyber Elite
JCT's Cyber Elite program provides training to graduates in software engineering and computer science, while simultaneously placing them in cyber departments of multinational, aerospace and defense companies, and in cyber startups. This opens up the cyber field to the Haredi community and to others who previously experienced difficulty attaining cyber positions because they were not represented in cyber units within the Israel Defense Forces.
Nursing program
JCT's BSN (Bachelor of Science in Nursing) program accounts for 20 percent of all nursing students in Israel. The college's Nursing Department was awarded the Israeli Ministry of Health's National Prize for Excellence in 2018, ranking first among 24 departments nationwide in all measured criteria.
Israel's First Master's Program in Health Informatics
JCT's Nursing Department is launching Israel's first master's degree program in the growing field of Health Informatics, which focuses on managing and analyzing data to support the best clinical decisions and treatment for patients. Health informatics utilizes the study and application of clinical information and computer science to design and deploy effective technologies that support the delivery of health care services and improve information management.
JCT's health informatics program is open to registered nurses with a bachelor's degree and was developed with the assistance of the University of Toronto's Institute of Health Policy, Management and Evaluation, in addition to the support of the Canadian Friends of JCT. The certificate program that ran as a prelude to opening the MSc program completed its studies in April 2018, just as Israel's Council for Higher Education approved the Master of Health Informatics degree for the 2018–2019 academic year. The partnership between JCT and U of T was facilitated by Professor Judith Shamian, past president of the International Council of Nurses and a member of JCT's board of trustees.
See also
Education in Israel
List of universities and colleges in Israel
Science and technology in Israel
References
Colleges in Israel
Universities and colleges in Jerusalem
Educational institutions established in 1969
Judaism and science
1969 establishments in Israel
All electronic devices and circuitry generate excess heat and thus require thermal management to improve reliability and prevent premature failure. The amount of heat output is equal to the power input, if there are no other energy interactions. There are several techniques for cooling including various styles of heat sinks, thermoelectric coolers, forced air systems and fans, heat pipes, and others. In cases of extreme low environmental temperatures, it may actually be necessary to heat the electronic components to achieve satisfactory operation.
Overview
Thermal resistance of devices
This is usually quoted as the thermal resistance from junction to case of the semiconductor device. The units are °C/W. For example, a heatsink rated at 10 °C/W will get 10 °C hotter than the surrounding air when it dissipates 1 Watt of heat. Thus, a heatsink with a low °C/W value is more efficient than a heatsink with a high °C/W value.
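A minimal sketch of how these figures are used in practice, assuming an illustrative chain of junction-to-case, case-to-sink and sink-to-ambient resistances (all numbers are hypothetical, not taken from any datasheet):

```python
# Steady-state junction temperature from a chain of thermal resistances.
P = 5.0           # dissipated power, watts (assumed)
T_ambient = 40.0  # ambient air temperature, degC (assumed)
R_jc = 1.5        # junction-to-case resistance, degC/W (assumed)
R_cs = 0.3        # case-to-sink (interface material), degC/W (assumed)
R_sa = 4.0        # sink-to-ambient, degC/W (assumed)

T_junction = T_ambient + P * (R_jc + R_cs + R_sa)
print(T_junction)  # 40 + 5 * 5.8 = 69 degC
```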
Given two semiconductor devices in the same package, a lower junction-to-case thermal resistance (RθJ-C) indicates a more efficient device. However, when comparing two devices with different die-free package thermal resistances (e.g., DirectFET MT vs. wirebond 5x6mm PQFN), their junction-to-ambient or junction-to-case resistance values may not correlate directly to their comparative efficiencies. Different semiconductor packages may have different die orientations, different copper (or other metal) mass surrounding the die, different die attach mechanics, and different molding thickness, all of which could yield significantly different junction-to-case or junction-to-ambient resistance values, and could thus obscure overall efficiency numbers.
Thermal time constants
A heatsink's thermal mass can be considered as a capacitor (storing heat instead of charge) and the thermal resistance as an electrical resistance (giving a measure of how fast stored heat can be dissipated). Together, these two components form a thermal RC circuit with an associated time constant given by the product of R and C. This quantity can be used to calculate the dynamic heat dissipation capability of a device, in an analogous way to the electrical case.
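A minimal sketch of this thermal RC analogy, using hypothetical component values, is shown below; the exponential step response mirrors the charging of an electrical RC circuit.

```python
import math

R_th = 10.0   # thermal resistance, degC per watt (assumed)
C_th = 5.0    # thermal capacitance, joules per degC (assumed)
P = 2.0       # dissipated power, watts (assumed)
T_amb = 25.0  # ambient temperature, degC (assumed)
tau = R_th * C_th  # thermal time constant, seconds

def temperature(t):
    """Temperature after dissipating P watts for t seconds (single RC stage)."""
    return T_amb + P * R_th * (1.0 - math.exp(-t / tau))

print(temperature(tau))      # about 63% of the final 20 degC rise
print(temperature(5 * tau))  # essentially the steady-state value
```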
Thermal interface material
A thermal interface material or mastic (aka TIM) is used to fill the gaps between thermal transfer surfaces, such as between microprocessors and heatsinks, in order to increase thermal transfer efficiency.
Some thermal interface materials have a higher thermal conductivity in the Z-direction (through the joint) than in the xy-plane.
Applications
Personal computers
Due to recent technological developments and public interest, the retail heat sink market has reached an all-time high. In the early 2000s, CPUs were produced that emitted more and more heat, escalating requirements for quality cooling systems.
Overclocking has always meant greater cooling needs, and the inherently hotter chips meant more concerns for the enthusiast. Efficient heat sinks are vital to overclocked computer systems because the higher a microprocessor's cooling rate, the faster the computer can operate without instability; generally, faster operation leads to higher performance. Many companies now compete to offer the best heat sink for PC overclocking enthusiasts. Prominent aftermarket heat sink manufacturers include: Aero Cool, Foxconn, Thermalright, Thermaltake, Swiftech, and Zalman.
Soldering
Temporary heat sinks were sometimes used while soldering circuit boards, preventing excessive heat from damaging sensitive nearby electronics. In the simplest case, this means partially gripping a component using a heavy metal crocodile clip or similar clamp. Modern semiconductor devices, which are designed to be assembled by reflow soldering, can usually tolerate soldering temperatures without damage. On the other hand, electrical components such as magnetic reed switches can malfunction if exposed to higher powered soldering irons, so this practice is still very much in use.
Batteries
In batteries used for electric vehicles, nominal battery performance is usually specified for working temperatures somewhere in the +20 °C to +30 °C range; however, the actual performance can deviate substantially from this if the battery is operated at higher or, in particular, lower temperatures, so some electric cars have heating and cooling for their batteries.
Methodologies
Heat sinks
Heat sinks are widely used in electronics and have become essential to modern microelectronics. In common use, it is a metal object brought into contact with an electronic component's hot surface—though in most cases, a thin thermal interface material mediates between the two surfaces. Microprocessors and power handling semiconductors are examples of electronics that need a heat sink to reduce their temperature through increased thermal mass and heat dissipation (primarily by conduction and convection and to a lesser extent by radiation). Heat sinks have become almost essential to modern integrated circuits like microprocessors, DSPs, GPUs, and more.
A heat sink usually consists of a metal structure with one or more flat surfaces to ensure good thermal contact with the components to be cooled, and an array of comb or fin like protrusions to increase the surface contact with the air, and thus the rate of heat dissipation.
A heat sink is sometimes used in conjunction with a fan to increase the rate of airflow over the heat sink. This maintains a larger temperature gradient by replacing warmed air faster than convection would. This is known as a forced air system.
Cold plate
Placing a conductive thick metal plate, referred to as a cold plate, as a heat-transfer interface between a heat source and a cold flowing fluid (or any other heat sink) may improve the cooling performance. In this arrangement, the heat source is cooled under the thick plate instead of being cooled in direct contact with the cooling fluid. It has been shown that the thick plate can significantly improve the heat transfer between the heat source and the cooling fluid by conducting the heat current in an optimal manner. The two most attractive advantages of this method are that it requires no additional pumping power and no extra heat-transfer surface area, which is quite different from fins (extended surfaces).
Principle
Heat sinks function by efficiently transferring thermal energy ("heat") from an object at high temperature to a second object at a lower temperature with a much greater heat capacity. This rapid transfer of thermal energy quickly brings the first object into thermal equilibrium with the second, lowering the temperature of the first object, fulfilling the heat sink's role as a cooling device. Efficient function of a heat sink relies on rapid transfer of thermal energy from the first object to the heat sink, and the heat sink to the second object.
The most common design of a heat sink is a metal device with many fins. The high thermal conductivity of the metal combined with its large surface area result in the rapid transfer of thermal energy to the surrounding, cooler, air. This cools the heat sink and whatever it is in direct thermal contact with. Use of fluids (for example coolants in refrigeration) and thermal interface material (in cooling electronic devices) ensures good transfer of thermal energy to the heat sink. Similarly, a fan may improve the transfer of thermal energy from the heat sink to the air.
Construction and materials
A heat sink usually consists of a base with one or more flat surfaces and an array of comb or fin-like protrusions to increase the heat sink's surface area contacting the air, and thus increasing the heat dissipation rate. While a heat sink is a static object, a fan often aids a heat sink by providing increased airflow over the heat sink—thus maintaining a larger temperature gradient by replacing the warmed air more quickly than passive convection achieves alone—this is known as a forced-air system.
Ideally, heat sinks are made from a good thermal conductor such as silver, gold, copper, or aluminum alloy. Copper and aluminum are among the most-frequently used materials for this purpose within electronic devices. Copper (401 W/(m·K) at 300 K) is significantly more expensive than aluminum (237 W/(m·K) at 300 K) but is also roughly twice as efficient as a thermal conductor. Aluminum has the significant advantage that it can be easily formed by extrusion, thus making complex cross-sections possible. Aluminum is also much lighter than copper, offering less mechanical stress on delicate electronic components. Some heat sinks made from aluminum have a copper core as a trade off. The heat sink's contact surface (the base) must be flat and smooth to ensure the best thermal contact with the object needing cooling. Frequently a thermally conductive grease is used to ensure optimal thermal contact; such compounds often contain colloidal silver. Further, a clamping mechanism, screws, or thermal adhesive hold the heat sink tightly onto the component, but specifically without pressure that would crush the component.
Performance
Heat sink performance (including free convection, forced convection, liquid cooling, and any combination thereof) is a function of material, geometry, and the overall surface heat transfer coefficient. Generally, forced convection heat sink thermal performance is improved by increasing the thermal conductivity of the heat sink materials, increasing the surface area (usually by adding extended surfaces such as fins or foam metal), and by increasing the overall heat transfer coefficient (usually by increasing fluid velocity with fans, pumps, etc.).
Online heat sink calculators from companies such as Novel Concepts, Inc. and at www.heatsinkcalculator.com can accurately estimate forced and natural convection heat sink performance. For more complex heat sink geometries, or heat sinks with multiple materials or multiple fluids, computational fluid dynamics (CFD) analysis is recommended.
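For a first-order feel of the relationship described above, a sink-to-ambient resistance can be approximated as the reciprocal of the product of the heat transfer coefficient and the wetted surface area; the sketch below uses assumed, order-of-magnitude numbers rather than values from any calculator or vendor.

```python
# R_sa ~= 1 / (h * A): more area or a higher heat transfer coefficient
# (e.g., from forced airflow) lowers the thermal resistance.
area = 0.02        # total fin surface area, m^2 (assumed)
h_natural = 10.0   # W/(m^2*K), rough order for natural convection (assumed)
h_forced = 50.0    # W/(m^2*K), rough order with a small fan (assumed)

R_natural = 1.0 / (h_natural * area)  # 5.0 degC/W
R_forced = 1.0 / (h_forced * area)    # 1.0 degC/W
print(R_natural, R_forced)
```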
Convective air cooling
This term describes device cooling by the convection currents of the warm air being allowed to escape the confines of the component to be replaced by cooler air. Since warm air normally rises, this method usually requires venting at the top or sides of the casing to be effective.
Forced air cooling
If there is more air being forced into a system than being pumped out (due to an imbalance in the number of fans), this is referred to as a 'positive' airflow, as the pressure inside the unit is higher than outside.
A balanced or neutral airflow is the most efficient, although a slightly positive airflow can result in less dust build-up if filtered properly.
Heat pipes
A heat pipe is a heat transfer device that uses evaporation and condensation of a two-phase "working fluid" or coolant to transport large quantities of heat with a very small difference in temperature between the hot and cold interfaces. A typical heat pipe consists of sealed hollow tube made of a thermoconductive metal such as copper or aluminium, and a wick to return the working fluid from the evaporator to the condenser. The pipe contains both saturated liquid and vapor of a working fluid (such as water, methanol or ammonia), all other gases being excluded. The most common heat pipe for electronics thermal management has a copper envelope and wick, with water as the working fluid. Copper/methanol is used if the heat pipe needs to operate below the freezing point of water, and aluminum/ammonia heat pipes are used for electronics cooling in space.
The advantage of heat pipes is their great efficiency in transferring heat. The thermal conductivity of heat pipes can be as high as 100,000 W/m K, in contrast to copper, which has a thermal conductivity of around 400 W/m K.
Peltier cooling plates
Peltier cooling plates take advantage of the Peltier effect to create a heat flux between the junction of two different conductors of electricity by applying an electric current. This effect is commonly used for cooling electronic components and small instruments. In practice, many such junctions may be arranged in series to increase the effect to the amount of heating or cooling required.
There are no moving parts, so a Peltier plate is maintenance free. It has a relatively low efficiency, so thermoelectric cooling is generally used for electronic devices, such as infra-red sensors, that need to operate at temperatures below ambient. For cooling these devices, the solid state nature of the Peltier plates outweighs their poor efficiency. Thermoelectric junctions are typically around 10% as efficient as the ideal Carnot cycle refrigerator, compared with 40% achieved by conventional compression cycle systems.
Synthetic jet air cooling
A synthetic jet is produced by a continual flow of vortices that are formed by alternating brief ejection and suction of air across an opening such that the net mass flux is zero. A unique feature of these jets is that they are formed entirely from the working fluid of the flow system in which they are deployed, and can thus produce a net momentum to the flow of a system without net mass injection to the system.
Synthetic jet air movers have no moving parts and are thus maintenance-free. Due to their high heat transfer coefficients and high reliability but lower overall flow rates, synthetic jet air movers are usually used at the chip level rather than the system level for cooling. However, depending on the size and complexity of the system, they can sometimes be used for both.
Electrostatic fluid acceleration
An electrostatic fluid accelerator (EFA) is a device which pumps a fluid such as air without any moving parts. Instead of using rotating blades, as in a conventional fan, an EFA uses an electric field to propel electrically charged air molecules. Because air molecules are normally neutrally charged, the EFA has to create some charged molecules, or ions, first. Thus there are three basic steps in the fluid acceleration process: ionize air molecules, use those ions to push many more neutral molecules in a desired direction, and then recapture and neutralize the ions to eliminate any net charge.
The basic principle has been understood for some time, but only in recent years have there been developments in the design and manufacture of EFA devices that may allow them to find practical and economical applications, such as in micro-cooling of electronics components.
Recent developments
More recently, high thermal conductivity materials such as synthetic diamond and boron arsenide cooling sinks are being researched to provide better cooling. Boron arsenide has been reported with high thermal conductivity and high thermal boundary conductance with gallium nitride transistors and thus better performance than diamond and silicon carbide cooling technologies. Also, some heat sinks are constructed of multiple materials with desirable characteristics, such as phase change materials, which can store a great deal of energy due to their heat of fusion.
Thermal simulation of electronics
Thermal simulations give engineers a visual representation of the temperature and airflow inside the equipment. Thermal simulations enable engineers to design the cooling system; to optimise a design to reduce power consumption, weight and cost; and to verify the thermal design to ensure there are no issues when the equipment is built. Most thermal simulation software uses Computational fluid dynamics techniques to predict temperature and airflow of an electronics system.
Design
Thermal simulation is often required to determine how to effectively cool components within design constraints. Simulation enables the design and verification of the thermal design of the equipment at a very early stage and throughout the design of the electronic and mechanical parts. Designing with thermal properties in mind from the start reduces the risk of last minute design changes to fix thermal issues.
Using thermal simulation as part of the design process enables the creation of an optimal and innovative product design that performs to specification and meets customers' reliability requirements.
Optimise
It is easy to design a cooling system for almost any equipment if there is unlimited space, power and budget. However, the majority of equipment will have a rigid specification that leaves a limited margin for error. There is a constant pressure to reduce power requirements, system weight and cost parts, without compromising performance or reliability. Thermal simulation allows experimentation with optimisation, such as modifying heatsink geometry or reducing fan speeds in a virtual environment, which is faster, cheaper and safer than physical experiment and measurement.
Verify
Traditionally, the first time the thermal design of the equipment is verified is after a prototype has been built. The device is powered up, perhaps inside an environmental chamber, and temperatures of the critical parts of the system are measured using sensors such as thermocouples. If any problems are discovered, the project is delayed while a solution is sought. A change to the design of a PCB or enclosure part may be required to fix the issue, which will take time and cost a significant amount of money. If thermal simulation is used as part of the design process of the equipment, thermal design issue will be identified before a prototype is built. Fixing an issue at the design stage is both quicker and cheaper than modifying the design after a prototype is created.
Software
There is a wide range of software tools designed for thermal simulation of electronics, including 6SigmaET, Ansys' IcePak and Mentor Graphics' FloTHERM.
Telecommunications environments
Thermal management measures must be taken to accommodate high heat release equipment in telecommunications rooms. Generic supplemental/spot cooling techniques, as well as turnkey cooling solutions developed by equipment manufacturers are viable solutions. Such solutions could allow very high heat release equipment to be housed in a central office that has a heat density at or near the cooling capacity available from the central air handler.
According to Telcordia GR-3028, Thermal Management in Telecommunications Central Offices, the most common way of cooling modern telecommunications equipment internally is by utilizing multiple high-speed fans to create forced convection cooling. Although direct and indirect liquid cooling may be introduced in the future, the current design of new electronic equipment is geared towards maintaining air as the cooling medium.
A well-developed "holistic" approach is required to understand current and future thermal management problems. Space cooling on one hand, and equipment cooling on the other, cannot be viewed as two isolated parts of the overall thermal challenge. The main purpose of an equipment facility's air-distribution system is to distribute conditioned air in such a way that the electronic equipment is cooled effectively. The overall cooling efficiency depends on how the air distribution system moves air through the equipment room, how the equipment moves air through the equipment frames, and how these airflows interact with one another. High heat-dissipation levels rely heavily on a seamless integration of equipment-cooling and room-cooling designs.
The existing environmental solutions in telecommunications facilities have inherent limitations. For example, most mature central offices have limited space available for large air duct installations that are required for cooling high heat density equipment rooms. Furthermore, steep temperature gradients develop quickly should a cooling outage occur; this has been well documented through computer modeling and direct measurements and observations. Although environmental backup systems may be in place, there are situations when they will not help. In a recent case, telecommunications equipment in a major central office was overheated, and critical services were interrupted by a complete cooling shut down initiated by a false smoke alarm.
A major obstacle for effective thermal management is the way heat-release data is currently reported. Suppliers generally specify the maximum (nameplate) heat release from the equipment. In reality, equipment configuration and traffic diversity will result in significantly lower heat release numbers.
Equipment cooling classes
As stated in GR-3028, most equipment environments maintain cool front (maintenance) aisles and hot rear (wiring) aisles, where cool supply air is delivered to the front aisles and hot air is removed from the rear aisles. This scheme provides multiple benefits, including effective equipment cooling and high thermal efficiency.
In the traditional room cooling class utilized by the majority of service providers, equipment cooling would benefit from air intake and exhaust locations that help move air from the front aisle to the rear aisle. The traditional front-bottom to top-rear pattern, however, has been replaced in some equipment with other airflow patterns that may not ensure adequate equipment cooling in high heat density areas.
A classification of equipment (shelves and cabinets) into Equipment-Cooling (EC) classes serves the purpose of classifying the equipment with regard to the cooling air intake and hot air exhaust locations, i.e., the equipment airflow schemes or protocols.
The EC-Class syntax provides a flexible and important “common language.” It is used for developing Heat-Release Targets (HRTs), which are important for network reliability, equipment and space planning, and infrastructure capacity planning. HRTs take into account physical limitations of the environment and environmental baseline criteria, including the supply airflow capacity, air diffusion into the equipment space, and air-distribution/equipment interactions. In addition to being used for developing the HRTs, the EC Classification can be used to show compliance on product sheets, provide internal design specifications, or specify requirements in purchase orders.
The Room-Cooling classification (RC-Class) refers to the way the overall equipment space is air-conditioned (cooled). The main purpose of RC-Classes is to provide a logical classification and description of legacy and non-legacy room-cooling schemes or protocols in the central office environment. In addition to being used for developing HRTs, the RC-classification can be used in internal central office design specifications or in purchase orders.
Supplemental-Cooling classes (SC-Class) provide a classification of supplemental cooling techniques. Service providers use supplemental/spot-cooling solutions to supplement the cooling capacity (e.g., to treat occurrences of "hot spots") provided by the general room-cooling protocol as expressed by the RC-Class.
Economic impact
Energy consumption by telecommunications equipment currently accounts for a high percentage of the total energy consumed in central offices. Most of this energy is subsequently released as heat to the surrounding equipment space. Since most of the remaining central office energy use goes to cool the equipment room, the economic impact of making the electronic equipment energy-efficient would be considerable for companies that use and operate telecommunications equipment. It would reduce capital costs for support systems, and improve thermal conditions in the equipment room.
See also
Heat generation in integrated circuits
Thermal resistance in electronics
Thermal management of high-power LEDs
Thermal design power
Heat pipe
Computer cooling
Radiator
Active cooling
References
Further reading
External links
Computer hardware cooling
Electronic design
A Wallace multiplier is a hardware implementation of a binary multiplier, a digital circuit that multiplies two integers. It uses a selection of full and half adders (the Wallace tree or Wallace reduction) to sum partial products in stages until two numbers are left. Wallace multipliers reduce as much as possible on each layer, whereas Dadda multipliers try to minimize the required number of gates by postponing the reduction to the upper layers.
Wallace multipliers were devised by the Australian computer scientist Chris Wallace in 1964.
The Wallace tree has three steps:
Multiply each bit of one of the arguments, by each bit of the other.
Reduce the number of partial products to two by layers of full and half adders.
Group the wires in two numbers, and add them with a conventional adder.
Compared to naively adding partial products with regular adders, the benefit of the Wallace tree is its faster speed. It has reduction layers, but each layer has only propagation delay. A naive addition of partial products would require time.
As making the partial products is and the final addition is , the total multiplication is , not much slower than addition. From a complexity theoretic perspective, the Wallace tree algorithm puts multiplication in the class NC1.
The downside of the Wallace tree, compared to naive addition of partial products, is its much higher gate count.
These computations only consider gate delays and don't deal with wire delays, which can also be very substantial.
The Wallace tree can be also represented by a tree of 3/2 or 4/2 adders.
It is sometimes combined with Booth encoding.
Detailed explanation
The Wallace tree is a variant of long multiplication. The first step is to multiply each digit (each bit) of one factor by each digit of the other. Each of these partial products has a weight equal to the product of the weights of its factors. The final product is calculated by the weighted sum of all these partial products.
The first step, as said above, is to multiply each bit of one number by each bit of the other, which is accomplished as a simple AND gate, resulting in bits; the partial product of bits by has weight
In the second step, the resulting bits are reduced to two numbers; this is accomplished as follows:
As long as there are three or more wires with the same weight, add a following layer:
Take any three wires with the same weights and input them into a full adder. The result will be an output wire of the same weight and an output wire with a higher weight for each three input wires.
If there are two wires of the same weight left, input them into a half adder.
If there is just one wire left, connect it to the next layer.
In the third and final step, the two resulting numbers are fed to an adder, obtaining the final product.
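The three steps can be simulated in software. The sketch below is an illustrative model of the reduction (the names and the 4-bit width are arbitrary), not a description of any particular hardware implementation.

```python
def wallace_multiply(a, b, width=4):
    """Simulate a Wallace-tree multiply of two unsigned integers.

    Bits are grouped by weight and reduced layer by layer with full
    adders (three wires in, sum + carry out) and half adders (two wires
    in) until at most two wires remain per weight; the two resulting
    numbers are then added conventionally.
    """
    # Step 1: one AND gate per pair of bits.
    columns = {}  # weight -> list of bits
    for i in range(width):
        for j in range(width):
            bit = ((a >> i) & 1) & ((b >> j) & 1)
            columns.setdefault(i + j, []).append(bit)

    # Step 2: reduction layers.
    while any(len(bits) > 2 for bits in columns.values()):
        next_cols = {}
        for w, bits in sorted(columns.items()):
            while len(bits) >= 3:          # full adder
                x, y, z = bits.pop(), bits.pop(), bits.pop()
                next_cols.setdefault(w, []).append(x ^ y ^ z)
                next_cols.setdefault(w + 1, []).append((x & y) | (x & z) | (y & z))
            if len(bits) == 2:             # half adder
                x, y = bits.pop(), bits.pop()
                next_cols.setdefault(w, []).append(x ^ y)
                next_cols.setdefault(w + 1, []).append(x & y)
            elif len(bits) == 1:           # pass the lone wire through
                next_cols.setdefault(w, []).append(bits.pop())
        columns = next_cols

    # Step 3: group the wires into two numbers and add them.
    n1 = sum(bits[0] << w for w, bits in columns.items())
    n2 = sum(bits[1] << w for w, bits in columns.items() if len(bits) == 2)
    return n1 + n2

assert wallace_multiply(11, 13) == 143
```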
Example
, multiplying by :
First we multiply every bit by every bit:
weight 1 –
weight 2 – ,
weight 4 – , ,
weight 8 – , , ,
weight 16 – , ,
weight 32 – ,
weight 64 –
Reduction layer 1:
Pass the only weight-1 wire through, output: 1 weight-1 wire
Add a half adder for weight 2, outputs: 1 weight-2 wire, 1 weight-4 wire
Add a full adder for weight 4, outputs: 1 weight-4 wire, 1 weight-8 wire
Add a full adder for weight 8, and pass the remaining wire through, outputs: 2 weight-8 wires, 1 weight-16 wire
Add a full adder for weight 16, outputs: 1 weight-16 wire, 1 weight-32 wire
Add a half adder for weight 32, outputs: 1 weight-32 wire, 1 weight-64 wire
Pass the only weight-64 wire through, output: 1 weight-64 wire
Wires at the output of reduction layer 1:
weight 1 – 1
weight 2 – 1
weight 4 – 2
weight 8 – 3
weight 16 – 2
weight 32 – 2
weight 64 – 2
Reduction layer 2:
Add a full adder for weight 8, and half adders for weights 4, 16, 32, 64
Outputs:
weight 1 – 1
weight 2 – 1
weight 4 – 1
weight 8 – 2
weight 16 – 2
weight 32 – 2
weight 64 – 2
weight 128 – 1
Group the wires into a pair of integers and use a conventional adder to add them.
See also
Dadda tree
References
Further reading
External links
Generic VHDL Implementation of Wallace Tree Multiplier.
Arithmetic logic circuits
Computer arithmetic
Multiplication
1964 introductions
1964 in science
LIGA is a fabrication technology used to create high-aspect-ratio microstructures. The term is a German acronym for Lithographie, Galvanoformung, Abformung – lithography, electroplating, and molding.
Overview
The LIGA process consists of three main processing steps: lithography, electroplating, and molding.
There are two main LIGA-fabrication technologies, X-Ray LIGA, which uses X-rays produced by a synchrotron to create high aspect ratio structures, and UV LIGA, a more accessible method which uses ultraviolet light to create structures with relatively low aspect ratios.
Notable characteristics of X-ray LIGA-fabricated structures include:
high aspect ratios on the order of 100:1
parallel side walls with a flank angle on the order of 89.95°
smooth side walls suitable for optical mirrors
structural heights from tens of micrometers to several millimeters
structural details on the order of micrometers over distances of centimeters
X-Ray LIGA
X-Ray LIGA is a fabrication process in microtechnology that was developed in the early 1980s by a team under the leadership of Erwin Willy Becker and Wolfgang Ehrfeld at the Institute for Nuclear Process Engineering (Institut für Kernverfahrenstechnik, IKVT) at the Karlsruhe Nuclear Research Center, since renamed to the Institute for Microstructure Technology (Institut für Mikrostrukturtechnik, IMT) at the Karlsruhe Institute of Technology (KIT).
LIGA was one of the first major techniques to allow on-demand manufacturing of high-aspect-ratio structures (structures that are much taller than wide) with lateral precision below one micrometer.
In the process, an X-ray sensitive polymer photoresist, typically PMMA, bonded to an electrically conductive substrate, is exposed to parallel beams of high-energy X-rays from a synchrotron radiation source through a mask partly covered with a strong X-ray absorbing material. Chemical removal of exposed (or unexposed) photoresist results in a three-dimensional structure, which can be filled by the electrodeposition of metal. The resist is chemically stripped away to produce a metallic mold insert. The mold insert can be used to produce parts in polymers or ceramics through injection molding.
The LIGA technique's unique value is the precision obtained by the use of deep X-ray lithography (DXRL). The technique enables microstructures with high aspect ratios and high precision to be fabricated in a variety of materials (metals, plastics, and ceramics). Many of its practitioners and users are associated with or are located close to synchrotron facilities.
UV LIGA
UV LIGA utilizes an inexpensive ultraviolet light source, like a mercury lamp, to expose a polymer photoresist, typically SU-8. Because heating and transmittance are not an issue in optical masks, a simple chromium mask can be substituted for the technically sophisticated X-ray mask. These reductions in complexity make UV LIGA much cheaper and more accessible than its X-ray counterpart. However, UV LIGA is not as effective at producing precision molds and is thus used when cost must be kept low and very high aspect ratios are not required.
Process details
Mask
X-ray masks are composed of a transparent, low-Z carrier, a patterned high-Z absorber, and a metallic ring for alignment and heat removal. Due to extreme temperature variations induced by the X-ray exposure, carriers are fabricated from materials with high thermal conductivity to reduce thermal gradients. Currently, vitreous carbon and graphite are considered the best material, as their use significantly reduces side-wall roughness. Silicon, silicon nitride, titanium, and diamond are also in use as carrier substrates but not preferred, as the required thin membranes are comparatively fragile and titanium masks tend to round sharp features due to edge fluorescence. Absorbers are gold, nickel, copper, tin, lead, and other X-ray absorbing metals.
Masks can be fabricated in several fashions. The most accurate and expensive masks are those created by electron beam lithography, which provides resolutions as fine as in resist thick and features in resist thick. An intermediate method is the plated photomask which provides resolution and can be outsourced at a cost on the order of $1000 per mask. The least expensive method is a direct photomask, which provides resolution in resist thick. In summary, masks can cost between $1000 and $20,000 and take between two weeks and three months for delivery. Due to the small size of the market, each LIGA group typically has its own mask-making capability. Future trends in mask creation include larger formats, from a diameter of to , and smaller feature sizes.
Substrate
The starting material is a flat substrate, such as a silicon wafer or a polished disc of beryllium, copper, titanium, or other material. The substrate, if not already electrically conductive, is covered with a conductive plating base, typically through sputtering or evaporation.
The fabrication of high-aspect-ratio structures requires the use of a photoresist able to form a mold with vertical sidewalls. Thus the photoresist must have a high selectivity and be relatively free from stress when applied in thick layers. The typical choice, poly(methyl methacrylate) (PMMA) is applied to the substrate by a glue-down process in which a precast, high-molecular-weight sheet of PMMA is attached to the plating base on the substrate. The applied photoresist is then milled down to the precise height by a fly cutter prior to pattern transfer by X-ray exposure. Because the layer must be relatively free from stress, this glue-down process is preferred over alternative methods such as casting. Further, the cutting of the PMMA sheet by the fly cutter requires specific operating conditions and tools to avoid introducing any stress and crazing of the photoresist.
Exposure
A key enabling technology of LIGA is the synchrotron, capable of emitting high-power, highly collimated X-rays. This high collimation permits relatively large distances between the mask and the substrate without the penumbral blurring that occurs from other X-ray sources. In the electron storage ring or synchrotron, a magnetic field constrains electrons to follow a circular path and the radial acceleration of the electrons causes electromagnetic radiation to be emitted forward. The radiation is thus strongly collimated in the forward direction and can be assumed to be parallel for lithographic purposes. Because of the much higher flux of usable collimated X-rays, shorter exposure times become possible. Photon energies for a LIGA exposure are approximately distributed between 2.5 and .
Unlike optical lithography, there are multiple exposure limits, identified as the top dose, bottom dose, and critical dose, whose values must be determined experimentally for a proper exposure. The exposure must be sufficient to meet the requirements of the bottom dose, the exposure under which a photoresist residue will remain, and the top dose, the exposure over which the photoresist will foam. The critical dose is the exposure at which unexposed resist begins to be attacked. Due to the insensitivity of PMMA, a typical exposure time for a thick PMMA is six hours. During exposure, secondary radiation effects such as Fresnel diffraction, mask and substrate fluorescence, and the generation of Auger electrons and photoelectrons can lead to overexposure.
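As an illustration of how these dose limits constrain the exposure, consider the following sketch; the dose rates and limits are assumed placeholder numbers, not values from the article or from any beamline.

```python
# Exposure must deliver at least the bottom dose at the base of the
# resist without exceeding the top dose at its surface.
dose_rate_top = 4.0      # dose deposited per hour at the resist surface (assumed units)
dose_rate_bottom = 0.7   # dose deposited per hour at the resist bottom (assumed units)
bottom_dose_min = 3.0    # minimum dose for complete development (assumed)
top_dose_max = 20.0      # dose above which the resist foams (assumed)

t_min = bottom_dose_min / dose_rate_bottom  # shortest acceptable exposure
t_max = top_dose_max / dose_rate_top        # longest acceptable exposure
print(t_min, t_max)  # any exposure time between t_min and t_max meets both limits
```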
During exposure the X-ray mask and the mask holder are heated directly by X-ray absorption and cooled by forced convection from nitrogen jets. Temperature rise in PMMA resist is mainly from heat conducted from the substrate backward into the resist and from the mask plate through the inner cavity air forward to the resist, with X-ray absorption being tertiary. Thermal effects include chemistry variations due to resist heating and geometry-dependent mask deformation.
Development
For high-aspect-ratio structures the resist-developer system is required to have a ratio of dissolution rates in the exposed and unexposed areas of 1000:1. The standard, empirically optimized developer is a mixture of tetrahydro-1,4-oxazine (), 2-aminoethanol-1 (), 2-(2-butoxyethoxy)ethanol (), and water (). This developer provides the required ratio of dissolution rates and reduces stress-related cracking from swelling in comparison to conventional PMMA developers. After development, the substrate is rinsed with deionized water and dried either in a vacuum or by spinning. At this stage, the PMMA structures can be released as the final product (e.g., optical components) or can be used as molds for subsequent metal deposition.
Electroplating
In the electroplating step, nickel, copper, or gold is plated upward from the metalized substrate into the voids left by the removed photoresist. Taking place in an electrolytic cell, the current density, temperature, and solution are carefully controlled to ensure proper plating. In the case of nickel deposition from NiCl2 in a KCl solution, Ni is deposited on the cathode (metalized substrate) and Cl2 evolves at the anode. Difficulties associated with plating into PMMA molds include voids, where hydrogen bubbles nucleate on contaminates; chemical incompatibility, where the plating solution attacks the photoresist; and mechanical incompatibility, where film stress causes the plated layer to lose adhesion. These difficulties can be overcome through the empirical optimization of the plating chemistry and environment for a given layout.
Stripping
After exposure, development, and electroplating, the resist is stripped. One method for removing the remaining PMMA is to flood expose the substrate and use the developing solution to cleanly remove the resist. Alternatively, chemical solvents can be used. Stripping of a thick resist chemically is a lengthy process, taking two to three hours in acetone at room temperature. In multilayer structures, it is common practice to protect metal layers against corrosion by backfilling the structure with a polymer-based encapsulant. At this stage, metal structures can be left on the substrate (e.g., microwave circuitry) or released as the final product (e.g., gears).
Replication
After stripping, the released metallic components can be used for mass replication through standard means of replication such as stamping or injection molding.
Commercialization
In the 1990s, LIGA was a cutting-edge MEMS fabrication technology, resulting in the design of components showcasing the technique's unique versatility. Several companies that began using the LIGA process later changed their business model (e.g., Steag microParts becoming Boehringer Ingelheim microParts, Mezzo Technologies). Currently, only two companies, HTmicro and microworks, continue their work in LIGA, benefiting from limitations of other competing fabrication technologies. UV LIGA, due to its lower production cost, is employed more broadly by several companies, such as Veco, Tecan, Temicon, and Mimotec in Switzerland, which supply the Swiss watch market with metal parts made of nickel and nickel-phosphorus.
Gallery
Below is a gallery of LIGA-fabricated structures arranged by date.
Notes
See also
Photolithography
X-ray lithography
Electroplating
Molding
Synchrotron
PMMA
SU-8 photoresist
Enriched Uranium — Aerodynamic Processes
References
External links
LiMiNT - LIGA process from Singapore Synchrotron Light Source
LIGA process Karlsruhe Institute of Technology, Institute of Microstructure Technology
Illustrated LIGA-process by Arndt Last
Materials science
Microtechnology
Lithography (microfabrication)
Singapore Science Park is a research, development and technology hub in Queenstown, Singapore. Managed by Ascendas, a subsidiary of CapitaLand, it was set up under a government initiative in 1980 to provide the necessary infrastructure for local research and development companies to flourish in the country.
One of the most prominent local tenants headquartered within the park is Singaporean multinational technology company Shopee.
Milestones
Singapore Science Park I
In 1980, the Government gave its seal of approval to proceed with the construction of the Singapore Science Park on a 30-hectare plot of land. In 1982, Singapore Science Park I welcomed its first tenant, Det Norske Veritas (DNV).
On 3 September 2019, Shopee officially opened its new six-storey regional headquarters at Singapore Science Park I. The new building has of space, which can accommodate 3,000 employees and is six times larger than the previous headquarters at Ascent Building, also located within the park.
Singapore Science Park II
In 1993, construction of Singapore Science Park II began on a 20-hectare plot of land; the first building constructed was the Institute of Microelectronics (IME). The Alpha, a multi-tenant building, was the next building constructed in Singapore Science Park II.
In 2000, Arcasia announced that it would expand and develop a 15-hectare plot of land next to Singapore Science Park II into a Singapore Science Park III at a cost of about $600 million. However, it was later revealed that this was an expansion of Singapore Science Park II. Galen, the first building on Singapore Science Park II's expanded plot, was completed on 29 June 2003.
See also
Biopolis
Notes
References
External links
Singapore Science Park
1982 establishments in Singapore
Places in Singapore
Science and technology in Singapore
Scientific organisations based in Singapore
Queenstown, Singapore
Business parks of Singapore
In logic, a four-valued logic is any logic with four truth values. Several types of four-valued logic have been advanced.
Belnap
Nuel Belnap considered the challenge of question answering by computer in 1975. Noting human fallibility, he was concerned with the case where two contradictory facts were loaded into memory, and then a query was made. "We all know about the fecundity of contradictions in two-valued logic: contradictions are never isolated, infecting as they do the whole system." Belnap proposed a four-valued logic as a means of containing contradiction.
He called the table of values A4: Its possible values are true, false, both (true and false), and neither (true nor false). Belnap's logic is designed to cope with multiple information sources such that if only true is found then true is assigned, if only false is found then false is assigned, if some sources say true and others say false then both is assigned, and if no information is given by any information source then neither is assigned. These four values correspond to the elements of the power set based on {T, F}.
T is the supremum and F the infimum in the logical lattice where None and Both are in the wings. Belnap has this interpretation: "The worst thing is to be told something is false simpliciter. You are better off (it is one of your hopes) in either being told nothing about it, or being told both that it is true and also that it is false; while of course best of all is to be told that it is true." Belnap notes that "paradoxes of implication" (A&~A)→B and A→(B∨~B) are avoided in his 4-valued system.
Logical connectives
Belnap addressed the challenge of extending logical connectives to A4. Since it is the power set on {T, F}, the elements of A4 are ordered by inclusion, making it a lattice with Both at the supremum and None at the infimum, and T and F on the wings. Referring to Dana Scott, he assumes the connectives are Scott-continuous or monotonic functions. First he expands negation by deducing that ¬Both = Both and ¬None = None. To expand And and Or, monotonicity goes only so far. Belnap uses equivalence (a & b = a iff a ∨ b = b) to fill out the tables for these connectives. He finds None & Both = F while None ∨ Both = T.
The result is a second lattice L4 called the "logical lattice", where A4 is the "approximation lattice" determining Scott continuity.
Implementation using two bits
Let one bit be assigned for each truth value: 01=T and 10=F with 00=N and 11=B.
Then the subset relation in the power set on {T, F} corresponds to order ab<cd iff a<c and b<d in two-bit representation. Belnap calls the lattice associated with this order the "approximation lattice".
The logic associated with two-bit variables can be incorporated into computer hardware.
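As a minimal sketch of such an implementation (in C, with hypothetical names; not from Belnap's paper), the two-bit encoding turns the connectives into ordinary bitwise operations: negation swaps the two bits, And combines the "told true" bits with AND and the "told false" bits with OR, Or does the opposite, and merging reports from independent sources is simply bitwise OR (the join in the approximation lattice):
#include <stdio.h>

/* Two-bit encoding from the article: bit 0 = "told true", bit 1 = "told false",
   so None = 00, True = 01, False = 10, Both = 11. Names are hypothetical. */
enum { NONE4 = 0x0, TRUE4 = 0x1, FALSE4 = 0x2, BOTH4 = 0x3 };

static int not4(int a) { return ((a & 1) << 1) | ((a >> 1) & 1); }      /* swap the two bits */
static int and4(int a, int b) { return ((a & b) & 1) | ((a | b) & 2); } /* meet in the logical lattice L4 */
static int or4(int a, int b) { return ((a | b) & 1) | ((a & b) & 2); }  /* join in L4 */
static int merge4(int a, int b) { return a | b; }                      /* combine two information sources: join in A4 */

static const char *name4(int v)
{
    static const char *names[] = { "None", "True", "False", "Both" };
    return names[v & 3];
}

int main(void)
{
    /* Reproduces the values Belnap notes: None & Both = False, None v Both = True,
       and negation leaves both Both and None fixed. */
    printf("None & Both = %s\n", name4(and4(NONE4, BOTH4)));
    printf("None v Both = %s\n", name4(or4(NONE4, BOTH4)));
    printf("not Both = %s, not None = %s\n", name4(not4(BOTH4)), name4(not4(NONE4)));
    printf("merge(True, False) = %s\n", name4(merge4(TRUE4, FALSE4)));
    return 0;
}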
Matrix machine
There are sixteen logical matrices that are 2x2, and four logical vectors that act as inputs and outputs of the matrix transformation:
X = {A, B, C, D} = {(0, 1), (1, 0), (0, 0), (1, 1)}.
When C is input, the output is always C. Four of the sixteen have zero in one corner only, so the output of vector-matrix multiplication with Boolean arithmetic is always D, except for C input.
Nine further logical matrices need description to fill out the finite state machine represented by logical matrices acting on X. Excluding C, the inputs A, B, and D are considered in order and the output in X expressed as a triple; for example, the code ABD belongs to the matrix commonly known as the identity matrix.
The asymmetric matrices differ in their action on row versus column vectors; the row convention is used here. One pair of such matrices has the codes BBB and AAA, and another pair has the codes CDB and DCA.
The remaining operations on X are expressed with matrices with three zeros, so outputs include C for a third of the inputs. The codes are CAA, BCA, ACA, and CBB in these cases.
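As an illustrative sketch (in C, with hypothetical names), the code of a matrix can be computed directly: each input vector is multiplied by the 2x2 matrix using Boolean arithmetic (OR for addition, AND for multiplication) under the row convention, and the outputs for A, B and D are read off as a triple. Run on the identity matrix, it prints ABD, as stated above:
#include <stdio.h>

/* The four logical vectors A, B, C, D from the set X. */
static const int vecs[4][2] = { {0, 1}, {1, 0}, {0, 0}, {1, 1} };
static const char labels[] = "ABCD";

/* Row convention: out = x * m, with OR as addition and AND as multiplication. */
static void apply(const int m[2][2], const int x[2], int out[2])
{
    out[0] = (x[0] & m[0][0]) | (x[1] & m[1][0]);
    out[1] = (x[0] & m[0][1]) | (x[1] & m[1][1]);
}

static char classify(const int v[2])
{
    for (int i = 0; i < 4; i++)
        if (vecs[i][0] == v[0] && vecs[i][1] == v[1])
            return labels[i];
    return '?';
}

int main(void)
{
    const int identity[2][2] = { {1, 0}, {0, 1} };
    const int inputs[3] = { 0, 1, 3 };  /* A, B, D; input C always maps to C */
    int out[2];
    for (int i = 0; i < 3; i++) {
        apply(identity, vecs[inputs[i]], out);
        putchar(classify(out));
    }
    putchar('\n');  /* prints "ABD", the code of the identity matrix */
    return 0;
}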
Applications
A four-valued logic was established by the IEEE with the standard IEEE 1364: it models signal values in digital circuits. The four values are 1, 0, Z and X. 1 and 0 stand for boolean true and false, Z stands for high impedance or open circuit, and X stands for don't care (e.g., the value has no effect). This logic is itself a subset of the 9-valued logic defined by the IEEE 1164 standard and implemented in VHDL (the Very High Speed Integrated Circuit Hardware Description Language) as std_logic.
One should not confuse four-valued mathematical logic (using operators, truth tables, syllogisms, propositional calculus, theorems and so on) with communication protocols that are built on binary logic but report responses with four possible states implemented with boolean-like values. For instance, the SAE J1939 standard, used for CAN data transmission in heavy road vehicles, has four logical (boolean) values: False, True, Error Condition, and Not installed (represented by the values 0–3). Error Condition means there is a technical problem obstructing data acquisition; the logic here is, for example, True and Error Condition = Error Condition. Not installed is used for a feature that does not exist in this vehicle and should be disregarded in logical calculation. On CAN, fixed data messages containing many signal values each are usually sent, so a signal representing a not-installed feature will be sent anyway.
Split bit proposed gate
Logic gates built from carbon nanotubes use carbon nanotube field-effect transistors (CNFETs). An anticipated demand for data storage in the Internet of Things (IoT) provides a motivation. A proposal has been made for a 32 nm process application using a split bit-gate: "By using CNFET technology in 32 nm node by the proposed SQI gate, two split bit-lines QSRAM architectures have been suggested to address the issue of increasing demand for storage capacity in IoT/IoVT applications. Peripheral circuits such as a novel quaternary to binary decoder for QSRAM have been offered."
References
Further reading
Hardware description languages
Many-valued logic |
The Darmstadt University of Applied Sciences, also known as h_da, is a University of Applied Sciences located in Darmstadt, Germany.
h_da is part of the IT cluster Rhine-Main-Neckar, the "Silicon Valley of Germany", and of ATHENE, the largest research institute for IT security in Europe.
History
The roots of the Darmstadt University of Applied Sciences go back to 1876 and are shared with the Technische Universität Darmstadt (known for the first chair of electrical engineering and the inventions associated with it); the two operated as a single, integrated institution from the early 1930s. Over the years a need was felt for an independent educational institution focused on industry-oriented research, and the University of Applied Sciences was spun off as a separate institution for industry-oriented research in 1971. It is the largest University of Applied Sciences in Hesse (German: Hessen), with about 11,000 students.
When Hochschule Darmstadt was established in 1971, other regions of Hesse also saw a need for such industry-oriented educational institutions, and in later years a large number of Hochschulen were established all over Germany. As a result, graduates of these Hochschulen make up a large share of the engineering workforce in German industry today.
The Darmstadt University of Applied Sciences is one of the eight members of the European University of Technology alliance, EUt+, together with the Riga Technical University (Latvia), the Cyprus University of Technology (Cyprus), the Technical University of Sofia (Bulgaria), the Technological University Dublin (Ireland), the Polytechnic University of Cartagena (Spain), the University of Technology of Troyes (France) and the Technical University of Cluj-Napoca (Romania).
The European University of Technology alliance, EUt+, is the result of the cooperation of eight European partners who share in common the "Think Human First" vision towards a human-centred approach to technology and the ambition to establish a new type of institution on a confederal basis. Through EUt+, the partners are committed to creating a sustainable future for students and learners in European countries, for the staff of each of the institutions and for the territories and regions where each campus is anchored.
Campus
The main campus of Hochschule Darmstadt lies at the Haardtring, but its buildings are distributed across the city of Darmstadt at different locations, with clusters of old and modern university buildings visible throughout the city.
The media campus is in Dieburg.
Departments
Architecture
Chemical Technology
Civil Engineering
Computer Science
Design
Media
Economics
Electrical Engineering and Information Technology
Mathematics and Science
Mechanical Engineering
Plastics engineering
Social and Cultural Studies
Social Education
Mechatronics
Reputation and Rankings
Hochschule Darmstadt enjoys a strong reputation among businesses in German industry. It has consistently ranked very high in the DAAD ranking, closely rivalled by Hochschule Karlsruhe. Known particularly for its specializations in microelectronics and robotics, Hochschule Darmstadt has contributed to some major industrial developments in Germany, including REIS and Mitsubishi robot modules.
Research
The university maintains close ties with:
Max Planck Society
EUA European University Association
Institutes
Institute of Communication and Media (ikum)
Institute of Local Economics and Environmental Planning
See also
Education in Germany
List of universities in Germany
References
DAAD information on Hochschule and Fachhochschule
ranking
External links
University of Applied Sciences
Universities of Applied Sciences in Germany
Universities and colleges in Hesse |
The Open Graphics Project (OGP) was founded with the goal to design an open-source hardware / open architecture and standard for graphics cards, primarily targeting free software / open-source operating systems. The project created a reprogrammable development and prototyping board and had aimed to eventually produce a full-featured and competitive end-user graphics card.
OGD1
The project's first product was a PCI graphics card dubbed OGD1, which used a field-programmable gate array (FPGA) chip. Although the card could not compete with graphics cards on the market at the time in terms of performance or functionality, it was intended to be useful as a tool for prototyping the project's first application-specific integrated circuit (ASIC) board, as well as for other professionals needing programmable graphics cards or FPGA-based prototyping boards. It was also hoped that this prototype would attract enough interest to gain some profit and attract investors for the next card, since it was expected to cost around US$2,000,000 to start the production of a specialized ASIC design. PCI Express and/or Mini-PCI variations were planned to follow. The OGD1 began shipping in September 2010, some six years after the project began and 3 years after the appearance of the first prototypes.
Full specifications, open-source device drivers, and all RTL will be released. Source code for the device drivers and BIOS will be released under the MIT and BSD licenses, while the RTL (in Verilog) used for the FPGA and for the ASIC is planned to be released under the GNU General Public License (GPL).
It has 256 MiB of DDR RAM, is passively cooled, and follows the DDC, EDID, DPMS and VBE VESA standards. TV-out is also planned.
Versioning schema
The versioning schema for OGD1 is as follows:
{Root Number} – {Video Memory}{Video Output Interfaces}{Special Options e.g.: A1 OGA firmware installed}
OGD1 components
Main components of the OGD1 graphics card:
A) DVI transmitter pair A
B) DVI transmitter pair B
C) 330MHz triple 10-bit DAC (behind)
D) TV chip
E) 2x4 256 megabit DDR SDRAM (front, behind)
F) Xilinx 3S4000 FPGA (main chip)
G) Lattice XP10 FPGA (host interface)
H) SPI PROM 1 Mibit
J) SPI PROM 16 Mibit
K) 3x 500 MHz DACs (optional)
L) 64-bit PCI-X edge connector
M) DVI-I connector A and connector B
N) S-Video connector
O) 100-pin expansion bus connector
Divisions/terms related to OGP
Open Graphics Project (OGP) – The group of people developing OGA, its written documentation, and its products.
Open Graphics Architecture (OGA) – The trade name for open graphics architectures specified by the Open Graphics Project.
Open Graphics Development (OGD) – The initial FPGA-based experimentation board used as a test platform for TRV ASICs.
Traversal Technology (TRV) – The commercial name for the first ASIC products, based on the Open Graphics Architecture.
Open Graphics Card (OGC) – Graphics cards based on TRV chips.
Open Hardware Foundation (OHF) – A non-profit corporation whose charter is to promote the design and production of open-source and open-documentation hardware.
See also
Graphics hardware and FOSS
Open-source hardware
Open system (computing)
RISC-V
References
External links
The official Open Graphics wiki
Project VGA – another free graphics core project, aiming at cheaper hardware
Manticore – an older FPGA-based free graphics core implementation. As of 2009-05-04 no source is available.
The master's thesis "An FPGA-based 3D Graphics System" illustrates the design decisions involved in developing an FPGA-based 3D graphics core.
The master's thesis "A performance-driven SoC architecture for video synthesis" gives a more complete and hands-on treatment of some aspects.
Graphics hardware
Information technology projects
Open hardware electronic devices
Open-source hardware
Graphics cards |
Electronic system level (ESL) design and verification is an electronic design methodology, focused on higher abstraction level concerns. The term Electronic System Level or ESL Design was first defined by Gartner Dataquest, an EDA-industry-analysis firm, on February 1, 2001. It is defined in ESL Design and Verification as: "the utilization of appropriate abstractions in order to increase comprehension about a system, and to enhance the probability of a successful implementation of functionality in a cost-effective manner."
The basic premise is to model the behavior of the entire system using a high-level language such as C, C++, or using graphical "model-based" design tools. Newer languages are emerging that enable the creation of a model at a higher level of abstraction including general purpose system design languages like SysML as well as those that are specific to embedded system design like SMDL and SSDL. Rapid and correct-by-construction implementation of the system can be automated using EDA tools such as high-level synthesis and embedded software tools, although much of it is performed manually today. ESL can also be accomplished through the use of SystemC as an abstract modeling language.
ESL is an established approach at many of the world’s leading System-on-a-chip (SoC) design companies, and is being used increasingly in system design. From its genesis as an algorithm modeling methodology with 'no links to implementation', ESL is evolving into a set of complementary methodologies that enable embedded system design, verification, and debugging through to the hardware and software implementation of custom SoC, system-on-FPGA, system-on board, and entire multi-board systems.
Design and verification are two distinct disciplines within this methodology. Some practitioners keep the two separate, while others advocate closer integration between design and verification.
Design
Whether ESL or other systems, design refers to "the concurrent design of the hardware and software parts of an electronic product."
Tools
There are various types of EDA tool used for ESL design. The key component is the virtual platform, which is essentially a simulator. The virtual platform most commonly supports transaction-level modeling (TLM), where operations of one component on another are modelled with a simple method call between the objects modelling each component. This abstraction gives a considerable speed-up over cycle-accurate modelling, since thousands of net-level events in the real system can be represented by simply passing a pointer, e.g. to model that an Ethernet packet has been received. SystemC is often used as the modelling language.
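As a rough sketch of the idea (plain C with hypothetical names, rather than an actual SystemC/TLM interface), the arrival of an Ethernet packet becomes a single method call that hands over a pointer, instead of thousands of simulated wire-level events:
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

struct eth_packet {
    const uint8_t *data;
    size_t len;
};

/* Transaction-level model of the receiving component: one call per packet. */
static void mac_rx_transaction(const struct eth_packet *pkt)
{
    printf("MAC model received a %zu-byte packet\n", pkt->len);
}

int main(void)
{
    static const uint8_t frame[64];                  /* stand-in for real frame contents */
    struct eth_packet pkt = { frame, sizeof frame };
    mac_rx_transaction(&pkt);                        /* stands in for the entire wire-level exchange */
    return 0;
}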
Other tools support import and export or intercommunication with components modelled at other levels of abstraction. For instance, an RTL component can be converted into a SystemC model using VtoC or Verilator, and high-level synthesis can be used to convert C models of a component into an RTL implementation.
Verification
In ESL design and verification, verification testing is used to prove the integrity of the design of the system or device. Numerous verification techniques may be applied; these test methods are usually modified or customized to better accommodate the system or device under test. Common ESL verification methods include, but are not limited to:
Modular architecture
Constrained random stimulus generation
Error injection
Complete simulation environments
Verification is often provided by the system/device designer, but in many instances, additional independent verification is required.
Challenges and criticism
Some criticisms of ESL design and verification have been raised. These include too much focus on C-based languages and challenges in representing parallel processes. It can also be argued that ESL design and verification is a subset of verification and validation.
See also
High-level synthesis
High-level verification
Electronic design automation
Platform-based design
Integrated circuit design
Register-transfer level
Property Specification Language
Virtual prototyping
SystemC
SystemC AMS
Systems engineering
SystemVerilog
Transaction-level modeling (TLM)
References
Further reading
Electronic design automation |
In engineering, double-subscript notation is notation used to indicate some variable between two points (each point being represented by one of the subscripts). In electronics, the notation is usually used to indicate the direction of current or voltage, while in mechanical engineering it is sometimes used to describe the force or stress between two points, and sometimes even a component that spans between two points (like a beam on a bridge or truss). Although there are many cases where multiple subscripts are used, they are not necessarily called double subscript notation specifically.
Electronic usage
IEEE standard 255-1963, "Letter Symbols for Semiconductor Devices", defined eleven original quantity symbols expressed as abbreviations.
This is the basis for a convention to standardize the directions of double-subscript labels. The following uses transistors as an example, but shows how the direction is read generally. The convention works like this:
VCB represents the voltage from C to B. In this case, C would denote the collector end of a transistor, and B would denote the base end of the same transistor. This is the same as saying "the voltage drop from C to B", though this applies the standard definitions of the letters C and B. This convention is consistent with IEC 60050-121.
ICE would in turn represent the current from C to E. In this case, C would again denote the collector end of a transistor, and E would denote the emitter end of the transistor. This is the same as saying "the current in the direction going from C to E".
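For example, because a double-subscripted voltage is the potential at the first point minus the potential at the second, the shared node cancels when such voltages are added, giving VCE = VCB + VBE; reversing the subscripts reverses the sign, so VEC = -VCE.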
Power supply pins on integrated circuits use the same letters to denote what kind of voltage the pin receives. For example, a power input labeled VCC would be a positive input that would presumably connect to the collector pin of a BJT transistor in the circuit, and likewise for other subscripted letters. The format looks the same as the node-to-node notation described above, but VCC does not mean the voltage from one collector pin to another; the repeated letter marks a supply voltage and causes no confusion, since such a node-to-node expression would not otherwise exist.
The table above shows only the originally denoted letters; others have found their way into use over time, such as S and D for the Source and Drain of a FET, respectively.
References
Notation
Electronic engineering |
Spectrum Software was a software company based in California, whose main focus was electrical simulation and analysis tools, most notably the circuit simulator Micro-Cap. It was founded in February 1980 by Andy Thompson. Initially, the company concentrated on providing software for Apple II systems.
One of the earliest products was Logic Designer and Simulator. Released in June 1980, this product was the first integrated circuit editor and logic simulation system available for personal computers. In many ways it was the forerunner of the Micro-Cap products. Its primary goal was to provide a “circuit creation and simulation” environment for digital simulation.
In August 1981, the analog equivalent of the first program, Circuit Designer and Simulator, was released. Its integrated text editor created circuit descriptions for a simple, linear, analog simulator. September 1982 saw the release of the first Micro-Cap package as a successor to the Circuit Designer and Simulator. The name Micro-Cap was derived from the term Microcomputer Circuit Analysis Program.
As of July 4, 2019, the company has closed and the software is now free. In early 2023, their website went offline, though it was previously backed up at archive.org.
References
Micro-CAP: An Analog Circuit Design System for Personal Computers
Spice Programs: Computerized Circuit Analysis For Analog EEs
Analysis of digital filters via SPICE-family programs
Modeling IIj Noise in HEMTS with SPICE-Based Micro-Cap
AC Analysis of Idealized Switched-Capacitor Circuits in Spice-Compatible Programs
White Paper: A New Zobel Network for Audio
Get more power with a boosted triode
External links
Software companies based in California
Companies based in Sunnyvale, California
Defunct software companies of the United States |
The Cockrell School of Engineering is one of the eighteen colleges within the University of Texas at Austin. It has more than 8,000 students enrolled in eleven undergraduate and thirteen graduate programs. The college is ranked 10th in the world according to the Academic Ranking of World Universities, 9th nationally for undergraduate programs and 6th nationally for graduate programs by U.S. News & World Report. Nine of the ten undergraduate programs and seven of the eleven graduate programs are ranked in the top ten nationally. Annual research expenditures are over $180 million and the school has the fourth-largest number of faculty in the National Academy of Engineering.
Previously known as the College of Engineering, the school was renamed by the University of Texas at Austin on July 11, 2007, after 1936 graduate Ernest Cockrell Jr., whose family has over the past 30 years helped to build a $140 million endowment for the College.
Undergraduate departments
Rankings, in parentheses, taken from the 2023 edition of U.S. News & World Report.
Overall: 9th
Petroleum Engineering (1st)
Environmental Engineering (7th)
Civil Engineering (5th)
Computer Engineering (8th)
Aerospace/Aeronautical Engineering (8th)
Chemical Engineering (8th)
Electrical/Electronic Engineering (11th)
Mechanical Engineering (10th)
Biomedical Engineering (16th)
Graduate departments
Rankings, in parentheses, taken from the 2023 edition of U.S. News & World Report.
Overall: 6th
Petroleum Engineering (1st)
Environmental Engineering (3rd)
Chemical Engineering (5th)
Civil Engineering (6th)
Aerospace/Aeronautical Engineering (8th)
Computer Engineering (9th)
Electrical/Electronic Engineering (9th)
Mechanical Engineering (10th)
Materials Engineering (14th)
Nuclear Engineering (17th)
Industrial/Manufacturing/Systems Engineering (19th)
Biomedical Engineering (22nd)
Traditions
The Ramshorn
The Ramshorn is one of the most prominent symbols associated with the College of Engineering. Its origins can be traced back over a century, to when T.U. Taylor, the first engineering faculty member and first dean of the College, began drawing the elaborate checkmark on students' work as a mark reserved for perfect papers. In 1905, Taylor overheard a student remark that he had received a "ramshorn", and the symbol took on its current interpretation and significance.
Alexander Frederick Claire
Alec's beginnings as the patron saint of the College came as the byproduct of the efforts of a group of sophomore engineers back in 1908.
Joe H. Gill and his engineering friends thoughtfully considered how to make a holiday of April Fool's Day. After an unsuccessful attempt involving tying cans around dogs' tails and releasing them to disrupt class, the group of students saw a wooden statue about five feet high while getting refreshments, which they requested to borrow. The next day, Gill presented the statue as their patron saint and traced his ancestry back to ancient times between classes. The presentation successfully broke up classes, and led to his christening as Alexander Frederick Claire, patron saint of UT engineers, exactly one year later. Alec was at the center of a friendly rivalry between law and engineering students for many years, and was subject to numerous escapades such as kidnappings and amputations. Today, what is left of the original wooden statue is safely preserved in the engineering library.
Every year, engineering groups on campus build new Alecs which are then voted on by the students. The winner is announced on April 1 during Alec's birthday party.
Notable faculty
John B. Goodenough, recipient of 2019 Nobel Prize in Chemistry for research leading to creation of lithium-ion battery
Hans Mark, former Secretary of the Air Force and Deputy Administrator of NASA
Yale Patt, inventor of the WOS module, the first complex logic gate implemented on a single piece of silicon
Alan Bovik, Primetime Emmy Award-winning engineer whose video quality tools pervade television, social media and home cinema
Ilya Prigogine, recipient of 1977 Nobel Prize in Chemistry for his contributions to non-equilibrium thermodynamics
Robert Metcalfe, co-inventor of Ethernet
Willis Adcock, worked on the first atomic bomb and assisted with the invention of the silicon transistor, as well as the integrated circuit
Edith Clarke, first woman faculty member of electrical engineering in the US and inventor of Clarke Calculator and method of symmetrical components
Research centers
The Cockrell School of Engineering has formal organized research units that coordinate and promote faculty and student research. These units provide and maintain specialized research facilities for faculty within a designated field.
Advanced Manufacturing Center
Center for Aeromechanics Research
Center for Energy & Environmental Resources
Energy Institute
Advanced Research in Software Engineering
Center for Mechanics of Solids, Structures & Materials
Center for Petroleum & Geosystems Engineering
Center for Research in Water Resources
Center for Space Research
Center for Transportation Research UT Austin
Computer Engineering Research Center
Construction Industry Institute
Phil M. Ferguson Structural Engineering Laboratory
Microelectronics Research Center
Offshore Technology Research Center
Texas Materials Institute
Wireless Networking & Communications Group
Applied Research Laboratories
Institute for Computational Engineering and Sciences
Center for Subsurface Energy and the Environment
Center for Nanomanufacturing Systems for Mobile Computing and Mobile Energy Technologies (NASCENT)
Center for Electromechanics
Center for Additive Manufacturing and Design Innovation
The Center for Predictive Engineering and Computational Sciences (PECOS)
Center for Perceptual Systems
Institute for Cellular and Molecular Biology
Spark Research Lab
Student organizations
The Cockrell School of Engineering is home to over 80 student organizations under the supervision of the Engineering Student Life Office. These organizations offer a wide variety of student groups that provide academic, professional development, service, and social opportunities. The majority are student chapters of national and international professional engineering organizations. Among the organizations are:
The Student Engineering Council (SEC) is the umbrella organization of all the engineering student organizations, with over thirty engineering organizations affiliated. The SEC is responsible for acting as the official voice of all engineering students in the school and putting on events that benefit the engineering students including the Fall Engineering EXPO, which is the 2nd largest student-run career fair in the United States.
Omega Chi Epsilon (OXE) is the Chemical Engineering honor society. Candidates are invited each semester to undergo a pledge process which involves service events, social events, and faculty firesides. OXE's meetings feature high-profile industry partners and are open to all engineering students.
The American Institute of Chemical Engineers (AICHE) is the primary professional student organization within the Chemical Engineering Department at the University.
The American Society of Civil Engineers (ASCE) is the primary professional student organization within the Civil Engineering Department at the University.
The Institute of Transportation Engineers (ITE), the Intelligent Transportation Society of America (ITS America), and the Women's Transportation Seminar (WTS) are the primary professional student organizations for transportation students at the University.
The American Society of Mechanical Engineers (ASME) is the primary professional student organization within the Mechanical Engineering Department at the University.
The Institute of Electrical and Electronics Engineers (IEEE) is the primary professional student organization within the Electrical and Computer Engineering Department at the University.
Eta Kappa Nu (HKN) is the honor society of the IEEE and serves electrical engineering, computer engineering, computer science, and other IEEE fields of interest. The University's Psi Chapter of HKN was chartered in 1928 as the 22nd chapter within HKN.
The Society of Petroleum Engineers (SPE) is the primary professional student organization within the Hildebrand Department of Petroleum and Geosystems Engineering at the University.
The Society of Hispanic Professional Engineers (SHPE), the Society of Asian Scientists and Engineers (SASE), and the National Society of Black Engineers (NSBE) are three national professional student organizations who represent and develop minority student engineers at the University.
The Society of Women Engineers (SWE) is a professional student organization who represents women engineers at the University.
Engineers for a Sustainable World (ESW) is a professional student organization whose aim is to improve the sustainability at the University.
The Business Engineering Association (BEA) is Cockrell School of Engineering's newest professional student organization. It aims to connect business and engineering students interested in working in industries where business and engineering people work together.
Longhorn Racing (LHR) builds two Formula SAE cars each year, combustion and electric, and the Solar Vehicles Team builds a new solar-powered car every two years.
References
External links
The University of Texas at Austin Cockrell School of Engineering
Engineering schools and colleges in the United States
Engineering universities and colleges in Texas
University of Texas at Austin schools, colleges, and departments
Educational institutions established in 1894
1894 establishments in Texas |
An astronomical interferometer or telescope array is a set of separate telescopes, mirror segments, or radio telescope antennas that work together as a single telescope to provide higher resolution images of astronomical objects such as stars, nebulas and galaxies by means of interferometry. The advantage of this technique is that it can theoretically produce images with the angular resolution of a huge telescope with an aperture equal to the separation, called baseline, between the component telescopes. The main drawback is that it does not collect as much light as the complete instrument's mirror. Thus it is mainly useful for fine resolution of more luminous astronomical objects, such as close binary stars. Another drawback is that the maximum angular size of a detectable emission source is limited by the minimum gap between detectors in the collector array.
Interferometry is most widely used in radio astronomy, in which signals from separate radio telescopes are combined. A mathematical signal processing technique called aperture synthesis is used to combine the separate signals to create high-resolution images. In Very Long Baseline Interferometry (VLBI) radio telescopes separated by thousands of kilometers are combined to form a radio interferometer with a resolution which would be given by a hypothetical single dish with an aperture thousands of kilometers in diameter. At the shorter wavelengths used in infrared astronomy and optical astronomy it is more difficult to combine the light from separate telescopes, because the light must be kept coherent within a fraction of a wavelength over long optical paths, requiring very precise optics. Practical infrared and optical astronomical interferometers have only recently been developed, and are at the cutting edge of astronomical research. At optical wavelengths, aperture synthesis allows the atmospheric seeing resolution limit to be overcome, allowing the angular resolution to reach the diffraction limit of the optics.
Astronomical interferometers can produce higher resolution astronomical images than any other type of telescope. At radio wavelengths, image resolutions of a few micro-arcseconds have been obtained, and image resolutions of a fractional milliarcsecond have been achieved at visible and infrared wavelengths.
One simple layout of an astronomical interferometer is a parabolic arrangement of mirror pieces, giving a partially complete reflecting telescope but with a "sparse" or "dilute" aperture. In fact, the parabolic arrangement of the mirrors is not important, as long as the optical path lengths from the astronomical object to the beam combiner (focus) are the same as would be given by the complete mirror case. Instead, most existing arrays use a planar geometry, and Labeyrie's hypertelescope will use a spherical geometry.
History
One of the first uses of optical interferometry was applied by the Michelson stellar interferometer on the Mount Wilson Observatory's reflector telescope to measure the diameters of stars. The red giant star Betelgeuse was the first to have its diameter determined in this way on December 13, 1920. In the 1940s radio interferometry was used to perform the first high resolution radio astronomy observations. For the next three decades astronomical interferometry research was dominated by research at radio wavelengths, leading to the development of large instruments such as the Very Large Array and the Atacama Large Millimeter Array.
Optical/infrared interferometry was extended to measurements using separated telescopes by Johnson, Betz and Townes (1974) in the infrared and by Labeyrie (1975) in the visible. In the late 1970s improvements in computer processing allowed for the first "fringe-tracking" interferometer, which operates fast enough to follow the blurring effects of astronomical seeing, leading to the Mk I, II and III series of interferometers. Similar techniques have now been applied at other astronomical telescope arrays, including the Keck Interferometer and the Palomar Testbed Interferometer.
In the 1980s the aperture synthesis interferometric imaging technique was extended to visible light and infrared astronomy by the Cavendish Astrophysics Group, providing the first very high resolution images of nearby stars. In 1995 this technique was demonstrated on an array of separate optical telescopes for the first time, allowing a further improvement in resolution, and allowing even higher resolution imaging of stellar surfaces. Software packages such as BSMEM or MIRA are used to convert the measured visibility amplitudes and closure phases into astronomical images. The same techniques have now been applied at a number of other astronomical telescope arrays, including the Navy Precision Optical Interferometer, the Infrared Spatial Interferometer and the IOTA array. A number of other interferometers have made closure phase measurements and are expected to produce their first images soon, including the VLTI, the CHARA array and Le Coroller and Dejonghe's Hypertelescope prototype. If completed, the MRO Interferometer with up to ten movable telescopes will produce among the first higher fidelity images from a long baseline interferometer. The Navy Optical Interferometer took the first step in this direction in 1996, achieving 3-way synthesis of an image of Mizar; then a first-ever six-way synthesis of Eta Virginis in 2002; and most recently "closure phase" as a step to the first synthesized images produced by geostationary satellites.
Modern astronomical interferometry
Astronomical interferometry is principally conducted using Michelson (and sometimes other type) interferometers. The principal operational interferometric observatories which use this type of instrumentation include VLTI, NPOI, and CHARA.
Current projects will use interferometers to search for extrasolar planets, either by astrometric measurements of the reciprocal motion of the star (as used by the Palomar Testbed Interferometer and the VLTI), through the use of nulling (as will be used by the Keck Interferometer and Darwin) or through direct imaging (as proposed for Labeyrie's Hypertelescope).
Engineers at the European Southern Observatory ESO designed the Very Large Telescope VLT so that it can also be used as an interferometer. Along with the four unit telescopes, four mobile 1.8-metre auxiliary telescopes (ATs) were included in the overall VLT concept to form the Very Large Telescope Interferometer (VLTI). The ATs can move between 30 different stations, and at present, the telescopes can form groups of two or three for interferometry.
When using interferometry, a complex system of mirrors brings the light from the different telescopes to the astronomical instruments where it is combined and processed. This is technically demanding as the light paths must be kept equal to within 1/1000 mm (the same order as the wavelength of light) over distances of a few hundred metres. For the Unit Telescopes, this gives an equivalent mirror diameter of up to , and when combining the auxiliary telescopes, equivalent mirror diameters of up to can be achieved. This is up to 25 times better than the resolution of a single VLT unit telescope.
The VLTI gives astronomers the ability to study celestial objects in unprecedented detail. It is possible to see details on the surfaces of stars and even to study the environment close to a black hole. With a spatial resolution of 4 milliarcseconds, the VLTI has allowed astronomers to obtain one of the sharpest images ever of a star. This is equivalent to resolving the head of a screw at a distance of .
Notable 1990s results included the Mark III measurement of diameters of 100 stars and many accurate stellar positions, COAST and NPOI producing many very high resolution images, and Infrared Stellar Interferometer measurements of stars in the mid-infrared for the first time. Additional results include direct measurements of the sizes of and distances to Cepheid variable stars, and young stellar objects.
High on the Chajnantor plateau in the Chilean Andes, the European Southern Observatory (ESO), together with its international partners, is building ALMA, which will gather radiation from some of the coldest objects in the Universe. ALMA will be a single telescope of a new design, composed initially of 66 high-precision antennas and operating at wavelengths of 0.3 to 9.6 mm. Its main 12-meter array will have fifty antennas, 12 metres in diameter, acting together as a single telescope – an interferometer. An additional compact array of four 12-metre and twelve 7-meter antennas will complement this. The antennas can be spread across the desert plateau over distances from 150 metres to 16 kilometres, which will give ALMA a powerful variable "zoom". It will be able to probe the Universe at millimetre and submillimetre wavelengths with unprecedented sensitivity and resolution, with a resolution up to ten times greater than the Hubble Space Telescope, and complementing images made with the VLT interferometer.
Optical interferometers are mostly seen by astronomers as very specialized instruments, capable of a very limited range of observations. It is often said that an interferometer achieves the effect of a telescope the size of the distance between the apertures; this is only true in the limited sense of angular resolution. The amount of light gathered—and hence the dimmest object that can be seen—depends on the real aperture size, so an interferometer would offer little improvement as the image is dim (the thinned-array curse). The combined effects of limited aperture area and atmospheric turbulence generally limits interferometers to observations of comparatively bright stars and active galactic nuclei. However, they have proven useful for making very high precision measurements of simple stellar parameters such as size and position (astrometry), for imaging the nearest giant stars and probing the cores of nearby active galaxies.
For details of individual instruments, see the list of astronomical interferometers at visible and infrared wavelengths.
At radio wavelengths, interferometers such as the Very Large Array and MERLIN have been in operation for many years. The distances between telescopes are typically , although arrays with much longer baselines utilize the techniques of Very Long Baseline Interferometry. In the (sub)-millimetre, existing arrays include the Submillimeter Array and the IRAM Plateau de Bure facility. The Atacama Large Millimeter Array has been fully operational since March 2013.
Max Tegmark and Matias Zaldarriaga have proposed the Fast Fourier Transform Telescope which would rely on extensive computer power rather than standard lenses and mirrors. If Moore's law continues, such designs may become practical and cheap in a few years.
Progressing quantum computing might eventually allow more extensive use of interferometry, as newer proposals suggest.
See also
Event Horizon Telescope (EHT) and Laser Interferometer Space Antenna (LISA)
ExoLife Finder, a proposed hybrid interferometric telescope
Hypertelescope
Cambridge Optical Aperture Synthesis Telescope, an optical interferometer
Navy Precision Optical Interferometer, a Michelson Optical Interferometer
Radio astronomy#Radio interferometry
Radio telescope#Radio interferometry
List
4C Array
Akeno Giant Air Shower Array (AGASA)
Allen Telescope Array (ATA), formerly known as the One Hectare Telescope (1hT)
Antarctic Muon And Neutrino Detector Array (AMANDA)
Atacama Large Millimeter Array (ALMA)
Australia Telescope Compact Array
CHARA array
Cherenkov Telescope Array (CTA)
Chicago Air Shower Array (CASA)
Infrared Optical Telescope Array (IOTA)
Interplanetary Scintillation Array (IPS array) also called the Pulsar Array
LOFAR (LOw Frequency ARray)
Modular Neutron Array (MoNA)
Murchison Widefield Array (MWA)
Nuclear Spectroscopic Telescope Array (NuSTAR)
Square Kilometre Array (SKA)
Submillimeter Array (SMA)
Sunyaev-Zel'dovich Array (SZA)
Telescope Array Project
Very Large Array (VLA)
Very Long Baseline Array (VLBA)
Very Small Array
References
Further reading
M. Ryle & D. Vonberg (1946), "Solar radiation on 175 Mc/s", Nature 158, p. 339
Govert Schilling, New Scientist, 23 February 2006 The hypertelescope: a zoom with a view
External links
How to combine the light from multiple telescopes for astrometric measurements
at NPOI... Why an Optical Interferometer?
Remote Sensing the potential and limits of astronomical interferometry
The Antoine Labeyrie's hypertelescope project's website
Andhra University College of Engineering, also known as AU College of Engineering, is an autonomous college and extension campus of the Andhra University located at Visakhapatnam, India. It is the first Indian institution to have a Department of Chemical Engineering.
History
The Andhra University College of Engineering was established in 1955 as the Department of Engineering by Prof. Devaguptapu Seethapathi Rao (Electrical) and Prof. Kalavapudi Krishnamacharyulu (Civil Engineering), under the administration of Vice Chancellor V. S. Krishna and with further support from Vice Chancellor A. L. Narayana. Civil Engineering, Mechanical Engineering and Electrical Engineering were the main branches in the department at this time. Prof. D. Seethapathi Rao (Electrical) headed the Department of Engineering until 1966, supported by senior professors K. Krishnamacharyulu (Civil), P. V. B. Bushana Rao (Mechanical), M. S. Raju and L. B. K. Sastry (Electrical), Venkateswaralu (Chemical), and T. Venugopal Rao (Mechanical).
In 1960, the Department of Engineering was shifted to the present North campus spread over . The Department of Chemical Technology, instituted in 1933, was shifted to the same campus in 1962, adding to the existing engineering branches. In 1966, the Department of Engineering was converted into the College of Engineering (Autonomous) and became a constituent of the Andhra University.
Structure
The College of Engineering consists of 12 engineering and four basic science departments, offering 15 undergraduate Engineering full-time programs and four undergraduate engineering part-time programs. 28 postgraduate engineering programs, an MCA program and three M.Sc. programs are also offered. All the departments run PhD programs in research. The college has Centres of Excellence carrying out research in specialized areas.
The college is graded along with the Andhra University by the National Assessment and Accreditation Council (NAAC), and has been awarded a rating of A+ (85%).
The college is one of the four lead institutions selected in the state of Andhra Pradesh for World Bank aid.
Admission
Students are admitted into the undergraduate programs based on their score in the Engineering Agricultural and Medical Common Entrance Test (EAMCET) conducted by the Government of Andhra Pradesh.
Students can also be admitted into undergraduate courses through a test called Andhra University Engineering Entrance Test (AUEET) for six-year Integrated Dual Degree courses and twinning programs. In this course, both Bachelor of Technology and Master of Technology degrees will be completed in 5 years.
Students are admitted into postgraduate programs based on their Graduate Aptitude Test in Engineering (GATE) scores and rankings or their ranking in the Post Graduate Engineering Common Entrance Test (PGECET) conducted by the Government of Andhra Pradesh.
Academics
Engineering Curriculum Development
The college follows a four-year duration (one year + six semesters) with external mode of examination for B.E./B.Tech./B.Arch./B.Pharm. programmes. It follows a four semester course for the M.E./M.Tech. programmes. It also offers five-year integrated courses that combine B.Tech. and M.Tech degrees.
The college follows a two-year duration with an external mode of examination for its M.Sc (Computer Science) program.
Research and Consultancy
The faculty of the college is involved in research projects and schemes granted by national level funding agencies such as UGC, AICTE, Department of Atomic Energy and the Department of Telecommunications. The college has projects with Defence Research and Development Organisation, Indian Space Research Organisation, Bhabha Atomic Research Centre, and private companies as well.
The college collaborates on two-year programs in M.Tech/M.Sc (Software Engineering), MS (Signal Processing) and M.Tech/M.Sc (Telecommunications) with the Blekinge Institute of Technology, Sweden; on a three-year B.Engg. (Aircraft Engineering) program with Perth College, UK; and on a four-year B.E. (Electromechanical Engineering) program with Group-T International University, Belgium.
Engineering departments
Architecture
Biotechnology
Chemical Engineering
Chemical Petro Engineering
Civil Engineering
Civil & Environmental Engineering
Computer Science and Systems Engineering
Electrical Engineering
Electronics and Communication Engineering
Geo Informatics
Instrument Technology
Marine Engineering
Mechanical Engineering
Metallurgical Engineering
Naval Architecture
Pharmaceutical Sciences
Ceramic Technology
Department of Information Technology and Computer Applications
Basic sciences departments
Engineering Chemistry
Engineering Mathematics
Engineering Physics
Humanities and Social Sciences
Centres and institutes
Centre for Biomedical Engineering
Centre for Technology Forecasting
International Centre for Bioinformatics
Advanced Centre for Nanotechnology
Centre for Biotechnology in the Department of Chemical Engineering
Centre for Phase Equilibrium Thermodynamics in the Department of Chemical Engineering
Centre for Energy Systems in the Department of Mechanical Engineering
Centre for Condition Monitoring and Vibration Diagnostics in the Department of Mechanical Engineering
Centre for Remote Sensing and Information Systems in the Department of Geo-Engineering
Centre for Research on Off-Shore Structures in the Department of Civil Engineering
Affiliations
This autonomous engineering college is a constituent of, and affiliated to, Andhra University, Visakhapatnam. Andhra University was the first general university in the country to obtain ISO 9001:2000 certification, in 2006. The Andhra University College of Engineering is also recognised by the AICTE and the UGC.
Facilities
It is the first college in Andhra Pradesh to launch a 4G WiFi facility for students on a commercial basis. The campus has a number of basketball, volleyball and tennis courts, and two cricket grounds.
The college was part of the International Fleet Review (IFR) 2016, with the IFR village and exhibition located on the campus.
Notable alumni
Notable alumni include:
Satya N. Atluri, Mechanical Engineering (1959-1963), Recipient of a Padma Bhushan from the President of India in 2013.
Anumolu Ramakrishna, Civil Engineering (1959-1963), Recipient of Padma Bhushan from the President of India in 2014.
B. S. Daya Sagar, Geoengineering (1988–1994), only Asian recipient of Georges Matheron Lectureship Award from International Association for Mathematical Geosciences.
N. S. Raghavan, Electrical Engineering 1959–1964, co-founder of Infosys (one of the first two, who started Infosys)
S. Rao Kosaraju, Computer Science (1959–1964), known for Kosaraju's algorithm, which finds the strongly connected components of a directed graph
Grandhi Mallikarjuna Rao, Mechanical Engineering, Founder and Chairman of the GMR Group, which is one of the fastest-growing infrastructure enterprises in India with interests in Airports, Energy, Highways and Urban Infrastructure sectors.
Kambhampati Hari Babu, Electronics and Communications Engineering, a member of parliament to the 16th Lok Sabha from Visakhapatnam.
Rankings
Andhra University College of Engineering was ranked 77 among engineering colleges by the National Institutional Ranking Framework (NIRF) in 2019.
References
External links
Official website of AU College of Engineering.
Engineering colleges in Andhra Pradesh
1955 establishments in India
Colleges affiliated to Andhra University
Universities and colleges established in 1955 |
In the x86 architecture, the CPUID instruction (identified by a CPUID opcode) is a processor supplementary instruction (its name derived from CPU Identification) allowing software to discover details of the processor. It was introduced by Intel in 1993 with the launch of the Pentium and SL-enhanced 486 processors.
A program can use the CPUID to determine processor type and whether features such as MMX/SSE are implemented.
History
Prior to the general availability of the CPUID instruction, programmers would write esoteric machine code which exploited minor differences in CPU behavior in order to determine the processor make and model. With the introduction of the 80386 processor, EDX on reset indicated the revision but this was only readable after reset and there was no standard way for applications to read the value.
Outside the x86 family, developers are mostly still required to use esoteric processes (involving instruction timing or CPU fault triggers) to determine the variations in CPU design that are present.
In the Motorola 680x0 family — which never had a CPUID instruction of any kind — certain specific instructions required elevated privileges. These could be used to tell various CPU family members apart. In the Motorola 68010 the instruction MOVE from SR became privileged. This notable instruction (and state machine) change allowed the 68010 to meet the Popek and Goldberg virtualization requirements. Because the 68000 offered an unprivileged MOVE from SR, the two different CPUs could be told apart by a CPU error condition being triggered.
While the CPUID instruction is specific to the x86 architecture, other architectures (like ARM) often provide on-chip registers which can be read in prescribed ways to obtain the same sorts of information provided by the x86 CPUID instruction.
Calling CPUID
The CPUID opcode is 0F A2.
In assembly language, the CPUID instruction takes no parameters as CPUID implicitly uses the EAX register to determine the main category of information returned. In Intel's more recent terminology, this is called the CPUID leaf. CPUID should be called with EAX = 0 first, as this will store in the EAX register the highest EAX calling parameter (leaf) that the CPU implements.
To obtain extended function information CPUID should be called with the most significant bit of EAX set. To determine the highest extended function calling parameter, call CPUID with EAX = 80000000h.
CPUID leaves greater than 3 but less than 80000000 are accessible only when the model-specific registers have IA32_MISC_ENABLE.BOOT_NT4 [bit 22] = 0 (which is so by default). As the name suggests, Windows NT 4.0 until SP6 did not boot properly unless this bit was set, but later versions of Windows do not need it, so basic leaves greater than 4 can be assumed visible on current Windows systems. Basic valid leaves currently go up to 14h, but the information returned by some leaves is not disclosed in the publicly available documentation, i.e. they are "reserved".
Some of the more recently added leaves also have sub-leaves, which are selected via the ECX register before calling CPUID.
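Compilers such as GCC and Clang provide a <cpuid.h> header with helper routines, so programs need not issue the instruction in inline assembly. The following sketch (assuming those compilers and their __get_cpuid/__get_cpuid_count helpers) reads the highest basic and extended leaves and shows how a sub-leaf is selected:
#include <stdio.h>
#include <string.h>
#include <cpuid.h>   /* GCC/Clang helpers: __get_cpuid, __get_cpuid_count */

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* EAX=0: highest basic leaf in EAX, manufacturer ID in EBX, EDX, ECX. */
    if (__get_cpuid(0, &eax, &ebx, &ecx, &edx)) {
        char vendor[13];
        memcpy(vendor + 0, &ebx, 4);
        memcpy(vendor + 4, &edx, 4);
        memcpy(vendor + 8, &ecx, 4);
        vendor[12] = '\0';
        printf("highest basic leaf: %u, vendor ID: %s\n", eax, vendor);
    }

    /* EAX=80000000h: highest extended function leaf. */
    if (__get_cpuid(0x80000000, &eax, &ebx, &ecx, &edx))
        printf("highest extended leaf: %08Xh\n", eax);

    /* Leaves with sub-leaves take the sub-leaf index in ECX; here leaf 7, sub-leaf 0. */
    if (__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
        printf("leaf 7 sub-leaf 0: EBX=%08Xh\n", ebx);
    return 0;
}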
EAX=0: Highest Function Parameter and Manufacturer ID
This returns the CPU's manufacturer ID string, a twelve-character ASCII string stored in EBX, EDX, ECX (in that order). The highest basic calling parameter (the largest value that EAX can be set to before calling CPUID) is returned in EAX.
Here is a list of processors and the highest function implemented.
The following are known processor manufacturer ID strings:
"AMDisbetter!" early engineering samples of AMD K5 processor
"AuthenticAMD" AMD
"CentaurHauls" IDT WinChip/Centaur (Including some VIA and Zhaoxin CPUs)
"CyrixInstead" Cyrix/early STMicroelectronics and IBM
"GenuineIntel" Intel
"TransmetaCPU" Transmeta
"GenuineTMx86" Transmeta
"Geode by NSC" National Semiconductor
"NexGenDriven" NexGen
"RiseRiseRise" Rise
"SiS SiS SiS " SiS
"UMC UMC UMC " UMC
"VIA VIA VIA " VIA
"Vortex86 SoC" DM&P Vortex86
" Shanghai " Zhaoxin
"HygonGenuine" Hygon
"Genuine RDC" RDC Semiconductor Co. Ltd.
"E2K MACHINE" MCST Elbrus
The following are ID strings used by open source soft CPU cores:
"MiSTer AO486" ao486 CPU
"GenuineIntel" v586 core (this is identical to the Intel ID string)
The following are known ID strings from virtual machines:
"bhyve bhyve " bhyve
"KVMKVMKVM\0\0\0" KVM, \0 denotes an ASCII NUL character
"TCGTCGTCGTCG" QEMU
"Microsoft Hv" Microsoft Hyper-V or Windows Virtual PC
"MicrosoftXTA" – Microsoft x86-to-ARM
" lrpepyh vr" Parallels (it possibly should be "prl hyperv ", but it is encoded as " lrpepyh vr" due to an endianness mismatch)
"VMwareVMware" VMware
"XenVMMXenVMM" Xen HVM
"ACRNACRNACRN" Project ACRN
" QNXQVMBSQG " QNX Hypervisor
"GenuineIntel" Apple Rosetta 2
"VirtualApple" – Newer versions of Apple Rosetta 2
For instance, on a GenuineIntel processor, the value returned in EBX is 0x756e6547, in EDX is 0x49656e69 and in ECX is 0x6c65746e. The following example code displays the vendor ID string as well as the highest calling parameter that the CPU implements.
.intel_syntax noprefix
.text
.m0: .string "CPUID: %x\n"
.m1: .string "Largest basic function number implemented: %i\n"
.m2: .string "Vendor ID: %s\n"
.globl main
main:
    push rbp                     # rbp and r12 are callee-saved and used below
    push r12
    sub rsp, 24                  # scratch space for the vendor string; keeps rsp 16-byte aligned
    mov eax, 1                   # leaf 1: processor signature returned in EAX
    cpuid
    lea rdi, .m0[rip]
    mov esi, eax
    xor eax, eax                 # variadic call: AL = number of vector registers used
    call printf
    mov eax, 0                   # leaf 0: highest basic leaf in EAX, vendor ID in EBX, EDX, ECX
    cpuid
    lea rdi, .m1[rip]
    mov esi, eax
    mov r12d, edx                # EDX and ECX are caller-saved; keep them across printf
    mov ebp, ecx
    xor eax, eax
    call printf
    mov 3[rsp], ebx              # EBX is callee-saved, so it survived the call above
    lea rsi, 3[rsp]
    lea rdi, .m2[rip]
    mov 7[rsp], r12d             # vendor string is EBX, EDX, ECX in that order
    mov 11[rsp], ebp
    mov byte ptr 15[rsp], 0      # NUL-terminate the 12-byte string
    xor eax, eax
    call printf
    add rsp, 24
    pop r12
    pop rbp
    xor eax, eax                 # return 0
    ret
.section .note.GNU-stack,"",@progbits
EAX=1: Processor Info and Feature Bits
This returns the CPU's stepping, model, and family information in register EAX (also called the signature of a CPU), feature flags in registers EDX and ECX, and additional feature info in register EBX.
Stepping ID is a product revision number assigned due to fixed errata or other changes.
The actual processor model is derived from the Model, Extended Model ID and Family ID fields. If the Family ID field is either 6 or 15, the model is equal to the sum of the Extended Model ID field shifted left by 4 bits and the Model field. Otherwise, the model is equal to the value of the Model field.
The actual processor family is derived from the Family ID and Extended Family ID fields. If the Family ID field is equal to 15, the family is equal to the sum of the Extended Family ID and the Family ID fields. Otherwise, the family is equal to the value of the Family ID field.
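A sketch of this derivation in C (using the GCC/Clang <cpuid.h> helper; the conventional field positions are stepping in bits 3:0, model in bits 7:4, family in bits 11:8, extended model in bits 19:16 and extended family in bits 27:20):
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 1;

    unsigned int stepping   = eax & 0xF;
    unsigned int model      = (eax >> 4) & 0xF;
    unsigned int family     = (eax >> 8) & 0xF;
    unsigned int ext_model  = (eax >> 16) & 0xF;
    unsigned int ext_family = (eax >> 20) & 0xFF;

    /* The rules described above: the extended fields only contribute for families 6 and 15. */
    unsigned int display_family = (family == 0xF) ? family + ext_family : family;
    unsigned int display_model  = (family == 0x6 || family == 0xF) ? (ext_model << 4) + model : model;

    printf("family %u, model %u, stepping %u\n", display_family, display_model, stepping);
    return 0;
}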
The meaning of the Processor Type field is given in the table below.
As of October 2023, the following x86 processor family IDs are known:
The processor info and feature flags are manufacturer specific but usually, the Intel values are used by other manufacturers for the sake of compatibility.
Reserved fields should be masked before using them for processor identification purposes.
EAX=2: Cache and TLB Descriptor information
This returns a list of descriptors indicating cache and TLB capabilities in EAX, EBX, ECX and EDX registers.
On processors that support this leaf, calling CPUID with EAX=2 will cause the bottom byte of EAX to be set to 01h and the remaining 15 bytes of EAX/EBX/ECX/EDX to be filled with 15 descriptors, one byte each. These descriptors provide information about the processor's caches, TLBs and prefetch. This is typically one cache or TLB per descriptor, but some descriptor-values provide other information as well - in particular, 00h is used for an empty descriptor, FFh indicates that the leaf does not contain valid cache information and that leaf 4h should be used instead, and FEh indicates that the leaf does not contain valid TLB information and that leaf 18h should be used instead. The descriptors may appear in any order.
For each of the four registers (EAX,EBX,ECX,EDX), if bit 31 is set, then the register should not be considered to contain valid descriptors (e.g. on Itanium in IA-32 mode, CPUID(EAX=2) returns 80000000h in EDX - this should be interpreted to mean that EDX contains no valid information, not that it contains a 512K L2 cache.)
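The decoding rules just described can be sketched in a few lines of C. The following example (using GCC's <cpuid.h>) merely lists the raw descriptor bytes without attempting to interpret their meanings:
#include <stdio.h>
#include <cpuid.h>
int main(void)
{
    unsigned int regs[4]; /* EAX, EBX, ECX, EDX */
    __cpuid(2, regs[0], regs[1], regs[2], regs[3]);
    for (int r = 0; r < 4; r++) {
        if (regs[r] & 0x80000000u)
            continue; /* bit 31 set: this register holds no valid descriptors */
        for (int b = 0; b < 4; b++) {
            if (r == 0 && b == 0)
                continue; /* low byte of EAX is 01h, not a descriptor */
            unsigned int desc = (regs[r] >> (8 * b)) & 0xFF;
            if (desc != 0) /* 00h is the empty descriptor */
                printf("descriptor %02Xh\n", desc);
        }
    }
    return 0;
}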
The table below provides, for known descriptor values, a condensed description of the cache or TLB indicated by that descriptor value (or other information, where that applies). The suffixes used in the table are:
K,M,G : binary kilobyte, megabyte, gigabyte (capacity for caches, page-size for TLBs)
E : entries (for TLBs; e.g. 64E = 64 entries)
p : page-size (e.g. 4Kp for TLBs where each entry describes one 4KByte page, 4K/2Mp for TLBs where each entry can describe either one 4Kbyte page or one 2MByte hugepage)
L : cache-line size (e.g. 32L = 32-byte cache line size)
S : cache sector size (e.g. 2S means that the cache uses sectors of 2 cache-lines each)
A : associativity (e.g. 6A = 6-way set-associative, FA = fully-associative)
EAX=3: Processor Serial Number
This returns the processor's serial number. The processor serial number was introduced on Intel Pentium III, but due to privacy concerns, this feature is no longer implemented on later models (the PSN feature bit is always cleared). Transmeta's Efficeon and Crusoe processors also provide this feature. AMD CPUs however, do not implement this feature in any CPU models.
For Intel Pentium III CPUs, the serial number is returned in the EDX:ECX registers. For Transmeta Efficeon CPUs, it is returned in the EBX:EAX registers. And for Transmeta Crusoe CPUs, it is returned in the EBX register only.
Note that the processor serial number feature must be enabled in the BIOS setting in order to function.
EAX=4 and EAX=Bh: Intel thread/core and cache topology
These two leaves are used for processor topology (thread, core, package) and cache hierarchy enumeration in Intel multi-core (and hyperthreaded) processors. AMD does not use these leaves but has alternate ways of doing the core enumeration.
Unlike most other CPUID leaves, leaf Bh will return different values in EDX depending on which logical processor the CPUID instruction runs on; the value returned in EDX is actually the x2APIC id of the logical processor. The x2APIC id space is not continuously mapped to logical processors, however; there can be gaps in the mapping, meaning that some intermediate x2APIC ids don't necessarily correspond to any logical processor. Additional information for mapping the x2APIC ids to cores is provided in the other registers. Although leaf Bh has sub-leaves (selected by ECX as described further below), the value returned in EDX is only affected by the logical processor on which the instruction is running, not by the sub-leaf.
The processor(s) topology exposed by leaf Bh is a hierarchical one, but with the strange caveat that the order of (logical) levels in this hierarchy doesn't necessarily correspond to the order in the physical hierarchy (SMT/core/package). However, every logical level can be queried as an ECX subleaf (of the Bh leaf) for its correspondence to a "level type", which can be either SMT, core, or "invalid". The level id space starts at 0 and is continuous, meaning that if a level id is invalid, all higher level ids will also be invalid. The level type is returned in bits 15:08 of ECX, while the number of logical processors at the level queried is returned in EBX. Finally, the connection between these levels and x2APIC ids is returned in EAX[4:0] as the number of bits that the x2APIC id must be shifted in order to obtain a unique id at the next level.
As an example, a dual-core Westmere processor capable of hyperthreading (thus having two cores and four threads in total) could have x2APIC ids 0, 1, 4 and 5 for its four logical processors. Leaf Bh (=EAX), subleaf 0 (=ECX) of CPUID could for instance return 100h in ECX, meaning that level 0 describes the SMT (hyperthreading) layer, and return 2 in EBX because there are two logical processors (SMT units) per physical core. The value returned in EAX for this 0-subleaf should be 1 in this case, because shifting the aforementioned x2APIC ids to the right by one bit gives a unique core number (at the next level of the level id hierarchy) and erases the SMT id bit inside each core. A simpler way to interpret this information is that the last bit (bit number 0) of the x2APIC id identifies the SMT/hyperthreading unit inside each core in our example. Advancing to subleaf 1 (by making another call to CPUID with EAX=Bh and ECX=1) could for instance return 201h in ECX, meaning that this is a core-type level, and 4 in EBX because there are 4 logical processors in the package; EAX returned could be any value greater than 3, because it so happens that bit number 2 is used to identify the core in the x2APIC id. Note that bit number 1 of the x2APIC id is not used in this example. However, EAX returned at this level could well be 4 (and it happens to be so on a Clarkdale Core i3 5x0) because that also gives a unique id at the package level (=0 obviously) when shifting the x2APIC id by 4 bits. Finally, you may wonder what the EAX=4 leaf can tell us that we didn't find out already. In EAX[31:26] it returns the APIC mask bits reserved for a package; that would be 111b in our example because bits 0 to 2 are used for identifying logical processors inside this package, but bit 1 is also reserved although not used as part of the logical processor identification scheme. In other words, APIC ids 0 to 7 are reserved for the package, even though half of these values don't map to a logical processor.
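In code, the enumeration described above can be sketched as follows, walking the sub-leaves of leaf Bh until the level type in ECX[15:8] reads 0 ("invalid"). The sketch assumes a GCC/Clang <cpuid.h> recent enough to provide __get_cpuid_count:
#include <stdio.h>
#include <cpuid.h>
int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    for (unsigned int level = 0; ; level++) {
        if (!__get_cpuid_count(0x0B, level, &eax, &ebx, &ecx, &edx))
            return 1; /* leaf Bh not supported */
        unsigned int type = (ecx >> 8) & 0xFF; /* 0 = invalid, 1 = SMT, 2 = core */
        if (type == 0)
            break; /* no more valid levels */
        printf("level %u: type %u, x2APIC shift %u, logical processors %u, x2APIC id %u\n",
               level, type, eax & 0x1F, ebx & 0xFFFF, edx);
    }
    return 0;
}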
The cache hierarchy of the processor is explored by looking at the sub-leaves of leaf 4. The APIC ids are also used in this hierarchy to convey information about how the different levels of cache are shared by the SMT units and cores. To continue our example, the L2 cache, which is shared by SMT units of the same core but not between physical cores on the Westmere is indicated by EAX[26:14] being set to 1, while the information that the L3 cache is shared by the whole package is indicated by setting those bits to (at least) 111b. The cache details, including cache type, size, and associativity are communicated via the other registers on leaf 4.
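The sub-leaves of leaf 4 can be enumerated in the same style. The sketch below is Intel-specific (AMD does not implement leaf 4) and assumes the usual field layout in which the way count, partition count, line size and set count are each stored minus one:
#include <stdio.h>
#include <cpuid.h>
int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    for (unsigned int i = 0; ; i++) {
        if (!__get_cpuid_count(4, i, &eax, &ebx, &ecx, &edx))
            return 1;
        unsigned int type = eax & 0x1F; /* 0 = no more caches, 1 = data, 2 = instruction, 3 = unified */
        if (type == 0)
            break;
        unsigned int level      = (eax >> 5) & 0x7;
        unsigned int ways       = ((ebx >> 22) & 0x3FF) + 1;
        unsigned int partitions = ((ebx >> 12) & 0x3FF) + 1;
        unsigned int line_size  = (ebx & 0xFFF) + 1;
        unsigned int sets       = ecx + 1;
        printf("L%u cache (type %u): %u KB, %u-way, %u-byte lines\n",
               level, type, ways * partitions * line_size * sets / 1024, ways, line_size);
    }
    return 0;
}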
Beware that older versions of the Intel app note 485 contain some misleading information, particularly with respect to identifying and counting cores in a multi-core processor; errors from misinterpreting this information have even been incorporated in the Microsoft sample code for using CPUID, even for the 2013 edition of Visual Studio, and also in the sandpile.org page for CPUID. However, the Intel code sample for identifying processor topology has the correct interpretation, and the current Intel Software Developer's Manual uses clearer language. The (open source) cross-platform production code from Wildfire Games also implements the correct interpretation of the Intel documentation.
Topology detection examples involving older (pre-2010) Intel processors that lack x2APIC (thus don't implement the EAX=Bh leaf) are given in a 2010 Intel presentation. Beware that using that older detection method on 2010 and newer Intel processors may overestimate the number of cores and logical processors because the old detection method assumes there are no gaps in the APIC id space, and this assumption is violated by some newer processors (starting with the Core i3 5x0 series), but these newer processors also come with an x2APIC, so their topology can be correctly determined using the EAX=Bh leaf method.
EAX=6: Thermal and power management
This returns feature bits in the EAX register and additional information in the EBX, ECX and EDX registers.
EAX=7, ECX=0: Extended Features
This returns extended feature flags in EBX, ECX, and EDX. Returns the maximum ECX value for EAX=7 in EAX.
EAX=7, ECX=1: Extended Features
This returns extended feature flags in EAX, EBX, and EDX. ECX is reserved.
EAX=7, ECX=2: Extended Features
This returns extended feature flags in EDX.
EAX, EBX and ECX are reserved.
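Because leaf 7 takes a sub-leaf number in ECX, the _count variants of the GCC <cpuid.h> helpers are needed to query it. The sketch below reads sub-leaf 0 and tests two of its EBX feature flags; the choice of AVX2 (bit 5) and SHA (bit 29) is only an example:
#include <stdio.h>
#include <cpuid.h>
int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
        return 1; /* leaf 7 not supported */
    printf("Highest leaf-7 sub-leaf: %u\n", eax);
    printf("AVX2: %s\n", (ebx & (1u << 5))  ? "yes" : "no");
    printf("SHA:  %s\n", (ebx & (1u << 29)) ? "yes" : "no");
    return 0;
}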
EAX=0Dh: XSAVE features and state-components
This leaf is used to enumerate XSAVE features and state-components.
The XSAVE instruction set extension is designed to save/restore CPU extended state (typically for the purpose of context switching) in a manner that can be extended to cover new instruction set extensions without the OS context-switching code needing to understand the specifics of the new extensions. This is done by defining a series of state-components, each with a size and offset within a given save area, and each corresponding to a subset of the state needed for one CPU extension or another. The EAX=0Dh CPUID leaf is used to provide information about which state-components the CPU supports and what their sizes/offsets are, so that the OS can reserve the proper amount of space and set the associated enable-bits.
The state-components can be subdivided into two groups: user-state (state-items that are visible to the application, e.g. AVX-512 vector registers), and supervisor-state (state items that affect the application but are not directly user-visible, e.g. user-mode interrupt configuration). The user-state items are enabled by setting their associated bits in the XCR0 control register, while the supervisor-state items are enabled by setting their associated bits in the IA32_XSS (0DA0h) MSR - the indicated state items then become the state-components that can be saved and restored with the XSAVE/XRSTOR family of instructions.
The XSAVE mechanism can handle up to 63 state-components in this manner. State-components 0 and 1 (x87 and SSE, respectively) have fixed offsets and sizes - for state-components 2 to 62, their sizes, offsets and a few additional flags can be queried by executing CPUID with EAX=0Dh and ECX set to the index of the state-component. This will return the following items in EAX, EBX and ECX (with EDX being reserved):
Attempting to query an unsupported state-component in this manner results in EAX,EBX,ECX and EDX all being set to 0.
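A minimal sketch of this enumeration, assuming a GCC/Clang <cpuid.h> that provides the _count helpers: sub-leaf 0 supplies the bitmap of supported user state-components in EDX:EAX, and each supported component is then queried individually for its size and offset.
#include <stdio.h>
#include <cpuid.h>
int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    if (!__get_cpuid_count(0x0D, 0, &eax, &ebx, &ecx, &edx))
        return 1; /* leaf 0Dh not supported */
    unsigned long long supported = ((unsigned long long)edx << 32) | eax;
    for (unsigned int i = 2; i < 63; i++) {
        if (!(supported & (1ULL << i)))
            continue; /* state-component i not supported */
        __cpuid_count(0x0D, i, eax, ebx, ecx, edx);
        printf("state-component %2u: size %u bytes, offset %u\n", i, eax, ebx);
    }
    return 0;
}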
Sub-leaves 0 and 1 of CPUID leaf 0Dh are used to provide feature information:
As of July 2023, the XSAVE state-components that have been architecturally defined are:
EAX=12h: SGX capabilities
This leaf provides information about the supported capabilities of the Intel Software Guard Extensions (SGX) feature. The leaf provides multiple sub-leaves, selected with ECX.
Sub-leaf 0 provides information about supported SGX leaf functions in EAX and maximum supported SGX enclave sizes in EDX; ECX is reserved. EBX provides a bitmap of bits that can be set in the MISCSELECT field in the SECS (SGX Enclave Control Structure) - this field is used to control information written to the MISC region of the SSA (SGX Save State Area) when an AEX (SGX Asynchronous Enclave Exit) occurs.
Sub-leaf 1 provides a bitmap of which bits can be set in the 128-bit ATTRIBUTES field of SECS in EDX:ECX:EBX:EAX (this applies to the SECS copy used as input to the ENCLS[ECREATE] leaf function). The top 64 bits (given in EDX:ECX) are a bitmap of which bits can be set in the XFRM (X-feature request mask) - this mask is a bitmask of which CPU state-components (see leaf 0Dh) will be saved to the SSA in case of an AEX; this has the same layout as the XCR0 control register. The other bits are given in EAX and EBX, as follows:
Sub-leaves 2 and up are used to provide information about which physical memory regions are available for use as EPC (Enclave Page Cache) sections under SGX.
EAX=14h, ECX=0: Processor Trace
This sub-leaf provides feature information for Intel Processor Trace (also known as Real Time Instruction Trace).
The value returned in EAX is the index of the highest sub-leaf supported for CPUID with EAX=14h. EBX and ECX provide feature flags, EDX is reserved.
EAX=19h: AES Key Locker features
This leaf provides feature information for AES Key Locker in EAX, EBX and ECX. EDX is reserved.
EAX=24h, ECX=0: AVX10 Features
This returns a maximum supported sub-leaf in EAX and AVX10 feature information in EBX. (ECX and EDX are reserved.)
EAX=80000000h: Get Highest Extended Function Implemented
The highest calling parameter is returned in EAX.
EBX/ECX/EDX return the manufacturer ID string (same as EAX=0) on AMD but not Intel CPUs.
EAX=80000001h: Extended Processor Info and Feature Bits
This returns extended feature flags in EDX and ECX.
Many of the bits in EDX (bits 0 through 9, 12 through 17, 23, and 24) are duplicates of EDX from the EAX=1 leaf. (These duplicated bits are present on AMD but not Intel CPUs.)
AMD feature flags are as follows:
EAX=80000002h,80000003h,80000004h: Processor Brand String
These return the processor brand string in EAX, EBX, ECX and EDX. CPUID must be issued with each parameter in sequence to get the entire 48-byte ASCII processor brand string. It is necessary to check whether the feature is present in the CPU by issuing CPUID with EAX = 80000000h first and checking if the returned value is not less than 80000004h.
The string is specified in Intel/AMD documentation to be null-terminated; however, this is not always the case (e.g. the DM&P Vortex86DX3 and the AMD Ryzen 7 6800HS are known to return non-null-terminated brand strings in leaves 80000002h-80000004h), and software should not rely on it.
#include <stdio.h>
#include <string.h>
#include <cpuid.h>
int main()
{
unsigned int regs[12];
char str[sizeof(regs)+1];
__cpuid(0x80000000, regs[0], regs[1], regs[2], regs[3]);
if (regs[0] < 0x80000004)
return 1;
__cpuid(0x80000002, regs[0], regs[1], regs[2], regs[3]);
__cpuid(0x80000003, regs[4], regs[5], regs[6], regs[7]);
__cpuid(0x80000004, regs[8], regs[9], regs[10], regs[11]);
memcpy(str, regs, sizeof(regs));
str[sizeof(regs)] = '\0';
printf("%s\n", str);
return 0;
}
EAX=80000005h: L1 Cache and TLB Identifiers
This function returns the processor's L1 cache and TLB characteristics.
EAX=80000006h: Extended L2 Cache Features
Returns details of the L2 cache in ECX, including the line size in bytes (bits 07-00), the type of associativity (encoded as a 4-bit field; bits 15-12) and the cache size in KB (bits 31-16).
#include <stdio.h>
#include <cpuid.h>
int main()
{
unsigned int eax, ebx, ecx, edx;
unsigned int lsize, assoc, cache;
__cpuid(0x80000006, eax, ebx, ecx, edx);
lsize = ecx & 0xff;
assoc = (ecx >> 12) & 0x0f;
cache = (ecx >> 16) & 0xffff;
printf("Line size: %d B, Assoc. type: %d, Cache size: %d KB.\n", lsize, assoc, cache);
return 0;
}
EAX=80000007h: Processor Power Management Information and RAS Capabilities
This function provides information about power management, power reporting and RAS (Reliability, availability and serviceability) capabilities of the CPU.
EAX=80000008h: Virtual and Physical address Sizes
EAX=8000000Ah: Secure Virtual Machine features
This leaf returns information about AMD SVM (Secure Virtual Machine) features in EAX, EBX and EDX.
EAX=8000001Fh: Encrypted Memory Capabilities
EAX=80000021h: Extended Feature Identification 2
EAX=8FFFFFFFh: AMD Easter Egg
Several AMD CPU models will, for CPUID with EAX=8FFFFFFFh, return an Easter Egg string in EAX, EBX, ECX and EDX. Known Easter Egg strings include:
EAX=C0000000h: Get Highest Centaur Extended Function
Returns index of highest Centaur leaf in EAX. If the returned value in EAX is less than C0000001h, then Centaur extended leaves are not supported.
Present in CPUs from VIA and Zhaoxin.
On IDT WinChip CPUs (CentaurHauls Family 5), the extended leaves C0000001h-C0000005h do not encode any Centaur-specific functionality but are instead aliases of leaves 80000001h-80000005h.
EAX=C0000001h: Centaur Feature Information
This leaf returns Centaur feature information (mainly VIA PadLock) in EDX. (EAX, EBX and ECX are reserved.)
CPUID usage from high-level languages
Inline assembly
This information is easy to access from other languages as well. For instance, the C code for GCC below prints the values returned by the first five CPUID leaves:
#include <stdio.h>
#include <cpuid.h>
int main()
{
unsigned int i, eax, ebx, ecx, edx;
for (i = 0; i < 5; i++) {
__cpuid(i, eax, ebx, ecx, edx);
printf ("InfoType %x\nEAX: %x\nEBX: %x\nECX: %x\nEDX: %x\n", i, eax, ebx, ecx, edx);
}
return 0;
}
In the inline assembly flavor used by MSVC and the Borland/Embarcadero C compilers (bcc32), the clobbering information is implicit in the instructions:
#include <stdio.h>
int main()
{
unsigned int a, b, c, d, i = 0;
__asm {
/* Do the call. */
mov EAX, i;
cpuid;
/* Save results. */
mov a, EAX;
mov b, EBX;
mov c, ECX;
mov d, EDX;
}
printf ("InfoType %x\nEAX: %x\nEBX: %x\nECX: %x\nEDX: %x\n", i, a, b, c, d);
return 0;
}
If either version were written in plain assembly language, the programmer would have to manually save the results in EAX, EBX, ECX, and EDX elsewhere in order to keep using the values.
Wrapper functions
GCC also provides a header called <cpuid.h> on systems that have CPUID. __cpuid is a macro that expands to inline assembly. Typical usage would be:
#include <stdio.h>
#include <cpuid.h>
int main()
{
unsigned int eax, ebx, ecx, edx;
__cpuid(0 /* vendor string */, eax, ebx, ecx, edx);
printf("EAX: %x\nEBX: %x\nECX: %x\nEDX: %x\n", eax, ebx, ecx, edx);
return 0;
}
However, if one requests an extended feature not present on the CPU, there is no indication of the failure and the results may be random and unexpected. A safer version, __get_cpuid, is also provided in <cpuid.h>: it checks whether the requested leaf is supported and performs some additional safety checks. Its output values are not passed through reference-like macro parameters, but through conventional pointers.
#include <stdio.h>
#include <cpuid.h>
int main()
{
unsigned int eax, ebx, ecx, edx;
/* 0x81234567 is nonexistent, but assume it exists */
if (!__get_cpuid (0x81234567, &eax, &ebx, &ecx, &edx)) {
printf("Warning: CPUID request 0x81234567 not valid!\n");
return 1;
}
printf("EAX: %x\nEBX: %x\nECX: %x\nEDX: %x\n", eax, ebx, ecx, edx);
return 0;
}
Notice the ampersands in &eax, &ebx, &ecx, &edx and the conditional statement. If the __get_cpuid call receives a valid request, it returns a non-zero value; if it fails, zero.
The Microsoft Visual C compiler has a builtin function __cpuid(), so the CPUID instruction can be used without writing inline assembly; this is handy since the x86-64 version of MSVC does not allow inline assembly at all. The same program for MSVC would be:
#include <stdio.h>
#ifdef _MSC_VER
#include <intrin.h>
#endif
int main()
{
int regs[4];
int i;
for (i = 0; i < 4; i++) {
__cpuid(regs, i);
printf("The code %d gives %d, %d, %d, %d\n", i, regs[0], regs[1], regs[2], regs[3]);
}
return 0;
}
Many interpreted or compiled scripting languages are capable of using CPUID via an FFI library. One such implementation shows usage of the Ruby FFI module to execute assembly language that includes the CPUID opcode.
.NET 5 and later versions provide the System.Runtime.Intrinsics.X86.X86Base.CpuId method. For instance, the C# code below prints the processor brand string if the CPUID instruction is supported:
using System;
using System.Runtime.InteropServices;
using System.Runtime.Intrinsics.X86;
using System.Text;
namespace X86CPUID {
class CPUBrandString {
public static void Main(string[] args) {
if (!X86Base.IsSupported) {
Console.WriteLine("Your CPU does not support CPUID instruction.");
} else {
Span<int> raw = stackalloc int[12];
(raw[0], raw[1], raw[2], raw[3]) = X86Base.CpuId(unchecked((int)0x80000002), 0);
(raw[4], raw[5], raw[6], raw[7]) = X86Base.CpuId(unchecked((int)0x80000003), 0);
(raw[8], raw[9], raw[10], raw[11]) = X86Base.CpuId(unchecked((int)0x80000004), 0);
Span<byte> bytes = MemoryMarshal.AsBytes(raw);
string brand = Encoding.UTF8.GetString(bytes).Trim();
Console.WriteLine(brand);
}
}
}
}
CPU-specific information outside x86
Some of the non-x86 CPU architectures also provide certain forms of structured information about the processor's abilities, commonly as a set of special registers:
ARM architectures have a CPUID coprocessor register which requires EL1 or above to access.
The IBM System z mainframe processors have had a Store CPU ID (STIDP) instruction for querying the processor ID since the 1983 IBM 4381.
The IBM System z mainframe processors also have a Store Facilities List Extended (STFLE) instruction which lists the installed hardware features.
The MIPS32/64 architecture defines a mandatory Processor Identification (PrId) and a series of daisy-chained Configuration Registers.
The PowerPC processor has the 32-bit read-only Processor Version Register (PVR) identifying the processor model in use. The instruction requires supervisor access level.
DSP and transputer-like chip families have not taken up the instruction in any noticeable way, in spite of having (in relative terms) as many variations in design. Alternate ways of silicon identification might be present; for example, DSPs from Texas Instruments contain a memory-based register set for each functional unit that starts with identifiers determining the unit type and model, its ASIC design revision and features selected at the design phase, and continues with unit-specific control and data registers. Access to these areas is performed by simply using the existing load and store instructions; thus, for such devices, there is no need for extending the register set for device identification purposes.
See also
CPU-Z, a Windows utility that uses CPUID to identify various system settings
CPU-X, an alternative of CPU-Z for Linux and FreeBSD
Spectre (security vulnerability)
Speculative Store Bypass (SSB)
/proc/cpuinfo, a text file generated by certain systems containing some of the CPUID information
References
Further reading
External links
Intel Processor Identification and the CPUID Instruction (Application Note 485), last published version. Said to be incorporated into the Intel® 64 and IA-32 Architectures Software Developer’s Manual in 2013, but the manual still directs the reader to note 485.
Contains some information that can be and was easily misinterpreted though, particularly with respect to processor topology identification.
The big Intel manuals tend to lag behind the Intel ISA document, available at the top of this page, which is updated even for processors not yet publicly available, and thus usually contains more CPUID bits. For example, as of this writing, the ISA book (at revision 19, dated May 2014) documents the CLFLUSHOPT bit in leaf 7, but the big manuals although apparently more up-to-date (at revision 51, dated June 2014) don't mention it.
AMD64 Architecture Programmer’s Manual Volume 3: General-Purpose and System Instructions
cpuid command-line program for Linux
cpuprint.com, cpuprint.exe, cpuprint.raw command-line programs for Windows
instlatx64 - collection of x86/x64 Instruction Latency, Memory Latency and CPUID dumps
X86 architecture
Machine code
X86 instructions |
Histogram equalization is a method in image processing of contrast adjustment using the image's histogram.
Overview
This method usually increases the global contrast of many images, especially when the image is represented by a narrow range of intensity values. Through this adjustment, the intensities can be better distributed on the histogram utilizing the full range of intensities evenly. This allows for areas of lower local contrast to gain a higher contrast. Histogram equalization accomplishes this by effectively spreading out the highly populated intensity values which are used to degrade image contrast.
The method is useful in images with backgrounds and foregrounds that are both bright or both dark. In particular, the method can lead to better views of bone structure in x-ray images, and to better detail in photographs that are either over or under-exposed. A key advantage of the method is that it is a fairly straightforward technique adaptive to the input image and an invertible operator. So in theory, if the histogram equalization function is known, then the original histogram can be recovered. The calculation is not computationally intensive. A disadvantage of the method is that it is indiscriminate. It may increase the contrast of background noise, while decreasing the usable signal.
In scientific imaging where spatial correlation is more important than intensity of signal (such as separating DNA fragments of quantized length), the small signal-to-noise ratio usually hampers visual detections.
Histogram equalization often produces unrealistic effects in photographs; however, it is very useful for scientific images like thermal, satellite or x-ray images, often the same class of images to which one would apply false-color. Histogram equalization can also produce undesirable effects (like a visible image gradient) when applied to images with low color depth. For example, if applied to an 8-bit image displayed with an 8-bit gray-scale palette, it will further reduce the color depth (number of unique shades of gray) of the image. Histogram equalization works best when applied to images with a much higher color depth than the palette size, like continuous data or 16-bit gray-scale images.
There are two ways to think about and implement histogram equalization, either as image change or as palette change. The operation can be expressed as P(M(I)) where I is the original image, M is histogram equalization mapping operation and P is a palette. If we define a new palette as P'=P(M) and leave image I unchanged then histogram equalization is implemented as palette change or mapping change. On the other hand, if palette P remains unchanged and image is modified to I'=M(I) then the implementation is accomplished by image change. In most cases palette change is better as it preserves the original data.
Modifications of this method use multiple histograms, called subhistograms, to emphasize local contrast, rather than overall global contrast. Examples of such methods include adaptive histogram equalization, contrast limiting adaptive histogram equalization or CLAHE, multipeak histogram equalization (MPHE), and multipurpose beta optimized bihistogram equalization (MBOBHE). The goal of these methods, especially MBOBHE, is to improve the contrast without producing brightness mean-shift and detail loss artifacts by modifying the HE algorithm.
A signal transform equivalent to histogram equalization also seems to happen in biological neural networks so as to maximize the output firing rate of the neuron as a function of the input statistics. This has been proved in particular in the fly retina.
Histogram equalization is a specific case of the more general class of histogram remapping methods. These methods seek to adjust the image to make it easier to analyze or to improve visual quality (e.g., retinex).
Back projection
The back projection (or "project") of a histogrammed image is the re-application of the modified histogram to the original image, functioning as a look-up table for pixel brightness values.
For each group of pixels taken from the same position from all input single-channel images, the function puts the histogram bin value to the destination image, where the coordinates of the bin are determined by the values of pixels in this input group. In terms of statistics, the value of each output image pixel characterizes the probability that the corresponding input pixel group belongs to the object whose histogram is used.
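For a single-channel 8-bit image this amounts to a simple table look-up. A minimal C sketch (the function name and the byte-valued 256-bin histogram are illustrative assumptions, not a fixed API):
#include <stddef.h>
/* Back-project a (possibly modified) histogram onto a single-channel 8-bit image:
   each output pixel receives the histogram bin value selected by the corresponding
   input pixel, so the histogram acts as a look-up table for pixel brightness. */
void back_project(const unsigned char *src, unsigned char *dst,
                  size_t n_pixels, const unsigned char hist[256])
{
    for (size_t i = 0; i < n_pixels; i++)
        dst[i] = hist[src[i]];
}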
Implementation
Consider a discrete grayscale image {x} and let n_i be the number of occurrences of gray level i. The probability of an occurrence of a pixel of level i in the image is
p_x(i) = p(x = i) = n_i / n,  0 ≤ i < L,
L being the total number of gray levels in the image (typically 256), n being the total number of pixels in the image, and p_x(i) being in fact the image's histogram for pixel value i, normalized to [0,1].
Let us also define the cumulative distribution function corresponding to i as
cdf_x(i) = Σ_{j=0}^{i} p_x(j),
which is also the image's accumulated normalized histogram.
We would like to create a transformation of the form y = T(x) to produce a new image {y}, with a flat histogram. Such an image would have a linearized cumulative distribution function (CDF) across the value range, i.e.
cdf_y(i) = (i + 1) K  for 0 ≤ i < L
for some constant K. The properties of the CDF allow us to perform such a transform (see Inverse distribution function); it is defined as
y = T(k) = cdf_x(k)
where k is in the range [0, L − 1].
Notice that T maps the levels into the range [0,1], since we used a normalized histogram of {x}. In order to map the values back into their original range, the following simple transformation needs to be applied on the result:
y' = y · (max{x} − min{x}) + min{x}.
A more detailed derivation is provided here.
is a real value while has to be an integer. An intuitive and popular method is applying the round operation:
.
However, detailed analysis results in slightly different formulation. The mapped value should be 0 for the range of . And for , for , ...., and finally for . Then the quantization formula from to should be
.
(Note: when , however, it does not happen just because means that there is no pixel corresponding to that value.)
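The procedure above translates directly into code. The following is a minimal C sketch (the function name, the in-place convention and the flat-array image layout are illustrative assumptions) for an 8-bit grayscale image; it uses the common form of the mapping h(v) = round((cdf(v) − cdfmin) / (M × N − cdfmin) × (L − 1)), which is also the formula used in the worked example below:
#include <stddef.h>
/* Equalize an 8-bit grayscale image in place.
   pixels: flat array of width*height intensity values in [0, 255]. */
void histogram_equalize(unsigned char *pixels, size_t n_pixels)
{
    const unsigned int L = 256; /* number of gray levels */
    size_t hist[256] = {0};
    size_t cdf[256];
    unsigned char map[256];

    /* Build the histogram. */
    for (size_t i = 0; i < n_pixels; i++)
        hist[pixels[i]]++;

    /* Accumulate it into the cdf and remember the first non-zero entry. */
    size_t cum = 0, cdf_min = 0;
    for (unsigned int v = 0; v < L; v++) {
        cum += hist[v];
        cdf[v] = cum;
        if (cdf_min == 0 && cdf[v] != 0)
            cdf_min = cdf[v];
    }
    if (n_pixels == cdf_min)
        return; /* single gray level: nothing to equalize */

    /* h(v) = round((cdf(v) - cdfmin) / (n_pixels - cdfmin) * (L - 1)) */
    for (unsigned int v = 0; v < L; v++) {
        if (cdf[v] < cdf_min) { /* gray level not present in the image */
            map[v] = 0;
            continue;
        }
        double h = (double)(cdf[v] - cdf_min) / (double)(n_pixels - cdf_min) * (L - 1);
        map[v] = (unsigned char)(h + 0.5);
    }

    /* Apply the mapping to every pixel. */
    for (size_t i = 0; i < n_pixels; i++)
        pixels[i] = map[pixels[i]];
}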
Of color images
The above describes histogram equalization on a grayscale image. It can also be used on color images by applying the same method separately to the Red, Green and Blue components of the RGB color values of the image. However, applying the method to each channel independently may yield dramatic changes in the image's color balance, since the relative distributions of the color channels change as a result of applying the algorithm. If the image is first converted to another color space, the Lab color space or the HSL/HSV color space in particular, then the algorithm can be applied to the luminance or value channel without resulting in changes to the hue and saturation of the image.
There are several histogram equalization methods in 3D space. Trahanias and Venetsanopoulos applied histogram equalization in 3D color space. However, it results in "whitening", where the probability of bright pixels is higher than that of dark ones. Han et al. proposed to use a new cdf defined by the iso-luminance plane, which results in a uniform gray distribution.
Examples
For consistency with statistical usage, "CDF" (i.e. Cumulative distribution function) should be replaced by "cumulative histogram", especially since the article links to cumulative distribution function which is derived by dividing values in the cumulative histogram by the overall amount of pixels. The equalized CDF is defined in terms of rank as .
Small image
The 8-bit grayscale image shown has the following values:
The histogram for this image is shown in the following table. Pixel values that have a zero count are excluded for the sake of brevity.
{| class="wikitable"
|-
! Value !! Count
! Value !! Count
! Value !! Count
! Value !! Count
! Value !! Count
|-
| 52 || 1
| 64 || 2
| 72 || 1
| 85 || 2
| 113 || 1
|-
| 55 || 3
| 65 || 3
| 73 || 2
| 87 || 1
| 122 || 1
|-
| 58 || 2
| 66 || 2
| 75 || 1
| 88 || 1
| 126 || 1
|-
| 59 || 3
| 67 || 1
| 76 || 1
| 90 || 1
| 144 || 1
|-
| 60 || 1
| 68 || 5
| 77 || 1
| 94 || 1
| 154 || 1
|-
| 61 || 4
| 69 || 3
| 78 || 1
| 104 || 2
|rowspan="3" colspan="2"|
|-
| 62 || 1
| 70 || 4
| 79 || 2
| 106 || 1
|-
| 63 || 2
| 71 || 2
| 83 || 1
| 109 || 1
|}
The cumulative distribution function (cdf) is shown below. Again, pixel values that do not contribute to an increase in the cdf are excluded for brevity.
{| class="wikitable"
|-
! v, Pixel Intensity !! cdf(v) !! h(v), Equalized v
|-
| 52||1||0
|-
| 55||4||12
|-
| 58||6||20
|-
| 59||9||32
|-
| 60||10||36
|-
| 61||14||53
|-
| 62||15||57
|-
| 63||17||65
|-
| 64||19||73
|-
| 65||22||85
|-
| 66||24||93
|-
| 67||25||97
|-
| 68||30||117
|-
| 69||33||130
|-
| 70||37||146
|-
| 71||39||154
|-
| 72||40||158
|-
| 73||42||166
|-
| 75||43||170
|-
| 76||44||174
|-
| 77||45||178
|-
| 78||46||182
|-
| 79||48||190
|-
| 83||49||194
|-
| 85||51||202
|-
| 87||52||206
|-
| 88||53||210
|-
| 90||54||215
|-
| 94||55||219
|-
| 104||57||227
|-
| 106||58||231
|-
| 109||59||235
|-
| 113||60||239
|-
| 122||61||243
|-
| 126||62||247
|-
| 144||63||251
|-
| 154||64||255
|}
This cdf shows that the minimum value in the subimage is 52 and the maximum value is 154. The cdf of 64 for value 154 coincides with the number of pixels in the image. The cdf must be normalized to the range [0, 255]. The general histogram equalization formula is:
h(v) = round( (cdf(v) − cdfmin) / ((M × N) − cdfmin) × (L − 1) )
where cdfmin is the minimum non-zero value of the cumulative distribution function (in this case 1), M × N gives the image's number of pixels (for the example above 64, where M is width and N the height) and L is the number of grey levels used (in most cases, like this one, 256).
Note that to scale values in the original data that are above 0 to the range 1 to L−1, inclusive, the above equation would instead be:
h(v) = round( (cdf(v) − cdfmin) / ((M × N) − cdfmin) × (L − 2) ) + 1
where cdf(v) > 0. Scaling from 1 to 255 preserves the non-zero-ness of the minimum value.
The equalization formula for the example, scaling data from 0 to 255, inclusive, is:
h(v) = round( (cdf(v) − 1) / (64 − 1) × 255 )
For example, the cdf of 78 is 46. (The value of 78 is used in the bottom row of the 7th column.) The normalized value becomes h(78) = round( (46 − 1) / (64 − 1) × 255 ) = round(182.14) = 182.
Once this is done then the values of the equalized image are directly taken from the normalized cdf to yield the equalized values:
Notice that the minimum value (52) is now 0 and the maximum value (154) is now 255.
{|
|-
|
|
|-
|align="center"| Original
|align="center"| Equalized
|}
{|
|-
|
|
|-
|align="center"| Histogram of Original image
|align="center"| Histogram of Equalized image
|}
Full-sized image
See also
Histogram matching
Adaptive histogram equalization
Normalization (image processing)
Notes
References
Acharya and Ray, Image Processing: Principles and Applications, Wiley-Interscience 2005
Russ, The Image Processing Handbook: Fourth Edition, CRC 2002
"Histogram Equalization" at Generation5 (archive)
Image processing |
The National Research Foundation of Korea (NRF) was established in 2009 as a merger of Korea Science and Engineering Foundation (KOSEF), Korea Research Foundation (KRF), and Korea Foundation for International Cooperation of Science and Technology (KICOS). It provides support for research into new theories for the advancement of science, the arts, and the Korean culture in general. The foundation was first established in 1981. Its offices are located in 25 Heolleung-ro, Seocho-gu, Seoul and 201 Gajeong-ro, Yuseong-gu, Daejeon.
Budget
Total: $6,427 million (1 USD = 1,100 KRW)
Basic Research in Science and Engineering ($1,864 million), Humanities & Social Sciences ($234 million), National Strategic R&D Programs ($2,032 million), Academic Research & University Funding ($2,071 million), International Cooperation ($67 million), Other Areas ($159 million)
Organization
7 directorates, 2 centers, 18 divisions, 20 offices, 46 Teams
President
Board of Directors
Policy Advisory Committee
Research Ethics Committee
Audit
Office of Audits and Inspections
Directorate for Basic Research in Science and Engineering
Division of Natural Sciences
Division of Life Sciences
Division of Medical Sciences
Division of Engineering
Division of ICT and Convergence Research
Office of Basic Research Planning
Office of Basic Research Management
Directorate for Humanities & Social Sciences
Division of Humanities
Division of Social Sciences
Division of Arts, Culture and Convergence
Office of Humanities & Social Sciences Planning
Office of Humanities & Social Sciences Management
Directorate for National Strategic R&D Programs
Division of Drug Discovery and Development
Division of Next Generation Biotechnology
Division of Neuroscience and Advanced Medical Technology
Division of Nano-Semiconductor Technology
Division of Material-Part Technology
Division of ICT & Convergence Technology
Division of Energy and Environment Technology
Division of Space Technology
Division of Nuclear Technology
Division of Public Technology
Office of National Strategic R&D Planning
Office of Fundamental R&D Programs
Office of Big Science Programs
Directorate for Academic Research
Office of Academic Research Affairs
Office of HR Development
Office of University Education Management
Office of University-Industry Cooperation
Directorate for International Affairs
Office of International Cooperation Planning
Office of International Cooperation Framework
Office of International Networks
Directorate for Digital Innovation
Office of R&D Policy and Strategy
Office of Data & Information
Information Security Task Force
Directorate for Management and Operations
Office of Administration
Office of Financial Management
Office of Research Ethics
Office of Planning and Coordination
Office of Public Relations
Main Activities
Support for academic research and development activities
Support for the cultivation and utilization of researchers in academic research and development
Promotion of international cooperation for academic research and development activities
Support for collecting, investigating, analyzing, assessing, managing and using the materials and information necessary for academic research and development and the formulation of related policies
Support for research and operation of organizations related to academic research and development
Support for exchange and cooperation among domestic and overseas organizations related to academic research and development
Other matters necessary for academic research and development
Presidents
Park Chan-Mo (June 26, 2009–January 19, 2011)
Oh Se-Jung (January 20, 2011–January 5, 2012)
Lee Seung-Jong (January 6, 2012–January 2, 2014)
Chung Min-Keun (January 3, 2014–August 22, 2016)
Cho Moo-je (August 23, 2016–July 8, 2018)
Roe Jung-hye (July 9, 2018–September 26, 2021)
Lee Kwang-bok (September 27, 2021–present)
See also
Korea Citation Index
Korean studies
Government of South Korea
Notes
External links
Official English-language site
Scientific organizations based in South Korea
Scholarships in South Korea
Korean studies |
Sensaura, a division of Creative Technology, was a company that provided 3D audio effect technology for the interactive entertainment industry.
Sensaura technology was shipped on more than 24 million game consoles and 150 million PCs (on soundcards, motherboards and external USB audio devices).
Formed in 1991, Sensaura developed a range of technologies for incorporating 3D audio into PCs and consoles.
History
Following its origin as a research project at Thorn EMI's Central Research Laboratories ("CRL", based in Hayes, United Kingdom) in 1991, Sensaura became a supplier of 3D audio technology. By 1998, Sensaura had licensed its technology to the audio chip manufacturers (ESS Technology, Crystal Semiconductor/Cirrus Logic and Yamaha), who at that time supplied 70% of the PC audio market. Subsequent licensees included NVIDIA, Analog Devices, VIA Technologies (expired, replaced by QSound) and C-Media Electronics.
In 1993, Sensaura released a CD sampler disc 'beyond stereo...' containing four tracks;
1. Roadside
2. Railway Station
3. RAF Band
4. Falla: Final Dance from "The Three-Cornered Hat"
These tracks, recorded live, were intended to illustrate what could be achieved in terms of 3D sound from a two-channel stereo set-up.
Some commercial recordings followed:
Milla Jovovich, The Divine Comedy (1994)
Gustav Mahler: Symphony No. 9, Benjamin Zander, Philharmonia Orchestra (1999)
The MacRobert Award was presented to Sensaura by the Royal Academy of Engineering in 2001.
Sensaura technology was shipped on more than 24 million game consoles and 150 million PCs (on soundcards, motherboards and external USB audio devices). As well as being licensed directly for the first Microsoft Xbox hardware, the technology was also available as a middleware product, GameCODA, for the Xbox, PlayStation 2, and GameCube.
In 2000, Sensaura developed a spatial audio plugin for the WinAmp media player which was downloaded 18 million times.
In December 2003, the Sensaura business and IP portfolio was bought by Creative Technology. Sensaura continued to operate as an R&D division within Creative; however, following a major reduction in staff numbers in March 2007, it ceased supplying audio technologies for PC sound cards and game consoles and focused on other product areas, including involvement with the OpenSL ES standard. Following further headcount reductions in 2008, the remaining Sensaura engineers were absorbed into Creative's 3DLabs subsidiary.
Prior to the acquisition of Sensaura by Creative Technology in 2003, some employees left to form Sonaptic Ltd. Licensing Sensaura's technology, Sonaptic specialized in 3D positional audio for mobile devices. In 2007, Wolfson Microelectronics acquired Sonaptic, wanting to expand their reach within the audio market.
Technology
Sensaura 3D Positional Audio (S-3DPA)
Sensaura's 3D positional audio technology was designed to build upon the industry standard Microsoft DirectSound3D API, which allowed games to have high quality audio in three dimensions.
HRTF 3D audio positioning with low CPU usage.
Virtual Ear features common HRTF profiles (libraries) that can be selected by the end-user.
Digital Ear is a process of tuning HRTF filter libraries to the individual's ear shape by creating a CAD model with physical implementation.
MacroFX simulates 'near-field proximity effects' when objects move very close to the listener.
ZoomFX to simulate sounds of a specific size instead of a point source.
3D speaker technology
By using MultiDrive 5.1 and XTC cross-talk cancellation, Sensaura's 3D speaker technology can create accurate 3D audio within a normal 5.1 surround sound system.
XTC cross-talk cancellation for 3D from speakers (as opposed to from headphones).
Independent HRTF calculation for surround speakers to give full 3D audio from 5.1
MultiDrive 5.1 integrates front and rear sound hemispheres on 5.1 speaker setups.
MultiDrive simulates 3D sound on 4 speaker setups
gameCODA (audio middleware)
For more information, see gameCODA.
Further reading
iXBT Labs - Which Sound Card is right for you?
Sensaura - VirtualEar technology
Sensaura - DigitalEar technology
Sensaura - MacroFX Algorithms
Sensaura - MacroFX 2.0
Sensaura - ZoomFX for 3D Sound
Sensaura - XTC cross-talk cancellation
Sensaura - MultiDrive 5.1
Sensaura - MultiDrive (4 speaker)
Sensaura - EnvironmentFX
GameCODA - About
GameCODA - Concepts Guide - Issue 2.0 (All Platforms)
GameCODA - Introductory FAQ - Issue 2.0 (All Platforms)
SoundMAX Technical Notes
Compatible hardware
Consoles & PCs (gameCODA)
GameCODA is able to run on virtually any x86 PC with basic sound support.
Personal computers (PCs)
Microsoft Xbox
Sony PlayStation 2
Nintendo GameCube
Sound cards (S-3DPA)
Sound cards that support S-3DPA can also be utilized to accelerate gameCODA.
Audiotrak Prodigy 7.1
Diamond Monster Sound MX400
M-audio Revolution 7.1
Turtle Beach
Turtle Beach Catalina
Turtle Beach Santa Cruz
Hercules (Guillemot)
Guillemot Maxi Sound Muse
Hercules Game Theater XP
Hercules Gamesurround Muse 5.1 DVD
Hercules Gamesurround Fortissimo III 7.1
Hercules Digifire 7.1
Yamaha YMF7x4 series
YMF724C-V
YMF724F-V
YMF730
YMF738
YMF744
YMF744B-R
YMF754 DS-1E
Terratec
Terratec Aureon 7.1 Space
Terratec Aureon 7.1 Universe
Terratec DMX 6Fire
Terratec Promedia SoundSystem DMX
Motherboards with semiconductors
ASUS
ASUS P4S800 series
ASUS P4B533-X
ASUS A7V266-MX
ASUS A7V8X-X (on audio models only)
Semiconductors
Analog Devices Inc: AD1881A, AD1885, AD1886, AD1887, AD1980, AD1985 (SoundMAX)
C-Media: CMI 8768 (SoundPro)
Realtek: ALC658
VIA: VT1616, VT1618, (Vinyl Audio, Vinyl Tremor)
See also
AC'97 (Audio Codec)
Aureal Semiconductor
Creative Technology
GameCODA
OpenSL ES
Sound card
Sonaptic Ltd
(NVIDIA) SoundStorm
References
External links
Creative Technology
Institute of Professional Sound
Modern Audio Technologies in Games
Sound cards
Creative Technology
Creative Technology acquisitions |
Bandwidth expansion is a technique for widening the bandwidth of the resonances in an LPC filter. This is done by moving all the poles towards the origin by a constant factor γ. The bandwidth-expanded filter A'(z) can be easily derived from the original filter A(z) by:
A'(z) = A(z/γ)
Let A(z) be expressed as:
A(z) = a_0 + a_1 z^(−1) + a_2 z^(−2) + ... + a_N z^(−N)
The bandwidth-expanded filter can then be expressed as:
A'(z) = a_0 + a_1 γ z^(−1) + a_2 γ^2 z^(−2) + ... + a_N γ^N z^(−N)
In other words, each coefficient a_k in the original filter is simply multiplied by γ^k in the bandwidth-expanded filter. The simplicity of this transformation makes it attractive, especially in CELP coding of speech, where it is often used for the perceptual noise weighting and/or to stabilize the LPC analysis. However, when it comes to stabilizing the LPC analysis, lag windowing is often preferred to bandwidth expansion.
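Concretely, the transform amounts to scaling each LPC coefficient by an increasing power of γ. The following minimal C sketch (the function name and the in-place convention are illustrative, not taken from any particular codec) applies bandwidth expansion to an array of LPC coefficients a[0..order]:
#include <stddef.h>
/* Apply bandwidth expansion in place: a[k] becomes a[k] * gamma^k for k = 0..order
   (a[0] is normally 1 and is left unchanged, since gamma^0 = 1). */
void bandwidth_expand(double *a, size_t order, double gamma)
{
    double g = 1.0;
    for (size_t k = 0; k <= order; k++) {
        a[k] *= g; /* multiply the k-th coefficient by gamma^k */
        g *= gamma;
    }
}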
References
P. Kabal, "Ill-Conditioning and Bandwidth Expansion in Linear Prediction of Speech", Proc. IEEE Int. Conf. Acoustics, Speech, Signal Processing, pp. I-824-I-827, 2003.
Signal processing |
Bannari Amman Institute of Technology (Autonomous) is an engineering college located in Sathyamangalam, Erode, Tamil Nadu, India. It was founded by the Bannari Amman Group in 1996 and is affiliated to Anna University. The institute offers 21 undergraduate, 10 postgraduate programmes in Engineering, Technology and Management studies. All the departments of Engineering and Technology are recognized by Anna University, Chennai to offer Ph.D. programmes.
Programmes offered
Undergraduate Programmes
Bachelor of Engineering in
Agriculture Engineering
Automobile Engineering
Biomedical Engineering
Civil Engineering
Computer Science & Engineering
Electrical & Electronics Engineering
Electronics & Instrumentation Engineering
Information Science & Engineering
Mechanical Engineering
Mechatronics
Bachelor of Technology in
Artificial Intelligence and Data Science
Artificial Intelligence and Machine Learning
Biotechnology
Computer Science and Business Systems
Computer Technology
Food Technology
Fashion Technology
Information Technology
Textile Technology
Postgraduate Programmes
Master of Engineering in
Communication Systems
Computer Science & Engineering
Industrial Automation & Robotics
Industrial Safety & Engineering
Software Engineering
Structural Engineering
Master of Technology in
Biotechnology
Master of Business Administration
Ph.D. / M.S. (by research) Programmes
Aeronautical Engineering
Agriculture Engineering
Automobile Engineering
Biotechnology
Civil Engineering
Computer Science & Engineering
Electrical & Electronics Engineering
Electronics & Communication Engineering
Electronics & Instrumentation Engineering
Fashion Technology
Information Technology
Mechanical Engineering
Mechatronics
Textile Technology
Physics
Chemistry
Admissions
Admission is through the Tamil Nadu Engineering Admissions (TNEA) ranking, based on 12th standard exam results and facilitated by the Directorate of Technical Education (DoTE). ME/M.Tech admissions are based on ranking in the TANCET examination conducted by Anna University. The counselling code of the institution is 2702. Management seats are available after the final round of counselling.
Infrastructure
Library
The five-storeyed, air-conditioned and computerized library is well-stacked. 83000 Volumes, 400 National and International Journals, 6500 CD-ROMs, a Digital Library with 6000 e-journals, 274 NPTEL and 166 NITTTR video courses are part of the resources. BIT is an Institutional member of the British Council Library, Chennai, DELNET, New Delhi and INDEST Consortium, New Delhi.
Hostels
The institute has four hostels for male students and five hostels for female students. All hostels are fully furnished, and single, double and four-occupancy rooms are available.
Hostel Details:
Gents Hostel - 3931 inmates
Ladies Hostel - 2191 inmates
Other Facilities in Hostel: Dining Halls, Mini Cine Theatres, Indoor Courts for Shuttle & Table Tennis.
Sports
All necessary sports facilities, with state-of-the-art technology, are available on the campus. The existing sports facilities for students include a standard 400 m athletic track with 8 lanes, a long jump and triple jump pit, and sectors for shot put, discus and javelin throws.
In addition, it has a standard bed for high jump and pole vault events, a football field, two kho-kho courts, a hockey field with kerb, a 65 m radius cricket field with two net practice pitches and one portable net, two volleyball courts, two ball badminton courts,
one handball court and two kabaddi courts. The total area of the BIT playfield is 574,580 sq. ft.
Auditorium
A fully air-conditioned indoor Vedanayagam auditorium with a capacity to seat 750 students.
Town Hall
A fully air-conditioned main auditorium with a capacity to seat 2500 students.
Rankings
BIT was ranked 96 among private engineering colleges in India by Outlook India in 2022.
References
External links
Engineering colleges in Tamil Nadu
Colleges affiliated to Anna University
Universities and colleges in Erode district
Universities and colleges established in 1996
1996 establishments in Tamil Nadu |
Kamailio, formerly OpenSER (and sharing some common history with SIP Express Router (SER)), is a SIP server licensed under the GPL-2.0-or-later license. It can be configured to act as a SIP registrar, proxy or redirect server, and features presence support, RADIUS/syslog accounting and authorization, XML-RPC and JSON-RPC-based remote control, SQL and NoSQL backends, IMS/VoLTE extensions and others.
Kamailio is a Hawaiian word. Kama'ilio means talk, to converse. "It was chosen for its special flavour."
Features
Kamailio is written in pure C with architecture-specific optimizations. It can be configured for many scenarios, including small-office use, enterprise PBX replacements and carrier services; it is a SIP signaling server (a proxy) aimed at large real-time communication services. Features include:
SIP telephony system
SIP load balancer
SIP security firewall
Least cost routing engine
IMS/VoLTE platform
Instant messaging and presence services
SIP IPv4-IPv6 gateway
MSRP relay
SIP-WebRTC gateway
Usage
Kamailio is used by large Internet Service Providers to provide public telephony service. The largest public announced deployment with several million of users is in operation at the German ISP 1&1. Another large deployment is in operation at the provider sipgate.
Forks
OpenSIPS
OpenSIPS, a fork of SER that has since diverged from the SER and OpenSER codebases, is a free software implementation of SIP for voice over IP (VoIP) that can be used to handle voice, text and video communication. OpenSIPS is intended for installations serving thousands of calls and is IETF RFC 3261 compliant. The software was recognized by Google in 2017 with their Open Source Peer Bonus award.
History
Kamailio's roots go back to 2001, when the first line of SIP Express Router (SER) was written; at the time, the working group published its results at iptel.org, and in September 2002 the code itself was published under the GPL. The first fork of SER, OpenSER, came in 2005 and would later merge back into the code that became Kamailio. The codebases of SER and OpenSER (by then known as Kamailio) converged in December 2012, and it was decided to continue to use Kamailio as the main name of the project, which remains open source.
During the first years of development, serweb—a web-based user provisioning—was available.
Timeline
2001
SIP Express Router (SER) is initially developed by the Fraunhofer Institute for Open Communication Systems (FOKUS)
2002
First third-party contribution (ENUM module)
September
Code is GPL'd and first published
2003
Adoption by the general public begins; additional free and open source code is contributed by independent third parties
2004
Part of the FOKUS team moves, with the SER copyrights, to the newly created company iptel.org
Two of the five SER core developers and one main contributor start a new free and open source software project named OpenSER.
2005
The company IPtel.org is bought by TEKELEC, and is responsible for the TEKELEC session router and CSCF.
2007
May 12
SER 2.0 RC-1 (Ottendorf) is made available
2008
August
OpenSER is renamed Kamailio to avoid conflict with similar trademarks
November 4
Kamailio developers sketch and announce a plan to team up with the SER developers to create the future sip-router project
2013
FOKUS and the Kamailio community organize the first iteration of the annual 'Kamailio World' conference in Berlin, Germany.
References
External links
Kamailio Homepage
Telephony software
Free VoIP software
Free server software
Free routing software |
Soitec is an international company, based in France, that manufactures high performance substrates used in the manufacture of semiconductors.
Soitec's semiconductor materials are used to manufacture chips which equip smartphones, tablets, computers, IT servers, and data centres. Soitec's products are also found in electronic components used in cars, connected objects (Internet of Things), as well as industrial and medical equipment.
Soitec's flagship product is silicon on insulator (SOI). Materials produced by Soitec come in the form of substrates (also called "wafers"). These are produced as ultra-thin disks that are 200 to 300 mm in diameter and are less than 1 mm thick. These wafers are then etched and cut to be used for microchips in electronics.
History
Soitec was founded in 1992 near Grenoble in France by two researchers from CEA Leti, an institute for micro- and nanotechnologies research created by the French Commission for Atomic Energy and Alternative Energies (CEA). The pair developed Smart Cut™ technology to industrialize Silicon-On-Insulator (SOI) wafers, and built their first production unit in Bernin, in the Isère department of France.
Soitec's offering initially targeted the electronics market. At the end of the 2000s, Soitec launched into the solar energy and lighting markets, exploiting new openings for its materials and technologies. In 2015, the company announced that it would be refocusing its efforts on its core business: electronics.
Soitec employs about 2000 people throughout the world and currently has production units in France and in Singapore. The company also has R&D centers and commercial offices in France, the United States (Arizona and California), China, South Korea, Japan and Taiwan.
Key dates
1963: SOS is invented at North American Aviation.
1965: The first MEMS device is invented at Westinghouse.
1978: Hewlett-Packard develops SPER.
1979: NOSC starts researching thin-film SOS.
1988: NOSC publishes their SOS findings.
1989: IBM starts researching SOI.
1990: Peregrine Semiconductor is founded.
1991: Peregrine Semiconductor launches the commercialization of SOS (UltraCMOS).
1992: Creation of Soitec by researchers from CEA Leti in Grenoble, France.
1995: Peregrine Semiconductor delivers its first product.
1995: IBM commercializes SOI.
1997: CEA Leti spins off Tronics Microsystems to commercialize SOI MEMS.
1997: Soitec shifts to mass production after the signature of a Smart Cut™ technology licensing agreement with Shin Etsu Handotai (SEH).
1999: Construction of Soitec's first production site in Bernin (Bernin 1), and launch of Soitec's initial public offering.
2001: IBM unveils RF-SOI technology
2001: CEA Leti and Motorola start collaborating on the development of high aspect ratio SOI MEMS.
2002: OKI ships the first commercial FD-SOI LSI.
2002: Inauguration of Bernin 2, a Soitec manufacturing unit dedicated to 300-mm diameter wafers.
2003: Soitec acquires Picogiga International, a company specializing in technologies for III-V composite materials, and the first foray into materials other than SOI.
2005: Freescale introduces HARMEMS.
2006: Soitec acquires Tracit Technologies, a company specializing in molecular adhesion and mechanical and chemical thinning processes, enabling diversification into new applications for Smart Cut™ technology.
2008: Soitec opens a production unit in Asia, in Singapore. In 2012, this unit housed the SOI wafer recycling business. In 2013, production stopped at the unit to prepare for the company's new Fully Depleted Silicon on Insulator (FD-SOI) technology.
2009: Soitec acquires Concentrix Solar, a German supplier of concentrator photovoltaic (CPV) systems, Soitec thus entering the solar energy market.
2011: Soitec acquires Altatech Semiconductor, a company specialized in developing equipment for producing semiconductors.
2012: Soitec opens a production unit for CPV modules in San Diego, California, with a capacity of 140 MW, upgradeable to 280 MW.
2012: GlobalFoundries and STMicroelectronics sign a sourcing agreement for 28 nm and 20 nm FD-SOI devices. GlobalFoundries agreed to manufacture wafers for STMicroelectronics using the latter's CMOS28FDSOI. The FD-SOI technology originates from the cooperation between Soitec, ST and CEA Leti.
2013: Soitec signs a Smart Cut™ licensing agreement with Sumitomo Electric to develop the gallium nitride (GaN) wafer market for LED lighting applications. Signature of another agreement, with GT Advanced Technologies, to develop and commercialize equipment for producing wafers for manufacturing LEDs and other industrial applications.
2014: Samsung and STMicroelectronics sign a foundry and license agreement. It enables Samsung to use the FD-SOI technology to produce 28 nm integrated circuits. Soitec's solar energy division also inaugurates the first 50% of the South African Touwsrivier solar plant, which was planned to reach a final total capacity of 44 MWp; the plant was never completed.
2015: Samsung qualifies their 28FDS process.
2015: After the stoppage of some important solar projects in the United States, Soitec announces a strategic shift toward its electronics business and a plan to leave the solar energy business.
2015: Peregrine Semiconductor and GlobalFoundries announce in July the first 300mm RF-SOI platform (130 nm).
2015: GlobalFoundries announces in July the implementation of a technological platform for producing 22-nm FD-SOI chips (22FDX).
2015: Soitec and Simgui announce the first Chinese production of 200mm SOI wafers.
2015: CEA Leti demos MEMS on 300mm SOI wafers.
2016: Soitec starts volume manufacturing of 300mm RF-SOI wafers.
2017: GlobalFoundries announces 45RFSOI.
2017: Samsung announces 18FDS.
2017: IBM and GlobalFoundries announce a custom FinFET-on-SOI process (14HP).
2017: Soitec signs a five-year agreement to supply GlobalFoundries with FD-SOI wafers.
2018: STMicroelectronics adopts GlobalFoundries' 22FDX.
2018: Soitec and MBDA acquire the Dolphin Integration assets.
2019: Soitec signs a high-volume agreement to supply Samsung with FD-SOI wafers.
2019: Soitec acquires EpiGaN.
2019: Soitec signs high-volume agreements to supply GlobalFoundries with SOI wafers.
2020: GlobalFoundries announces the 22FDX+ platform.
2020: Soitec signs a multi-year agreement to supply GlobalFoundries with 300mm RF-SOI wafers.
2022: Start of construction of the Bernin 4 facility for SiC wafers, intended to begin production in 2024.
Operations
Historically, Soitec has marketed Silicon on Insulator (SOI) as a high performance material for manufacturing electronic chips for computers, game consoles and servers, as well as the automotive industry.
With the explosion of mobile products (tablets, smartphones, etc.) on the consumer electronics market, Soitec has also developed new materials for radio-frequency components, multimedia processors, and power electronics.
With the rapid growth of the Internet of Things, wearables, and other mobile devices, new needs have arisen in terms of performance and energy efficiency of electronic components. For this market, Soitec offers materials that help reduce the energy consumed by chips, improve their information processing speed, and support the needs of high-speed Internet.
In the solar energy market Soitec acquired Concentrix Solar, then manufactured and supplied Concentrator Photovoltaic (CPV) systems from 2009 to 2015. Research to create a new generation of four-junction solar cells led Soitec to set a world record in December 2014 with a cell capable of converting 46% of solar rays into electricity. Soitec announced in January 2015 that it would be leaving the solar market after several important solar plant projects ended.
In the lighting industry, Soitec operates upstream and downstream of the LED value chain.
Upstream, the company uses its expertise in semiconductor materials to develop substrates made from gallium nitride (GaN), the base material used in LEDs.
Downstream, Soitec is developing a range of industrial partnerships to commercialize new professional lighting solutions (urban, office and transport infrastructure lighting).
Technologies
Soitec is developing numerous technologies for its different sectors of activity.
Smart Cut™
Developed by CEA-Leti in collaboration with Soitec, this technology was patented by researcher Michel Bruel. It makes possible the transfer of a thin layer of monocrystalline material from a donor substrate to another substrate by combining ion implantation and bonding by molecular adhesion. Soitec uses Smart Cut™ technology to mass-produce SOI wafers. Compared with classic bulk silicon, SOI enables a significant reduction in energy leakage into the substrate, and improves the performance of the circuit in which it is used.
Smart Stacking™
The technology involves the transfer of partially or fully processed wafers onto other wafers. It can be adapted to wafer diameters of 150 mm to 300 mm and is compatible with a wide variety of substrates, such as silicon, glass and sapphire.
Smart Stacking™ technology is used for back-side illuminated image sensors, where it improves sensitivity and enables a smaller pixel size, as well as in smartphone radio-frequency circuits. It also opens new doors to 3D integration.
Epitaxy
Soitec has epitaxy expertise in III-V materials across the following fields: molecular beam epitaxy, metal organic vapor phase epitaxy and hydride vapor phase epitaxy. The company manufactures wafers of gallium arsenide (GaAs) and gallium nitride (GaN) for developing and manufacturing compound semiconductor systems.
These materials are used in Wi-Fi and high-frequency electronic devices (mobile telecommunications, infrastructure networks, satellite communications, fiber optic networks and radar detection), as well as in energy management and optoelectronic systems, such as LEDs.
Capital increases
Soitec has carried out three capital increases:
The first in July 2011 to finance investments, especially for developing its solar energy and LED businesses.
The second in July 2013 to contribute to the refinancing of bonds convertible and/or exchangeable into new or existing shares (“OCEANEs”) due in 2014 and strengthen the company's financial structure. In addition, Soitec opened a further bond issue in September 2013.
The third in June 2014 to strengthen Soitec's financial profile and its cash position and support the FD-SOI substrates industrial mass production.
External links
Website of Advanced Substrate News, news and information on the micro-electronic industry, especially SOI
References
Electronics companies of France
Companies listed on Euronext Paris
Silicon wafer producers
Companies based in Auvergne-Rhône-Alpes
Technology companies of France
French brands |
The Taiwan Miracle () or Taiwan Economic Miracle refers to Taiwan's rapid economic development to a developed, high-income country during the latter half of the twentieth century.
As it developed alongside South Korea, Singapore, and Hong Kong, Taiwan became known as one of the "Four Asian Tigers". Taiwan was the first developing country to adopt an export-oriented trade strategy after World War II.
Background
After a period of hyperinflation in the late 1940s, when the Kuomintang-led Republic of China military administration under Chen Yi overprinted the Taiwanese dollar that had replaced the Taiwanese yen of the Japanese era, it became clear that a new and stable currency was needed. Along with the $4 billion in financial aid and soft credit provided by the US (as well as the indirect economic stimulus of US food and military aid) over the 1945–1965 period, Taiwan had the necessary capital to restart its economy. Further, the Kuomintang government instituted many laws and land reforms that it had never effectively enacted on mainland China.
A land reform law, inspired by the one the Americans were enacting in occupied Japan, removed the landlord class (as had happened in Japan) and created a larger number of smallholding peasants who, with the help of the state, increased agricultural output dramatically. This was the economy's first source of surplus accumulation: it provided capital for investment and freed agricultural labor to work in the urban sectors. However, the government imposed unequal terms of exchange between the peasants and the industrial economy, through credit and fertilizer controls and a non-monetary barter of industrial goods such as machinery for rice. By controlling the banks (then the property of the government) and import licenses, the state steered the Taiwanese economy toward import substitution industrialization, fostering early capitalism in a fully protected market.
With the help of USAID, the government also built massive industrial and communications infrastructure and developed the educational system. Several government bodies were created and four-year plans were enacted. Between 1952 and 1982, economic growth averaged 8.7% per year, and between 1983 and 1986 it averaged 6.9%. The gross national product grew by 360% between 1965 and 1986. Taiwan's share of global exports exceeded 2% in 1986, higher than that of other recently industrialized countries, and overall industrial production grew a further 680% between 1965 and 1986. The social gap between rich and poor narrowed (Gini coefficient: 0.558 in 1953, 0.303 in 1980), to a level lower than in some Western European countries, though it widened slightly in the 1980s. Health care, education, and quality of life also improved. The flexibility of the productive system and the industrial structure meant that Taiwanese companies were better able to adapt to the changing international situation and the global economy.
The economist S. C. Tsiang played an influential role in the shift toward an export-oriented trade strategy. In 1954, he called for Taiwan to deal with its chronic shortage of foreign exchange by increasing exports rather than reducing imports. In 1958, the policymaker K. Y. Yin pushed for the adoption of Tsiang's ideas.
In 1959, a 19-point Program of Economic and Financial Reform liberalized market controls, stimulated exports and laid out a strategy to attract foreign companies and foreign capital. An export processing zone was created in Kaohsiung, and in 1964 General Instruments pioneered the outsourcing of electronics assembly to Taiwan. Japanese companies moved in, reaping the benefits of low salaries, the lack of environmental laws and controls, a well-educated and capable workforce, and the support of the government. But the nucleus of the industrial structure was domestic, composed of a large number of small and medium-sized enterprises created by families with their own savings and with savings cooperative networks called hui (; Pha̍k-fa-sṳ: Fi). They had the support of the government in the form of subsidies and credit loaned by the banks.
Most of these enterprises first appeared in rural areas near metropolitan centers, where families divided their labor between the plots they owned and the industrial workshops. For instance, in 1989 small enterprises in Changhua produced almost 50% of the world's umbrellas. The state attracted foreign companies in order to obtain more capital and access to foreign markets, but the large foreign companies subcontracted to this vast network of small, family-owned domestic firms, which accounted for a very significant share of industrial output.
Foreign investment never represented a major component of the Taiwanese economy, with the notable exception of the electronics market. For instance, in 1981 direct foreign investment was a mere 2% of GNP; foreign companies employed 4.8% of the total workforce, accounted for 13.9% of total production, and generated 25.6% of nationwide exports. Access to global markets was facilitated by Japanese companies and by American importers, who wanted a direct relationship with Taiwanese brands. No big multinational corporations were created (as in Singapore), nor huge national conglomerates (like the South Korean chaebols), but some industrial groups, with the support of the government, grew and in the 1990s became large, fully internationalized companies. Most of the development was due to the flexibility of family businesses that produced for foreign traders established in Taiwan and for international trade networks with the help of intermediaries.
After retreating to Taiwan, Chiang learned from his mistakes and failures on the mainland, which he attributed to the failure to pursue Sun Yat-sen's ideals of Tridemism and welfarism. Chiang's land reform more than doubled the land ownership of Taiwanese farmers and removed their rent burdens, with former landowners using the government compensation to become the new capitalist class. He promoted a mixed economy of state and private ownership with economic planning. Chiang also promoted nine years of compulsory education and the importance of science in Taiwanese education and values. These measures generated great success, with consistent and strong growth and the stabilization of inflation.
Era of globalization
In the 1970s, protectionism was on the rise, and the United Nations switched recognition from the government of the Republic of China to the government of the People's Republic of China as the sole legitimate representative of China. The ROC was expelled by General Assembly Resolution 2758 and replaced in all UN organs by the PRC. The Kuomintang began a process of upgrading and modernizing industry, mainly in high technology (such as microelectronics, personal computers and peripherals). One of the biggest and most successful technology parks was built in Hsinchu, near Taipei.
Many Taiwanese brands became important suppliers to well-known firms such as DEC and IBM, while others established branches in Silicon Valley and elsewhere in the United States and built reputations of their own. The government also encouraged the textile and clothing industries to enhance the quality and value of their products to avoid restrictive import quotas, which were usually measured in volume. The decade also saw the beginnings of a genuinely independent union movement after decades of repression. Some significant events occurred in 1977, which gave the new unions a boost.
One was the formation of an independent union at the Far East Textile Company after a two-year effort discredited the former management-controlled union. This was the first union that existed independently of the Kuomintang in Taiwan's post-war history (although the Kuomintang retained a minority membership on its committee). Rather than prevailing upon the state to use martial law to smash the union, the management adopted the more cautious approach of buying workers' votes at election times. However, such attempts repeatedly failed and, by 1986, all of the elected leaders were genuine unionists. Another, and historically the most important, was what is now called the "Zhongli incident".
In the 1980s, Taiwan had become an economic power, with a mature and diversified economy, a solid presence in international markets and huge foreign exchange reserves. Its companies were able to go abroad and internationalize their production, investing massively in Asia (mainly in the People's Republic of China) and in Organisation for Economic Co-operation and Development countries, mainly the United States.
Higher salaries and better organized trade unions in Taiwan, together with the reduction of Taiwanese export quotas, meant that the bigger Taiwanese companies moved their production to China and Southeast Asia. Civil society in a now-developed country wanted democracy, and rejection of the KMT dictatorship grew. A major step occurred when Lee Teng-hui, a native of Taiwan, became president, and the KMT started down a new path in search of democratic legitimacy.
Two aspects must be remembered: the KMT was at the center of the structure and controlled the process, and the structure itself was a network of relations between enterprises, between enterprises and the state, and between enterprises and the global market through trading companies and international economic exchanges. Native Taiwanese were largely excluded from the mainlander-dominated government, so many went into the business world.
In 1952, Taiwan had a per capita gross national product (GNP) of $170, placing the island's economy squarely between Zaire and Congo. But by 2018, Taiwan's per capita GNP, adjusted for purchasing power parity (PPP), had soared to $53,074, around or above that of some developed Western European economies and Japan.
According to economist Paul Krugman, the rapid growth was made possible by increases in capital and labor but not an increase in efficiency. In other words, the savings rate increased and work hours were lengthened, and many more people, such as women, entered the work force.
Dwight Perkins and others cite certain methodological flaws in Krugman and Alwyn Young's research, and suggest that much of Taiwan's growth can be attributed to increases in productivity. These productivity boosts were achieved through land reform, structural change (urbanization and industrialization), and an economic policy of export promotion rather than import substitution.
Future growth
Economic growth has become much more modest since the late 1990s. A key factor in understanding this new environment is the rise of China, which offers the same conditions that made the Taiwan Miracle possible 40 years earlier (a quiet political and social environment, cheap and educated workers, and the absence of independent trade unions). To keep growing, the Taiwanese economy must move away from labor-intensive industries, which cannot compete with China, Vietnam or other less-developed countries, and keep innovating and investing in information technology. Since the 1990s, Taiwanese companies have been permitted to invest in China, and a growing number of Taiwanese businessmen are demanding easier communication between the two sides of the Taiwan Strait.
One major difference in Taiwan is the focus on English education. Mirroring Hong Kong and Singapore, the ultimate goal is to become a country fluent in three languages (Taiwanese; Mandarin, the national language of both China and Taiwan; and English), serving as a bridge between East and West.
According to western financial markets, consolidation of the financial sector remains a concern, as it continues at a slow pace and the market is so fragmented that no bank controls more than 10% of it; the Taiwanese government was obligated, by its WTO accession treaty, to open this sector between 2005 and 2008.
However, many financial analysts consider such concerns to be based on mirror-imaging of the Western model, without taking into account the already proven Asian Tiger model. Yet, recently, credit card debt has become a major problem, as the ROC does not have an individual bankruptcy law. Taiwan also remains undeveloped in some sectors, such as its lack of a bond market, a role that has been filled by small entrepreneur-oriented investment or direct investment by foreign persons.
Generally, transportation infrastructure is very good and continues to be improved, mainly on the west side of the island. Many infrastructure improvements are being pursued, such as the first rapid transit lines opening in Kaohsiung in 2008 and a doubling in size of Taipei's rapid transit system by 2013, now underway. The country's highways are highly developed and well maintained and continue to be expanded, especially on the less developed and less populated east coast, and a controversial electronic toll system has recently been implemented.
The completion of the Taiwan High Speed Rail service connecting all major cities on the western coast, from Taipei to Kaohsiung is considered to be a major addition to Taiwan's transportation infrastructure. The ROC government has chosen to raise private financing in the building of these projects, going the build-operate-transfer route, but significant public financing has still been required and several scandals have been uncovered. Nevertheless, it is hoped that the completion of these projects will be a big economic stimulus, just as the subway in Taipei has revived the real estate market there.
Technology sector
Taiwan continues to rely heavily on its technology sector, which specializes in manufacturing outsourcing. Recent developments include moving up the value chain into brand building and design. LCD manufacturing and LED lighting are two newer sectors into which Taiwanese companies are moving. Taiwan also wants to move into the biotechnology sector, the creation of fluorescent pet fish and a research-useful fluorescent pig being two examples. Taiwan is also a leading grower of orchids.
Taiwan's information technology (IT) and electronics sector has been responsible for a vast supply of products since the 1980s. The Industrial Technology Research Institute (ITRI) was created in 1973 to meet new demands from the burgeoning tech industry. This led to start-up companies like Taiwan Semiconductor Manufacturing Company (TSMC) and the construction of the Hsinchu Science and Industrial Park (HSP), which includes around 520 high-tech companies and 150,000 employees. By 2015, Taiwanese firms, counting both offshore and domestic production, held the bulk of the global market share in motherboards (89.9 percent), cable CPE (84.5 percent), and notebook PCs (83.5 percent). Taiwan placed second in producing thin-film-transistor liquid crystal display (TFT-LCD) panels (41.4 percent) and third for LCD monitors (27 percent) and LEDs (19 percent). Nonetheless, Taiwan is still heavily reliant on offshore capital and technologies, importing up to US$25 billion worth of machinery and electrical equipment from Mainland China, US$16 billion from Japan, and US$10 billion from the U.S.
In fact, the TFT-LCD industry in Taiwan grew primarily from state-guided personnel recruitment from Japan and inter-firm technology diffusion to fend off Korean competitors. This is due to Taiwan's unique pattern of export-oriented small and medium enterprises (SMEs) – a direct result of domestic-market prioritization by state-owned enterprises (SOEs) in its formative years. While the development of SMEs allowed better market adaptability and inter-firm partnerships, most companies in Taiwan remained original equipment manufacturers (OEMs) and did not – other than firms like Acer and Asus – expand into own-brand manufacturing (OBM). These SMEs provide "incremental innovation" with regard to industrial manufacturing but do not, according to Dieter Ernst of the East-West Center, a think-tank in Honolulu, surpass the "commodity trap", which stifles investment in branding and R&D projects.
The Taiwanese president Tsai Ing-wen, of the Democratic Progressive Party (DPP), enacted policies building on the continued global influence of Taiwan's IT industry. To revamp and reinvigorate Taiwan's slowing economy, her "5+2" innovative industries initiative aims to boost key sectors such as biotech, sustainable energy, national defense, smart machinery, and the "Asian Silicon Valley" project. President Tsai herself was the chairperson of TaiMed Biologics, a state-led start-up company for biopharmaceutical development, with Morris Chang, the CEO of TSMC, as an external adviser. On 10 November 2016, the Executive Yuan formally endorsed a biomedical promotion plan with a budget of NT$10.94 billion (US$346.32 million).
At the opening ceremony for the Asia Silicon Valley Development Agency (ASVDA) in December 2016, Vice President Chen Chien-jen emphasized the increasing importance of enhancing not only local R&D capabilities, but also appealing to foreign investment. For example, the HSP now focuses 40 percent of its total workforce on "R&D and technology development". R&D expenditures have been gradually increasing: In 2006, it amounted to NT$307 billion, but it increased to NT$483.5 billion (US$16 billion) in 2014, approximately 3 percent of the GDP. The World Economic Forum's Global Competitiveness Report 2017–2018 profiled up to 140 countries, listing Taiwan as 16th place in university-industry collaboration in R&D, 10th place in company spending on R&D, and 22nd place in capacity for innovation. Approved overseas Chinese and foreign investment totaled US$11 billion in 2016, a massive increase from US$4.8 billion in 2015. However, the Investment Commission of the Ministry of Economic Affairs' (MOEAIC) monthly report from October 2017 estimated a decline in total foreign direct investment (between January and October 2017) to US$5.5 billion, which is a 46.09 percent decrease from the same time period of 2016 (US$10.3 billion).
Cross-strait relations
Debate on opening the "Three Links" with the People's Republic of China concluded in 2008, with the security risk of economic dependence on the PRC having been the biggest barrier. By decreasing transportation costs, it was hoped that more money would be repatriated to Taiwan and that businesses would be able to keep operations centers in Taiwan while moving manufacturing and other facilities to mainland China.
A law forbidding any firm from investing more than 40% of its total assets in mainland China was dropped in June 2008, when the new Kuomintang government relaxed the rules on investing in the PRC. Dialogue through semi-official organisations (the SEF and the ARATS) reopened on 12 June 2008 on the basis of the 1992 Consensus, with the first meeting held in Beijing. Taiwan hopes to become a major operations center in East Asia.
Regional free trade agreements
While China already has international free trade agreements (FTA) with numerous countries through bilateral relations and regional organizations, the "Beijing factor" has led to the deliberate isolation of Taiwan from potential FTAs. In signing the Economic Cooperation Framework Agreement (ECFA) with China on 29 June 2010 – which permitted trade liberalization and an "early harvest" list of tariff cuts – former president Ma Ying-jeou wanted to not only affirm a stable economic relationship with China, but also to assuage its antagonism towards Taiwan's involvement in other FTAs. Taiwan later signed FTAs with two founding members of the Trans-Pacific Partnership (TPP) in 2013: New Zealand (ANZTEC) and Singapore (ASTEP). Exports to Singapore increased 5.6 percent between 2013 and 2014, but decreased 22 percent by 2016.
In 2013, a follow-up bilateral trade agreement to the ECFA, the Cross-Strait Service Trade Agreement (CSSTA), faced large student-led demonstrations – the Sunflower Movement – in Taipei and an occupation of the Legislative Yuan. The opposition contended that the trade pact would hinder the competitiveness of SMEs, which encompassed 97.73 percent of total enterprises in Taiwan in 2016. The TPP, on the other hand, still presents an opportunity for Taiwan. After the APEC economic leaders' meeting in November 2017, President Tsai expressed deep support for the advancements made regarding the TPP – given that U.S. President Donald Trump had pulled out of the trade deal earlier in the year. President Tsai has also promoted the "New Southbound Policy", mirroring the "go south" policies upheld by former presidents Lee Teng-hui in 1993 and Chen Shui-bian in 2002, focusing on partners in the Asia-Pacific region such as the Association of Southeast Asian Nations (ASEAN), Australia and New Zealand.
See also
Made in Taiwan
Taiwanese Wave
Japanese economic miracle
Miracle on the Han River
References
External links
Official Website of Taiwan for WTO affairs, Documents
Official Website of Taiwan for WTO affairs
Separate Customs Territory of Taiwan, Penghu, Kinmen and Matsu (Chinese Taipei) and the WTO
Cross-Strait Relations between China and Taiwan
A New Era in Cross-Strait Relations? Taiwan and China in the WTO
China's Economic Leverage and Taiwan's Security Concerns with Respect to Cross-Strait Economic Relations
Economic booms
Taiwan under Republic of China rule
Economic history of Taiwan
Post–World War II economic booms |
In electronics, a decoupling capacitor is a capacitor used to decouple (i.e. prevent electrical energy from transferring to) one part of a circuit from another. Noise caused by other circuit elements is shunted through the capacitor, reducing its effect on the rest of the circuit. For higher frequencies, an alternative name is bypass capacitor as it is used to bypass the power supply or other high-impedance component of a circuit.
Discussion
Active devices of an electronic system (e.g. transistors, integrated circuits, vacuum tubes) are connected to their power supplies through conductors with finite resistance and inductance. If the current drawn by an active device changes, the voltage drop from the power supply to the device will also change due to these impedances. If several active devices share a common path to the power supply, changes in the current drawn by one element may produce voltage changes large enough to affect the operation of others – voltage spikes or ground bounce, for example – so the change of state of one device is coupled to others through the common impedance to the power supply. A decoupling capacitor provides a bypass path for transient currents, so that they need not flow through the common impedance. (Don Lancaster, TTL Cookbook, Howard W. Sams, 1975, pp. 23–24.)
The decoupling capacitor works as the device's local energy storage. The capacitor is placed between the power line and ground, close to the circuit to which the current is to be supplied. According to the capacitor current–voltage relation i = C·dv/dt, a voltage drop between the power line and ground results in current being drawn out of the capacitor and into the circuit. When the capacitance is large enough, sufficient current is supplied to keep the voltage drop within an acceptable range. The capacitor stores a small amount of energy that can compensate for the voltage drop in the power supply conductors leading to it. To reduce undesired parasitic series inductance, small and large capacitors are often placed in parallel, adjacent to individual integrated circuits (see the placement discussion below).
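As a rough illustration of this relation, the sketch below estimates the minimum capacitance needed to hold up the local supply during a brief current transient; the current, duration, and droop values are illustrative assumptions, not figures from this article.

```python
# Minimal sizing sketch based on i = C * dv/dt, rearranged to C >= I * dt / dV.
# All numbers below are illustrative assumptions.

I_transient = 0.5      # A, transient current drawn by the device
dt = 10e-9             # s, duration of the transient (10 ns)
dV_allowed = 0.05      # V, acceptable droop on the local supply rail

C_min = I_transient * dt / dV_allowed
print(f"Minimum decoupling capacitance: {C_min * 1e9:.1f} nF")  # -> 100.0 nF
```

With these example values the estimate comes out near the ~100 nF commonly placed at each logic IC, though real designs also account for capacitor tolerance, parasitics, and the supply's own response time.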
In digital circuits, decoupling capacitors also help prevent radiation of electromagnetic interference from relatively long circuit traces due to rapidly changing power supply currents.
Decoupling capacitors alone may not suffice in such cases as a high-power amplifier stage with a low-level pre-amplifier coupled to it. Care must be taken in layout of circuit conductors so that heavy current at one stage does not produce power supply voltage drops that affect other stages. This may require re-routing printed circuit board traces to segregate circuits, or the use of a ground plane to improve stability of power supply.
Decoupling
A bypass capacitor is often used to decouple a subcircuit from AC signals or voltage spikes on a power supply or other line. A bypass capacitor can shunt energy from those signals, or transients, past the subcircuit to be decoupled, right to the return path. For a power supply line, a bypass capacitor from the supply voltage line to the power supply return (neutral) would be used.
High frequencies and transient currents can flow through a capacitor to circuit ground instead of to the harder path of the decoupled circuit, but DC cannot go through the capacitor and continues on to the decoupled circuit.
Another kind of decoupling is stopping a portion of a circuit from being affected by switching that occurs in another portion of the circuit. Switching in subcircuit A may cause fluctuations in the power supply or other electrical lines, but subcircuit B, which has nothing to do with that switching, should not be affected. A decoupling capacitor can decouple subcircuits A and B so that B does not see any effects of the switching.
Switching subcircuits
In a subcircuit, switching will change the load current drawn from the source. Typical power supply lines show inherent inductance, which results in a slower response to change in current. The supply voltage will drop across these parasitic inductances for as long as the switching event occurs. This transient voltage drop would be seen by other loads as well if the inductance between two loads is much lower compared to the inductance between the loads and the output of the power supply.
To decouple other subcircuits from the effect of the sudden current demand, a decoupling capacitor can be placed in parallel with the subcircuit, across its supply voltage lines. When switching occurs in the subcircuit, the capacitor supplies the transient current. Ideally, by the time the capacitor runs out of charge, the switching event has finished, so that the load can draw full current at normal voltage from the power supply and the capacitor can recharge. The best way to reduce switching noise is to design the PCB itself as a giant capacitor by sandwiching a dielectric material between the power and ground planes.
Sometimes parallel combinations of capacitors are used to improve response. This is because real capacitors have parasitic inductance, which causes the impedance to deviate from that of an ideal capacitor at higher frequencies.
Transient load decoupling
Transient load decoupling as described above is needed when there is a large load that gets switched quickly. The parasitic inductance in every (decoupling) capacitor may limit the usable capacitance and influence the appropriate capacitor type if switching occurs very fast.
Logic circuits tend to do sudden switching (an ideal logic circuit would switch from low voltage to high voltage instantaneously, with no middle voltage ever observable). So logic circuit boards often have a decoupling capacitor close to each logic IC connected from each power supply connection to a nearby ground. These capacitors decouple every IC from every other IC in terms of supply voltage dips.
These capacitors are often placed at each power source as well as at each analog component in order to ensure that the supplies are as steady as possible. Otherwise, an analog component with poor power supply rejection ratio (PSRR) will copy fluctuations in the power supply onto its output.
In these applications, the decoupling capacitors are often called bypass capacitors to indicate that they provide an alternate path for high-frequency signals that would otherwise cause the normally steady supply voltage to change. Those components that require quick injections of current can bypass the power supply by receiving the current from the nearby capacitor. Hence, the slower power supply connection is used to charge these capacitors, and the capacitors actually provide the large quantities of high-availability current.
Placement
A transient load decoupling capacitor is placed as close as possible to the device requiring the decoupled signal. This minimizes the amount of line inductance and series resistance between the decoupling capacitor and the device. The longer the conductor between the capacitor and the device, the more inductance is present.
Since capacitors differ in their high-frequency characteristics, decoupling ideally involves the use of a combination of capacitors. For example, in logic circuits a common arrangement is ~100 nF of ceramic capacitance per logic IC (multiple capacitors for complex ICs), combined with electrolytic or tantalum capacitor(s) of up to a few hundred μF per board or board section.
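The benefit of combining capacitor types can be illustrated by modelling each real capacitor as a series ESR–ESL–C network and comparing impedance magnitudes over frequency. In the minimal sketch below, the ESR and ESL values are assumed for illustration only; the paralleled pair stays low-impedance over a wider band than either part alone.

```python
import numpy as np

def z_real_cap(f, C, esr, esl):
    """Impedance of a real capacitor modeled as series ESR + ESL + C."""
    w = 2 * np.pi * f
    return esr + 1j * (w * esl - 1 / (w * C))

f = np.logspace(3, 9, 7)                         # 1 kHz .. 1 GHz
z_bulk    = z_real_cap(f, 100e-6, 0.5, 10e-9)    # 100 uF electrolytic (assumed ESR/ESL)
z_ceramic = z_real_cap(f, 100e-9, 0.01, 1e-9)    # 100 nF ceramic (assumed ESR/ESL)
z_par = z_bulk * z_ceramic / (z_bulk + z_ceramic)  # parallel combination

for fi, zb, zc, zp in zip(f, np.abs(z_bulk), np.abs(z_ceramic), np.abs(z_par)):
    print(f"{fi:12.0f} Hz  bulk {zb:9.3f} ohm  ceramic {zc:12.3f} ohm  parallel {zp:9.3f} ohm")
```

At low frequencies the bulk capacitor dominates, at high frequencies the ceramic does, which is why the two are used together; in practice anti-resonances between dissimilar capacitors also have to be checked.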
Example uses
These photos show old printed circuit boards with through-hole capacitors, whereas modern boards typically have tiny surface-mount capacitors.
See also
Ceramic capacitor
Equivalent series inductance
Equivalent series resistance
Film capacitor
E-series of preferred numbers
References
External links
Choosing and Using Bypass Capacitors – application note from Intersil
Decoupling – decoupling guide for various frequencies by Henry W. Ott
Power Supply Noise Reduction – how to design effective supply bypassing and decoupling networks by Ken Kundert
ESR and Bypass Capacitor Self Resonant Behavior: How to Select Bypass Caps – article written by Douglas Brooks
Circuit Board Decoupling Information – decoupling guidelines for various types of circuit boards
Basic Principles of Signal Integrity – Altera whitepaper
Bypass Capacitors, an Interview With Todd Hubing – by Douglas Brooks
Capacitors |
Parexel International is an American provider of biopharmaceutical services. It conducts clinical trials on behalf of its pharmaceutical clients to expedite the drug approval process. It is the second largest clinical research organization in the world and has helped develop approximately 95% of the 200 top-selling biopharmaceuticals on the market today. The company publishes the annual Parexel R&D Statistical Sourcebook, operates the Parexel-Academy, and counsels all of the top 50 biopharmaceutical and top 30 biotechnology companies.
Parexel was founded in 1982 by Josef von Rickenbach and organic chemist Anne B. Sayigh, initially to advise Japanese and German firms on how to navigate the FDA approval process. The firm has grown organically over the years and through 40 acquisitions. Josef von Rickenbach, who remained with the company until retiring in 2018, is credited with establishing Parexel's culture and practices based on the principles he experienced as a researcher at Schering-Plough in Lucerne, Switzerland.
In 1990, the firm expanded internationally and established new practice areas. By 1999 it had a staff of 4,500 and 45 offices. In the 2000s, it grew to over 18,000 employees. Parexel's consulting and clinical trial work has helped establish many household drug brands and contributed to numerous successes in modern pharmacology.
The company was acquired by private equity firm Pamplona Capital Management for approximately $5.0 billion. The deal closed in September 2017. On July 2, 2021, Parexel announced a merger agreement under which it would be acquired by EQT IX fund and Goldman Sachs for $8.5 billion. EQT and Goldman Sachs completed the acquisition on November 15, 2021.
Acquisition history
June 1996: Parexel acquires in separate transactions:
Caspard Consultants, a Paris-based contract research organization, and
Sitebase Clinical Systems, Inc., a provider of remote data entry technology designed to enhance the quality and timeliness of clinical trial data.
August 1996: Parexel acquires in separate transactions for a combined 1,008,304 own shares of common stock:
Lansal Clinical Pharmaceutics, Limited, a contract research organization located in Israel, and
State and Federal Associates, Inc., a Washington, D.C.-based provider of medical marketing and related consulting services to the health care and pharmaceutical industries.
March 1997: Parexel acquires in separate transactions for a combined 210,000 own shares of common stock:
RESCON, Inc. a medical marketing business located in the Washington, D.C. area, and
Sheffield Statistical Services, Ltd., a company located in the United Kingdom that specializes in biostatistical analysis.
November 1997: Parexel acquires substantially all of the assets of Hayden Image Processing Group, a Colorado corporation developing software for analyzing and measuring high resolution medical images, and announces its strategic alliance with The IRIS Group S.A. based in Belgium, specializing in intelligent optical character recognition technology.
December 1997: Parexel acquires Kemper-Masterson, Inc, a management consulting firm on FDA and other regulatory matters to the pharmaceutical, biotechnology and medical device industries based in Massachusetts.
March 1998: Parexel acquires four companies:
PPS Europe Limited, subsequently renamed Parexel MMS Europe Limited, a medical marketing firm based in the United Kingdom, and Genesis Pharma Strategies Limited, a physician-focused marketing and clinical communications firm servicing the international pharmaceutical industry, for $113.1 million in own common stock;
MIRAI B.V., a full service, pan-European contract research organization based in the Netherlands, for $26 million in own common stock;
LOGOS GmbH, a provider of regulatory services to pharmaceutical manufacturers, for $3.9 million in own common stock.
March 1999: Parexel acquires Groupe PharMedicom S.A., a French provider of post-regulatory services to pharmaceutical manufacturers, based in Paris and Orléans and employing approximately 70 people, in exchange for approximately 199,600 shares of the company's common stock.
April 1999: Covance Inc. announced it would acquire its competitor Parexel for $612.7 million in stock and combine the two drug research and development companies under the name Covance Parexel Inc., only to call off the agreement two months later.
September 1999: Parexel acquires CEMAF S.A., a Phase I clinical research and bioanalytical laboratory located in Poitiers, France for an initial cash payment of approximately $3.0 million.
September 2000: Parexel acquires a majority interest in FARMOVS, a clinical pharmacology research business and bioanalytical laboratory located in Bloemfontein, South Africa for approximately $3.0 million. A few weeks later the company acquired a clinical pharmacology unit located in Northwick Park Hospital in Harrow, U.K. from British pharmaceutical company GlaxoWellcome Inc.
July 2001: Parexel acquires Edyabe, a clinical research organization in Latin America with offices in Argentina and Brazil, for approximately $1.6 million in cash.
October 2002: Parexel acquires Invantage Inc, a privately held company based in Cambridge, MA providing software and services to the pharmaceutical and biotechnology industries, including DataWeb Enterprise Edition, a web-based data repository of potential clinical investigators combined with a decision support environment for initiating clinical trials.
October 2002: Parexel acquires Pracon & HealthIQ (a division of Excerpta Medica which was then a subsidiary of Reed Elsevier plc), a provider of specialized sales and marketing services based in Reston, VA and Orange, CA with approximately employees, for approximately $1.7 million in cash.
January 2003: Parexel acquires FWPS Group Limited, a provider of software for clinical trial management systems in Birmingham, UK, for approximately $11.9 million in cash and shares.
March 2004: Parexel buys the remaining majority of outstanding shares of 3Clinical Research AG, a clinical research organization with expertise in Phase I and Phase IIa Proof-Of-Concept studies in Berlin, Germany, for $11.7 million in cash.
October 2004: Parexel acquires Integrated Marketing Concepts (IMC), a privately held company based in Whitehall, PA, provider of professional marketing and communication services specialized in clinical trial patient recruitment and retention, sales lead management, teleservices, call center services, fulfillment, focus group and database management, and market research.
July 2005: Parexel acquires Qdot Pharma, a Phase I and IIa Proof of Concept clinical pharmacology business located in George, South Africa for approximately $3.0 million plus additional payments of up to approximately $3.0 million in contingent purchase price if Qdot achieves certain established financial targets through September 28, 2008.
August 2005: Parexel buys the remaining 2.2% of its information technology subsidiary Perceptive Informatics Inc., for $4.8 million in cash.
October 2006: Parexel forms a joint venture arrangement (taking 75% equity interest) with Synchron Research Services Private Limited, under which Synchron transferred its clinical trial business operations located in Bangalore, India to a newly formed entity, Parexel International Synchron Private Limited.
October 2006: Parexel acquires California Clinical Trials Medical Group, Inc. and Behavioral and Medical Research, LLC, both headquarters in San Diego, CA, providing a broad range of specialty Phase I – IV clinical research services through four clinical sites in California, for $65 million.
October 2007: Parexel acquires Apex International Clinical Research Co. Ltd (in which Parexel has held a minority stake since April 2003), a Taiwan-based privately held contract research organization whose business spans mainland China, Hong Kong, India, Taiwan, Singapore, Indonesia, South Korea, Malaysia, Thailand, the Philippines, New Zealand, and Australia, for around $50.9 million.
August 2008: Parexel acquires ClinPhone plc, a provider of telephone and web-based systems used in clinical trials, headquartered in Nottingham, England, with 731 employees, for $182 million.
December 2012: Parexel acquires LIQUENT Inc., a global provider of regulatory information management solutions, headquartered in Horsham, PA with additional offices in the United Kingdom, Germany and India, employing nearly 300 individuals, for approximately $72 million.
April 2013: Parexel acquires Heron Group, a life sciences consultancy that provides commercialization services for biopharmaceutical companies, headquartered in Luton, U.K., with additional offices in India, Sweden, and the U.S., for up to $38.2 million.
July 2014: Parexel acquires ATLAS Medical Services, a clinical research service provider in Turkey, the Middle East and North Africa headquartered in Istanbul with 35 employees.
October 2014: Parexel acquires ClinIntel Limited, a provider of clinical randomization and trial supply management services headquartered in Crawley, UK.
April 2015: Parexel acquires Quantum Solutions India, a provider of outsourced safety management solutions (pharmacovigilance) with approximately 1500 employees.
February 2016: Parexel acquires Health Advances LLC, an independent life sciences strategy consulting firm with 120 employees.
September 2016: Parexel acquires ExecuPharm Inc, a global functional service provider headquartered in King of Prussia, PA.
February 2020: Parexel acquires Model Answers, a consultancy firm based in Brisbane, Queensland, Australia.
September 2020: Parexel completes purchase of Roam Analytics, integrating its natural language processing and software development capabilities into the AI Labs.
January 2021: Following the strategic separation of the Parexel Informatics business from Parexel International, the spin-off Calyx, a provider of medical imaging, eClinical and regulatory solutions and services to solve complex challenges in clinical research, is launched as an independent company.
July 2021: Parexel enters into a merger agreement to be acquired by the EQT IX fund and the Private Equity business within Goldman Sachs Asset Management from Pamplona Capital Management LP for $8.5 billion.
November 2021: EQT and Goldman Sachs complete the acquisition of Parexel on November 15, 2021.
TGN1412 clinical trial
In March 2006, a Parexel-run trial on behalf of TeGenero, the now bankrupt German biotechnology firm, on its anti-inflammatory drug TGN1412 to treat rheumatoid arthritis, multiple sclerosis or leukaemia, caused severe inflammation and multiple organ failure in six healthy volunteers at a facility based at Northwick Park Hospital in London. The drug had been tested on animals but this was the first test on humans.
Parexel became the target of legal proceedings from lawyers representing the injured volunteers after TeGenero's insurance policy was unable to provide sufficient compensation. When the liable company subsequently declared bankruptcy, lawyers for the volunteers initiated legal proceedings against Parexel, and the two parties later entered into talks; the results of this meeting have not been made public.
A documentary shown in the UK on 28 September 2006, featuring journalist Brian Deer as part of Channel 4's Dispatches series, exposed uncertainty about the existence of data that TeGenero was required to submit to the Medicines and Healthcare products Regulatory Agency (MHRA) prior to the trial, indicating whether TGN1412 had been adequately tested on human blood in vitro. Concerns were also raised about whether a safe human dosage was properly derived by TeGenero. The MHRA, however, concluded that none of the companies involved could be held responsible for the outcome of the test and that the adverse events that occurred were most likely caused by an unpredicted biological action of the drug in humans.
References
External links
Official website
Biotechnology companies of the United States
Companies based in Massachusetts
Contract research organizations
Companies formerly listed on the Nasdaq |
Jun-ichi Nishizawa was a Japanese engineer and inventor. He is known for his electronic inventions since the 1950s, including the PIN diode, the static induction transistor, and the static induction thyristor (SIT/SITh). His inventions contributed to the development of internet technology and the information age.
He was a professor at Sophia University. He is considered the "Father of Japanese Microelectronics".
Biography
Nishizawa was born in Sendai, Japan, on September 12, 1926. He earned a B.S. in 1948, and a Doctor of Engineering degree in 1960, from Tohoku University.
In 1953, he joined the Research Institute of Electrical Communication at Tohoku University.
He became a professor there and was appointed director to two research institutes.
From 1990 to 1996, Nishizawa served as the President of Tohoku University.
He became the president of Iwate Prefectural University in 1998.
Research
In 1950, the static induction transistor was invented by Jun-ichi Nishizawa and Y. Watanabe. The PIN photodiode was also invented by Nishizawa and his colleagues in 1950.
In 1952, he invented the avalanche photodiode. He then invented a solid-state maser in 1955. This was followed by his proposal for a semiconductor optical maser in 1957, a year before Schawlow and Townes's first paper on optical masers.
While working at Tohoku University, he proposed fiber-optic communication, the use of optical fibers for optical communication, in 1963. Nishizawa invented other technologies in the 1960s that contributed to the development of optical fiber communications, such as the graded-index optical fiber as a channel for transmitting light from semiconductor lasers. He patented the graded-index optical fiber in 1964.
In 1971, he invented the static induction thyristor.
Recognition
Nishizawa was a Life Fellow of the IEEE and a Fellow of several other institutions, including the Physical Society, the Russian Academy of Sciences, and the Polish Academy of Sciences. He was decorated with the Order of Culture by the Emperor of Japan in 1989. He also received the Japan Academy Prize (1974), the IEEE Jack A. Morton Award (1983), the Honda Prize, and the Laudise Prize of the International Organization for Crystal Growth (1989).
IEEE conferred the Edison Medal on him in 2000, and introduced the IEEE Jun-ichi Nishizawa Medal in 2002. He has more than a thousand patents registered under his name.
References
External links
Jun-ichi Nishizawa – Biographical article on IEEE Global History Network.
1926 births
2018 deaths
People from Sendai
Japanese physicists
Japanese inventors
Fellow Members of the IEEE
IEEE Edison Medal recipients
Tohoku University alumni
Academic staff of Tohoku University
Academic staff of Sophia University
Recipients of the Order of Culture
Recipients of the Order of the Sacred Treasure, 1st class
Foreign Members of the USSR Academy of Sciences
Foreign Members of the Russian Academy of Sciences |
In Hamiltonian mechanics, the linear canonical transformation (LCT) is a family of integral transforms that generalizes many classical transforms. It has 4 parameters and 1 constraint, so it is a 3-dimensional family, and can be visualized as the action of the special linear group SL2(R) on the time–frequency plane (domain). As this defines the original function up to a sign, this translates into an action of its double cover on the original function space.
The LCT generalizes the Fourier, fractional Fourier, Laplace, Gauss–Weierstrass, Bargmann and the Fresnel transforms as particular cases. The name "linear canonical transformation" is from canonical transformation, a map that preserves the symplectic structure, as SL2(R) can also be interpreted as the symplectic group Sp2, and thus LCTs are the linear maps of the time–frequency domain which preserve the symplectic form, and their action on the Hilbert space is given by the Metaplectic group.
The basic properties of the transformations mentioned above, such as scaling, shift, and coordinate multiplication, are considered below. Any linear canonical transformation is related to affine transformations in phase space, defined by time-frequency or position-momentum coordinates.
Definition
The LCT can be represented in several ways; most easily, it can be parameterized by a 2×2 matrix with determinant 1, i.e., an element of the special linear group SL2(C). Then for any such matrix with ad − bc = 1, the corresponding integral transform from a function to is defined as
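In one common convention (normalizations and the branch of the square root vary between authors), the transform of a function x(t) under the parameter matrix (a, b; c, d) takes the following form:

```latex
% One common convention for the LCT of x(t), ad - bc = 1.  For b \neq 0:
X_{(a,b,c,d)}(u) \;=\; \frac{1}{\sqrt{i\,b}}
  \int_{-\infty}^{\infty}
  \exp\!\left[\, i\pi\, \frac{a\,t^{2} - 2\,u\,t + d\,u^{2}}{b} \right] x(t)\, dt .
% For b = 0 the integral degenerates to a scaled chirp multiplication:
X_{(a,0,c,d)}(u) \;=\; \sqrt{d}\; \exp\!\left( i\pi\, c\, d\, u^{2} \right) x(d\,u).
```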
Special cases
Many classical transforms are special cases of the linear canonical transform:
Scaling
Scaling, , corresponds to scaling the time and frequency dimensions inversely (as time goes faster, frequencies are higher and the time dimension shrinks):
Fourier transform
The Fourier transform corresponds to a clockwise rotation by 90° in the time–frequency plane, represented by the matrix
Fractional Fourier transform
The fractional Fourier transform corresponds to rotation by an arbitrary angle; they are the elliptic elements of SL2(R), represented by the matrices
The Fourier transform is the fractional Fourier transform when the rotation angle is 90°. The inverse Fourier transform corresponds to a rotation by −90°.
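In one standard parameterization by a rotation angle θ (stated here for reference, not reproduced from the text above), the fractional Fourier transform's parameter matrix is the rotation matrix below; setting θ = 90° recovers the ordinary Fourier transform.

```latex
% Fractional Fourier transform of angle \theta as an LCT parameter matrix.
\begin{pmatrix} a & b \\ c & d \end{pmatrix}
  = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix},
\qquad
\theta = 90^{\circ} \;\Rightarrow\;
\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}
\quad \text{(ordinary Fourier transform).}
```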
Fresnel transform
The Fresnel transform corresponds to shearing, and are a family of parabolic elements, represented by the matrices
where is the propagation distance, and is the wavelength.
Laplace transform
The Laplace transform corresponds to rotation by 90° into the complex domain and can be represented by the matrix
Fractional Laplace transform
The fractional Laplace transform corresponds to rotation by an arbitrary angle into the complex domain and can be represented by the matrix
The Laplace transform is the fractional Laplace transform when the rotation angle is 90°. The inverse Laplace transform corresponds to a rotation by −90°.
Chirp multiplication
Chirp multiplication, , corresponds to :
Composition
Composition of LCTs corresponds to multiplication of the corresponding matrices; this is also known as the additivity property of the Wigner distribution function (WDF). Occasionally the product of transforms can pick up a sign factor due to picking a different branch of the square root in the definition of the LCT. In the literature, this is called the metaplectic phase.
If the LCT is denoted by , i.e.
then
where
If is the , where is the LCT of , then
Applying the LCT corresponds to a twisting operation on the WDF, and the Cohen's class distribution also has this twisting operation.
We can freely use the LCT to transform the parallelogram whose center is at (0, 0) to another parallelogram which has the same area and the same center:
From this picture we know that the point (−1, 2) transforms to the point (0, 1), and the point (1, 2) transforms to the point (4, 3). As a result, we can write down the equations
Solving these equations gives (a, b, c, d) = (2, 1, 1, 1).
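A quick numerical check of this result, as a minimal numpy sketch using the points given above:

```python
import numpy as np

# Parameter matrix obtained from solving the equations above.
M = np.array([[2, 1],
              [1, 1]])

# Corner points of the original parallelogram in the time-frequency plane.
p1 = np.array([-1, 2])
p2 = np.array([ 1, 2])

print(M @ p1)            # -> [0 1], as stated in the text
print(M @ p2)            # -> [4 3], as stated in the text
print(np.linalg.det(M))  # -> ~1.0, so the area and the constraint ad - bc = 1 are preserved
```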
In optics and quantum mechanics
Paraxial optical systems implemented entirely with thin lenses and propagation through free space and/or graded-index (GRIN) media are quadratic-phase systems (QPS); these were known before Moshinsky and Quesne (1971) called attention to their significance in connection with canonical transformations in quantum mechanics. The effect of any arbitrary QPS on an input wavefield can be described using the linear canonical transform, a particular case of which was developed by Segal (1963) and Bargmann (1961) in order to formalize Fock's (1928) boson calculus.
In quantum mechanics, linear canonical transformations can be identified with the linear transformations which mix the momentum operator with the position operator and leave invariant the canonical commutation relations.
Applications
Canonical transforms are used to analyze differential equations. These include diffusion, the Schrödinger free particle, the linear potential (free-fall), and the attractive and repulsive oscillator equations. It also includes a few others such as the Fokker–Planck equation. Although this class is far from universal, the ease with which solutions and properties are found makes canonical transforms an attractive tool for problems such as these.
Wave propagation through air, a lens, and between satellite dishes are discussed here. All of the computations can be reduced to 2×2 matrix algebra. This is the spirit of LCT.
Electromagnetic wave propagation
Assuming the system looks as depicted in the figure, the wave travels from the (, ) plane to the (, ) plane. The Fresnel transform is used to describe electromagnetic wave propagation in free space:
where
is the wave number,
is the wavelength,
is the distance of propagation,
is the imaginary unit.
This is equivalent to LCT (shearing), when
When the travel distance () is larger, the shearing effect is larger.
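For reference, the free-space propagation step described above can be written, in one standard one-dimensional form of the Fresnel diffraction integral (with k the wave number, λ the wavelength and z the propagation distance; normalizations vary between authors), as:

```latex
% Fresnel approximation for propagation over a distance z (1-D form).
U(x, z) \;=\; \frac{e^{i k z}}{\sqrt{i \lambda z}}
  \int_{-\infty}^{\infty} U(x_0, 0)\,
  \exp\!\left[ \frac{i k}{2 z} \left( x - x_0 \right)^{2} \right] dx_0,
\qquad k = \frac{2\pi}{\lambda}.
```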
Spherical lens
With the lens as depicted in the figure, and the refractive index denoted as , the result is
where is the focal length, and Δ is the thickness of the lens.
The distortion passing through the lens is similar to LCT, when
This is also a shearing effect: when the focal length is smaller, the shearing effect is larger.
Spherical mirror
The spherical mirror—e.g., a satellite dish—can be described as a LCT, with
This is very similar to the lens, except that the focal length is replaced by the radius of the dish. A spherical mirror with radius of curvature is equivalent to a thin lens with focal length (by convention, for a concave mirror, for a convex mirror). Therefore, the smaller the radius, the larger the shearing effect.
Joint free space and spherical lens
The relation between the input and the output can be represented by an LCT:
If , the result is a reversed real image.
If , the result is a Fourier transform with scaling.
If , the result is a fractional Fourier transform with scaling.
Basic properties
In this section, we show the basic properties of the LCT.
Given a two-dimensional column vector, we show some basic properties (results) for the specific input below:
Example
The system considered is depicted in the figure to the right: two dishes – one being the emitter and the other one the receiver – and a signal travelling between them over a distance D.
First, for dish A (emitter), the LCT matrix looks like this:
Then, for dish B (receiver), the LCT matrix similarly becomes:
Last, for the propagation of the signal in air, the LCT matrix is:
Putting all three components together, the LCT of the system is:
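Because each stage is just a 2×2 matrix, the composite system matrix is obtained by plain matrix multiplication. The sketch below (Python with NumPy) only illustrates that bookkeeping: the thin-lens-like form assumed for each dish, the shear form assumed for free-space propagation, and every numerical value are assumptions made for the illustration, not the exact expressions omitted above.

```python
import numpy as np

def free_space(lam, D):
    """Assumed shear matrix for free-space propagation over a distance D."""
    return np.array([[1.0, lam * D],
                     [0.0, 1.0]])

def dish(lam, f):
    """Assumed thin-lens-like matrix for a dish with effective focal length f."""
    return np.array([[1.0, 0.0],
                     [-1.0 / (lam * f), 1.0]])

lam = 0.03           # wavelength in metres (made-up value)
D = 1000.0           # distance between the dishes in metres (made-up value)
f_A, f_B = 2.0, 2.0  # effective focal lengths of emitter and receiver (made-up values)

# The signal meets dish A first, then free space, then dish B,
# so the matrices are multiplied right-to-left.
system = dish(lam, f_B) @ free_space(lam, D) @ dish(lam, f_A)
print(system)
print(np.linalg.det(system))  # ~1.0: any product of unimodular LCT matrices is unimodular
```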
Relation to particle physics
It has been shown that it may be possible to establish a relation between some properties of the elementary fermion in the Standard Model of particle physics and spin representation of linear canonical transformations. In this approach, the electric charge, weak hypercharge and weak isospin of the particles are expressed as linear combinations of some operators defined from the generators of the Clifford algebra associated with the spin representation of linear canonical transformations.
See also
Segal–Shale–Weil distribution, a metaplectic group of operators related to the chirplet transform
Other time–frequency transforms:
Fractional Fourier transform
Continuous Fourier transform
Chirplet transform
Applications:
Focus recovery based on the linear canonical transform
Ray transfer matrix analysis
Notes
References
J.J. Healy, M.A. Kutay, H.M. Ozaktas and J.T. Sheridan, "Linear Canonical Transforms: Theory and Applications", Springer, New York 2016.
J.J. Ding, "Time–frequency analysis and wavelet transform course note", the Department of Electrical Engineering, National Taiwan University (NTU), Taipei, Taiwan, 2007.
K.B. Wolf, "Integral Transforms in Science and Engineering", Ch. 9&10, New York, Plenum Press, 1979.
S.A. Collins, "Lens-system diffraction integral written in terms of matrix optics," J. Opt. Soc. Amer. 60, 1168–1177 (1970).
M. Moshinsky and C. Quesne, "Linear canonical transformations and their unitary representations," J. Math. Phys. 12, 8, 1772–1783, (1971).
B.M. Hennelly and J.T. Sheridan, "Fast Numerical Algorithm for the Linear Canonical Transform", J. Opt. Soc. Am. A 22, 5, 928–937 (2005).
H.M. Ozaktas, A. Koç, I. Sari, and M.A. Kutay, "Efficient computation of quadratic-phase integrals in optics", Opt. Let. 31, 35–37, (2006).
Bing-Zhao Li, Ran Tao, Yue Wang, "New sampling formulae related to the linear canonical transform", Signal Processing 87, 983–990, (2007).
A. Koç, H.M. Ozaktas, C. Candan, and M.A. Kutay, "Digital computation of linear canonical transforms", IEEE Trans. Signal Process., vol. 56, no. 6, 2383–2394, (2008).
Ran Tao, Bing-Zhao Li, Yue Wang, "On sampling of bandlimited signals associated with the linear canonical transform", IEEE Transactions on Signal Processing, vol. 56, no. 11, 5454–5464, (2008).
D. Stoler, "Operator methods in Physical Optics", 26th Annual Technical Symposium. International Society for Optics and Photonics, 1982.
Tian-Zhou Xu, Bing-Zhao Li, " Linear Canonical Transform and Its Applications ", Beijing, Science Press, 2013.
Raoelina Andriambololona, R. T. Ranaivoson, H. D. E. Randriamisy, R. Hanitriarivo, "Dispersion Operators Algebra and Linear Canonical Transformations", Int. J. Theor. Phys., 56, 4, 1258–1273, (2017).
R. T. Ranaivoson et al., "Linear Canonical Transformations in Relativistic Quantum Physics", Phys. Scr. 96, 065204, (2021).
Tatiana Alieva and Martin J. Bastiaans, "The Linear Canonical Transformations: Definition and Properties", in: Healy J., Alper Kutay M., Ozaktas H., Sheridan J. (eds), Linear Canonical Transforms, Springer Series in Optical Sciences, vol. 198, Springer, New York, NY, 2016.
Time–frequency analysis
Integral transforms
Fourier analysis
Signal processing
Hamiltonian mechanics
Quantum mechanics |
Georges François Paul Marie Matheron (2 December 1930 – 7 August 2000) was a French mathematician and civil engineer of mines, known as the founder of geostatistics and a co-founder (together with Jean Serra) of mathematical morphology. In 1968, he created the Centre de Géostatistique et de Morphologie Mathématique at the Paris School of Mines in Fontainebleau. He is known for his contributions on Kriging and mathematical morphology. His seminal work is posted for study and review to the Online Library of the Centre de Géostatistique, Fontainebleau, France.
Early career
Matheron graduated from École Polytechnique and later Ecole des Mines de Paris, where he studied mathematics, physics and probability theory (as a student of Paul Lévy).
From 1954 to 1963, he worked with the French Geological Survey in Algeria and France, and was influenced by the works of Krige, Sichel, and de Wijs, from the South African school, on the gold deposits of the Witwatersrand. This influence led him to develop the major concepts of the theory for estimating resources he named Geostatistics.
Geostatistics
Matheron's Formule des Minerais Connexes became his Note Statistique No 1. In this paper of 25 November 1954, Matheron derived the degree of associative dependence between lead and silver grades of core samples. In his Rectificatif of 13 January 1955, he revised the arithmetic mean lead and silver grades because his core samples varied in length. He did derive the length-weighted average lead and silver grades but failed to derive the variances of his weighted averages. Neither did he derive the degree of associative dependence between metal grades of ordered core samples as a measure for spatial dependence between ordered core samples. He did not disclose his primary data set and worked mostly with symbols rather than real measured values, such as test results for lead and silver in his core samples. Matheron's Interprétations des corrélations entre variables aléatoires lognormales of 29 November 1954 was marked Note statistique No 2. In this paper, Matheron explored lognormal variables and set the stage for statistics by symbols. Primary data would have allowed him to assess whether or not lead and silver grades departed from the lognormal distribution, or displayed spatial dependence along core samples in his borehole.
Matheron coined the eponym krigeage (Kriging) for the first time in his 1960 Krigeage d’un Panneau Rectangulaire par sa Périphérie. In this Note géostatistique No 28, Matheron derived k*, his estimateur and a precursor to the kriged estimate or kriged estimator. In mathematical statistics, Matheron’s k* is the length-weighted average grade of a single panneau in his set. What Matheron failed to derive in this paper was var(k*), the variance of his estimateur. Matheron presented his Stationary Random Function at the first colloquium on geostatistics in the USA. He called on Brownian motion to conjecture the continuity of his Riemann integral but did not explain what Brownian motion and ore deposits have in common. Matheron, unlike John von Neumann in 1941 and Anders Hald in 1952, never worked with Riemann sums. It was not Professor Dr Georges Matheron but Dr Frederik P Agterberg who derived the distance-weighted average of a set of measured values determined in samples selected at positions with different coordinates in a sample space. What Agterberg did not do was derive the variance of this function.
Matheron did indeed derive length-weighted average grades of core samples and ore blocks but did not derive the variance of these functions. In time, the length-weighted average grade for Matheron's three-dimensional block grade was replaced with the distance-weighted average grade for Agterberg's zero-dimensional point. Both central values turned into honorific kriged estimates or kriged estimators. An infinite set of Agterberg's zero dimensional points fits within any ore block, along any borehole, or inside any sampling unit or sample space. Matheron's block grades and Agterberg's point grades are unique because both are functions without variances.
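For readers who want to see the arithmetic behind the quantity discussed above, the sketch below (Python, with made-up grades and core lengths) computes a length-weighted average grade and one conventional variance for that weighted mean. It is only an illustration of the calculation; it does not reproduce Matheron's or Agterberg's own derivations.

```python
# Hypothetical core samples: assay grades (e.g. % Pb) and the length of each core (m)
grades = [2.1, 3.4, 1.8, 2.9]
lengths = [1.0, 1.5, 0.5, 2.0]

total_length = sum(lengths)
weights = [L / total_length for L in lengths]   # length weights sum to 1

# Length-weighted average grade
weighted_mean = sum(w * g for w, g in zip(weights, grades))

# One conventional measure of spread for a weighted mean of the measured values
weighted_variance = sum(w * (g - weighted_mean) ** 2 for w, g in zip(weights, grades))

print(f"length-weighted average grade: {weighted_mean:.3f}")
print(f"weighted variance about that mean: {weighted_variance:.3f}")
```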
Mathematical morphology
In 1964, Matheron was supervising the PhD thesis of Jean Serra, dedicated to quantifying the ore properties of the iron deposit of Lorraine. Serra came up with the idea of using structuring elements for the analysis, which led to the concept of hit-or-miss transform. The theoretical analysis of this transform led Matheron to derive and investigate the concepts of erosion, dilation, opening and closing, which became known later as the basic morphological operators. He also developed a tool for granulometry, i.e., the computation of a "size distribution", where he mathematically characterizes the concept of size. In December 1964, Matheron and Serra, together with Philippe Formery, named this approach mathematical morphology. It has since evolved into a theory and method that is applied in a variety of image processing problems and tasks, and is researched worldwide (main article: Mathematical morphology). Matheron continued to contribute to mathematical morphology during the years, his best-known contribution being the morphological filtering theory, which he developed with Serra in the 1980s.
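For readers new to the operators named above, the short sketch below applies erosion, dilation, opening, and closing to a small binary image with SciPy; the test image and structuring element are arbitrary choices made for illustration.

```python
import numpy as np
from scipy import ndimage

# A small binary image: a square block of foreground pixels with a one-pixel hole
img = np.zeros((8, 8), dtype=bool)
img[2:6, 2:6] = True
img[3, 3] = False

se = np.ones((3, 3), dtype=bool)  # 3x3 square structuring element

eroded = ndimage.binary_erosion(img, structure=se)    # shrinks the foreground
dilated = ndimage.binary_dilation(img, structure=se)  # grows the foreground
opened = ndimage.binary_opening(img, structure=se)    # erosion followed by dilation
closed = ndimage.binary_closing(img, structure=se)    # dilation followed by erosion; fills the hole

print(closed.astype(int))
```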
The Georges Matheron Lectureship was established by the International Association for Mathematical Geosciences (IAMG) and named after Georges Matheron. Matheron Lecturers are selected by a small committee chaired by the IAMG Vice President, and the lectures are held annually during IAMG conferences and International Geological Congresses. Each year the IAMG selects as Georges Matheron Lecturer a scientist with proven research ability in the field of spatial statistics or mathematical morphology. Jean Serra was the first recipient of the award and delivered the first Georges Matheron Lecture at IAMG'2006 in Liège.
The Centre de Géostatistique et de Morphologie Mathématique
In 1968, the Paris School of Mines created the Centre de Morphologie Mathématique, located in Fontainebleau, France, and named Matheron its first director. In 1979, the center was renamed Centre de Géostatistique et de Morphologie Mathématique, and, in 1986, the latter was split into two separate centers: Centre de Géostatistique, directed by Matheron, and Centre de Morphologie Mathématique, directed by Serra.
Books by Matheron
Traité de géostatistique appliquée, Editions Technip, France, 1962–63, where Matheron lays the fundamental tools of linear geostatistics: variography, variances of estimation and dispersion, and kriging.
His doctoral thesis: Les variables régionalisées et leur estimation: une application de la théorie des fonctions aléatoires aux sciences de la nature, published in 1965 by Masson, Paris.
Elements pour une théorie des milieux poreux, Masson, Paris, 1967, which includes Matheron's work on hydrodynamics.
The theory of regionalised variables and its applications, 1971, a reference book on geostatistics for students and researchers. Published 2019 by Oxford University Press: https://global.oup.com/academic/product/matherons-theory-of-regionalised-variables-9780198835660?cc=es&lang=en&
Random sets and integral geometry, John Wiley & Sons, 1975, , conveying his contribution to the theory of random sets.
Estimating and Choosing: An Essay on Probability in Practice, Springer, 1989, , a newer reference book on geostatistics.
Notes
References
Matheron at the Annales des Mines (French)
Mathematical Morphology and Its Applications to Image Processing, J. Serra and P. Soille (Eds.), proceedings of the 2nd international symposium on mathematical morphology (ISMM'93), (1994)
Image Analysis and Mathematical Morphology by Jean Serra, (1982)
Image Analysis and Mathematical Morphology, Volume 2: Theoretical Advances by Jean Serra, (1988)
An Introduction to Morphological Image Processing by Edward R. Dougherty, (1992)
Morphological Image Analysis; Principles and Applications by Pierre Soille, (1999)
External links
Georges Matheron at the Centre de Géostatistique
Obituary by Dominique Jeulin (Centre de Morphologie Mathématique Ecole des Mines de Paris, October 2000) from Vol. 19, No. 3. of the Image Analysis & Stereology.
Georges Matheron – Founder of Spatial Statistics by Frederik P. Agterberg (Proceedings of the International Association for Mathematical Geology, 2003)
A chronology of Matheron's seminal work
History of Mathematical Morphology, by Georges Matheron and Jean Serra
1930 births
2000 deaths
20th-century French mathematicians
Geostatistics
20th-century French geologists
Mathematical morphology
École Polytechnique alumni
Spatial statisticians |
The US–Taiwan Business Council (Traditional Chinese: 美台商業協會; Pinyin: Měi Tái Shāngyè Xíehùi) is a membership-based, non-profit organization founded in 1976 to foster trade and business relations between the United States and Taiwan. Council members consist of private companies with business interests in Taiwan, and range in size from one-person consulting firms to large multinational corporations. Because the organization reflects the views and concerns of an extensive group of U.S. businesses, the Council is generally considered to be one of the most influential private organizations playing a part in the unofficial relationship between the two economies. The organization is particularly well known in the Defense & Security community, as it is the host of an annual US-Taiwan Defense Industry Conference. The inaugural conference in St. Petersburg, Florida in 2002 brought Taiwan's Minister of National Defense to the U.S. for the first time since 1979.
Mandate
The mission of the US–Taiwan Business Council – as defined in its bylaws – is to develop private economic, commercial and financial relationships, to foster investment, trade, and commerce between the United States and Taiwan.
Services
The organization provides a variety of services to its members, including:
Consulting
Tactical and strategic business advice to companies looking to get established, or to expand, in the Taiwan market. Advocacy work – on behalf of individual companies or on behalf of groups of members – can range from dealing with market access issues and equipment sales, to resolving contractual difficulties or attempting to change Taiwan government policies.
eBulletins
Weekly email bulletins examine recent US-Taiwan-China business, economic, and political developments. Issues covered include General Business, Finance & Banking, Defense & Security, Semiconductors, PCs, Intellectual property, and Biotechnology, among others.
Analysis reports
The Council occasionally publishes reports providing up-to-date analysis on events and developments that affect businesses operating within the triangular U.S.-Taiwan-China relationship. These papers and statements, which include editorials and research reports, are distributed to members when situations warrant.
Events
The Council holds conferences, seminars, receptions, and other events throughout the U.S. and Taiwan each year. The two largest events are the annual US-Taiwan Defense Industry Conference (美台國防工業會議) and the Taiwan + China Semiconductor Industry Conference. Tang Yiau-ming, Taiwan's Minister of National Defense, U.S. Deputy Secretary of Defense Paul Wolfowitz and U.S. Assistant Secretary of State James Kelly attended the 2002 Defense Industry Conference in St. Petersburg, Florida. Taiwan Minister of National Defense Chen Chao-min attended the 2008 Defense Industry Conference on Amelia Island, Florida.
Relationship building
The organization cultivates an extensive network of contacts and relationships within the U.S. and Taiwan governments, as well as within the private sector and among non-government organizations with an interest in Taiwan. Both the United States Congress and the Executive Branch frequently call upon the organization to express its views on the US-Taiwan business relationship.
Organizational structure
Chairman
Executive Committee
Board of Directors
President
Vice President
Staff in functional departments
Current staff
Chairman: Michael R. Splinter
President: Rupert Hammond-Chambers
Vice President: Lotta Danielsson
History
1970s
In 1976, David M. Kennedy formed the organization in Chicago, Illinois as the “USA-ROC Economic Council.” For the next 14 years he served as Chairman of the organization and helped guide its development. William Morell was elected President the same year.
The organization quickly got involved with the commercial and political spheres in both the United States and Taiwan, and it played an important role in the drafting and passing of the Taiwan Relations Act (TRA) in 1979, the legislation that guides US-Taiwan relations in lieu of official diplomatic recognition.
1980s
The 1980s brought the Council an expanded membership base and deeper ties with business and political leaders in Taiwan. Since U.S. diplomatic recognition had been transferred from Taipei to Beijing, the Council took on greater importance as it continued to work towards strengthening trade and communications with Taiwan on behalf of American companies.
1990s
In 1990, Caspar Weinberger ("Cap") succeeded David Kennedy as Chairman of the organization and David Laux was elected to take the place of outgoing President William Morell. The change in leadership also brought with it a change in venue, as the Council relocated its office from Chicago to Washington, D.C.
In 1991, the Council formed its “Chairman’s Circle,” a group of member companies with heavily vested business interests in Taiwan. In 1995, Dan Tellep was elected Chairman as Caspar Weinberger rotated out of the post. Soon afterwards the “USA-ROC Economic Council” changed its name to the “US-ROC (Taiwan) Business Council.”
William P. Clark was elected Chairman in January 1997. Later that year, Senators Frank H. Murkowski of Alaska and John D. Rockefeller, IV of West Virginia became Honorary Co-chairmen. The Council also moved once more to Arlington, Virginia to share a floor with the American Institute in Taiwan (AIT), the entity that – under contract to the U.S. State Department – manages America’s unofficial relationship with the island. Frank C. Carlucci replaced William Clark as Chairman of the organization in 1999.
2000s
In 2000, David Laux stepped down and Rupert Hammond-Chambers was elected as the youngest-ever president. In 2001, the name of the organization was again changed to “US-Taiwan Business Council.” In 2003, William Cohen was elected as chairman, and Senator Conrad Burns of Montana took the place of Frank Murkowski as honorary co-chairman.
In 2005, the council gained new leadership with the election of Senator William Brock to the chairmanship and Vance D. Coffman to the position of vice chairman. In 2007, Senator Lisa Murkowski of Alaska took over the honorary co-chairman seat vacated by Conrad Burns. In 2008, Paul D. Wolfowitz joined the council as its chairman, succeeding Bill Brock.
2010s
In 2016, Senator Bob Menendez of New Jersey took over the honorary co-chairman seat vacated by Jay Rockefeller, who had held the post since 1997. In 2018, Michael R. Splinter joined the council as its chairman, succeeding Paul D. Wolfowitz.
Miscellaneous
The council is incorporated in the District of Columbia and is designated by the Internal Revenue Service as a tax-exempt organization under section 501(c)(6) of the Internal Revenue Code.
The council's "sister organization" in Taiwan is the ROC-USA Business Council. During the 1980s and 1990s, the two organizations co-hosted conferences – called "Annual Joint Business Conferences" – which were held on alternate years in either Taiwan or the US. The conferences served as opportunities for high-level bilateral dialogue, and often brought United States Cabinet officers to Taiwan. This tradition ended in 2004.
References
External links
Official site
US-Taiwan Defense Industry Conference
About the first Defense Industry Conference in 2002
Taiwan + China Semiconductor Outlook Conference
ROC-USA Business Council
Foreign trade of Taiwan
Non-profit organizations based in Washington, D.C.
Non-profit organizations based in Chicago
Non-profit organizations based in Arlington, Virginia
501(c)(6) nonprofit organizations |
Sir Michael Pepper (born 10 August 1942) is a British physicist notable for his work in semiconductor nanostructures.
Early life
Pepper was born on 10 August 1942 to Morris and Ruby Pepper. He was educated at St Marylebone Grammar School, a grammar school in the City of Westminster, London that has since closed. He then went on to study physics at the University of Reading and graduated Bachelor of Science (BSc) in 1963. He remained at Reading to undertake postgraduate studies and completed his Doctor of Philosophy (PhD) degree in 1967.
In 1987, while an academic of the University of Cambridge, he was granted the status of Master of Arts (MA Cantab). He was awarded a higher doctorate, Doctor of Science (ScD), by Cambridge.
Career
Sir Michael was a physicist at the Plessey Research Laboratories when he formed a collaboration with Sir Nevill Mott (Nobel Laureate, 1977), which resulted in his commencing research in the Cavendish Laboratory in 1973 on localisation in semiconductor structures. He subsequently joined the GEC Hirst Research Centre, where he set up joint Cambridge-GEC projects. He was one of three authors on the paper that eventually brought a Nobel Prize for the quantum Hall effect to Klaus von Klitzing. Sir Michael formed the Semiconductor Physics research group at the Cavendish Laboratory in 1984 and, following a period as Royal Society Warren Research Fellow, was appointed to the Professorship of Physics at the Cavendish Laboratory in 1987. In 1991, he was appointed managing director of the newly established Toshiba Cambridge Research Centre, now known as the Cambridge Research Laboratory (CRL) of Toshiba Research Europe. In 2001, he was appointed Scientific Director of TeraView, a company formed by spinning off the terahertz research arm of CRL. He became an honorary Professor of Pharmaceutical Science at the University of Otago, New Zealand, in 2003. He left his Cambridge chair to take up the Pender Chair of Nanoelectronics at University College London in 2009 and has been associated with many developments in semiconductor physics and applications of terahertz radiation. He sits on the Scientific Advisory Committee of Australia's ARC Centre of Excellence in Future Low-Energy Electronics Technologies.
Honors
He was elected a Fellow of the Royal Society in 1983 and was elected a Fellow of Trinity College, Cambridge, in 1982. In 1987 he received the Hughes Medal. Previously he had received the Europhysics Prize of the European Physical Society, and the Guthrie Prize of the Institute of Physics both in 1985. The Institute of Physics awarded Sir Michael the first Mott Prize in 2000. He had previously given the first Mott Lecture in 1985. He was awarded the Royal Medal in 2005 for his "work which has had the highest level of influence in condensed matter physics and has resulted in the creation of the modern field of semiconductor nanostructures," gave the Royal Society's Bakerian Prize Lecture in 2004 and received a knighthood in the 2006 New Year's Honours list for services to physics. He was appointed a fellow of the Royal Academy of Engineering. In 2010 he won the Swan Medal and Prize. He has been awarded the 2013 Faraday Medal of the IET. In 2019 he was awarded the Institute of Physics Isaac Newton Medal.
Research interests
Current and resistance quantisation phenomena
Measurement of electron charge
One-dimensional and zero-dimensional electronic phenomena
Quantum transport in general
Localisation and metal-insulator transitions
Properties of strongly interacting electron gases
Bose–Einstein condensation in the solid state
Hybrid magnetic-semiconductor structures
Medical physics, Physics in medicine and biology
Media appearances
Horizon: What is One Degree (10 January 2011) – Interviewed by his former PhD student Ben Miller.
See also
Quantum Hall effect
References
External links
Summary of Pepper's Work from Toshiba
Homepage at the Semiconductor Physics Research group
Pepper's publication list
Toshiba Research Europe
Pepper's Hughes Medal citation
Pepper biography
1942 births
Living people
British Jews
Fellows of the Royal Society
People educated at St Marylebone Grammar School
Alumni of the University of Reading
Fellows of Trinity College, Cambridge
Knights Bachelor
Royal Medal winners
English electrical engineers
Academics of University College London
General Electric Company
Fellows of the American Physical Society
Jewish British scientists |
TSRI may refer to:
Taiwan Semiconductor Research Institute
The Scripps Research Institute, former name for Scripps Research |
A fire alarm system is a building system designed to detect and alert occupants and emergency forces of the presence of smoke, fire, carbon monoxide, or other fire-related emergencies. Fire alarm systems are required in most commercial buildings. They may include smoke detectors, heat detectors, and manual fire alarm activation devices, all of which are connected to a Fire Alarm Control Panel (FACP) normally found in an electrical room or panel room. Fire alarm systems generally use visual and audio signalization to warn the occupants of the building. Some fire alarm systems may also disable elevators, which, under most circumstances, are unsafe to use during a fire.
Design
After the fire protection is established—usually by referencing the minimum levels of security mandated by the appropriate model building code, insurance agencies, and other authorities—the fire alarm designer will detail specific components, arrangements, and interfaces necessary to accomplish these goals. Equipment specifically manufactured for these purposes is selected, and standardized installation methods are anticipated during the design.
ISO 7240-14 is the international standard for the design, installation, commissioning, and service of fire detection and fire alarm systems in and around buildings. It was published in August 2013 as Edition 1, prepared by Technical Committee ISO/TC 21/SC 3, Fire detection and alarm systems.
NFPA 72, the National Fire Alarm Code, is an established and widely used installation standard from the United States; the most recent edition cited here is 2019, and the code is part of the wider NFPA family of standards. In Canada, ULC standards govern fire alarm systems.
CEN/TS 54-14 is a Technical Specification for fire detection and fire alarm systems - Part 14: Guidelines for planning, design, installation, commissioning, use, and maintenance. It was prepared by Technical Committee CEN/TC 72, forms part of the EN 54 series of standards, and was published in October 2018.
In addition, each European country has national codes for the planning, design, installation, commissioning, use, and maintenance of fire detection systems, with requirements beyond those of CEN/TS 54-14:
Germany, Vds 2095
Italy, UNI 9795
France NF S61-936
Spain UNE 23007-14
United Kingdom BS 5839 Part 1
Across Oceania, some Standards outline the requirements, test methods, and performance criteria for fire detection control and indicating equipment (FDCIE) utilised in building fire detection and fire alarm systems.
Australia AS 1603.4 (superseded), AS 4428.1 (superseded) and AS 7240.2:2018
Parts
Fire alarm control panel (FACP), or fire alarm control unit (FACU): This component, the hub of the system, monitors inputs and system integrity, controls outputs, and transmits information.
Remote Annunciator: a device that connects directly to the panel; the annunciator's main purpose is to allow emergency personnel to view the system status and take command from outside the electrical room the panel is located in. Usually, annunciators are installed by the front door, the door the fire department responds by, or in a fire command center. Annunciators typically have the same commands as the panel's LCD except programming, although some annunciators allow for full system control.
Primary power supply: Commonly, a commercial power utility supplies the non-switched 120 or 240-volt alternating current source. A branch circuit is dedicated to the fire alarm system and its constituents in non-residential applications. "Dedicated branch circuits" should not be confused with "Individual branch circuits" which supply energy to a single appliance.
Secondary (backup) power supplies: This component, commonly consisting of sealed lead-acid storage batteries or other emergency sources, including generators, is used to supply energy during a primary power failure. The batteries can be either inside the bottom of the panel or inside a separate battery box installed near the panel.
Initiating devices: These components act as inputs to the fire alarm control unit and are manually or automatically activated. Examples include pull stations, heat detectors, duct detectors, and smoke detectors. Heat and smoke detectors have different categories of both kinds. Some categories are beam, photoelectric, ionization, aspiration, and duct.
Fire alarm notification appliance: This component uses energy supplied from the fire alarm system or other stored energy source, to inform the proximate persons of the need to take action, usually to evacuate. This is done using pulsing incandescent light, flashing strobe light, electromechanical horn, siren, electronic horn, chime, bell, speaker, or a combination of these devices. Strobes are either made of xenon tubes (most common) or recently LEDs.
Building safety interfaces: This interface allows the fire alarm system to control aspects of the built environment, prepare the building for fire, and control the spread of smoke fumes by influencing air movement, lighting, process control, human transport, and availability of exits.
Initiating devices
Manually actuated devices; also known as fire alarm boxes, manual pull stations, or simply pull stations, break glass stations, and (in Europe) call points. Devices for manual fire alarm activation are installed to be readily located (near the exits), identified, and operated. They are usually actuated using physical interaction, such as pulling a lever or breaking glass.
Automatically actuated devices can take many forms intended to respond to any number of detectable physical changes associated with fire: convected thermal energy for a heat detector, products of combustion for a smoke detector, radiant energy for a flame detector, combustion gases for a fire gas detector, and operation of sprinklers for a water-flow detector. The newest innovations can use cameras and computer algorithms to analyze the visible effects of fire and movement in applications inappropriate for or hostile to other detection methods.
Notification appliances
Alarms can be either motorized bells or wall-mountable sounders/horns. They can also be speaker strobes that sound an alarm, followed by a voice evacuation message for clearer instructions on what to do. Fire alarm sounders can be set to certain frequencies and different tones, including low, medium, and high, depending on the country and manufacturer of the device. Most fire alarm systems in Europe sound like a siren with alternating frequencies. Fire alarm electronic devices are known as horns in the United States and Canada and can be continuous or set to different codes. Fire alarm warning devices can also be set to different volume levels.
Notification Appliances utilize audible, visible, tactile, textual or even olfactory stimuli (odorizer) to alert the occupants of the need to evacuate or take action in the event of a fire or other emergency. Evacuation signals may consist of simple appliances that transmit uncoded information, coded appliances that transmit a predetermined pattern, and or appliances that transmit audible and visible textual information such as live or prerecorded instructions, and illuminated message displays. Some Notification appliances are combined fire alarm/general emergency notification appliances, allowing fire and general emergency notifications from a single device.
In the United States, fire alarm evacuation signals generally consist of a standardized audible tone, with visual notification in all public and common-use areas. Emergency signals are intended to be distinct and understandable to avoid confusion with other signals.
As per NFPA 72, 18.4.2 (2010 Edition), Temporal Code 3 is the standard audible notification in a modern system. It consists of a repeated three-pulse cycle (0.5 s on, 0.5 s off, 0.5 s on, 0.5 s off, 0.5 s on, 1.5 s off). Voice evacuation is the second most common audible in a modern system. Legacy systems, typically found in older schools and buildings, have used continuous tones alongside other audible schemes.
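The Temporal Code 3 timing described above is simple enough to express directly in code. The sketch below (Python, purely illustrative and not taken from any alarm product) builds the on/off schedule for a requested number of cycles.

```python
def temporal_3(cycles):
    """Return a list of (state, seconds) pairs for the Temporal Code 3 pattern."""
    schedule = []
    for _ in range(cycles):
        # three 0.5 s pulses, each followed by a 0.5 s gap ...
        for _ in range(3):
            schedule.append(("on", 0.5))
            schedule.append(("off", 0.5))
        # ... except that the gap after the third pulse is stretched to 1.5 s
        schedule[-1] = ("off", 1.5)
    return schedule

print(temporal_3(2))
# [('on', 0.5), ('off', 0.5), ('on', 0.5), ('off', 0.5), ('on', 0.5), ('off', 1.5), ...]
```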
In the United Kingdom, fire alarm evacuation signals generally consist of a two-tone siren with visual notification in all public and common-use areas. Some fire alarm devices have an alert signal which is generally used for schools for lesson changes, the start of morning break, the end of morning break, the start of lunch break, the end of lunch break, and when the school day is over.
Audible textual appliances are employed as part of a fire alarm system that includes Emergency Voice Alarm Communications (EVAC) capabilities. High-reliability speakers notify the occupants of the need for action concerning a fire or other emergency. These speakers are employed in large facilities where general undirected evacuation is impracticable or undesirable. The signals from the speakers are used to direct the occupants' response. The system may be controlled from one or more locations within the building, known as Fire Warden Stations, or from a single location designated as the building Fire Command Center. The fire alarm system automatically actuates speakers in a fire event. Following a pre-alert tone, selected groups of speakers may transmit one or more prerecorded messages directing the occupants to safety. These messages may be repeated in one or more languages. Trained personnel activating and speaking into a dedicated microphone can suppress the replay of automated messages to initiate or relay real-time voice instructions.
Emergency voice alarm communication systems
Some fire alarm systems utilize emergency voice alarm communication systems (EVAC) to provide prerecorded and manual voice messages. Voice alarm systems are typically used in high-rise buildings, arenas, and other large "defend-in-place" occupancies such as hospitals and detention facilities where total evacuation is difficult to achieve.
Voice-based systems allow response personnel to conduct orderly evacuation and notify building occupants of changing event circumstances.
In highrise buildings, different evacuation messages may be played on each floor, depending on the location of the fire. The floor the fire is on along with ones above it may be told to evacuate while floors much lower may be asked to stand by.
Mass notification systems/Emergency communication systems
New codes and standards introduced around 2010, especially the new UL Standard 2572, the US Department of Defense's UFC 4-021-01 Design and O&M Mass Notification Systems, and NFPA 72 2010 edition Chapter 24, have led fire alarm system manufacturers to expand their systems' voice evacuation capabilities to support new requirements for mass notification, including support for multiple types of emergency messaging (e.g., inclement weather emergencies, security alerts, amber alerts). The major requirement of a mass notification system is to provide prioritized messaging according to the local facility's emergency response plan. The emergency response team must prioritize potential emergency events at the site. The fire alarm system must support the promotion and demotion of notifications based on this emergency response plan. Emergency communication systems also have requirements for visible notification in coordination with any audible notification activities to meet the needs of the Americans with Disabilities Act. Many manufacturers have tried to certify their equipment to meet these new and emerging standards. Mass notification system categories include the following:
Tier 1 systems are in-building and provide the highest level of survivability
Tier 2 systems are out of the building and provide the middle level of survivability
Tier 3 systems are "At Your Side" and provide the lowest level of survivability
Mass notification systems often extend the notification appliances of a standard fire alarm system to include PC-based workstations, text-based digital signage, and a variety of remote notification options including email, text message, RSS feed, or IVR-based telephone text-to-speech messaging.
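The promotion and demotion of messages described above is, at its core, an ordering problem driven by the facility's emergency response plan. The sketch below (Python) shows one minimal way such a dispatcher could rank pending notifications; the message types and priority values are hypothetical and not drawn from any manufacturer's system or standard.

```python
import heapq

# Hypothetical priority table from a facility's emergency response plan
# (lower number = broadcast first; these values are made up).
PRIORITY = {"fire": 1, "severe_weather": 2, "security_alert": 3, "amber_alert": 4}

class NotificationQueue:
    def __init__(self):
        self._heap = []
        self._order = 0  # keeps arrival order among equal priorities

    def post(self, kind, message):
        heapq.heappush(self._heap, (PRIORITY[kind], self._order, kind, message))
        self._order += 1

    def next_to_broadcast(self):
        if not self._heap:
            return None
        _, _, kind, message = heapq.heappop(self._heap)
        return kind, message

q = NotificationQueue()
q.post("severe_weather", "Tornado warning for this county until 16:00.")
q.post("fire", "Fire reported on floor 3. Evacuate via the nearest exit.")
print(q.next_to_broadcast())  # the fire message is promoted ahead of the weather alert
```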
Residential Fire Systems
Residential fire alarm systems are also commonplace. Residential system codes are much less strict than those for commercial buildings. Typically, residential fire alarm systems are installed as part of a security system. In the United States, a residential fire alarm system is required if more than 12 smoke detectors are needed. Residential systems are much less complex than commercial systems and have fewer parts.
Building safety interfaces
Magnetic smoke door holders/retainers: wall-mounted solenoids or electromagnets controlled by a fire alarm system or detection component that magnetically secures spring-loaded self-closing smoke-tight doors in the open position. Designed to demagnetize to allow automatic closure of the door on command from the fire control or upon failure of the power source, interconnection, or controlling element. Stored energy in the form of a spring or gravity can then close the door to restrict the passage of smoke from one space to another to maintain a tenable atmosphere on either side of the door during evacuation and firefighting efforts in buildings. Electromagnetic fire door holders can be hard-wired into the fire panel, radio-controlled, triggered by radio waves from a central controller connected to a fire panel, or, acoustic, which learns the sound of the fire alarm and releases the door upon hearing this exact sound.
Duct-mounted smoke detection: smoke detection mounted in such a manner as to sample the airflow through ductwork and other plenums fabricated explicitly for the transport of environmental air into conditioned spaces. Interconnection to the fan motor control circuits is intended to stop air movement, close dampers and generally prevent the recirculation of toxic smoke and fumes from fire into occupiable spaces.
Emergency elevator service: activation of automatic initiating devices associated with elevator operation is used to initiate emergency elevator functions, such as the recall of associated elevator cab(s). The recall will cause the elevator cabs to return to the ground level for use by fire service response teams and to ensure that cabs do not return to the floor of fire incidence, in addition, to prevent people from becoming trapped in the elevators. Phases of operation include primary recall (typically the ground level), alternate/secondary recall (typically a floor adjacent to the ground level—used when the initiation occurred on the primary level), illumination of the "fire hat" indicator when an alarm occurs in the elevator hoistway or associated control room, and in some cases shunt trip (disconnect) of elevator power (generally used where the control room or hoistway is protected by fire sprinklers).
Public address rack (PAR): an audio public address rack shall be interfaced with a fire alarm system by adding a signaling control relay module to either the rack power supply unit or the main amplifier driving the rack. The purpose is to "mute" the rack's background music (BGM) during an emergency, such as a fire initiating a true alarm.
British fire alarm system categories
Fire alarm systems in non-domestic premises are generally designed and installed in accordance with the guidance given in BS 5839 Part 1. There are many types of fire alarm systems, each suited to different building types and applications. A fire alarm system can vary dramatically in price and complexity, from a single panel with a detector and sounder in a small commercial property to an addressable fire alarm system in a multi-occupancy building.
BS 5839 Part 1 categorizes fire alarm systems as:
"M" manual system (no automatic fire detectors so the building is fitted with call points and sounders).
"L" automatic systems intended for the protection of life.
"P" automatic systems intended for the protection of property.
Categories for automatic systems are further subdivided into L1 to L5 and P1 to P2.
Zoning
An important consideration when designing fire alarms is that of individual zones. The following recommendations are found in BS 5839 Part 1:
A single zone should not exceed in floor space.
Where addressable systems are in place, two faults should not remove protection from an area greater than .
A building may be viewed as a single zone if the floor space is less than .
Where the floor space exceeds then all zones should be restricted to a single floor level.
Stairwells, lift shafts or other vertical shafts (nonstop risers) within a single fire compartment should be considered as one or more separate zones.
The maximum distance traveled within a zone to locate the fire should not exceed .
Also, the NFPA recommends placing a list for reference near the FACP showing the devices contained in each zone.
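The zoning recommendations above lend themselves to a simple automated check during design. The sketch below (Python) validates a proposed detection zone against limits supplied by the designer; because the numerical limits are not reproduced in the text above, they are passed in as parameters, and the values used in the example call are deliberately arbitrary placeholders rather than the real BS 5839 figures.

```python
def check_zone(zone_area_m2, search_distance_m, spans_multiple_floors, building_area_m2,
               max_zone_area_m2, max_search_distance_m, single_storey_threshold_m2):
    """Return a list of BS 5839-1 style findings for one detection zone.

    All limit values are supplied by the caller; no real limits are hard-coded here.
    """
    findings = []
    if zone_area_m2 > max_zone_area_m2:
        findings.append("zone floor area exceeds the recommended maximum")
    if search_distance_m > max_search_distance_m:
        findings.append("distance travelled to locate the fire exceeds the recommended maximum")
    if spans_multiple_floors and building_area_m2 > single_storey_threshold_m2:
        findings.append("building floor space exceeds the threshold, so each zone should be "
                        "restricted to a single floor level")
    return findings

# Example call with placeholder values only
print(check_zone(zone_area_m2=1800, search_distance_m=45, spans_multiple_floors=True,
                 building_area_m2=5000, max_zone_area_m2=1500,
                 max_search_distance_m=50, single_storey_threshold_m2=1000))
```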
See also
Fire Safety Equivalency System
Multiple-alarm fire
National Fire Protection Association
Smoke detector
Fire drill
False alarm
EN 54 – European Standard for Fire detection
Emergency population warning – a way to warn people through audible and visual devices when people's lives are in danger.
References
External links
Example Specification Section 283100 Fire Alarm Systems
Authoritative guide to fire alarm systems in UK
NFPA Standards
American inventions |
Paul Ahlquist is an American virologist who is Professor of Oncology, Molecular Virology, and Plant Pathology at the University of Wisconsin–Madison. He is the Associate Director of Basic Sciences at the University of Wisconsin Carbone Cancer Center and the Director of the John and Jeanne Rowe Center for Research in Virology at the Morgridge Institute for Research.
Education
Ahlquist earned his B.S. in physics from Iowa State University and his Ph.D. in biophysics from the University of Wisconsin–Madison. His research is focused on the gene expression of RNA viruses.
Honors and awards
Ahlquist was admitted to the National Academy of Sciences in 1993 and was a Howard Hughes Medical Institute (HHMI) investigator from 1997 to 2021.
Research
His work at the University of Wisconsin–Madison and the Morgridge Institute for Research focuses on molecular mechanisms of viral replication, host interactions, and oncogenesis.
Select publications
Zhan H., Unchwaniwala N., Rebolledo-Viveros A., Pennington J., Horswill M., Broadberry R., Myers J., den Boon J.A., Grant T., Ahlquist P. Nodavirus RNA replication crown architecture reveals proto-crown precursor and viral protein A conformational switching. Proc Natl Acad Sci U S A. 2023 Jan 31;120(5):e2217412120. doi: 10.1073/pnas.2217412120. PMID: 36693094; PMCID: PMC9945985.
den Boon, J. A., Zhan, H., Unchwaniwala, N., Horswill, M., Slavik, K., Pennington, J., Navine, A., & Ahlquist, P. (2022). Multifunctional Protein A Is the Only Viral Protein Required for Nodavirus RNA Replication Crown Formation. Viruses, 14(12), 2711. https://doi.org/10.3390/v14122711.
Evans, E. L., 3rd, Pocock, G. M., Einsdorf, G., Behrens, R. T., Dobson, E. T. A., Wiedenmann, M., Birkhold, C., Ahlquist, P., Eliceiri, K. W., & Sherer, N. M. (2022). HIV RGB: Automated Single-Cell Analysis of HIV-1 Rev-Dependent RNA Nuclear Export and Translation Using Image Processing in KNIME. Viruses, 14(5), 903. https://doi.org/10.3390/v14050903.
Albright, E. R., Morrison, K., Ranganathan, P., Carter, D. M., Nishikiori, M., Lee, J. H., Slayton, M. D., Ahlquist, P., Terhune, S. S., & Kalejta, R. F. (2022). Human cytomegalovirus lytic infection inhibits replication-dependent histone synthesis and requires stem loop binding protein function. Proceedings of the National Academy of Sciences of the United States of America, 119(14), e2122174119. https://doi.org/10.1073/pnas.2122174119.
Benner, B. E., Bruce, J. W., Kentala, J. R., Murray, M., Becker, J. T., Garcia-Miranda, P., Ahlquist, P., Butcher, S. E., & Sherer, N. M. (2022). Perturbing HIV-1 Ribosomal Frameshifting Frequency Reveals a cis Preference for Gag-Pol Incorporation into Assembling Virions. Journal of virology, 96(1), e0134921. https://doi.org/10.1128/JVI.01349-21.
Unchwaniwala, N., Zhan, H., den Boon, J. A., & Ahlquist, P. (2021). Cryo-electron microscopy of nodavirus RNA replication organelles illuminates positive-strand RNA virus genome replication. Current opinion in virology, 51, 74–79. https://doi.org/10.1016/j.coviro.2021.09.008.
Unchwaniwala, N., Zhan, H., Pennington, J., Horswill, M., den Boon, J. A., & Ahlquist, P. (2020). Subdomain cryo-EM structure of nodaviral replication protein A crown complex provides mechanistic insights into RNA genome replication. Proceedings of the National Academy of Sciences of the United States of America, 117(31), 18680–18691. https://doi.org/10.1073/pnas.2006165117.
Nishikiori, M., & Ahlquist, P. (2018). Organelle luminal dependence of (+)strand RNA virus replication reveals a hidden druggable target. Science advances, 4(1), eaap8258. https://doi.org/10.1126/sciadv.aap8258.
den Boon, J. A., Pyeon, D., Wang, S. S., Horswill, M., Schiffman, M., Sherman, M., Zuna, R. E., Wang, Z., Hewitt, S. M., Pearson, R., Schott, M., Chung, L., He, Q., Lambert, P., Walker, J., Newton, M. A., Wentzensen, N., & Ahlquist, P. (2015). Molecular transitions from papillomavirus infection to cervical precancer and cancer: Role of stromal estrogen receptor signaling. Proceedings of the National Academy of Sciences of the United States of America, 112(25), E3255–E3264. https://doi.org/10.1073/pnas.1509322112.
Giménez-Barcons, M., Alves-Rodrigues, I., Jungfleisch, J., Van Wynsberghe, P. M., Ahlquist, P., and Díez, J. The Cellular Decapping Activators LSm1, Pat1, and Dhh1 Control the Ratio of Subgenomic to Genomic Flock House Virus RNAs. J. Virol., 87(11): 6192-6200, 2013.
Seidel, S., Bruce, J., Leblanc, M., Lee, K.-F., Fan, H., Ahlquist, P., and Young, J. A. T. ZASC1 Knockout Mice Exhibit an Early Bone Marrow-Specific Defect in Murine Leukemia Virus Replication. Virol. J., 10(1): 130, 2013.
Diaz, A., and Ahlquist, P. Role of Host Reticulon Proteins in Rearranging Membranes for Positive-Strand RNA Virus Replication. Curr. Opin. Microbiol., 15: 519-524, 2012.
Diaz, A., Gallei, A., and Ahlquist, P. Bromovirus RNA Replication Compartment Formation Requires Concerted Action of 1a’s Self-Interacting RNA Capping and Helicase Domains. J. Virol., 86: 821-834, 2012.
Huang, H.-S., Pyeon, D., Pearce, S. M., Lank, S. M., Griffin, L. M., Ahlquist, P., and Lambert, P. F. Novel Antivirals Inhibit Early Steps in HPV Infection. Antiviral Res., 93: 280-287, 2012.
Wen, Z., Pyeon, D., Wang, Y., Lambert, P., Xu, W., and Ahlquist, P. Orphan Nuclear Receptor PNR/NR2E3 Stimulates p53 Functions by Enhancing p53 Acetylation. Mol. Cell. Biol., 32: 26-35, 2012.
Zhang, J., Diaz, A., Mao, L., Ahlquist, P., and Wang, X. Host Acyl Coenzyme A Binding Protein Regulates Replication Complex Assembly and Activity of a Positive-Strand RNA Virus. J. Virol., 86: 5110-5121, 2012.
Gancarz, B. L., Hao, L., He, Q., Newton, M. A., and Ahlquist, P. Systematic Identification of Novel, Essential Host Genes Affecting Bromovirus RNA Replication. PLoS One, 6(8):e23988, 2011.
Scholthof, K.-B. G., Adkins, S., Czosnek, H., Palukaitis, P., Jacquot, E., Hohn, T., Hohn, B., Saunders, K., Candresse, T., Ahlquist, P., Hemenway, C., and Foster, G. D. Top 10 Plant Viruses in Molecular Plant Pathology. Mol. Plant Pathol., 12: 938-954, 2011.
Wang, X., Diaz, A., Hao, L., Gancarz, B., den Boon, J. A., and Ahlquist, P. Intersection of the Multivesicular Body Pathway and Lipid Homeostasis in RNA Replication by a Positive-Strand RNA Virus. J. Virol., 85: 5494-5503, 2011.
References
External links
His academic home page
His research lab website
His Howard Hughes Medical Institute bio
Living people
Members of the United States National Academy of Sciences
American virologists
Howard Hughes Medical Investigators
Year of birth missing (living people) |
Joe Kaeser (born Josef Käser; June 23, 1957) is a German manager and former CEO of Siemens AG, Berlin & Munich, a role he was in from August 1, 2013, until February 3, 2021.
Early life
Joe Kaeser was born in Arnbruck, in the Bavarian Forest in West Germany, on June 23, 1957. He spent his early life in education throughout Germany. Following his studies in business administration at the Regensburg University of Applied Sciences, he joined Siemens in 1980.
Career
Kaeser subsequently held various business administration management positions, including a term at the Siemens Components Operations in Malacca, Malaysia (1987–1988). In 1990 he was appointed Vice President of business administration of the Opto Semiconductors Division. In 1994 Kaeser served first as Executive Vice President and Chief Financial Officer, and later as CEO of the group's American subsidiary Siemens Components, located in Cupertino, California, as well as at Siemens Microelectronics, in neighboring San Jose.
In 1999 Kaeser joined Corporate Finance where he was responsible for developing a company-wide performance controlling system. During this time he also shared oversight for preparing the company's stock market listing in New York and the worldwide conversion of its accounting system to US GAAP.
From April 2001 to September 2004, Kaeser was a member of the Group Executive Committee of IC Mobile and served as its Chief Financial Officer, where he was especially active in managing and restructuring its finance exposure from customer loans and working capital management.
In his former function as Chief Strategy Officer, Kaeser supported CEO Dr. Klaus Kleinfeld in the design and execution of the Fit4More transformation program, as well as the long-term orientation of the company's strategies on global megatrends.
CEO of Siemens (2013–2021)
In July 2013 it was announced that Kaeser would replace Peter Löscher as the CEO of the Siemens AG.
Since 2013, Kaeser has accompanied Chancellor Angela Merkel on a total of nine state visits abroad, including to China (2014, 2016, 2018), India (2015), Egypt (2017), Tunisia (2017), Argentina (2017), Mexico (2017) and Saudi Arabia (2017). He also travelled with Vice Chancellor Sigmar Gabriel to the US and Mexico in 2017.
During the Hannover Messe in April 2016, Kaeser was among the 15 German CEOs who were invited to a private dinner with President Barack Obama. He was also part of Merkel's delegation on the occasion of her first visit to President Donald Trump in March 2017. At the 2018 World Economic Forum in Davos, he attended a dinner of President Trump with a group of European CEOs.
In January 2020, Kaeser and the Siemens board of directors offered an environmental activist a role on one of the company's boards amid controversy over its decision to work with the mining giant Adani.
Russian visit during the 2014 Crimean crisis
Kaeser traveled to Russia to meet with Russian President Vladimir Putin in April 2014 to re-affirm Siemens' commitment to Russian profits despite widespread international condemnation of Russian military intervention. The move was widely criticized in the Western World, including by German Chancellor Angela Merkel.
However, Kaeser was not the only prominent German in a more pro-Russian mood at the time; others, such as former chancellors Helmut Schmidt and Gerhard Schröder, as well as some in Merkel's own party, like Peter Gauweiler and Armin Laschet, called for more understanding of Russia's views and likewise faced criticism in the German press.
Later, former US National Security Adviser Zbigniew Brzezinski revealed that former World Bank chief Robert Zoellick had aggressively pressured Kaeser over his Russian visit, reminding him that Siemens has more business in the United States than in Russia and that continuing down the Russian path would have negative consequences.
Other activities
Corporate boards
Siemens Energy AG, Chairman of the Supervisory Board
Allianz Deutschland AG, Member of the Supervisory Board
JPMorgan Chase, Member of the International Council (since 2015)
Daimler AG, Member of the supervisory board (since 2014)
NXP Semiconductors, Member of the Board of Directors (since 2010)
Non-profit organizations
Asia-Pacific Committee of German Business (APA), Chairman (since 2019)
Federation of German Industries (BDI), Member of the Presidium (2017–2019)
European Round Table of Industrialists (ERT), Member
Baden-Badener Unternehmer-Gespräche (BBUG), Member of the Board of Trustees
Deutscher Zukunftspreis, Member of the Board of Trustees
European School of Management and Technology (ESMT), Member of the Board of Trustees
Goethe Institute, Member of the Business and Industry Advisory Board
Technical University of Munich (TUM), Member of the Board of Trustees
Trilateral Commission, Member of the European Group
Stifterverband für die Deutsche Wissenschaft, Member of the Board
Honours
Grand Cross of the Order of Entrepreneurial Merit (Industrial Class), Portugal (13 May 2015)
2017 – Prize for Understanding and Tolerance, awarded by the Jewish Museum Berlin
References
External links
Shaping the Future. The Siemens Entrepreneurs 1847–2018. Ed. Siemens Historical Institute, Hamburg 2018, .
Presidents and Chief Executive Officers of Siemens AG
Intellectual sponsor and speaker at the 2016 Future of Leadership Initiative: Meaning@Work
Siemens
Living people
1957 births
German chief executives
Chief financial officers |
An engineering verification test (EVT) is performed on first engineering prototypes to ensure that the basic unit performs to design goals and specifications. Verification ensures that a design meets its requirements and specifications, while validation ensures that the created entity meets user needs and objectives.
Tests
Tests may include:
Functional test (basic)
Power measurement
Signal quality test
Conformance test
Electromagnetic interference (EMI) pre-scan
Thermal and four-corner test
Basic parametric measurements, specification verification
Importance
Identifying design problems and solving them as early in the design cycle as possible is a key to keeping projects on time and within budget. Too often, product design and performance problems are not detected until late in the product development cycle, when the product is ready to be shipped.
Prototyping
In the prototyping stage, engineers create actual working samples of the product they plan to produce. Engineering verification testing (EVT) is used on prototypes to verify that the design meets pre-determined specifications and design goals. This valuable information is used to validate the design as is, or identify areas that need to be modified.
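As a concrete illustration of what verifying a prototype against pre-determined specifications can look like, the sketch below (Python) runs a four-corner style sweep over supply voltage and ambient temperature and flags any reading outside its specification window. The spec limits, test conditions, and the stand-in measurement function are all hypothetical.

```python
import itertools

SPEC_MIN_MV, SPEC_MAX_MV = 0.0, 50.0   # hypothetical limits for one measured parameter
voltages_v = [3.0, 3.6]                # low/high supply voltage corners (made up)
temperatures_c = [-10, 60]             # low/high ambient temperature corners (made up)

def measure_ripple_mv(voltage_v, temperature_c):
    """Stand-in for a real instrument reading; returns a made-up value."""
    return 20.0 + 5.0 * (voltage_v - 3.3) + 0.3 * temperature_c

failures = []
for v, t in itertools.product(voltages_v, temperatures_c):
    value = measure_ripple_mv(v, t)
    print(f"{v:.1f} V, {t:+d} C -> {value:.1f} mV")
    if not (SPEC_MIN_MV <= value <= SPEC_MAX_MV):
        failures.append((v, t, value))

print("EVT corner sweep:", "PASS" if not failures else f"FAIL at {failures}")
```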
Design Verification Test
Design Verification Test (DVT) is an intensive testing program which is performed to deliver objective, comprehensive testing verifying all product specifications, interface standards, Original Equipment Manufacturer (OEM) requirements, and diagnostic commands. It consists of the following areas of testing:
Functional testing (including usability)
Performance testing
Climatic testing
Reliability testing
Environmental testing
Mechanical testing
Mean Time Between Failure (MTBF) prediction
Conformance testing
Electromagnetic compatibility (EMC) testing and certification
Safety certification
Design refinement
After prototyping, the product is moved to the next phase of the design cycle: design refinement. Engineers revise and improve the design to meet performance and design requirements and specifications.
References
External links
https://www.fda.gov/regulatory-information/search-fda-guidance-documents/design-control-guidance-medical-device-manufacturers, "Design Control Guidance For Medical Device Manufacturers", Section F: "Design Verification".
Quality control
Product testing |
This is a list of notable people associated with the University of Cincinnati in the United States of America.
Notable alumni
Those listed include graduates of the University, as well as attendees.
David Applebaum, Israeli physician
Frank P. Austin, celebrity interior designer
Jeff Austin, musician, Yonder Mountain String Band
Juan N. Babauta, graduate, governor of United States Commonwealth of Northern Mariana Islands
Judith Baker, judoka
Theda Bara, silent-film actress
Shari Barkin, pediatrician
John Bardo, educator, President of Wichita State University, Chancellor of Western Carolina University.
John Barrett, graduate, CEO and President of Western & Southern Financial Group
Rachel Barton Butler, playwright
Kathleen Battle, graduate, Grammy Award-winning singer of New York Metropolitan Opera
Shoshana Bean, musical theater graduate, Broadway actress
Stanley Rossiter Benedict, inventor of Benedict's reagent
Raoul Berger, professor at the UC Berkeley School of Law and Harvard Law School, early theorist of originalism
Thomas Berger, A&S graduate, author of Little Big Man
Matt Berninger, lead vocalist and founder of band The National
Theodore Berry, graduate, Mayor of Cincinnati 1972–76; member of Alpha Phi Alpha fraternity
Michael Bierut, DAAP graduate, partner at Pentagram New York
John Shaw Billings, M.D. 1860, began process to organize world's medical literature, now PubMed
Eula Bingham, occupational health scientist
Lee Bowman, graduate, actor in films such as Love Affair, Cover Girl and Bataan
Barnett R. Brickner, rabbi
Frank Brogan, Chancellor of State University System of Florida; former President of Florida Atlantic University
Henry T. Brown, chemical engineer; first African American to earn a BS degree in chemical engineering at the University of Cincinnati
Robert Burck, "naked cowboy" of Times Square in New York City; NYC mayoral candidate
Liz Callaway, singer and actress
David Canary, A&S graduate, multiple Emmy-winning actor on All My Children since 1983
Salmon P. Chase, 23rd Governor of Ohio, U.S. Treasury Secretary 1861–64, Chief Justice 1864-73
Robin T. Cotton, ENT specialist and professor
Dennis Courtney, aka Denis Beaulne, Broadway actor (Peter Pan, Starlight Express), director, choreographer
Chase Crawford, actor and producer
E. Jacob Crull, Montana politician and colonel, rival of Jeannette Rankin (first female member of U.S. Congress)
Cherien Dabis, filmmaker, screenwriter, The L Word, Amreeka
David Daniels, singer
Charles G. Dawes, law graduate, 30th Vice President of the United States, winner of Nobel Peace Prize
Scott Devendorf, bass guitarist, founder of band The National
Jonathan Dever, former member of Ohio House of Representatives
Vinod Dham, graduate, "father" of Pentium computer chip (MS Eng, 77)
John Price Durbin, Chaplain of the Senate, president of Dickinson College
Jennifer Eberhardt, social psychologist, MacArthur Fellow
Randy Edelman, music graduate, composer of movie scores, received BMI's Outstanding Career Achievement Award
Margaret Elizabeth Egan, librarian and communication scholar
Suzanne Farrell, prima ballerina, recipient of Kennedy Center Honors and Presidential Medal of Freedom
Hattie V. Feger, professor of education at Clark Atlanta University, 1931-1944
Abraham J. Feldman (1893–1977), rabbi
Mark "Markiplier" Fischbach, YouTube personality/media star
Stephen Flaherty, music graduate, Tony-winning composer (Ragtime, Once on This Island)
Frederick W. Franz, Jehovah's Witness, president of Watchtower Society
Paul Gilger, architecture graduate, architect, conceived Jerry Herman musical revue Showtune, designed Industrial Light & Magic film studio for George Lucas
Samuel H. Goldenson, rabbi
Leon Goldman, pioneer in laser medicine
Alexander D. Goode, one of Four Chaplains
Michael Graves, architecture graduate, architect
Moses J. Gries, rabbi
Louis Grossmann, rabbi
Michael Gruber, stage actor, singer, and dancer
Beth Gylys, poet and professor
Victor H. Haas, 1st Director of NIAID
Albert Hague, music graduate, composer of score for How the Grinch Stole Christmas, won nine Tony Awards for Redhead in 1959
Victor W. Hall, U.S. Navy Rear Admiral
Hollis Hammonds, artist and academic
Earl Hamner, graduate, writer, creator of The Waltons
Walt Handelsman, A&S graduate, Pulitzer Prize-winning political cartoonist
Dorian Harewood, drama graduate, film and television actor, voice artist
Randy Harrison, drama graduate, actor, Queer as Folk
Mary Hecht, BA 1952, American-born Canadian sculptor
James G. Heller, rabbi and composer
Maximilian Heller, rabbi
Bob Herbold, former Microsoft COO
Louise McCarren Herring, engineering graduate, pioneer of non-profit cooperative credit union movement
Al Hirt, trumpeter and bandleader
Ronald Howes, inventor of Easy-Bake Oven
Sarah Hutchings, composer
Bruce Edwards Ivins, microbiologist; key suspect in 2001 anthrax terror attacks, leaving five people dead
Ali Jarbawi, Palestinian politician and academic
James Kaiser, electrical engineer who developed Kaiser window for digital signal processing, winner of IEEE Jack S. Kilby Signal Processing Medal
Jerry Kathman, President and CEO of LPK
Charles Keating, criminal (Keating Five scandal); virulent anti-pornography activist
Robert Kistner, gynecologist
Bradley M. Kuhn, M.S. 2001, software freedom activist
James Michael Lafferty, division CEO in Procter and Gamble, Coca-Cola, and British American Tobacco; current CEO of Fine Hygienic Holding. Olympic Track and Field Coach.
Sean Lahman, historian and sports writer
Kenesaw Mountain Landis, federal judge and first Commissioner of Major League Baseball
William Lawrence, Congressman, first vice president of American Red Cross
Christopher W. Lentz, U.S. Air Force Brigadier General
Liang Sili, academician of Chinese Academy of Sciences
Emil W. Leipziger, rabbi
Abraham Lubin, hazzan
Charlie Luken, law graduate, politician and former Mayor of Cincinnati
Judah Leon Magnes, rabbi, Chancellor/President of the Hebrew University of Jerusalem 1925-1948
Michael Malatin, entrepreneur in field of hospital valet parking
Beverly Malone, nurse and president of American Nurses Association
Steven L. Mandel, anesthesiologist
Jack Manning, actor, stage director, acting teacher
Marco Marsan, author
Kevin McCollum, graduate, Tony-winning Broadway producer (Rent, Avenue Q, The Drowsy Chaperone)
Guy McElroy (M.A. 1972), art historian and curator
Martin A. Meyer, rabbi
Gregory Mixon, (Ph.D. 1989), American historian
Julian Morgenstern, rabbi, Hebrew Union College professor and president
Lena Beatrice Morton, literary scholar
Pamela Myers, musical theater graduate, Tony-nominated stage and screen actor
Morris Newfield, rabbi
Sandra Novack, author
Michele Pawk, musical theater graduate, Tony-winning Broadway actress (Hollywood Arms, Cabaret)
Archimedes Plutonium, (B.A. as Ludwig Hansen, 1972), notable Usenet personality
Paul Polman, CEO of Unilever
Jennie Porter, first black person to receive a Ph.D. from the University of Cincinnati and the first black female public school principal in Cincinnati
James B. Preston, neurophysiologist
Faith Prince, musical theater graduate, Tony-winning Broadway actress (Guys and Dolls)
Lee Roy Reams, musical theater graduate, Tony-nominated actor, dancer
Michael E. Reynolds, champion of the "earthship" sustainable construction movement
Dennis L. Riley (born 1945), politician in New Jersey General Assembly, represented 4th Legislative District 1980-90
Diana Maria Riva, drama graduate, screen actor
Anne Mason Roberts (1910-1971), HUD official in the 1960s
Michael Robinson, activist for civil rights and human rights
Mitch Rowland, Grammy Award-winning songwriter and lead guitarist in Harry Styles' band
Jerry Rubin, activist
Nipsey Russell, actor, comedian, game show panelist, Tin Man in film version of The Wiz
Rajiv Satyal, comedian, host and speaker; named the university's radio-station-turned-media group "BearCast"
Linda Schele, art and education major, expert on Mayan inscriptions and hieroglyphics
Robert P. Schumaker, creator of AZFinText, a news-aware high-frequency stock prediction system
Jean Schmidt, Congresswoman from Ohio, 2005–13
Teddi Siddall, drama graduate, screen actor
Abram Simon, rabbi
Yvette Simpson, law graduate, 2011-2017 Cincinnati City Councilwoman
George Speri Sperti, inventor
Joseph B. Strauss, engineering graduate, designed Golden Gate Bridge
Thomas Szasz, psychiatrist and author of The Myth of Mental Illness
Bob Taft, law graduate, 1999-2007 Governor of Ohio
William Howard Taft, law graduate, 27th President of the United States, Supreme Court Chief Justice
Christian Tetzlaff, professional violinist
Paul Tibbets, pilot of B-29 plane that dropped atom bomb over Hiroshima
Dwight Tillery, politician, former Mayor of Cincinnati
Tom Tsuchiya, sculptor, most notable for the bronze plaques for the National Baseball Hall of Fame
Tom Uttech, painter
Anne Valente, novelist and short-story writer
David Bell, author
Rodney Van Johnson, education graduate, actor (soap opera Passions)
Sigismund von Braun, German diplomat, older brother of Wernher von Braun
David J. Williams, Director of Architecture, musician
Clarence A. Winder, civic leader, Mayor of Pasadena, California in 1950s
Chris Wanstrath, co-founder and former CEO of GitHub
Louis Wolsey, rabbi
George Zepin, rabbi
Martin Zielonka, rabbi
Dylan Mulvaney, actress and social media personality
Athletics
Jim Ard, basketball player for 1976 NBA champion Boston Celtics, sixth overall selection of 1970 NBA draft
Skeeter Barnes, Major League Baseball player for Cincinnati Reds, Montreal Expos, St. Louis Cardinals and Detroit Tigers
Connor Barwin, NFL defensive end for Los Angeles Rams, selected 2nd round (46th overall) in 2009 NFL Draft
Bob Bell, NFL defensive end for Detroit Lions and St. Louis Cardinals
Corie Blount, basketball player, Chicago Bulls, first round pick in 1993 NBA draft
Ron Bonham, basketball player, 1962 NCAA champion with Cincinnati Bearcats, 2-time NBA champion with Boston Celtics
Vaughn Booker, NFL defensive end for Kansas City Chiefs, Green Bay Packers and Cincinnati Bengals
Ed Brinkman, All-Star baseball player, Washington Senators and Detroit Tigers
Tony Campana, MLB player for Chicago Cubs
Jim Capuzzi, NFL defensive back and quarterback, played for Green Bay Packers
Brent Celek, NFL tight end for Philadelphia Eagles, selected 5th round (162nd overall) in 2007 NFL Draft, Super Bowl LII Champion
Antonio Chatman, NFL wide receiver, played for Cincinnati Bengals and Green Bay Packers
Trent Cole, NFL defensive end for Philadelphia Eagles 2005–14, selected 5th round (146th overall) in 2005 NFL Draft
Zach Collaros, CFL quarterback, 3-time Grey Cup champion (2012, 2019, 2021)
Cris Collinsworth, law graduate, Emmy-winning sports commentator, NFL wide receiver
Bryan Cook, NFL safety for Kansas City Chiefs, Super Bowl LVII Champion
Greg Cook, graduate, NFL quarterback for Cincinnati Bengals
Pat Cummings, NBA player, New York Knicks, Milwaukee Bucks, Dallas Mavericks
Ralph Davis, basketball player, 17th pick of 1960 NBA draft
Zach Day, MLB pitcher
Connie Dierking, basketball player, fifth overall selection of 1958 NBA draft
Jacob Eisner (born 1947), Israeli basketball player
Jason Fabini, NFL offensive tackle, New York Jets
Nate Fish, baseball player and coach
Andre Frazier, NFL linebacker, played for Cincinnati Bengals and Pittsburgh Steelers, 2-time Super Bowl Champion (XL, XLIII)
Danny Fortson, basketball player, 10th overall pick of 1997 NBA draft
Rich Franklin, professional mixed martial artist, former UFC middleweight champion, V.P. of Asian MMA organization ONE Championship
Sauce Gardner, NFL cornerback, New York Jets, selected 1st round (4th overall) in 2022 NFL Draft
Yancy Gates (born 1989), basketball player for Ironi Nahariya of Israeli Premier League
Antonio Gibson, USFL and NFL safety, Philadelphia Stars and New Orleans Saints
Mardy Gilyard, CFL wide receiver
Marcellus Greene, NFL and Canadian Football League player
Tyjuan Hagler, football linebacker for NFL's Indianapolis Colts
Ian Happ, MLB player for Chicago Cubs
Josh Harrison, MLB player for Pittsburgh Pirates
Jim Herman, professional golfer on the PGA Tour with three professional wins
Paul Hogue, basketball player, 2-time NCAA champion with Cincinnati Bearcats, 2nd overall pick of 1962 NBA draft
Candice Holley, basketball player
Jim Holstein, pro basketball player, college head coach
Kevin Huber, NFL punter, played for Cincinnati Bengals
Miller Huggins, Hall of Fame baseball player and manager; managed champion New York Yankees teams of 1920s
George Jamison, NFL linebacker, played for Detroit Lions
DerMarr Johnson, basketball player
Lewis Johnson, graduate, track & field broadcaster
Ed Jucker, basketball player, coach of Cincinnati Bearcats' 2-time national champions
Rich Karlis, NFL placekicker, played for Denver Broncos
Brendon Kay, football player
Tinker Keck, XFL football player
Jason Kelce, NFL center for Philadelphia Eagles, Super Bowl LII Champion
Travis Kelce, NFL tight end for Kansas City Chiefs, 2-time Super Bowl Champion (LIV, LVII)
Sean Kilpatrick (born 1990), NBA player for Chicago Bulls, and for Hapoel Jerusalem of the Israeli Basketball Super League
Sandy Koufax, Hall of Fame baseball pitcher, 4-time World Series champion
Steve Logan, basketball player
Kenyon Martin, basketball player for New York Knicks, top pick in 2000 NBA draft
Jason Maxiell, former NBA power forward, played for Detroit Pistons
Urban Meyer, former head football coach of the Florida Gators and the Ohio State Buckeyes; won the 2007 and 2009 BCS national championships with Florida and the 2014 CFP national championship with Ohio State
Joe Morrison, NFL running back and wide receiver for New York Giants
Haruki Nakamura, NFL safety for Baltimore Ravens, Carolina Panthers
Elbie Nickel, NFL tight end, played for Pittsburgh Steelers
Ray Nolting, NFL running back, played for Chicago Bears
Jim O'Brien, NFL placekicker for Baltimore Colts, Super Bowl V champion
Tom O'Malley, NFL quarterback, played for Green Bay Packers
Brig Owens, NFL defensive back, played for Washington Redskins
Ruben Patterson, NBA player, Portland Trail Blazers, Milwaukee Bucks
David Payne, 110m hurdler, 2008 Olympic silver medalist
Isaiah Pead, NFL running back, played for St. Louis Rams, Pittsburgh Steelers, and Miami Dolphins
Tony Pike, NFL quarterback, played for Carolina Panthers
Desmond Ridder, NFL quarterback for Atlanta Falcons
Oscar Robertson, Hall of Fame basketball player, NBA champion and MVP
Tom Rossley, former football head coach at SMU, offensive coordinator for Green Bay Packers
Kelly Salchow, former Olympic rower (2004 and 2008 Olympic Games), Women's Quadruple Sculls
Kenny Satterfield, professional basketball player, 2001–12
Kerry Schall, competed on reality show The Ultimate Fighter 2, professional MMA fighter
Lance Stephenson, basketball player for Los Angeles Lakers
Andrew Stewart, football player
Clint Stickdorn, football player
Tom Thacker, basketball player, NCAA and NBA champion, top pick of 1963 NBA draft
Jordan Thompson, Olympic gold medalist volleyball player and member of the United States national team.
Bill Talbert, tennis player, 5-time U.S. Open champion, International Tennis Hall of Fame
Tony Trabert, tennis player, Wimbledon and U.S. Open champion, International Tennis Hall of Fame
Jack Twyman, basketball player, College Basketball Hall of Fame, 6-time NBA All-Star
Brandon Underwood, NFL safety, Super Bowl XLV champion
Nick Van Exel, basketball player, 1998 NBA All-Star
LaDaris Vann, football player
Roland West, basketball player
James White, NBA guard/forward for New York Knicks, NBA champion
Bob Wiesenhahn, basketball player, 1961 NCAA champion with Cincinnati Bearcats, 11th overall pick of 1961 NBA draft
John Williamson (born 1986), basketball player for Maccabi Kiryat Gat B.C. of the Israeli Basketball Premier League
Eric Wilson, football player
Mary Wineberg, 2008 Olympic gold medalist, 4 × 400 m relay
George Winn, NFL running back
Derek Wolfe, NFL defensive end, Baltimore Ravens
D. J. Woods, Canadian Football League wide receiver, Ottawa Redblacks
Mike Woods, All-American and NFL player
Tony Yates, basketball player for two-time national champion Cincinnati Bearcats, head coach 1983-89
Kevin Youkilis, 3-time All-Star, Gold Glove winner, 2-time World Series champion, MLB player 2004-13
Curtis Young, NFL defensive end, Green Bay Packers
Notable faculty
Neil Armstrong, astronaut, professor of aerospace engineering (1971–1979)
Kamala Balakrishnan, immunologist, professor of transplantation medicine
Carl Blegen, first scientific explorer of Troy
Tanya Froehlich, pediatrician
Karen L. Gould (born 1948), President of Brooklyn College
Michael Griffith, author
Kay Kinoshita, physicist
Santa Ono, biomedical scientist, 28th President of University of Cincinnati, 15th President of University of British Columbia, 15th President of the University of Michigan
Neil Rackham, author of Spin Selling
George Rieveschl, inventor of diphenhydramine (Benadryl)
Albert Sabin, developed the oral live polio vaccine
Vernon L. Scarborough, Mesoamerican archaeologist, professor, and anthropology department head
Herman Schneider, father of co-operative education
Donald Shell, inventor of Shell sort
Gabriel P. Weisberg, art historian
References
UC Magazine on Famous Alumni
University of Cincinnati people
University of Cincinnati |
The electronics industry is the economic sector that produces electronic devices. It emerged in the 20th century and is today one of the largest global industries. Contemporary society uses a vast array of electronic devices built in automated or semi-automated factories operated by the industry. Products are primarily assembled from metal–oxide–semiconductor (MOS) transistors and integrated circuits, the latter principally by photolithography and often on printed circuit boards.
The industry's size, the use of toxic materials, and the difficulty of recycling have led to a series of problems with electronic waste. International regulation and environmental legislation have been developed to address the issues.
The electronics industry consists of various sectors. The central driving force behind the entire electronics industry is the semiconductor industry sector, which has annual sales of over as of 2018. The largest industry sector is e-commerce, which generated over in 2017.
History
The electric power industry began in the 19th century and led to the development of inventions such as gramophones, radio transmitters and receivers, and television. The vacuum tube was used for early electronic devices, before later being largely supplanted by semiconductor components as the fundamental technology of the industry.
The first working transistor, a point-contact transistor, was invented by John Bardeen and Walter Houser Brattain at Bell Laboratories in 1947, which led to significant research in the field of solid-state semiconductors during the 1950s. This led to the emergence of the home entertainment consumer electronics industry starting in the 1950s, largely due to the efforts of Tokyo Tsushin Kogyo (now Sony) in successfully commercializing transistor technology for a mass market, with affordable transistor radios and then transistorized television sets.
The industry employs large numbers of electronics engineers and electronics technicians to design, develop, test, manufacture, install, and repair electrical and electronic equipment such as communication equipment, medical monitoring devices, navigational equipment, and computers. Common parts manufactured are connectors, system components, cell systems, and computer accessories, and these are made of alloy steel, copper, brass, stainless steel, plastic, steel tubing, and other materials.
Consumer electronics
Consumer electronics are products intended for everyday use, most often in entertainment, communications and office productivity. Radio broadcasting in the early 20th century brought the first major consumer product, the broadcast receiver. Later products include personal computers, telephones, MP3 players, cell phones, smart phones, audio equipment, televisions, calculators, GPS automotive electronics, digital cameras and players and recorders using video media such as DVDs, VCRs or camcorders. Increasingly these products have become based on digital technologies, and have largely merged with the computer industry in what is increasingly referred to as the consumerization of information technology.
The CEA (Consumer Electronics Association) projected the value of annual consumer electronics sales in the United States to be over in 2008. Global annual consumer electronic sales are expected to reach by 2020.
Effects on the environment
Electrical waste contains hazardous, valuable, and scarce materials, and up to 60 elements can be found in complex electronics.
The United States and China are the world leaders in producing electronic waste, each discarding about 3 million tons each year. China also remains a major e-waste dumping ground for developed countries. UNEP estimates that the amount of e-waste being produced – including mobile phones and computers – could rise by as much as 500 percent over the next decade in some developing countries, such as India.
Increasing environmental awareness has led to changes in electronics design to reduce or eliminate toxic materials and reduce energy consumption. The Restriction of Hazardous Substances Directive (RoHS) and Waste Electrical and Electronic Equipment Directive (WEEE) were released by the European Commission in 2002.
Manufacturing
Largest electronics industry sectors
See also
Consumer electronics
Electronic engineering
Electronics
Microelectronics
MOSFET
Integrated circuit
Nanoelectronics
Power electronics
Semiconductor
Silicon
Technology
Notes
References
External links
Joint Electron Device Engineering Council (JEDEC)
Electronic Industry Citizenship Coalition
Global Electronics Industry: Poster Child of 21st Century Sweatshops and Despoiler of the Environment?, Garrett Brown
20th-century introductions
Industries (economics) |
Quite Universal Circuit Simulator (Qucs) is a free-software electronics circuit simulator software application released under GPL. It offers the ability to set up a circuit with a graphical user interface and simulate the large-signal, small-signal and noise behaviour of the circuit. Pure digital simulations are also supported using VHDL and/or Verilog. Only a small set of digital devices like flip flops and logic gates can be used with analog circuits. Qucs uses its own SPICE-incompatible backend simulator Qucsator, however the Qucs-S fork supports some SPICE backends.
Qucs supports a growing list of analog and digital components as well as SPICE sub-circuits. It is intended to be much simpler to use and handle than other circuit simulators like gEDA or PSPICE.
Analysis types
Analysis types include S-parameter (including noise), AC (including noise), DC, Transient Analysis, Harmonic Balance (not yet finished), Digital simulation (VHDL and Verilog-HDL) and Parameter sweeps.
Features at a glance
Qucs has a graphical interface for schematic capture. Simulation data can be represented in various types of diagrams, including Smith-Chart, Cartesian, Tabular, Polar, Smith-Polar combination, 3D-Cartesian, Locus Curve, Timing Diagram and Truth Table.
The documentation offers many useful tutorials (WorkBook), reports (ReportBook) and a technical description of the simulator.
Other features include a transmission line calculator, filter synthesis, a Smith-chart tool for power and noise matching, attenuator design synthesis, a device model and subcircuit library manager, an optimizer for analog designs, a Verilog-A interface, support for multiple languages (GUI and internal help system), subcircuit hierarchy (including parameters), and powerful data post-processing using equations and symbolically defined nonlinear and linear devices.
Tool suite
Qucs consists of several standalone programs interacting with each other through a GUI.
The GUI is used to create schematics, set up simulations, display simulation results, write VHDL code, etc.
The analog simulator, gnucsator, is a command line program which is run by the GUI in order to simulate the schematic which was previously set up. It reads a netlist file augmented with commands, performs the simulations, and finally produces a dataset file. It can also report errors.
The GUI includes a text editor which can display netlists and simulation logging information, and is handy for editing files related to certain components (e.g. SPICE netlists or Touchstone files).
A filter synthesis application can help design various types of filters.
The transmission line calculator can be used to design and analyze different types of transmission lines (e.g. microstrips, coaxial cables).
A component library manager gives access to models for real life devices (e.g. transistors, diodes, bridges, opamps). These are usually implemented as macros. The library can be extended by the user.
The attenuator synthesis application can be used to design various types of passive attenuators.
The command line conversion program tool is used by the GUI to import and export datasets, netlists and schematics from and to other CAD/EDA software. The supported file formats as well as usage information can be found on the manpage of qucsconv.
Additionally, the GUI can steer other EDA tools. Analog and mixed simulations can be performed by simulators that read the Qucsator netlist format. For purely digital simulations (via VHDL) the program FreeHDL or Icarus-Verilog can be used. For circuit optimization (minimization of a cost function), ASCO may be invoked.
Components
The following categories of components are provided:
Lumped components (R, L, C, amplifier, phase shifter, etc.)
Sources
Probes
Transmission lines
Nonlinear components (diodes, transistors, etc.)
Digital components
File containers (S-parameter datasets, SPICE netlists)
Paintings
There is also a Component library that includes various standard components available in the market (bridges, diodes, varistors, LEDs, JFETs, MOSFETS, and so on).
Transistor models
Qucs supports a number of transistor models, some of which need to be added by hand. Models that have been tested include:
FBH-HBT
HICUM L0 v1.12
HICUM L0 v1.2
HICUM L2 v2.1
HICUM L2 v2.22
HICUM L2 v2.23
MESFET (Curtice, Statz, TOM-1 and TOM-2)
SGP (SPICE Gummel-Poon)
MOSFET
JFET
EPFL-EKV MOSFET v2.6.
Qucs-S
Qucs-S is a fork of Qucs that supports the SPICE-compatible simulator backends Ngspice, Xyce and SpiceOpus, in addition to Qucsator. Version 2 was released on August 19, 2023.
See also
Comparison of EDA Software
List of free electronics circuit simulators
References
External links
FreeHDL home page
Icarus Verilog home page
Win32 Binaries for Qucs and freehdl
QucsStudio
Free electronic design automation software
Free software programmed in C++
Electronic design automation software for Linux
Electronic circuit simulators
Engineering software that uses Qt |
A binary multiplier is an electronic circuit used in digital electronics, such as a computer, to multiply two binary numbers.
A variety of computer arithmetic techniques can be used to implement a digital multiplier. Most techniques involve computing the set of partial products, which are then summed together using binary adders. This process is similar to long multiplication, except that it uses a base-2 (binary) numeral system.
History
Between 1947 and 1949, Arthur Alec Robinson worked for English Electric Ltd, first as a student apprentice and then as a development engineer. During this period he also studied for a PhD degree at the University of Manchester, where he worked on the design of the hardware multiplier for the early Mark 1 computer.
However, until the late 1970s, most minicomputers did not have a multiply instruction, and so programmers used a "multiply routine" which repeatedly shifts and accumulates partial results, often written using loop unwinding. Mainframe computers had multiply instructions, but they did the same sorts of shifts and adds as a "multiply routine".
Early microprocessors also had no multiply instruction. Though the multiply instruction became common with the 16-bit generation, at least two 8-bit processors have one: the Motorola 6809, introduced in 1978, and the Intel MCS-51 family, developed in 1980. The modern Atmel AVR 8-bit microcontrollers (the ATmega, ATtiny and ATxmega families) later followed.
As more transistors per chip became available due to larger-scale integration, it became possible to put enough adders on a single chip to sum all the partial products at once, rather than reuse a single adder to handle each partial product one at a time.
Because some common digital signal processing algorithms spend most of their time multiplying, digital signal processor designers sacrifice considerable chip area in order to make the multiply as fast as possible; a single-cycle multiply–accumulate unit often used up most of the chip area of early DSPs.
Binary long multiplication
The method taught in school for multiplying decimal numbers is based on calculating partial products, shifting them to the left and then adding them together. The most difficult part is to obtain the partial products, as that involves multiplying a long number by one digit (from 0 to 9):
123
× 456
=====
738 (this is 123 × 6)
615 (this is 123 × 5, shifted one position to the left)
+ 492 (this is 123 × 4, shifted two positions to the left)
=====
56088
A binary computer does exactly the same multiplication as decimal numbers do, but with binary numbers. In binary encoding each long number is multiplied by one digit (either 0 or 1), and that is much easier than in decimal, as the product by 0 or 1 is just 0 or the same number. Therefore, the multiplication of two binary numbers comes down to calculating partial products (which are 0 or the first number), shifting them left, and then adding them together (a binary addition, of course):
1011 (this is binary for decimal 11)
× 1110 (this is binary for decimal 14)
======
0000 (this is 1011 × 0)
1011 (this is 1011 × 1, shifted one position to the left)
1011 (this is 1011 × 1, shifted two positions to the left)
+ 1011 (this is 1011 × 1, shifted three positions to the left)
=========
10011010 (this is binary for decimal 154)
This is much simpler than in the decimal system, as there is no table of multiplication to remember: just shifts and adds.
This method is mathematically correct and has the advantage that a small CPU may perform the multiplication by using the shift and add features of its arithmetic logic unit rather than a specialized circuit. The method is slow, however, as it involves many intermediate additions. These additions are time-consuming. Faster multipliers may be engineered in order to do fewer additions; a modern processor can multiply two 64-bit numbers with 6 additions (rather than 64), and can do several steps in parallel.
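The shift-and-add approach just described can be written as a few lines of C (a sketch with illustrative names; the 8-bit operand width is chosen only to keep the example small):

#include <stdint.h>

/* Multiply two unsigned 8-bit integers using only shifts and adds,
   as a small CPU without a hardware multiplier might do it. */
uint16_t mul8_shift_add(uint8_t a, uint8_t b)
{
    uint16_t product = 0;
    uint16_t addend = b;            /* partial product, shifted left each step */
    for (int i = 0; i < 8; i++) {
        if (a & 1)                  /* if the current bit of a is 1 ...        */
            product += addend;      /* ... add the shifted copy of b           */
        a >>= 1;                    /* move to the next bit of a               */
        addend <<= 1;               /* shift the partial product one place     */
    }
    return product;
}

For the example above, mul8_shift_add(11, 14) returns 154, performing at most eight additions.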
A second problem is that the basic school method handles the sign with a separate rule ("+ with + yields +", "+ with − yields −", etc.). Modern computers embed the sign of the number in the number itself, usually in the two's complement representation. That forces the multiplication process to be adapted to handle two's complement numbers, and that complicates the process a bit more. Similarly, processors that use ones' complement, sign-and-magnitude, IEEE-754 or other binary representations require specific adjustments to the multiplication process.
Unsigned integers
For example, suppose we want to multiply two unsigned 8-bit integers together: a[7:0] and b[7:0]. We can produce eight partial products by performing eight 1-bit multiplications, one for each bit in multiplicand a:
p0[7:0] = a[0] × b[7:0] = {8{a[0]}} & b[7:0]
p1[7:0] = a[1] × b[7:0] = {8{a[1]}} & b[7:0]
p2[7:0] = a[2] × b[7:0] = {8{a[2]}} & b[7:0]
p3[7:0] = a[3] × b[7:0] = {8{a[3]}} & b[7:0]
p4[7:0] = a[4] × b[7:0] = {8{a[4]}} & b[7:0]
p5[7:0] = a[5] × b[7:0] = {8{a[5]}} & b[7:0]
p6[7:0] = a[6] × b[7:0] = {8{a[6]}} & b[7:0]
p7[7:0] = a[7] × b[7:0] = {8{a[7]}} & b[7:0]
where {8{a[0]}} means repeating a[0] (the 0th bit of a) 8 times (Verilog notation).
In order to obtain our product, we then need to add up all eight of our partial products, as shown here:
p0[7] p0[6] p0[5] p0[4] p0[3] p0[2] p0[1] p0[0]
+ p1[7] p1[6] p1[5] p1[4] p1[3] p1[2] p1[1] p1[0] 0
+ p2[7] p2[6] p2[5] p2[4] p2[3] p2[2] p2[1] p2[0] 0 0
+ p3[7] p3[6] p3[5] p3[4] p3[3] p3[2] p3[1] p3[0] 0 0 0
+ p4[7] p4[6] p4[5] p4[4] p4[3] p4[2] p4[1] p4[0] 0 0 0 0
+ p5[7] p5[6] p5[5] p5[4] p5[3] p5[2] p5[1] p5[0] 0 0 0 0 0
+ p6[7] p6[6] p6[5] p6[4] p6[3] p6[2] p6[1] p6[0] 0 0 0 0 0 0
+ p7[7] p7[6] p7[5] p7[4] p7[3] p7[2] p7[1] p7[0] 0 0 0 0 0 0 0
-------------------------------------------------------------------------------------------
P[15] P[14] P[13] P[12] P[11] P[10] P[9] P[8] P[7] P[6] P[5] P[4] P[3] P[2] P[1] P[0]
In other words, P[15:0] is produced by summing p0, p1 << 1, p2 << 2, and so forth, to produce our final unsigned 16-bit product.
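Translated into C (a sketch; the function name is illustrative), the same scheme forms each partial product by AND-ing b with a mask made from one bit of a, then accumulates the shifted partial products:

#include <stdint.h>

/* Array-multiplier style: form all eight partial products, then sum them. */
uint16_t mul8_partial_products(uint8_t a, uint8_t b)
{
    uint16_t sum = 0;
    for (int i = 0; i < 8; i++) {
        /* {8{a[i]}} & b : replicate bit i of a across 8 bits, AND with b */
        uint8_t mask = ((a >> i) & 1) ? 0xFF : 0x00;
        uint16_t p   = mask & b;        /* partial product p_i               */
        sum += (uint16_t)(p << i);      /* p_i enters the sum shifted left i */
    }
    return sum;                         /* P[15:0]                           */
}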
Signed integers
If b had been a signed integer instead of an unsigned integer, then the partial products would need to have been sign-extended up to the width of the product before summing. If a had been a signed integer, then partial product p7 would need to be subtracted from the final sum, rather than added to it.
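A C sketch of that recipe (illustrative only; it relies on two's complement wraparound modulo 2^16, which the hardware array below also exploits):

#include <stdint.h>

/* Signed 8 x 8 -> 16-bit multiply built from sign-extended partial products:
   every partial product is b sign-extended to 16 bits, and the partial
   product for bit 7 of a is subtracted because that bit has weight -2^7 in
   two's complement.  Working in uint16_t keeps the shifts well defined. */
int16_t mul8_signed(int8_t a, int8_t b)
{
    uint16_t b_ext = (uint16_t)(int16_t)b;   /* b sign-extended to 16 bits */
    uint16_t bits  = (uint8_t)a;             /* raw bit pattern of a       */
    uint16_t sum   = 0;
    for (int i = 0; i < 8; i++) {
        if ((bits >> i) & 1) {
            uint16_t p = (uint16_t)(b_ext << i);
            if (i == 7)
                sum -= p;                    /* subtract p7 ...            */
            else
                sum += p;                    /* ... add p0..p6             */
        }
    }
    return (int16_t)sum;   /* reinterpret as two's complement result       */
}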
The above array multiplier can be modified to support two's complement notation signed numbers by inverting several of the product terms and inserting a one to the left of the first partial product term:
1 ~p0[7] p0[6] p0[5] p0[4] p0[3] p0[2] p0[1] p0[0]
~p1[7] +p1[6] +p1[5] +p1[4] +p1[3] +p1[2] +p1[1] +p1[0] 0
~p2[7] +p2[6] +p2[5] +p2[4] +p2[3] +p2[2] +p2[1] +p2[0] 0 0
~p3[7] +p3[6] +p3[5] +p3[4] +p3[3] +p3[2] +p3[1] +p3[0] 0 0 0
~p4[7] +p4[6] +p4[5] +p4[4] +p4[3] +p4[2] +p4[1] +p4[0] 0 0 0 0
~p5[7] +p5[6] +p5[5] +p5[4] +p5[3] +p5[2] +p5[1] +p5[0] 0 0 0 0 0
~p6[7] +p6[6] +p6[5] +p6[4] +p6[3] +p6[2] +p6[1] +p6[0] 0 0 0 0 0 0
1 +p7[7] ~p7[6] ~p7[5] ~p7[4] ~p7[3] ~p7[2] ~p7[1] ~p7[0] 0 0 0 0 0 0 0
------------------------------------------------------------------------------------------------------------
P[15] P[14] P[13] P[12] P[11] P[10] P[9] P[8] P[7] P[6] P[5] P[4] P[3] P[2] P[1] P[0]
Where ~p represents the complement (opposite value) of p.
There are many simplifications in the bit array above that are not shown and are not obvious. The sequences of one complemented bit followed by noncomplemented bits are implementing a two's complement trick to avoid sign extension. The sequence of p7 (noncomplemented bit followed by all complemented bits) is because we're subtracting this term so they were all negated to start out with (and a 1 was added in the least significant position). For both types of sequences, the last bit is flipped and an implicit −1 should be added directly below the MSB. When the +1 from the two's complement negation for p7 in bit position 0 (LSB) and all the −1's in bit columns 7 through 14 (where each of the MSBs are located) are added together, they can be simplified to the single 1 that "magically" is floating out to the left. For an explanation and proof of why flipping the MSB saves us the sign extension, see a computer arithmetic book.
Floating-point numbers
A binary floating-point number contains a sign bit, significant bits (known as the significand) and exponent bits (for simplicity, we don't consider base and combination field). The sign bits of each operand are XOR'd to get the sign of the answer. Then, the two exponents are added to get the exponent of the result. Finally, multiplication of each operand's significand will return the significand of the result. However, if the result of the binary multiplication is higher than the total number of bits for a specific precision (e.g. 32, 64, 128), rounding is required and the exponent is changed appropriately.
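A deliberately simplified C sketch of these three steps for IEEE 754 single precision (an illustrative assumption; it handles only normal numbers, truncates instead of rounding, and ignores overflow, underflow, infinities and NaNs):

#include <stdint.h>
#include <string.h>

/* Simplified float32 multiply: XOR the signs, add the exponents,
   multiply the significands, then renormalize.  Normal inputs only. */
float fmul_sketch(float x, float y)
{
    uint32_t xb, yb;
    memcpy(&xb, &x, 4);
    memcpy(&yb, &y, 4);

    uint32_t sign = (xb ^ yb) & 0x80000000u;             /* sign of the result   */
    int32_t  exp  = (int32_t)((xb >> 23) & 0xFF)          /* add biased exponents */
                  + (int32_t)((yb >> 23) & 0xFF) - 127;   /* remove one bias      */

    uint64_t sx = (xb & 0x7FFFFFu) | 0x800000u;           /* implicit leading 1   */
    uint64_t sy = (yb & 0x7FFFFFu) | 0x800000u;
    uint64_t p  = sx * sy;                                /* 46..48-bit product   */

    if (p & (1ull << 47)) {       /* significand product in [2,4): extra shift */
        p >>= 24;
        exp += 1;
    } else {                      /* significand product in [1,2)              */
        p >>= 23;
    }

    uint32_t rb = sign | ((uint32_t)exp << 23) | ((uint32_t)p & 0x7FFFFFu);
    float r;
    memcpy(&r, &rb, 4);
    return r;
}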
Hardware implementation
The process of multiplication can be split into 3 steps:
generating partial product
reducing partial product
computing final product
Older multiplier architectures employed a shifter and accumulator to sum each partial product, often one partial product per cycle, trading off speed for die area. Modern multiplier architectures use the (Modified) Baugh–Wooley algorithm, Wallace trees, or Dadda multipliers to add the partial products together in a single cycle. The performance of the Wallace tree implementation is sometimes improved by modified Booth encoding one of the two multiplicands, which reduces the number of partial products that must be summed.
For speed, shift-and-add multipliers require a fast adder (something faster than ripple-carry). A "single cycle" multiplier (or "fast multiplier") is pure combinational logic. In a fast multiplier, the partial-product reduction process usually contributes the most to the delay, power, and area of the multiplier. For speed, the "reduce partial product" stages are typically implemented as a carry-save adder composed of compressors, and the "compute final product" step is implemented as a fast adder (something faster than ripple-carry). Many fast multipliers use full adders as compressors ("3:2 compressors") implemented in static CMOS. To achieve better performance in the same area, or the same performance in a smaller area, multiplier designs may use higher-order compressors such as 7:3 compressors; implement the compressors in faster logic (such as transmission gate logic, pass transistor logic, or domino logic); connect the compressors in a different pattern; or use some combination of these approaches.
Example circuits
See also
Booth's multiplication algorithm
Fused multiply–add
Wallace tree
BKM algorithm for complex logarithms and exponentials
Kochanski multiplication for modular multiplication
Logical shift left
References
External links
Multiplier Designs targeted at FPGAs
Binary Multiplier circuit using Half -Adders and digital gates.
Digital circuits
Arithmetic logic circuits
Binary arithmetic
Multiplication |
Chung Hua University (CHU) is a private university located in Xiangshan District, Hsinchu City, Taiwan. It was founded in 1990 as Chung Hua Polytechnic Institute by three local Hsinchu entrepreneurs, Ron-Chang Wang, Zau-Juang Tsai and Lin Junq-tzer, and was upgraded to university status and renamed "Chung Hua University" in 1997. There are six colleges with 25 departments offering undergraduate courses, as well as 16 master's programs and 3 Ph.D. programs. CHU is accredited by AACSB.
Organization
College of Engineering
Department of Electronics Engineering
Department of Civil Engineering and Engineering Informatics
Department of Mechanical Engineering
Department of Applied Mathematics
Department of Communications Engineering
Department of Microelectronics Engineering
Degree Program of Photonics and Materials Science
Institute of Engineering and Science
Institute of Environmental Resource and Energy in Science and Technology
Institute of Mechanical and Aerospace Engineering
College of Management
Department of Industrial Engineering and System Management
Department and Institute of Technology Management
Department and Institute of Business Administration
Department of Financial Management
Department of Transportation Technology and Logistics Management
Department of International Business
College of Architecture and Planning
Department of Architecture and Urban Planning
Department of Landscape Architecture
Department of Construction Engineering & Project Management
Institute of Construction Management
College of Humanities and Social Science
Department of Foreign Languages and Literature
Department and Institute of Public Administration
Centre of General Education
Centre of Teacher Education
College of Computer Science and Information
Department of Computer Science and Information Engineering
Department of Information Management
Department of Bioinformatics
Degree Program of Computer Science & Information
College of Tourism
Department of Hotel & Restaurant Management
Department of Leisure and Recreational Management
Degree Program of Tourism and MICE Management
Notable alumni
Hsu Ming-tsai, Mayor of Hsinchu City (2009–2014)
Hsu Yao-chang, Magistrate of Miaoli County
Lin Chih-chien, Mayor of Hsinchu City
Lu Chia-chen, member of Legislative Yuan (2008-2016)
Chang Ching-chung, legislator
Yen Kuan-heng, legislator
Chantel Liu, actor
Edison Lin, singer
See also
List of universities in Taiwan
References
External links
1990 establishments in Taiwan
Universities and colleges established in 1990
Universities and colleges in Hsinchu
Universities and colleges in Taiwan
Comprehensive universities in Taiwan |
St Mary's Christian Brothers' Grammar School (St Mary's CBGS) is a Roman Catholic boys' grammar school in Belfast, Northern Ireland.
History
The origins of the school can be traced to St Mary's School which was established in Divis Street by the Irish Christian Brothers in 1866. The Brothers had been invited by Patrick Dorrian, Bishop of Down and Connor, to educate the working class children of the area. In 1929, a new secondary school was built in the nearby Barrack Street. The students were largely drawn from the surrounding district but also began to attract some from across Belfast and wider afield. Due to the growing student population, it was decided in the 1960s to build a new school. This opened in a site off the Glen Road in 1968.
The Barrack Street campus remained in use until 1998 when all students were accommodated in the greatly extended school on the Glen Road. The original building on Barrack Street is now known as the Westcourt Centre and provides a range of educational and community services. Edmund Ignatius Rice who founded the Irish Christian Brothers was born in Westcourt, Callan, County Kilkenny. In 2012, the Barrack Street building was listed as a 'building of special architectural or historic interest' by the Department of the Environment.
The school was originally entirely run by the Irish Christian Brothers but in the late twentieth century their numbers declined and the school is now entirely staffed by lay teachers. It is now under the trusteeship of the Edmund Rice Schools Trust (NI).
List of Principals
Br. Magee
Br J.M. Murphy: c.1967-1970
Br. O’Neill (Stoneface): c.1973-1976
Br. D.M. McCrohan: c.1976-1986
Br. Larry Ennis: c.1986-1990
Br. Leo Kelly: c.1990-1992
Br. Denis Gleeson: c.1992-1996
Mr. Michael Crilly: 1996-97 (Acting)
Mr. Kevin Burke (An tUas. Caoimhín de Búrca): 1997-2008
Mr. Jim Sheerin: 2008-2014
Mr. John Martin: 2014-2018
Mrs. Siobhán Kelly: 2019–present
Facilities
The school is located on a large site on the lower slopes of the Black Mountain. Besides various teaching classrooms it also has computer suites, a technology suite; art studios, music suite, science laboratories, as well as a large lecture theatre, an assembly hall and canteen. For sports, there are fifteen acres of playing field, including a 3G pitch, and an athletics track. Indoors, there is a gymnasium and a swimming pool.
Academics
The school provides instruction in a broad range of academic subjects. At the advanced level, students are prepared for exams in Applied Business, Business Communication Systems, Biology, Chemistry, Mathematics, Further Mathematics, Physics, ICT, Computing, Art & Design, Geography, History, Religious Studies, Politics, English Literature, Drama, Irish, Music, Sports Studies, Media Studies, Home Economics, French, Spanish, and Travel and Leisure. St Mary's also offers a double award science option and a further maths option, for which pupils are chosen.
In 2018, 81% of its entrants achieved five or more GCSEs at grades A* to C, including the core subjects English and Maths.
79% of its students who sat the A-level exams in 2021/22 were awarded three A*-C grades. In addition, there was a 100 per cent pass rate at grades A* to C or equivalent for students who entered BTEC Extended Certificate in IT, Art and Design, Biology, Chemistry, Finance, French, Further Mathematics, Physics, Technology and Sport.
In 2022, the school decided to abandon academic selection for entry.
In 2022, the school produced a special video that described its academic and other activities. It featured a song called Hang on to Tomorrow that was written by the poet Francis O’Hare and performed by Mrs Claire Wright, Head of Religious Studies. Paul Lavery, Head of Drama, wrote and produced the video together with Mr David Guiney, music and drama teacher.
Sport
Gaelic Games
The school hurling team has won the Mageean Cup a total of 28 times, the most in the competition. It won the title five times in succession in the 1990s and again three times since 2010. St Mary's also completed an Ulster Colleges double in 2008, winning both the Mageean Cup and the MacLarnon Cup for the first time in the school's history after beating St Columb's (Derry) 1–7 to 0–8 in the final at Healy Park in Omagh on St Patrick's Day.
The school has also had sustained success in handball and Gaelic football.
Soccer
Since the lifting of the ban on school representation in soccer competitions in 2002, the school has become the most successful in Belfast. On St Patrick's Day 2006, at Lisburn Distillery's grounds, the Year 12s won the school's first ever soccer cup, the Belfast Cup, defeating Boys Model School. They followed up the next year with the school's first NI Cup in 2007 (Year 12) as well as the 2007 Belfast Cup (Year 11).
This success was followed in 2008, when the school won the Year 9 Belfast Cup and completed an historic double by lifting the Carnegie Schools Northern Ireland Cup (Year 13/14) and becoming the first school in 20 years to retain the Malcolm Brodie Northern Ireland Trophy (Year 12) with a victory over St Columb's, Derry. The winning tradition continued into the last year of the decade with wins in the NI Cup and Belfast Cup for the U14s, and the U15s winning the Belfast Cup.
Water polo
It is the only school in Ireland to have a clean sweep of All-Ireland titles at all age groups in consecutive years. A ninth Canada Cup in a row was won in April 2009 with several of the team continuing to represent Ireland at international tournaments.
Other sports
The school also competes in inter-schools competition in trampoline, athletics, golf, and basketball.
Clubs and Societies
Debating
The school runs debating societies in English, Irish and Spanish, and has sent delegates representing Ireland to both the European Youth Parliament and European Youth Commission.
The school has excelled in the European and Irish News inter-school quizzes, currently holding both trophies. The school debating team won the Northern Ireland Schools Debating Championship in 2008, defeating the team from Antrim Grammar School in the final at Stormont. This is the only time St Mary's has won the competition.
Arts
The school maintains an orchestra and a recording studio, stages theatrical and musical performances, as well as entering students in art competitions.
Awards
In 2023, Raymond Herron, a teacher at the school, won the Pastoral Development of the Year award at the finals of the National Awards for Pastoral Care in Education which was held in Worcester, England. The award was for his leadership of the school’s work in promoting restorative practices for conflict and dispute resolution.
Community activities
The school encourages students to participate in a range of community-oriented activities through the Eco Club, the Social Justice Advocacy Group and the St. Vincent de Paul Society. It also initiated Project Zambia, which is designed to involve students in providing support for marginalised communities in Zambia.
Notable alumni
See also: Past Pupils, St. Mary's CBGS, Edmund Rice Schools Trust
See also
List of secondary schools in Belfast
References
External links
St Mary's CBGS
Project Zambia
Boys' schools in Northern Ireland
Congregation of Christian Brothers secondary schools in Northern Ireland
Educational institutions established in 1866
Grammar schools in Belfast
Grammar schools in County Antrim
Catholic secondary schools in Northern Ireland
1866 establishments in Ireland |
Mark D. Pesce ( ; born 1962) is an American-Australian author, researcher, engineer, futurist and teacher.
Early life
Pesce was born in Everett, Massachusetts in 1962. In September 1980, he entered the Massachusetts Institute of Technology (MIT) to pursue a Bachelor of Science degree, but left in June 1982 to pursue opportunities in the newly emerging high-tech industry. He worked as an engineer for the next few years, developing prototype firmware and software for SecurID cards.
Career
In 1988, Pesce joined Shiva Corporation, which pioneered and popularized dial-up networking. Pesce's role in the company was to develop user interfaces, and his research extended into virtual reality.
In 1991, Pesce founded the Ono-Sendai Corporation, named after a fictitious company in the William Gibson novel Neuromancer. Ono-Sendai was a first-generation virtual reality startup, chartered to create inexpensive, home-based networked VR systems. The company developed a key technology, which earned Pesce his first patent for a "Sourceless Orientation Sensor" that tracks the motion of persons in virtual environments. Sega Corporation of America would use the technology in the design of the Sega VR, a consumer head-mounted display (HMD).
In 1993, Apple hired Pesce as a consulting engineer to develop interfaces between Apple and IBM networking products. In early 1994, while in San Francisco, Pesce, with software engineers Tony Parisi and Gavin Bell, spearheaded an effort to standardize 3D on the Web, and formed the VRML Architecture Group (VAG) under his leadership. The purpose of VRML was to allow for the creation of 3D environments within the World Wide Web, accessible through a web browser. Working in conjunction with such corporations as Microsoft, Netscape, Silicon Graphics, Sun Microsystems and Sony, Pesce convinced the industry to accept the new protocol as a standard for desktop virtual reality. This development spring-boarded Pesce into a career which has included extensive writings for both the popular and scientific press, teaching and lecturing at universities, conferences, performances, presentations, and film appearances.
Australia
In 2003, Pesce moved to Australia, where he continues to live, and became an Australian citizen on 4 February 2011. He is an Honorary Lecturer at the University of Sydney and was a judge on The New Inventors, a nationally televised program in Australia.
In 2006, Pesce founded FutureSt, a Sydney consultancy, serving as an advisor to analytics firm PeopleBrowsr and The Serval Project.
In 2008, Pesce began writing an online column for the Australian Broadcasting Corporation's The Drum Opinion.
More recently Pesce has been designing and coding Plexus, a Web 2.0 address book and social networking tool, and is writing his next book, The Next Billion Seconds. His current major project, however, is Light MooresCloud, an ambient device of 52 LEDs which is a lamp with a LAMP stack; the trademark pays homage to the inexpensive ubiquitous computing engendered by Moore's Law. Inspired by the GPIO of a borrowed Raspberry Pi, which he realized allowed web users anywhere on the planet to turn an LED on or off on his machine from their browsers, MooresCloud was brought from concept to prototype by a team in eight weeks. Highly configurable, the device has been touted as "illumination as a service".
From January 2004 through January 2006, Pesce was the senior lecturer in Emerging Media and Interactive Design at the Australian Film Television and Radio School (AFTRS) in Sydney, Australia. He now holds an Honorary Appointment at the University of Sydney and has shared some of his lectures online.
Other teaching
Pesce began his teaching career in 1996 as a VRML instructor at both the University of California at Santa Cruz and San Francisco State University, where he would later create the school's certificate program in the 3-D Arts. In 1998, Pesce was asked to join the faculty of the University of Southern California, as the founding chair of the Graduate Program in Interactive Media at the USC School of Cinema-Television.
Books
Mark Pesce: Augmented Reality: Unboxing Tech's next big thing. Polity Press, 2021.
Mark Pesce. The Next Billion Seconds. Blurb, 2012
Mark Pesce, Programming DirectShow and Digital Video. Seattle, Washington, Microsoft Press, May 2003.
Mark Pesce, The Playful World: How Technology Transforms our Imagination. New York, Ballantine Books (Random House), October 2000.
Mark Pesce, Learning VRML: Design for Cyberspace. Cambridge, Massachusetts: Ziff-Davis Publishing, 1997.
Mark Pesce, VRML: Flying through the Web. Indianapolis, Indiana: New Riders Publishing, 1996.
Mark Pesce, VRML: Browsing and Building Cyberspace. Indianapolis, Indiana: New Riders Publishing, 1995.
Introduction to Celia Pearce, The Interactive Book. Indianapolis, Indiana: Macmillan Technical Publishing, 1997.
Film projects
Man With a Movie Tube, short form video, January 2007
Unbomb, short form video, August 2003.
Body Hits (BBC 3), location producer, November 2002.
This Strange Eventful History, feature length video about Burning Man, August 2002
Becoming Transhuman, feature length video, inspired by Terence McKenna and others, August 2001
References
External links
Pesce's personal homepage
Pesce's professional homepage
Pesce's professional blog
1962 births
Living people
People from Everett, Massachusetts
Massachusetts Institute of Technology alumni
Computer graphics researchers
Computer science writers
Computer science educators
American technology writers
Virtual reality
University of Southern California faculty
Futurologists |
The LM317 is a popular adjustable positive linear voltage regulator. It was designed by Bob Dobkin in 1976 while he worked at National Semiconductor.
The LM337 is the negative complement to the LM317 and regulates negative voltages. It was designed by Bob Pease, who also worked for National Semiconductor.
Specifications
Without a heat sink, and with an ambient temperature of 50 °C such as on a hot summer day inside a box, a maximum power dissipation of (TJ-TA)/RθJA = (125-50)/80 ≈ 0.94 W can be permitted. (A piece of shiny aluminium sheet metal measuring 6 cm × 6 cm and 1.5 mm thick provides a thermal resistance low enough to permit about 4.7 W of heat dissipation.)
In constant-voltage operation with an input voltage VIN of 34 V and a desired output voltage of 5 V, the maximum output current is PMAX / (VIN-VO) = 0.94 / (34-5) ≈ 32 mA.
In constant-current operation with an input voltage VIN of 12 V and a load forward voltage drop of VF = 3.6 V, the maximum output current is PMAX / (VIN-VF) = 0.94 / (12-3.6) ≈ 112 mA.
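The worked example above can be expressed directly as a short C sketch (the function and parameter names are illustrative, not from any library):

/* Maximum continuous output current allowed by the thermal limit of a linear
   regulator without a heat sink:
   Pmax = (Tj_max - Ta) / Rth_ja,   Iout_max = Pmax / (Vin - Vout).          */
double linreg_iout_max(double tj_max_c, double ta_c, double rth_ja_c_per_w,
                       double vin_v, double vout_v)
{
    double p_max_w = (tj_max_c - ta_c) / rth_ja_c_per_w;
    return p_max_w / (vin_v - vout_v);
}

/* For the figures in the text, linreg_iout_max(125, 50, 80, 34, 5)
   returns roughly 0.032 A (32 mA).                                          */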
Operation
As linear regulators, the LM317 and LM337 are used in DC to DC converter applications.
Linear regulators inherently waste power; the power dissipated is the current passed multiplied by the voltage difference between input and output. A LM317 commonly requires a heat sink to prevent the operating temperature from rising too high. For large voltage differences, the power lost as heat can ultimately be greater than that provided to the circuit. This is the tradeoff for using linear regulators, which are a simple way to provide a stable voltage with few additional components. The alternative is to use a switching voltage regulator, which is usually more efficient, but has a larger footprint and requires a larger number of associated components.
In packages with a heat-dissipating mounting tab, such as TO-220, the tab is connected internally to the output pin which may make it necessary to electrically isolate the tab or the heat sink from other parts of the application circuit. Failure to do this may cause the circuit to short.
Voltage regulator
The LM317 has three pins: INput, OUTput, and ADJustment. Internally the device has a bandgap voltage reference which produces a stable reference voltage of Vref= 1.25 V followed by a feedback-stabilized amplifier with a relatively high output current capacity. How the adjustment pin is connected determines the output voltage as follows.
If the adjustment pin is connected to ground the output pin delivers a regulated voltage of 1.25 V at currents up to the maximum. Higher regulated voltages are obtained by connecting the adjustment pin to a resistive voltage divider between the output and ground. Then the output voltage is VOUT = Vref × (1 + R2/R1), where R1 is the resistor between the OUT and ADJ pins and R2 is the resistor between the ADJ pin and ground.
Vref is the difference in voltage between the OUT pin and the ADJ pin. Vref is typically 1.25 V during normal operation.
Because some quiescent current flows from the adjustment pin of the device, an error term is added: VOUT = Vref × (1 + R2/R1) + IADJ × R2.
To make the output more stable, the device is designed to keep the quiescent current at or below 100 µA, making it possible to ignore the error term in nearly all practical cases.
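As a concrete illustration (the resistor values and the typical adjustment current are assumptions chosen for the example, not taken from a specific datasheet), the divider equation can be evaluated directly:

#include <stdio.h>

/* Output voltage of an LM317 programmed by a divider, with R1 between the
   OUT and ADJ pins and R2 between the ADJ pin and ground:
   Vout = Vref * (1 + R2/R1) + Iadj * R2                                     */
double lm317_vout(double r1_ohm, double r2_ohm)
{
    const double vref = 1.25;    /* V, nominal reference voltage              */
    const double iadj = 50e-6;   /* A, assumed typical adjustment current     */
    return vref * (1.0 + r2_ohm / r1_ohm) + iadj * r2_ohm;
}

int main(void)
{
    /* Example: R1 = 240 ohm, R2 = 720 ohm gives a little over 5 V. */
    printf("Vout = %.2f V\n", lm317_vout(240.0, 720.0));
    return 0;
}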
Current regulator
The device can be configured to regulate the current to a load, rather than the voltage, by replacing the low-side resistor of the divider with the load itself. The output current is that resulting from dropping the reference voltage across the resistor R between the output and adjustment pins. Ideally, this is IOUT = VREF / R.
Accounting for the quiescent current, this becomes IOUT = VREF / R + IADJ.
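A corresponding sketch for the current-regulator configuration, with an illustrative 10 Ω programming resistor:

```python
# Constant-current configuration: the programming resistor R sits between the
# OUT and ADJ pins, and the load hangs from the ADJ pin to ground.

V_REF = 1.25    # volts, typical
I_ADJ = 50e-6   # amperes, assumed quiescent current

def lm317_iout(r: float, v_ref: float = V_REF, i_adj: float = I_ADJ) -> float:
    """Regulated output current for a programming resistor r (ohms)."""
    return v_ref / r + i_adj

print(f"Iout = {lm317_iout(10.0) * 1e3:.1f} mA")  # ~125 mA with a 10 ohm resistor
```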
The LM317 can also be used to build many other circuits, such as a 0 V to 30 V regulator, an adjustable regulator with improved ripple rejection, a precision current limiter, a tracking pre-regulator, a 1.25 V to 20 V regulator with minimum program current, adjustable multiple on-card regulators with a single control, battery chargers, a 50 mA constant-current battery charger, a slow-turn-on 15 V regulator, an AC voltage regulator, a current-limited 6 V charger, an adjustable 4 V regulator, and high-current adjustable regulators.
Compared to 78xx/79xx
The LM317 is an adjustable analogue to the popular 78xx fixed regulators. Like the LM317, each of the 78xx regulators is designed to adjust the output voltage until it is some fixed voltage above the adjustment pin (which in this case is labelled "ground").
The mechanism used is similar enough that a voltage divider can be used in the same way as with the LM317 and the output follows the same formula, using the regulator's fixed voltage for Vref (e.g. 5 V for 7805). However, the 78xx device's quiescent current is substantially higher and less stable. Because of this, the error term in the formula cannot be ignored and the value of the low-side resistor becomes more critical. More stable adjustments can be made by providing a reference voltage that is less sensitive than a resistive divider to current fluctuations, such as a diode drop or a voltage buffer. The LM317 is designed to compensate for these fluctuations internally, making such measures unnecessary.
The LM337 relates in the same way to the fixed 79xx regulators.
Second sources from the Eastern Bloc
The LM317 has an East European equivalent, the B3170V, which was manufactured in the German Democratic Republic (East Germany) by HFO (part of Kombinat Mikroelektronik Erfurt).
ICs designated K142EN12A and KR142EN12A were also manufactured in the USSR and became widely used there; they are functional analogues of the LM317.
See also
Bandgap voltage reference
Brokaw bandgap reference
List of LM-series integrated circuits
References
External links
LM317 Circuit Schematics and Pinouts
Online calculator to pick resistors for LM317 circuit
Band-Gap
The Design of Band-Gap Reference Circuits: Trials and Tribulations – Robert Pease, National Semiconductor (shows LM317 design in Figure 4: LM117)
LM317 Bandgap Voltage Reference Example (ECE 327) – Brief explanation of the temperature-independent bandgap reference circuit within the LM317.
Datasheets / Databooks
Voltage Regulator Databook (Historical 1980), National Semiconductor
LM317 (positive), LM350 (3 Amp), Texas Instruments (TI acquired National Semiconductor)
LM317 (positive), LM350 (3 Amp), ON Semiconductor
LM317 (positive), STMicroelectronics
LM337 (negative), Texas Instruments
Linear integrated circuits
Voltage regulation
The lifting scheme is a technique for both designing wavelets and performing the discrete wavelet transform (DWT). In an implementation, it is often worthwhile to merge these steps and design the wavelet filters while performing the wavelet transform. This is then called the second-generation wavelet transform. The technique was introduced by Wim Sweldens.
The lifting scheme factorizes any discrete wavelet transform with finite filters into a series of elementary convolution operators, so-called lifting steps, which reduces the number of arithmetic operations by nearly a factor of two. Treatment of signal boundaries is also simplified.
The discrete wavelet transform applies several filters separately to the same signal. In contrast to that, for the lifting scheme, the signal is divided like a zipper. Then a series of convolution–accumulate operations across the divided signals is applied.
Basics
In its simplest form, a forward wavelet transform expressed in the lifting scheme consists of a predict step and an update step, each of which can be considered in isolation. The predict step calculates the wavelet function in the wavelet transform; this is a high-pass filter. The update step calculates the scaling function, which results in a smoother version of the data.
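For illustration, the unnormalized Haar transform can be written as a single predict/update pair; this minimal Python sketch is only an example, not the general scheme:

```python
# Unnormalized Haar wavelet transform written as lifting steps:
# split -> predict (difference / high-pass) -> update (average / low-pass).

def haar_lifting_forward(x):
    """Forward transform; x must have even length. Returns (approx, detail)."""
    even = x[0::2]
    odd = x[1::2]
    detail = [o - e for o, e in zip(odd, even)]         # predict: d = odd - even
    approx = [e + d / 2 for e, d in zip(even, detail)]  # update:  s = even + d/2
    return approx, detail

def haar_lifting_inverse(approx, detail):
    """Inverse transform: undo the steps in reverse order with opposite signs."""
    even = [s - d / 2 for s, d in zip(approx, detail)]
    odd = [d + e for d, e in zip(detail, even)]
    x = [None] * (2 * len(even))
    x[0::2], x[1::2] = even, odd
    return x

signal = [2.0, 4.0, 6.0, 8.0, 5.0, 1.0, 0.0, 2.0]
s, d = haar_lifting_forward(signal)
assert haar_lifting_inverse(s, d) == signal  # perfect reconstruction
```

Running the forward transform and then the inverse reproduces the input exactly, which is the perfect-reconstruction property discussed below.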
As mentioned above, the lifting scheme is an alternative technique for performing the DWT using biorthogonal wavelets. In order to perform the DWT using the lifting scheme, the corresponding lifting and scaling steps must be derived from the biorthogonal wavelets. The analysis filters of the particular wavelet are first written in a polyphase matrix.
The polyphase matrix is a 2 × 2 matrix containing the analysis low-pass and high-pass filters, each split up into their even and odd polynomial coefficients and normalized. From here the matrix is factored into a series of 2 × 2 upper- and lower-triangular matrices, each with diagonal entries equal to 1. The upper-triangular matrices contain the coefficients for the predict steps, and the lower-triangular matrices contain the coefficients for the update steps. A matrix consisting of all zeros with the exception of the diagonal values may be extracted to derive the scaling-step coefficients. The polyphase matrix is thus factored into a product of these triangular lifting matrices together with a diagonal scaling matrix, each triangular factor carrying one predict or one update coefficient.
A more complicated extraction can have multiple predict and update steps as well as scaling steps, with one coefficient for each predict step, one coefficient for each update step, an odd-sample scaling coefficient, and an even-sample scaling coefficient.
According to matrix theory, any matrix having polynomial entries and a determinant of 1 can be factored as described above. Therefore, every wavelet transform with finite filters can be decomposed into a series of lifting and scaling steps. Daubechies and Sweldens discuss lifting-step extraction in further detail.
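As a minimal worked instance of this factorization, consider the unnormalized Haar filters, assuming the polyphase matrix acts on the column of (odd, even) samples with its high-pass row written first; the rightmost factor is applied first, and no separate scaling step is needed because the determinant is already 1:

```latex
\underbrace{\begin{pmatrix} 1 & -1 \\ \tfrac{1}{2} & \tfrac{1}{2} \end{pmatrix}}_{P(z)\ \text{(Haar analysis)}}
=
\underbrace{\begin{pmatrix} 1 & 0 \\ \tfrac{1}{2} & 1 \end{pmatrix}}_{\text{update step}}
\;
\underbrace{\begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix}}_{\text{predict step}}
```

With other orderings of the rows or of the even and odd samples, the same two steps appear with the triangular roles exchanged, which is why conventions differ between references.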
CDF 9/7 filter
To perform the CDF 9/7 transform, a total of four lifting steps are required: two predict and two update steps.
The lifting factorization turns these into a sequence of in-place filtering operations on the even and odd samples, followed by a final scaling step.
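A sketch of these four steps in Python is given below, using the lifting coefficients commonly quoted in the literature for the CDF 9/7 transform (α ≈ -1.586, β ≈ -0.053, γ ≈ 0.883, δ ≈ 0.444, plus a scaling constant K ≈ 1.150); boundary handling and the assignment of K to the two bands vary between references, so this is an illustration rather than a reference implementation:

```python
# Sketch of the four CDF 9/7 lifting steps (two predict, two update) plus
# the final scaling, with coefficients commonly quoted in the literature.
# Boundaries use a simple symmetric extension; illustrative only.

ALPHA = -1.586134342
BETA  = -0.052980118
GAMMA =  0.882911076
DELTA =  0.443506852
KAPPA =  1.149604398   # scaling constant

def cdf97_forward(x):
    """One level of the CDF 9/7 forward transform on an even-length list."""
    s = x[0::2]   # even samples -> will become the low-pass band
    d = x[1::2]   # odd samples  -> will become the high-pass band
    n = len(d)

    def at(seq, i):   # whole-sample symmetric extension at the boundaries
        if i < 0:
            i = -i
        if i >= len(seq):
            i = 2 * len(seq) - 2 - i
        return seq[i]

    d = [d[i] + ALPHA * (at(s, i) + at(s, i + 1)) for i in range(n)]       # predict 1
    s = [s[i] + BETA  * (at(d, i - 1) + at(d, i)) for i in range(len(s))]  # update 1
    d = [d[i] + GAMMA * (at(s, i) + at(s, i + 1)) for i in range(n)]       # predict 2
    s = [s[i] + DELTA * (at(d, i - 1) + at(d, i)) for i in range(len(s))]  # update 2
    s = [v * KAPPA for v in s]   # scale the low-pass band
    d = [v / KAPPA for v in d]   # scale the high-pass band (some references swap K and 1/K)
    return s, d
```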
Properties
Perfect reconstruction
Every transform by the lifting scheme can be inverted. Every perfect-reconstruction filter bank can be decomposed into lifting steps by the Euclidean algorithm; that is, "lifting-decomposable filter bank" and "perfect-reconstruction filter bank" denote the same thing. Every two perfect-reconstruction filter banks can be transformed into each other by a sequence of lifting steps. Put differently, if A and B are polyphase matrices with the same determinant, then the lifting sequence from A to B is the same as the one from the lazy polyphase matrix (the identity) to B·A⁻¹.
Speedup
Speedup is by a factor of two. This is only possible because lifting is restricted to perfect-reconstruction filter banks. That is, lifting somehow squeezes out redundancies caused by perfect reconstruction.
The transformation can be performed immediately in the memory of the input data (in place, in situ) with only constant memory overhead.
Non-linearities
The convolution operations can be replaced by any other operation. For perfect reconstruction only the invertibility of the addition operation is relevant. This way rounding errors in convolution can be tolerated and bit-exact reconstruction is possible. However, the numeric stability may be reduced by the non-linearities. This must be respected if the transformed signal is processed further, as in lossy compression. Although every reconstructable filter bank can be expressed in terms of lifting steps, a general description of the lifting steps is not obvious from a description of a wavelet family. However, for simple cases of the Cohen–Daubechies–Feauveau wavelets, for instance, there is an explicit formula for their lifting steps.
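As an illustration of such a non-linear but invertible lifting step, an integer-to-integer Haar lifting with truncating (rounded) arithmetic can be sketched; this is a minimal example, not a description of any particular codec:

```python
# Integer-to-integer Haar lifting (often called the "S transform"): the update
# uses a truncating shift, a non-linear operation, yet the transform remains
# exactly invertible because each step is undone with the opposite sign.

def s_transform_forward(x):
    even, odd = x[0::2], x[1::2]
    d = [o - e for o, e in zip(odd, even)]         # predict (integer difference)
    s = [e + (di >> 1) for e, di in zip(even, d)]  # update with truncating shift
    return s, d

def s_transform_inverse(s, d):
    even = [si - (di >> 1) for si, di in zip(s, d)]
    odd = [di + e for di, e in zip(d, even)]
    out = [None] * (2 * len(even))
    out[0::2], out[1::2] = even, odd
    return out

data = [7, 3, 10, 12, 255, 0, 4, 4]
assert s_transform_inverse(*s_transform_forward(data)) == data  # bit-exact
```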
Increasing vanishing moments, stability, and regularity
A lifting modifies biorthogonal filters in order to increase the number of vanishing moments of the resulting biorthogonal wavelets, and hopefully their stability and regularity. Increasing the number of vanishing moments decreases the amplitude of wavelet coefficients in regions where the signal is regular, which produces a more sparse representation. However, increasing the number of vanishing moments with a lifting also increases the wavelet support, which is an adverse effect that increases the number of large coefficients produced by isolated singularities. Each lifting step maintains the filter biorthogonality but provides no control on the Riesz bounds and thus on the stability of the resulting wavelet biorthogonal basis. When a basis is orthogonal then the dual basis is equal to the original basis. Having a dual basis that is similar to the original basis is, therefore, an indication of stability. As a result, stability is generally improved when dual wavelets have as many vanishing moments as the original wavelets and a support of similar size. This is why a lifting procedure also increases the number of vanishing moments of dual wavelets. It can also improve the regularity of the dual wavelet. A lifting design is computed by adjusting the number of vanishing moments. The stability and regularity of the resulting biorthogonal wavelets are measured a posteriori, hoping for the best. This is the main weakness of this wavelet design procedure.
Generalized lifting
The generalized lifting scheme was developed by Joel Solé and Philippe Salembier and published in Solé's PhD dissertation. It is based on the classical lifting scheme and generalizes it by breaking out a restriction hidden in the scheme structure. The classical lifting scheme has three kinds of operations:
A lazy wavelet transform splits the signal into two new signals: the odd-samples signal and the even-samples signal.
A prediction step computes a prediction for the odd samples, based on the even samples (or vice versa). This prediction is subtracted from the odd samples, creating an error (detail) signal.
An update step recalibrates the low-frequency branch with some of the energy removed during subsampling. In the case of classical lifting, this is used in order to "prepare" the signal for the next prediction step. It uses the predicted odd samples to prepare the even ones (or vice versa). This update is subtracted from the even samples, producing the approximation signal.
The scheme is invertible due to its structure. In the receiver, the update step is computed first and its result is added back to the even samples; then it is possible to compute exactly the same prediction to add to the odd samples. In order to recover the original signal, the lazy wavelet transform has to be inverted. The generalized lifting scheme has the same three kinds of operations. However, it avoids the addition/subtraction restriction imposed by classical lifting, which has some consequences. For example, the design of all steps must guarantee the invertibility of the scheme, which is no longer automatic once that restriction is dropped.
Definition
The generalized lifting scheme is a dyadic transform that follows these rules:
Deinterleaves the input into a stream of even-numbered samples and another stream of odd-numbered samples. This is sometimes referred to as a lazy wavelet transform.
Computes a prediction mapping. This step tries to predict the odd samples taking into account the even ones (or vice versa); it is a mapping from the space of the even samples to the space of the odd samples. The even samples chosen as the reference for the prediction are called the context.
Computes an update mapping. This step tries to update the even samples taking into account the predicted odd samples. It is a kind of preparation for the next prediction step, if any.
These mappings cannot be arbitrary functions. In order to guarantee the invertibility of the scheme itself, all mappings involved in the transform must be invertible. For mappings between finite sets (discrete, bounded-value signals), this condition is equivalent to saying that the mappings are injective (one-to-one). Moreover, a mapping from one set to a set of the same cardinality must then be bijective.
In the generalized lifting scheme the addition/subtraction restriction is avoided by including this step in the mapping. In this way the classical lifting scheme is generalized.
Design
Some designs have been developed for the prediction-step mapping. The update-step design has not been considered as thoroughly, because it remains to be answered how exactly the update step is useful. The main application of this technique is image compression.
Applications
Wavelet transforms that map integers to integers
Fourier transform with bit-exact reconstruction
Construction of wavelets with a required number of smoothness factors and vanishing moments
Construction of wavelets matched to a given pattern
Implementation of the discrete wavelet transform in JPEG 2000
Data-driven transforms, e.g., edge-avoiding wavelets
Wavelet transforms on non-separable lattices, e.g., red-black wavelets on the quincunx lattice
See also
The Feistel scheme in cryptology uses much the same idea of dividing data and alternating function application with addition. In both the Feistel scheme and the lifting scheme, this is used for symmetric encoding and decoding.
References
External links
Lifting Scheme – brief description of the factoring algorithm
Introduction to The Lifting Scheme
The Fast Lifting Wavelet Transform
Digital signal processing
Matrix decompositions
Wavelets
Avnet, Inc. is a distributor of electronic components headquartered in Phoenix, Arizona, named after Charles Avnet, who founded the company in 1921. After its start on Manhattan's Radio Row, the company became incorporated in 1955 and began trading on the New York Stock Exchange in 1961. On May 8, 2018, Avnet changed stock markets to Nasdaq, trading under the same ticker AVT.
History
1920–1930
In 1921, Charles Avnet, a 33-year-old Russian-Jewish immigrant, began buying surplus radio parts and selling them to the public on the Radio Rows of United States port cities. As radio manufacturing grew, parts distribution took off. In the mid-1920s, when factory-made radios began to replace radio parts, he adjusted his distribution pipeline and began selling parts to manufacturers and dealers. Around the same time, Avnet diversified by branching out into car radio kits and automobile assembly kits. During the Great Depression, he shifted the focus from retailing to wholesaling.
1930–1940
During World War II, Avnet made antennas for the U.S. armed forces. Charles's son, Lester, joined the business at that time. After the war was over, Avnet focused on buying and selling surplus electronic and electrical parts.
1950–1960
In 1955, Avnet Electronic Supply Company was incorporated with a primary business of selling capacitors, fasteners and switches. In 1956, the corporation opened a second connector assembly plant near Los Angeles specifically for the aircraft industry. Three years after it incorporated, the company changed its name to Avnet Electronics Corporation, and went public on the American Stock Exchange the following year.
1960–1970
In 1960, Avnet made its first acquisition, British Industries Corp. (BIC), an audio equipment company. With this acquisition, it began selling die casting machines, guitars, and television antennas, and earned a spot trading on the New York Stock Exchange. In the mid-1960s, the company briefly owned several record labels including Liberty Records and Blue Note. Avnet acquired guitar manufacturer Guild Musical Instruments in 1965; in that year a Guild Starfire 12 guitar was presented to Beatles legends John Lennon and George Harrison.
Over the course of the decade (from 1960 to 1970), Avnet expanded with several further acquisitions.
These acquisitions expanded the company into additional fields of semiconductors, relays, and potentiometers. In 1964, the company renamed itself again as Avnet, Incorporated. Founder Charles Avnet died that same year, and his son Lester became president and chairman.
1970–1980
In 1973, Avnet became Intel Corp.'s first distributor, solidifying Avnet's place in the computer business. Together, Avnet and supplier Intel began selling computer peripherals, complete systems, and software. In 1979, Avnet hit $1 billion in revenue for the first time.
In 1979, Lester Avnet died and was succeeded by Simon Sheib as chief executive officer. Sheib, along with Anthony Hamilton, president of the Hamilton Electro Corporation acquisition, combined the two companies to form Hamilton/Avnet, which eventually became Avnet Electronic Marketing Group, led by Hamilton. The company shifted its strategy during this time to focus on sales, warehouse and stocking facilities, product development, and expanding markets, and became the first distributor of semiconductors, integrated circuits, and microprocessors.
1980–2000
In 1998, Roy Vallee became CEO and chairman, the same year that the company relocated its corporate headquarters to Phoenix (from Great Neck, NY).
2010–present
In July 2010, the company purchased Bell Microproducts for $631 million. In July 2011, CEO Roy Vallee retired, and then-COO Richard Hamada was appointed CEO.
In March 2012, Avnet, Inc. acquired Ascendant Technologies, and in October 2012 it acquired BrightStar Partners, Inc. and BSP Software LLC. These acquisitions expanded Avnet Technology Solutions.
In September 2016, Tech Data announced that it had entered into an agreement to acquire the Technology Solutions operating group from Avnet, Inc. in a stock and cash transaction valued at approximately US$2.6 billion. Under the terms of the agreement, Avnet received at closing approximately $2.4 billion in cash and 2.785 million shares of Tech Data common stock, representing an approximate 7 percent ownership position in Tech Data. In October 2016, the company purchased British components distributor Premier Farnell for £691M.
In March 2019, Avnet announced that it was working with blockchain payment provider BitPay to accept cryptocurrency as payment for products and services. Avnet stated that it had already closed "several multi-million dollar cryptocurrency transactions" in the first month of accepting Bitcoin.
Leadership history
Chief Executive Officers since the 1970s for Avnet have included:
Tony Hamilton
Leon Machiz
Roy Vallee
Rick Hamada
William Amelio
Phil Gallagher
Notes
External links
Companies formerly listed on the New York Stock Exchange
Companies listed on the Nasdaq
Electronic component distributors
American companies established in 1921
Business services companies established in 1921
Companies based in Phoenix, Arizona
Distribution companies of the United States
1921 establishments in New York City
1960s initial public offerings
Frederick Jelinek (18 November 1932 – 14 September 2010) was a Czech-American researcher in information theory, automatic speech recognition, and natural language processing. He is well known for his oft-quoted statement, "Every time I fire a linguist, the performance of the speech recognizer goes up".
Jelinek was born in Czechoslovakia before World War II and emigrated with his family to the United States in the early years of the communist regime. He studied engineering at the Massachusetts Institute of Technology and taught for 10 years at Cornell University before accepting a job at IBM Research. In 1961, he married Czech screenwriter Milena Jelinek. At IBM, his team advanced approaches to computer speech recognition and machine translation. After IBM, he went to head the Center for Language and Speech Processing at Johns Hopkins University for 17 years, where he was still working on the day he died.
Personal life
Jelinek was born on November 18, 1932, as Bedřich Jelínek in Kladno to Vilém and Trude Jelínek. His father was Jewish; his mother was born in Switzerland to Czech Catholic parents and had converted to Judaism. Jelínek senior, a dentist, had planned early to escape Nazi occupation and flee to England; he arranged for a passport, visa, and the shipping of his dentistry materials. The couple planned to send their son to an English private school. However, Vilém decided to stay at the last minute and was eventually sent to the Theresienstadt concentration camp, where he died in 1945. The family was forced to move to Prague in 1941, but Frederick, his sister, and his mother (thanks to the latter's background) escaped the concentration camps.
After the war, Jelinek entered the gymnasium, despite having missed several years of schooling because education of Jewish children had been forbidden since 1942. His mother, anxious that her son should get a good education, made great efforts toward their emigration, especially when it became clear he would not be allowed even to attempt the graduation examination. His mother hoped her son would become a physician, but Jelinek dreamed of being a lawyer. He studied engineering in evening classes at the City College of New York and received stipends from the National Committee for a Free Europe that allowed him to study at the Massachusetts Institute of Technology. About his choice of specialty, he said: "Fortunately, to electrical engineering there belonged a discipline whose aim was not the construction of physical systems: the theory of information". He obtained his Ph.D. in 1962, with Robert Fano as his adviser.
In 1957, Jelinek paid an unexpected visit to Prague. He had been in Vienna and applied for a visa, hoping to see his former acquaintances again. He met with his old friend Miloš Forman, who introduced him to film student Milena Tobolová, whose screenplay had been the basis for the movie Easy Life (Snadný život). His flight back to the U.S. had a stopover in Munich, during which he called her to propose. Tobolová was considered a dissident and the authorities were not happy with her film. Jelinek asked for help from Jerome Wiesner and Cyrus Eaton, the latter of whom lobbied Nikita Khrushchev. Following the inauguration of John F. Kennedy, a group of Czech dissidents were allowed to emigrate in January 1961. Thanks to the lobbying, the future Milena Jelinek was one of them.
After completing his graduate studies, Jelinek, who had developed an interest in linguistics, had plans to work with Charles F. Hockett at Cornell University. However, these fell through, and during the next ten years he continued to study information theory. Having previously worked at IBM during a sabbatical, he began full-time work there in 1972, at first on leave from Cornell, and permanently from 1974. He remained there for over twenty years. Although at first he had been offered a regular research job, upon his arrival he learned that Josef Raviv had recently been promoted to head of the newly opened IBM Haifa Research Laboratory, and Jelinek instead became head of the Continuous Speech Recognition group at the Thomas J. Watson Research Center. Despite his team's successes in this area, Jelinek's work remained little known in his home country because Czech scientists were not allowed to participate in key conferences.
After the 1989 fall of communism, Jelinek helped establish scientific relationships, regularly visiting to lecture and helping to persuade IBM to establish a computing centre at Charles University. In 1993, he retired from IBM and went to Johns Hopkins University's Center for Language and Speech Processing, where he was director and Julian Sinclair Smith Professor of Electrical and Computer Engineering. He was still working there at the time of his death; Jelinek died of a heart attack at the close of an otherwise normal workday in mid-September 2010. He was survived by his wife, daughter and son, sister, stepsister, and three grandchildren, including Sophie Gold Jelinek.
Research and legacy
Information theory was a fashionable scientific approach in the mid '50s. However, pioneer Claude Shannon wrote in 1956 that this trendiness was dangerous. He said, "Our fellow scientists in many different fields, attracted by the fanfare and by the new avenues opened to scientific analysis, are using these ideas in their own problems ... It will be all too easy for our somewhat artificial prosperity to collapse overnight when it is realized that the use of a few exciting words like information, entropy, redundancy, do not solve all our problems." During the next decade, a combination of factors shut down the application of information theory to natural language processing (NLP) problems, in particular machine translation. One factor was the 1957 publication of Noam Chomsky's Syntactic Structures, which stated, "probabilistic models give no insight into the basic problems of syntactic structure". This accorded well with the philosophy of the artificial intelligence research of the time, which promoted rule-based approaches. The other factor was the 1966 ALPAC report, which recommended that the government should stop funding research into machine translation. ALPAC chairman John Pierce later said that the field was filled with "mad inventors or untrustworthy engineers". He said that the underlying linguistic problems must be solved before attempts at NLP could be reasonably made. These elements essentially halted research in the field.
Jelinek had begun to develop an interest in linguistics after the immigration of his wife, who initially enrolled in the MIT linguistics program with the help of Roman Jakobson. Jelinek often accompanied her to Chomsky's lectures, and even discussed the possibility of changing orientation with his adviser. Fano was "really upset", and after the failure of his project with Hockett at Cornell, he did not return to this field of research until starting work at IBM. The scope of research at IBM was considerably different from that of most other teams. According to Mark Liberman, "While [Jelinek] was leading IBM's effort to solve the general dictation problem during the decade or so following 1972, most other U.S. companies and academic researchers were working on very limited problems ... or were staying out of the field entirely".
Jelinek regarded speech recognition as an information theory problem, the noisy channel in this case carrying the acoustic signal, which some observers considered a daring approach. The concept of perplexity was introduced in their first model, New Raleigh Grammar, which was published in 1976 as the paper "Continuous Speech Recognition by Statistical Methods" in the journal Proceedings of the IEEE. According to Young, the basic noisy channel approach "reduced the speech recognition problem to one of producing two statistical models". Whereas New Raleigh Grammar was a hidden Markov model, their next model, called Tangora, was broader and involved n-grams, specifically trigrams. Even though "it was obvious to everyone that this model was hopelessly impoverished", it was not improved upon until Jelinek presented another paper in 1999. The same trigram approach was applied to phones in single words. Although the identification of parts of speech turned out not to be very useful for speech recognition, tagging methods developed during these projects are now used in various NLP applications.
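As a purely illustrative aside, perplexity can be computed as the exponential of the average negative log-probability a model assigns to test data; the numbers in this sketch are invented and unrelated to Jelinek's actual models:

```python
import math

# Perplexity of a toy language model over a test sequence: the exponential of
# the average negative log-likelihood per word.  Probabilities are invented.

def perplexity(word_probs):
    """word_probs: probability the model assigned to each word of the test text."""
    n = len(word_probs)
    avg_neg_log = -sum(math.log(p) for p in word_probs) / n
    return math.exp(avg_neg_log)

# Suppose a model assigned these probabilities to the words of a 5-word sentence.
probs = [0.2, 0.1, 0.25, 0.05, 0.2]
print(f"perplexity = {perplexity(probs):.1f}")   # ~7.2
```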
The incremental research techniques developed at IBM eventually became dominant in the field after DARPA, in the mid-80s, returned to NLP research and imposed that methodology on participating teams, along with shared common goals, data, and precise evaluation metrics. The Continuous Speech Recognition Group's research, which required large amounts of data to train the algorithms, eventually led to the creation of the Linguistic Data Consortium. In the 1980s, although the broader problem of speech recognition remained unsolved, they sought to apply the methods developed to other problems; machine translation and stock value prediction were both seen as options. A group of IBM researchers went on to work for Renaissance Technologies. Jelinek wrote, "The performance of the Renaissance fund is legendary, but I have no idea whether any methods we pioneered at IBM have ever been used. My former colleagues will not tell me: theirs is a very hush-hush operation!" Methods very similar to those developed for achieving speech recognition are at the base of most machine translation systems in use today. Observers have said that Pierce's paradigm, according to which engineering achievements in this area would be built on scientific progress, has been inverted, with the achievements in engineering being at the base of a number of scientific findings.
Jelinek's works won "best paper" awards on several occasions, and he received a number of company awards while he worked at IBM. He received the Society Award for "outstanding technical contributions and leadership" from the IEEE Signal Processing Society for 1997, and the ESCA Medal for Scientific Achievement in 1999. He was a recipient of an IEEE Third Millennium Medal in 2000, the European Language Resources Association's first Antonio Zampolli Prize in 2004, the 2005 James L. Flanagan Speech and Audio Processing Award, and the 2009 Lifetime Achievement Award from the Association for Computational Linguistics. He received an honoris causa Ph.D. from Charles University in 2001, was elected to the National Academy of Engineering in 2006 and was made one of twelve inaugural fellows of the International Speech Communication Association in 2008.
Selected publications
Jelinek, Frederick (1968). Probabilistic Information Theory: Discrete and memoryless models. McGraw-Hill series in systems science. New York: McGraw-Hill. 689p. (review)
———————- (1969). "Fast sequential decoding algorithm using a stack". IBM Journal of Research and Development 13(6):675–685. .
———————- (1969). "Tree encoding of memoryless time-discrete sources with a fidelity criterion". IEEE Transactions on Information Theory 15(5):584–590. . (received 1971 "Best Paper" award)
Bahl, Lalit R.; John Cocke, Frederick Jelinek, Josef Raviv (1974). "Optimal decoding of linear codes for minimizing symbol error rate". IEEE Transactions on Information Theory 20(2):284–287. . (received Information Theory Society Golden Jubilee paper award)
———————- (1976). "Continuous speech recognition by statistical methods". Proceedings of the IEEE 64(4):532–556. .
Brown, P.; J. Cocke, S. Della Pietra, V. Della Pietra, F. Jelinek, R, Mercer and P. Roossin (1988). "A statistical approach to language translation" . In Dénes Vargha, ed. Coling 88: Proceedings of the 12th conference on Computational linguistics, volume 1. Budapest: John Von Neumann society for computing sciences. pp. 71–76. . .
———————- (1990). "Self-Organized Language Modeling for Speech Recognition". In Alex Waibel & Kai-Fu Lee, eds. Readings in speech recognition. San Mateo: Morgan Kaufmann. 629p. .
———————-; John D. Lafferty and Robert L. Mercer. (1990) "Basic methods of probabilistic context free grammars". Technical Report RC 16374 (72684), IBM.
Reprinted in Laface, Pietro; Renato De Mori (1992). Speech Recognition and Understanding: Recent advances, trends, and applications. NATO ASI series. Series F, Computer and systems sciences, 75. New York: Springer-Verlag. pp. 345–360. .
———————- (1997). Statistical Methods for Speech Recognition. Cambridge, Mass.: MIT Press. 283p. . (review) (review 2)
Chelba, Ciprian; Frederick Jelinek (2000). "Structured Language Modeling". Computer Speech & Language 14(4):283–332. (received 2002 "Best Paper" award).
Expanded version of a presentation at NLDB'99. Klagenfurt, Austria, June 17–19, 1999 ().
Xu, Peng; Ahmad Emami and Frederick Jelinek (2003). "Training Connectionist Models for the Structured Language Model". In Michael Collins and Mark Steedman, eds. EMNLP '03 Proceedings of the 2003 conference on Empirical methods in natural language processing. East Stroudsburg, Penn.: Association for Computational Linguistics. pp. 160–167. . . (won "best paper" award)
References
Notes
References
External links
Institutional page at Johns Hopkins university
1932 births
2010 deaths
American people of Bohemian descent
American people of Czech-Jewish descent
Statistical natural language processing
Czechoslovak emigrants to the United States
Cornell University faculty
Harvard University faculty
Johns Hopkins University faculty
MIT School of Engineering alumni
IBM Research computer scientists
IBM employees
Members of the United States National Academy of Engineering
People from Kladno
Speech processing researchers
Natural language processing researchers
Queen Elizabeth School, or QES and QE for short, is a school in Hong Kong. The school was the first English as a Medium of Instruction (EMI) (Anglo-Chinese) co-educational secondary school founded by the Government of Hong Kong. It is located on a mound at the boundary of Sai Yee Street and Prince Edward Road West in Mong Kok, Kowloon, adjacent to Grand Century Place, CCC Heep Woh Primary School and HK Weaving Mills Association Primary School.
The school was conceived in 1953, when Queen Elizabeth II was crowned. It began operating in September 1954, initially holding classes after school hours on the premises of King's College, until October 1955, when it moved to its present location in Mong Kok.
The QES school camp in Tsam Chuk Wan, Sai Kung District, the New Territories was opened in 1962. With the transfer of sovereignty over Hong Kong to China in 1997, the original school badge with a crown was changed to a new one with the logo of the Education Department. Later, the Education Department was replaced by the Education and Manpower Bureau (EMB) (now Education Bureau, EDB), and the school badge was changed again.
The school's new annex was opened in September 2004.
School camp
Having established its campsite in Tsam Chuk Wan, Sai Kung District, in the New Territories in 1962, Queen Elizabeth School is the only school in Hong Kong that owns its own school camp.
Camp Warden Association
The Camp Warden Association (CWA) is responsible for organizing camping activities for students and for managing the condition of the campsite.
Training Course (TC): A year-long program offered every year to S.3 students who wish to join CWA.
Wardens in Training (Wits): Students in TC are called Wits.
Wardens: Students who passed the course and are recognized as qualified members of the CWA.
CWA is composed of three major boards.
Administration Board: Responsible for promotion and external communication with alumni and school officials.
Instruction Board: Responsible for preparing training materials for Wits.
Maintenance Board: Responsible for maintaining facilities in campsite.
Facilities
Publications
School magazines: published annually since 1954, when the school was founded. Issue 63 is due to be published this year.
《未學集》: edited and issued every two years by the school's Chinese & Chinese History Society; formerly titled 《新苗集》. The latest volume is 《未學集五編》.
Magazine for School Camp Golden Jubilee "From the Few to Many Camp 50 & Beyond"
Other special magazines
Achievements
As of 2023, QES counts 5 winners of the prestigious Hong Kong Outstanding Students Awards, ranking 17th (tied with Wah Yan College, Hong Kong and Madam Lau Kam Lung Secondary School of MFBM) among all secondary schools in Hong Kong.
Alumnus Dr. Law Ka-Ho holds the record of 10 distinctions in the School Certificate Examination and 7 distinctions in the Matriculation Examination; Sing Tao Daily highlighted him as the "10+7" top scorer (狀元).
Principals
Cheong Wai-fung (張維豐, 1954–1959)
Arthur Hinton (韓敦 or 韓頓, 1959–1967)
T. McC. Chamberlain (張伯倫, 1967–1970)
H.N. McNeil (麥尼路, 1970–1975)
Tan Peng-kian (陳炳乾, 1975–1980)
Su Chung-jen (蘇宗仁, 1980–1992)
Chan Ping-tat (陳秉達, 1992–1996)
Yeung Chi-hung (楊志雄, 1996–1997)
Sin Chow Dick-yee (冼周的兒, 1997–1998)
Yeung Chi-hung (楊志雄, 1998–2001)
Pang Cheung Yee-fan (彭張怡芬, 2001–2008)
Tong Kwok-keung (唐國強, 2008–2012)
Chan Ka-wai (陳家偉, 2012–2015)
Yuen Kwong-yip (袁廣業, 2015–2018)
Eric Chan Cheung-wai (陳祥偉, 2018–)
Notable alumni
Science, culture and art
Ken P. Chong (張建平), Professor and researcher at the George Washington University and the National Institute of Standards and Technology (https://www.nist.gov/), Fellow of ASME, cited in the American Men & Women of Science. Former Director of Mechanics and Materials at the U.S. National Science Foundation (www.nsf.gov).
Manying Ip (葉宋曼瑛): Professor of Asian Studies at Auckland University, expert on multicultural & transnational research. Fellow of the New Zealand Academy of Humanities and Royal Society of NZ.(Ref Ip)
Dorothy Y. Ko, (高彦頤) Professor of History and Women's Studies, Barnard College, Columbia University.
Hon-Yim Ko (高漢棪), Glenn L. Murphy Professor and Chair of Engineering, University of Colorado; Outstanding Educator of America, honored by the American Society of Engineering Education, Fellow of ASCE, cited in the American Men & Women of Science.
Fuk Kwok Li (李復國) Director of Mars Projects, awarded NASA highest honours of Outstanding Leadership Medal and Distinguished Service Medal, cited in the American Men & Women of Science.
Edward W Ng (伍煒國), mathematical scientist, California Institute of Technology, and NASA, Fellow of AAAS, cited in the American Men & Women of Science.
Man-Chiu Poon (潘文釗), Professor, Departments of Medicine, Pediatrics and Oncology, University of Calgary; Fellow of the Royal College of Physicians and Surgeons of Canada (Ref. MC Poon)
Peter T. Poon (潘天佑), Telecommunications manager and space technologist on NASA projects to Mars, Jupiter, Saturn, the Sun and Outer Solar System., profiled in Marquis Who's Who
Patrick Tam, (譚秉亮) Deputy Director and Head of Embryology Unit, Children's Medical Research Institute; Professor, Discipline of Medicine, Sydney Medical School, University of Sydney; Foreign Fellow of British Royal Society. (Ref Tam)
Benjamin Wah (華雲生), world-class computer scientist, Provost of CUHK, Fellow of AAAS & IEEE, cited in the American Men & Women of Science.
Kon Max Wong (黄榦), Director, Communications Technology Research Centre, Canada Research Chair Professor in Signal Processing, McMaster University; Honorary Professor of Electrical Engineering, Imperial College, London; Fellow IEEE; Fellow Canadian Academy of Engineering; Fellow Royal Society of Canada. (Ref. Wong)
Angelina K.Y. Chin (司徒娟兒), CIA, CRMA, is a retired executive of General Motors Co. and the Federal Reserve Bank of Chicago who has been extensively involved with The IIA, holding numerous volunteer roles, including chair of the Global Ethics Committee and the Audit Committee, and member of the North American Nominating Committee, Board of Regents, Professional Issues, and Education Products committees. She currently serves on the Internal Audit Foundation's Committee of Research and Education Advisors (CREA) and has co-authored and reviewed several Foundation publications, including Sawyer's Internal Auditing, 7th Edition – Enhancing and Protecting Organizational Value (2019).
Politics, economics and law
Li Kwan-ha (李君夏): Former Commissioner of Hong Kong Police Force
Pansy Wong (黃徐毓芳): New Zealand's first Asian Member of Parliament and Cabinet Minister
Rimsky Yuen Kwok-keung (袁國強): Secretary for Justice of Hong Kong
Engineering
Wun, Ho-kit (溫皓傑), Researcher in Transport Engineering at the Hong Kong University of Science and Technology
Fu, Ho-yu (傅浩宇), Soil Mechanics Expert at the Imperial College London
References
External links
Education and Manpower Bureau of the Government of the HKSAR
Professor P. Tam elected Foreign Fellow of BRS.
Secondary schools in Hong Kong
Government schools in Hong Kong
Educational institutions established in 1954
Mong Kok
1954 establishments in Hong Kong
Newton Faller (January 25, 1947 – October 9, 1996), the son of Kurt Faller and Ada Faller from Rio Grande do Sul, was a Brazilian computer scientist and electrical engineer. He is credited with the discovery of adaptive Huffman codes while an employee of IBM do Brasil in Rio. He was later the head of the Brazilian UNIX development project at the Electronic Computing Center of the Federal University of Rio de Janeiro (NCE/UFRJ), Rio de Janeiro.
He started his career working with data compression, studying the classical Huffman codes, and was the first to propose adaptive Huffman codes. This discovery became his Master's thesis and was later published in:
Newton Faller, "An Adaptive System for Data Compression," Record of the 7th Asilomar Conference on Circuits, Systems and Computers, pp. 593–597, 1973.
Later, Robert G. Gallager (1978) and Donald Knuth (1985) proposed some complements and the algorithm became widely known as FGK (from the initials of each of the researchers).
Later, Faller went to study in the United States from 1976 to 1981 and received a Ph.D. in Electrical Engineering and Computer Sciences from the University of California, Berkeley in 1981.
He was married to Maria Ester Kremer Faller and had two daughters, Maria Clara Kremer Faller and Ana Luisa Kremer Faller. He spent his childhood in Flamengo, Rio de Janeiro, and had two younger sisters: Ana Maria Faller and Angela Faller.
Faller died in 1996 and today the Brazilian equivalent of the Turing Award is called the "Newton Faller Award".
References
D. A. Huffman, "A Method for the Construction of Minimum Redundancy Codes," Proc. IRE, Vol. 40, No. 9, pp. 1098–1101, 1952.
Robert G. Gallager, "Variations on a Theme by Huffman," IEEE Transactions on Information Theory, Vol. 24, No. 6, pp. 668–674, Nov. 1978.
Donald E. Knuth, "Dynamic Huffman Coding," Journal of Algorithms, Vol. 6, pp. 163–180, 1985.
1947 births
1996 deaths
Brazilian people of German descent
Brazilian computer scientists
20th-century Brazilian engineers
People associated with Federal University of Rio de Janeiro
The SMIC Private School () is a private, coeducational K-12 school located in the Zhangjiang Science City of Shanghai, China. The school was founded by Semiconductor Manufacturing International Corporation (SMIC) in 2001 and by 2009 had over 1,450 students. 2017 marked its 16th anniversary. The School is accredited by the Western Association of Schools and Colleges and the East Asia Regional Council of Schools.
History
The school was initially founded to provide education for the families of employees of the SMIC company. Since 2004, the school has been open for public enrollment.
The school grew rapidly after its founding; it had 75 students in its first year and almost 700 students in its fourth year. The school had its first English-track graduating class of 7 in 2006. In 2009, the school had over 1800 students from 22 different countries. The school is authorized and approved by the U.S. College Board as an official SAT and AP testing center.
The school provides two academic tracks: an international division that uses an American curriculum with a Chinese requirement, and a Chinese track that is based on the local academic curriculum but with strong English emphasis. The SMIC Private School is accredited by the Pudong Board of Education and was named an "Excellent Private Elementary School and Middle/High School in China" in December 2009.
In 2018, the school was at the center of a major food safety scandal in which its cafeteria contractor Eurest, a subsidiary of Compass Group, was found to have supplied expired and substandard food. During an inspection of the cafeteria, visiting parents of students at the school discovered moldy vegetables, expired seasoning, and food labeled with a production date in the future. The incident was reported in international media.
As a result of the incident, the school headmaster Zhu Ronglin and two other administrative staff members were dismissed and are currently under investigation by government authorities. Following the incident, Shanghai food safety authorities ordered an investigation of cafeterias across the city. Expired items were also found at Concordia International School Shanghai, whose supplier is also Eurest. As a result of the investigation, the schools were ordered to cut ties with Eurest.
In 2019, the school assigned a textbook to eighth grade students containing assignments for the winter break. Parents reported finding a lewd short story in the textbook titled "Mommy's Washcloth", which described a child seeing his father having extramarital oral sex with their maid. The publisher of the book apologized and fired the editor of the textbook. However, the school was still held liable for failing to properly scrutinize the textbook, which was not government-approved. According to the Communist Party mouthpiece China Daily, "[the] school will be severely punished for the sexually oriented joke" by the education authority.
Facilities
The school is housed on a campus 80,293 square meters in size in Zhangjiang Science City of Shanghai. Academic facilities include: 2 libraries, 5 computer labs, an observatory, two auditoriums, and specialty classrooms for dance, music, and studio art. Athletic facilities include an indoor lap pool and recreation center, a gymnasium, tennis courts, all-weather basketball courts, and a 400-meter track. These include:
The Observatory is a unique feature of the School architecture. The school's astronomical observatory sits atop the Middle/High School building and is visible from the junction of Longdong Avenue and Guanglan Road. The observatory has a diameter of 8 meters. The round roof-top of the observatory protects the telescope and its accompanying equipment and facilitates a 360 degree view of the sky. The dome can be programmed to open or shut, as well as stop at a specific position during its navigation. The Observatory is now defunct.
The Elementary School (ES) Auditorium can accommodate around 150 students. It is used for regular school assemblies, for watching educational movies, and by teachers for their multimedia lessons. The Middle/High School (MHS) also has two auditoriums, the seating capacity of each auditorium is about 400. They are used commonly for important school ceremonies and school-wide performances.
The School has two libraries. One is for the Elementary School and the other is for the Middle/High School.
The Gym is situated on the second floor of the building that houses the school cafeteria on the first floor. It has a standard size basketball court, and movable volleyball and badminton equipment.
The school cafeteria was set up in 2004 and can accommodate more than 700 students at present. It has been renovated twice – once in the summer of 2012, the other the summer of 2013.
Academics
The school offers preschool and kindergarten, elementary, and middle/high school education. The elementary and middle/high school divisions offer an international division and a Chinese track.
The middle and high school international divisions use Common Core and the AERO curriculum, which fulfills entrance requirements for United States colleges and universities. Combined enrollment with Chinese-track students is offered for Chinese language classes, select elective courses, and sports teams. Students from the English and Chinese tracks also come together occasionally for weekly and school-wide events.
The school offers accelerated/honors and Advanced Placement in English, Mathematics, Science, History, and Chinese language. Advanced Placement courses include Psychology, Calculus AB/BC, Statistics, English Literature, English Language and Composition, Spanish, French, U.S. History, European History, World History, Computer Science A, Computer Science Principles, Chinese, Biology, Physics I, Physics II, Economics, Seminar, Research, Studio Art and Chemistry.
Students must complete 100 hours of community service to fulfill graduation requirements. Past opportunities have included volunteering at local Shanghai orphanages as well as coordinating a charity concert. Several school clubs, such as Community Service Club, offer community service hours for students who are members and participate in events. Some students volunteer as teachers' assistants during the English summer camps.
SMIC's newspaper, the Xin Lang Scholar, received first place for Best Front Page and Second Place Overall award from the American Scholastic Press Association (ASPA) during the 2004-2005 school year, and received First Place Overall with Special Merit during the 2009-2010 school year.
Co-curricular activities for middle and high school
Students run a variety of co-curricular clubs at the school.
High School Student Council
The High School Student Council is an elected representative body. It operates a co-op store and organizes student-oriented events throughout the year.
In 2017, the council organized a petition against the decision of the school's administration to introduce mandatory school uniforms without consultation with students, parents, and teachers. The 25-page bilingual policy paper garnered over 300 signatures. The school principal Kelley Ridings issued a response five months later, reiterating the administration's position and noting that "we appreciate this civil demonstration allowing positive exchange of views."
Student publications
The school's official publication is the Shark Scholar. The publication is a member of National Scholastic Press Association. The online publication is run by students in the Journalism elective, and in 2018 replaced the print publication XinLang Scholar, which had previously won First Place numerous times in the newspaper category of the American Scholastic Press Association.
Sharks2 is an unofficial student-run digital publication operated by the Sharks Digital Club, established in 2014. It provides periodical news and other digital services via a WeChat Official Account.
TEDxSMICSchool
TEDxSMICSchool is an independently organized TEDx event approved by TED, featuring an annual set of student and guest speakers sharing ideas focused around the annual theme. It was established in 2015 and is run by an unofficial student group.
Art Charity Program
The School started its annual Art Charity Program in 2004. In 2008, through a partnership of SMIC Private School students and SMIC employees, the School donated 88,000 RMB to help sick and needy children in its 5th annual Art Charity Program through the collection of more than 1,600 pieces of art.
Debate Club
The Debate Club participates in a number of intramural, regional, and national debate tournaments, including the SASDO held at the Shanghai American School Pudong.
Model United Nations
The SMIC School Model United Nations is the largest club in SMIC. It hosted its first SMICMUN Conference in 2014, and the conference grew steadily to accommodate around 100 delegates from both SMIC and other international school teams. MUN club members attend around 8-10 regional, national, and international conferences every year.
Computer Club
Members of the computer club develop apps and programs.
Business Club
Members of the Business Club explore economics, finance, and business. The Business Club has participated in the National Economic Challenge hosted by the Center for Economic Education, the Wharton Investment Contest hosted by the Wharton School of Business of the University of Pennsylvania, and various stock market simulators. Business Club is focused on starting its own non-profit business.
Athletics
The SMIC School is a member school of both the SISAC and CISSA athletic conferences. The SISAC conference is a competitive league primarily for high school students, while CISSA is a participation-based, non-competitive league primarily for students in grades 6 through 8. The SMIC Sharks are consistently one of the top performing schools in SISAC. Students from SMIC Private School participate in a variety of sports including volleyball, basketball, soccer, cross country, track and field, badminton, swimming, and table tennis. In 2018, the SMIC Private School's Boys' Varsity Volleyball team won first place in the SISAC tournament for the first time in the school's history.
Notable alumni
Eric Chien, winner of the Grand Prix for close-up magic at FISM 2018
Jephanie Chen, goalkeeper and MVP of the China national team at the Women's Lacrosse World Cup 2018
Dio Shin, TedX speaker and Cambridge University Varsity writer. 2023
Sister campus
The school has one sister campus in Beijing, which is run independently of the Shanghai campus.
The Beijing SMIC Private School was established in September 2005. The Beijing school is a Pre-K to Grade 9 school offering an English Track and a Chinese Track. The 28,000-square-meter campus is attended by 2,000 students and has an academic staff of 300. Like the Shanghai campus, its English Track offers an American curriculum with 280 students and 50 academic staff members. The average class size is 25 and the student-teacher ratio is 6:1. The school is open to enrollment from the public, and claims to "encourage [students'] efforts to aim for excellence, while retaining a sense of honor, community, and joy."
References
External links
Official website of SMIC Private School
Private schools in China
International schools in Shanghai
Private schools in Shanghai
A power module or power electronic module provides the physical containment for several power components, usually power semiconductor devices. These power semiconductors (so-called dies) are typically soldered or sintered on a power electronic substrate that carries the power semiconductors, provides electrical and thermal contact and electrical insulation where needed. Compared to discrete power semiconductors in plastic housings as TO-247 or TO-220, power packages provide a higher power density and are in many cases more reliable.
Module Topologies
Besides modules that contain a single power electronic switch (such as a MOSFET, IGBT, BJT, thyristor, GTO or JFET) or diode, classical power modules contain multiple semiconductor dies that are connected to form an electrical circuit of a certain structure, called a topology. Modules also contain other components such as ceramic capacitors to minimize switching voltage overshoots and NTC thermistors to monitor the module's substrate temperature. Examples of broadly available topologies implemented in modules are:
switch (MOSFET, IGBT), with antiparallel Diode;
bridge rectifier containing four (1-phase) or six (3-phase) diodes
half bridge (inverter leg, with two switches and their corresponding antiparallel diodes)
H-Bridge (four switches and the corresponding antiparallel diodes)
boost or power factor correction (one (or two) switches with one (or two) high frequency rectifying diodes)
ANPFC (power factor correction leg with two switches and their corresponding antiparallel diodes and four high frequency rectifying diodes)
three level NPC (I-Type) (multilevel inverter leg with four switches and their corresponding antiparallel diodes)
three level MNPC (T-Type) (multilevel inverter leg with four switches and their corresponding antiparallel diodes)
three level ANPC (multilevel inverter leg with six switches and their corresponding antiparallel diodes)
three level H6.5 - (consisting of six switches (four fast IGBTs/two slower IGBTs) and five fast diodes)
three-phase inverter (six switches and the corresponding antiparallel diodes)
Power Interface Module (PIM) - (consisting of the input rectifier, power factor correction and inverter stages)
Intelligent Power Module (IPM) - (consisting of the power stages with their dedicated gate drive protection circuits. May also be integrated with the input rectifier and power factor correction stages.)
Electrical Interconnection Technologies
In addition to the traditional screw contacts, the electrical connection between the module and other parts of the power electronic system can also be achieved by pin contacts (soldered onto a PCB), press-fit contacts pressed into PCB vias, spring contacts that inherently press on contact areas of a PCB, or by pure pressure contact where corrosion-proof surface areas are directly pressed together.
Press-fit pins achieve a very high reliability and ease the mounting process without the need for soldering. Compared to press-fit connections, spring contacts have the benefit of allowing easy and non-destructive removal of the connection several times, as for inspection or replacement of a module, for instance. Both contact types have rather limited current-carrying capability due to their comparatively low cross-sectional area and small contact surface. Therefore, modules often contain multiple pins or springs for each of the electrical power connections.
Current Research and Development
The current focus in R&D is on cost reduction, increased power density, increased reliability and the reduction of parasitic lumped elements. These parasitics are unwanted capacitances between circuit parts and inductances of circuit traces. Both can have negative effects on the electromagnetic radiation (EMR) of the module if it is operated as an inverter, for instance. Another problem connected to parasitics is their negative impact on the switching behavior and the switching losses of the power semiconductors. Therefore, manufacturers work on minimizing the parasitic elements of their modules while keeping costs low and maintaining a high degree of interchangeability of their modules with those of a second source (other manufacturer).
A further aspect for optimization is the so-called thermal path between the heat source (the dies) and the heat-sink. The heat has to pass through different physical layers such as solder, DCB, baseplate, thermal interface material (TIM) and the bulk of the heat-sink before it is transferred to a gaseous medium such as air or a fluid medium such as water or oil. Since modern silicon-carbide power semiconductors exhibit a higher power density, the requirements for heat transfer are increasing.
Applications
Power modules are used in power conversion equipment such as industrial motor drives, embedded motor drives, uninterruptible power supplies, AC-DC power supplies and welder power supplies.
Power modules are also widely found in inverters for renewable energy sources such as wind turbines, solar panels and tidal power plants, as well as in electric vehicles (EVs).
History
The first potential-free power module was introduced to the market by Semikron in 1975. It is still in production, which gives an idea of the long product lifecycles of power modules.
Manufacturers
APEI
Eltek
Vishay
Danfoss
StarPower
Infineon
Mitsubishi
Semikron
ROHM
Dynex Semiconductor
CREE
AT&S
Fuji Electric
See also
Powerpack (drivetrain)
Power-egg
Prime mover
References
External links
Semikron Application Manual Power Semiconductors: extensive information about power semiconductor application and power module technology
Eltek Flatpack2 48V HE Example of Power module; a high efficiency rectifier
Power electronics |
Inglenook Sidings, created by Alan Wright (1928 - January 2005), is a model railway train shunting puzzle. It consists of a specific track layout, a set of initial conditions, a defined goal, and rules which must be obeyed while performing the shunting operations.
More broadly, in model railway usage inglenook may refer to a track layout (or portion thereof) that is based on or resembles the Inglenook Sidings puzzle.
Details
The track is based on Kilham Sidings, on the Alnwick-Cornhill branch of the North Eastern Railway (NER). The sidings should be able to accommodate 5, 3, and 3 wagons, the leading spur accommodating 3 wagons and the locomotive. For the original version of the puzzle there are 8 wagons in the sidings, the rule being:
Form a train of 5 wagons, selected at random and in a specific order, from the 8 wagons.
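As a rough indication of the puzzle's size (a back-of-the-envelope count, not part of the original description), the number of distinct 5-wagon target trains that can be requested from 8 distinct wagons is the number of ordered selections of 5 from 8:

```python
import math

# Number of possible ordered 5-wagon target trains drawn from 8 distinct wagons:
# P(8, 5) = 8! / 3! = 6720, which is why a session of the puzzle rarely repeats.
print(math.perm(8, 5))  # 6720
```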
See also
Timesaver, another shunting puzzle
References
External links
Inglenook Designs - Carendt.com - Small Inglenook Layout Designs (inspiration).
Model Railways Shunting Puzzles - Inglenook Sidings - A full description of the puzzle.
TaFWeb - Inglenook Sidings - Includes a downloadable Inglenook Sidings layout for Trainz.
Beaver Games : Inglenook Shunting - An online JavaScript based Inglenook game.
Inglenook Shunting Puzzles - A mathematical analysis of the solvability of Inglenook puzzles.
Shunting puzzles |
HRS-100, ХРС-100, GVS-100 or ГВС-100 (see Ref. #1, #2, #3 and #4) was a third-generation hybrid computer developed by the Mihajlo Pupin Institute (Serbia, then SFR Yugoslavia) and engineers from the USSR in the period from 1968 to 1971. Three HRS-100 systems were deployed at the Academy of Sciences of the USSR in Moscow and Novosibirsk (Akademgorodok) in 1971 and 1978. Further production for use in Czechoslovakia and the German Democratic Republic (DDR) was contemplated, but was not realised.
HRS-100 was conceived and developed to study dynamical systems in real and accelerated time scales and to solve a wide array of scientific tasks efficiently at the institutes of the USSR Academy of Sciences (in fields such as aerospace, energetics, control engineering, microelectronics, telecommunications, biomedical research and the chemical industry).
Overview
HRS-100 was composed of:
Digital computer:
central processor
16 kilowords of 0.9 μs 36-bit magnetic core primary memory, expandable to 64 kilowords.
secondary disk storage
peripheral devices (teleprinters, punched tape reader/punchers, parallel printers and punched card readers).
multiple Analog computer modules
Interconnection devices
multiple analog and digital Peripheral devices
Central processing unit
HRS-100 had a 32-bit TTL MSI processor with the following capabilities:
the four basic arithmetic operations implemented in hardware for both fixed-point and floating-point numbers
addressing modes: immediate/literal, absolute/direct, relative, unlimited-depth multi-level memory indirect and relative-indirect
7 index registers and dedicated "index arithmetic" hardware
32 interrupt "channels" (10 from within the CPU, 10 from peripherals and 12 from the interconnection devices and analog computer)
Primary memory
Primary memory was made up of 0.9 μs cycle time magnetic core modules. Each 36-bit word is organized as follows:
32 data bits
1 parity bit
3 program protection bits specifying which program (Operating System and up to 7 running applications) has access
Secondary storage
Secondary storage was composed of up to 8 CDC 9432D removable-media disk drives. The capacity of one set of disk platters was about 4 million 6-bit words, or 768,000 HRS-100 words; the combined capacity of the 8 drives was therefore 6,144,000 words. Each disk pack comprised 6 platters, of which 10 surfaces were used. Data was organized into 100 cylinders and 16 sectors of 1536 bits (48 HRS-100 words) each.
Average data access time was 100 ms (max. 165 ms). Maximum seek time was 25 ms. Raw transfer sector write speed was 208,333 characters/s.
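The quoted capacities follow directly from this geometry; a minimal sanity check (not part of the original text):

```python
# Check of the HRS-100 disk capacity figures quoted above, using only the
# geometry stated in the text.
CYLINDERS = 100          # cylinders per disk pack
SURFACES = 10            # recording surfaces used per pack
SECTORS_PER_TRACK = 16   # sectors per track
WORDS_PER_SECTOR = 48    # 1536-bit sectors = 48 HRS-100 words
DRIVES = 8               # maximum number of CDC 9432D drives

words_per_pack = CYLINDERS * SURFACES * SECTORS_PER_TRACK * WORDS_PER_SECTOR
print(words_per_pack)            # 768000  -- matches the per-pack capacity
print(words_per_pack * DRIVES)   # 6144000 -- matches the combined capacity
```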
Peripherals
Peripherals communicated with the computer using interrupts and full-length HRS-100 words. Each separate unit had its own controller. The following devices were produced or planned:
5 to 8 channel Punched tape reader type PE 1001 (500-1000 characters/s)
5 to 8 channel Tape puncher type PE 4060 (150 characters/s)
IBM 735 teleprinter (88 character set, 7-bit data + 1 parity bit, printing speed: 15 characters/s)
Fast line printer DP 2440 (up to 700 lines/min, 64-character set, 132 characters per line)
Standard 80-column punched card reader DP SR300 (reading up to 300 cards/min)
Interconnection hardware
Interconnection hardware (called simply "Link") connects digital and analog components of HRS-100 into a single unified computer. It comprised:
Control unit for exchange of logic signals
Blocks of analog-to-digital and digital-to-analog converters
16-bit 100 μs clock generator
Conversion channel relay block
Power supply
The Link takes commands from the digital computer and organizes their execution via two 32-bit data channels, 11 control channels, 3 synchronization channels and 9 interrupt channels. The connection between the digital and analog computers is established through a "common-control panel" and two separate consoles. Digital data is communicated to the analog consoles through 16 control, 16 sensitivity, 16 indicator and 10 functional "lines".
Analog-to-digital conversion is achieved by a single signed 14-bit 70,000 samples/s A/D converter and a 32-channel multiplexer. Digital-to-analog conversion is achieved by 16 independent signed 14-bit D/A converters with double registers. Typical D/A conversion took 2 μs.
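As a rough illustration of what these converter figures imply (the ±100 V machine-unit range and the round-robin scanning of all 32 multiplexer channels are assumptions for the sketch, not stated in the sources):

```python
# Rough implications of the A/D converter figures quoted above.
# Assumptions (not from the source): a +/-100 V machine-unit range and a simple
# round-robin scan of all 32 multiplexer channels.
ADC_BITS = 14                 # signed 14-bit converter
SAMPLE_RATE = 70_000          # total A/D samples per second
CHANNELS = 32                 # multiplexer channels
FULL_SCALE_V = 100.0          # assumed +/-100 V range

levels = 2 ** ADC_BITS                      # 16384 quantization levels
lsb_volts = 2 * FULL_SCALE_V / levels       # ~12 mV per step under the assumed range
per_channel_rate = SAMPLE_RATE / CHANNELS   # ~2188 samples/s per channel when all are scanned

print(levels, round(lsb_volts, 4), round(per_channel_rate))
```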
Analog computer
The analog component of the HRS-100 system is composed of up to seven analog machines, all connected to the common-control panel. It contains all the elements required to solve linear and non-linear differential equations independently, both directly and iteratively.
Units of analog computer:
linear analog calculation elements
non-linear analog calculation elements
parallel logic elements
electronic potentiometer system
calculation module and parallel logic control system
periodic block
control system
address system
measurement system
exchangeable program board (analog and digital)
reference voltage supply
Linear analog computing elements were designed to provide 0.01% precision in static mode and 0.1% in dynamic mode for signals up to 1 kHz. Non-linear elements were not required to be more precise than 0.1%.
The analog component of HRS-100 had its own peripheral units:
multi-channel ultraviolet writer
three-colour oscilloscope
X-Y writer
Development team
HRS-100 was designed and developed by the following team (see Ref.#1, #4, #5, and #6):
Principal Science Researchers: Prof. Boris Yakovlevich Kogan (Institute of Control Sciences - IPU AN.USSR, Moscow), Petar Vrbavac and Georgi Konstantinov (Mihajlo Pupin Institute, Belgrade).
Chief designers:
Digital part: Svetomir Ojdanić, Dušan Hristović (SFRY), A. Volkov, V. Lisikov (USSR)
Analogue part: B.J.Kogan, N. N. Mihaylov (USSR), Slavoljub Marjanović, Pavle Pejović (SFRY)
Link: Milan Hruška, Čedomir Milenković (SFRY), A. G. Spiro (USSR)
Software: E. A. Trahtengerc, S.J.Vilenkin, V. L. Arlazarov (USSR), Nedeljko Parezanović (SFRY).
See also
History of computer hardware in the SFRY
Mihajlo Pupin Institute
List of Soviet computer systems
Reference literature
HRS-100 (Hardware and Design Principles), pp. 3–52, by Prof. Boris J. Kogan (Ed.), IPU AN USSR, Moscow, 1974 (in Russian).
HRS-100, Proceedings of the International Congress AICA-1973, Prague, pp. 305–324, 27–31 August 1973.
Analog Computing in the Soviet Union, by D. Abramovitch, IEEE Control Systems Magazine, pp. 52–62, June 2005.
Hybrid Computing System HRS-100, by P. Vrbavac, S. Ojdanic, D. Hristovic, M. Hruska, S. Marjanovic, Proc. of the 6th Int. Symp. on Electronics and Automation, pp. 347–356, Herceg Novi, Yugoslavia, 21–27 June 1971.
Development of the Computing Technology in Serbia (Razvoj Racunarstva u Srbiji), by Dušan Hristović, Phlogiston journal, No. 18/19, pp. 89–105, Museum of Science and Technology (MNT-SANU), Belgrade 2010/2011.
"50 Years of Computing in Serbia" (50 Godina Racunarstva u Srbiji), by D. B. Vujaklija and N. Markovic (Eds.), pp. 37–44, DIS, IMP and PC Press, Belgrade 2011 (in Serbian).
Mihajlo Pupin Institute
Analog computers
Soviet Union–Yugoslavia relations
One-of-a-kind computers
Soviet computer systems
1960s in Belgrade
1970s in Belgrade |
Loudspeaker measurement is the practice of determining the behaviour of loudspeakers by measuring various aspects of performance. This measurement is especially important because loudspeakers, being transducers, have a higher level of distortion than other audio system components used in playback or sound reinforcement.
Anechoic measurement
One way to test a loudspeaker requires an anechoic chamber, with an acoustically transparent floor-grid. The measuring microphone is normally mounted on an unobtrusive boom (to avoid reflections) and positioned 1 metre in front of the drive units on the axis with the high-frequency driver. While this can produce repeatable results, such a 'free-space' measurement is not representative of performance in a room, especially a small room. For valid results at low frequencies, a very large anechoic chamber is needed, with large absorbent wedges on all sides. Most anechoic chambers are not designed for accurate measurement down to 20 Hz and most are not capable of measuring below 80 Hz.
Tetrahedral chamber
A tetrahedral chamber is capable of measuring the low frequency limit of the driver without the large footprint required by an anechoic chamber. This compact measurement system for loudspeaker drivers is defined in IEC 60268-21:2018, IEC 60268-22:2020 and AES73id-2019.
Half-space measurement
An alternative is to simply lay the speaker on its back pointing at the sky on open grass. Ground reflection will still interfere but will be greatly reduced in the mid-range because most speakers are directional, and only radiate very low frequencies backward. Putting absorbent material around the speaker will reduce mid-range ripple by absorbing rear radiation. At low frequencies, the ground reflection is always in-phase, so that the measured response will have increased bass, but this is what generally happens in a room anyway, where the rear wall and the floor both provide a similar effect. There is a good case, therefore, for using such half-space measurements and aiming for a flat half-space response. Speakers that are equalised to give a flat free-space response will always sound very bass-heavy indoors, which is why monitor speakers tend to incorporate half-space and quarter-space (for corner use) settings, which bring in attenuation below about 400 Hz.
Digging a hole and burying the speaker flush with the ground allows far more accurate half-space measurement, creating the loudspeaker equivalent of the boundary effect microphone (all reflections precisely in-phase) but any rear port must remain unblocked, and any rear-mounted amplifier must be allowed cooling air. Diffraction from the edges of the enclosure is reduced, creating a repeatable and accurate, but not very representative, response curve.
Room measurements
At low frequencies, most rooms have resonances at a series of frequencies where a room dimension corresponds to a multiple of half wavelengths. Sound travels at about 340 m/s (1,100 ft/s), so a room 6.1 m (20 ft) long will have resonances from 27.5 Hz upwards. These resonant modes cause large peaks and dips in the sound level of a constant signal as the frequency of that signal varies from low to high.
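A minimal sketch of that calculation (using the same approximate figures as the example above): the axial modes of a single room dimension of length L fall at f_n = n·c/(2L).

```python
# Axial room-mode frequencies for one room dimension: f_n = n * c / (2 * L).
# The figures reproduce the example above (speed of sound ~1,100 ft/s, room 20 ft long).
SPEED_OF_SOUND_FT_S = 1100.0   # approximate speed of sound in air, ft/s
ROOM_LENGTH_FT = 20.0          # example room length, ft

modes_hz = [n * SPEED_OF_SOUND_FT_S / (2 * ROOM_LENGTH_FT) for n in range(1, 5)]
print(modes_hz)   # [27.5, 55.0, 82.5, 110.0] -- lowest mode at 27.5 Hz as stated
```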
Additionally, reflections, dispersion, absorption, etc. all strongly alter the perceived sound, though this is not necessarily consciously noticeable for either music or speech, at frequencies above those dominated by room modes. These alterations depend on speaker locations with respect to reflecting, dispersing, or absorbing surfaces (including changes in speaker orientation) and on the listening position. In unfortunate situations, a slight movement of any of these, or of the listener, can cause considerable differences. Complex effects, such as stereo (or multiple channel) aural integration into a unified perceived "sound stage" can be lost easily.
There is limited understanding of how the ear and brain process sound to produce such perceptions, and so no measurement, or combination of measurements, can assure successful perceptions of, for instance, the "sound stage" effect. Thus, there is no assured procedure that will maximize speaker performance in any listening space (with the exception of the sonically unpleasant anechoic chamber). Some parameters, such as reverberation time (in any case, really applicable only to larger volumes), and overall room "frequency response" can be somewhat adjusted by addition or subtraction of reflecting, diffusing, or absorbing elements, but, though this can be remarkably effective (with the right additions or subtractions and placements), it remains something of an art and a matter of experience. In some cases, no such combination of modifications has been found to be very successful.
Microphone positioning
All multi-driver speakers (unless they are coaxial) are difficult to measure correctly if the measuring microphone is placed close to the loudspeaker and slightly above or below the optimum axis, because the different path lengths from two drivers producing the same frequency lead to phase cancellation. It is useful to remember that, as a rule of thumb, 1 kHz has a wavelength of about 34 cm in air, and 10 kHz a wavelength of only about 3.4 cm. Published results are often only valid for very precise positioning of the microphone to within a centimetre or two.
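A minimal sketch of the wavelength arithmetic behind this rule of thumb (the two path lengths at the end are illustrative values, not from the text):

```python
# Wavelength in air, and the frequency at which a small path-length difference
# between two drivers becomes a half wavelength (i.e. their outputs cancel).
SPEED_OF_SOUND = 343.0   # m/s, approximate speed of sound in air

def wavelength_m(freq_hz: float) -> float:
    return SPEED_OF_SOUND / freq_hz

print(round(wavelength_m(1_000), 3))    # ~0.343 m at 1 kHz
print(round(wavelength_m(10_000), 3))   # ~0.034 m at 10 kHz

# Illustrative paths from tweeter and woofer to an off-axis microphone:
path_difference = 1.02 - 0.99                          # 3 cm difference (assumed values)
print(round(SPEED_OF_SOUND / (2 * path_difference)))   # ~5717 Hz -- cancellation frequency
```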
Measurements made at 2 or 3 m, in the actual listening position between two speakers can reveal something of what is actually going on in a listening room. Horrendous though the resulting curve generally appears to be (in comparison to other equipment), it provides a basis for experimentation with absorbent panels. Driving both speakers is recommended, as this stimulates low-frequency room 'modes' in a representative fashion. This means the microphone must be positioned precisely equidistant from the two speakers if 'comb-filter' effects (alternate peaks and dips in the measured room response at that point) are to be avoided. Positioning is best done by moving the mic from side to side for maximum response on a 1 kHz tone, then a 3 kHz tone, then a 10 kHz tone. While the very best modern speakers can produce a frequency response flat to ±1 dB from 40 Hz to 20 kHz in anechoic conditions, measurements at 2 m in a real listening room are generally considered good if they are within ±12 dB.
Nearfield measurements
Room acoustics have a much smaller effect on nearfield measurements, so these can be appropriate when anechoic chamber analysis cannot be done. Measurements should be made at a distance from the speaker (or other sound source, such as a horn or vent) that is much smaller than its overall diameter, where the half-wavelength of the sound is smaller than the speaker's overall diameter. These measurements yield the direct speaker efficiency, or the average sensitivity, without directional information. For a speaker system with multiple sound sources, the measurement should be carried out for each sound source (woofer, bass-reflex vent, midrange speaker, tweeter, and so on).
These measurements are easy to carry out, can be made in almost any room, are less affected by the room than in-room measurements, and predict the half-space response, but provide no directivity information.
Frequency response measurement
Frequency response measurements are only meaningful if shown as a graph, or specified in terms of ±3 dB limits (or other limits). A weakness of most quoted figures is a failure to state the maximum SPL available, especially at low frequencies. A power bandwidth measurement, a plot of the maximum SPL available for a given distortion figure across the audible frequency range, is therefore a useful addition to the frequency response.
Distortion measurement
Distortion measurements on loudspeakers can only go as low as the distortion of the measurement microphone itself, of course, at the level tested. The microphone should ideally have a clipping level of 120 to 140 dB SPL if high-level distortion is to be measured. A typical top-end speaker, driven by a typical 100-watt power amplifier, cannot produce peak levels much above 105 dB SPL at 1 m (which translates roughly to 105 dB at the listening position from a pair of speakers in a typical listening room). Achieving truly realistic reproduction requires speakers capable of much higher levels than this, ideally around 130 dB SPL. Even though the level of live music measured on a (slow responding and RMS reading) sound level meter might be in the region of 100 dB SPL, programme level peaks on percussion will far exceed this. Most speakers give around 3% distortion measured as 468-weighted 'distortion residue', reducing slightly at low levels. Electrostatic speakers can have lower harmonic distortion but suffer higher intermodulation distortion. 3% distortion residue corresponds to 1 or 2% total harmonic distortion. Professional monitors may maintain modest distortion up to around 110 dB SPL at 1 m, but almost all domestic speaker systems distort badly above 100 dB SPL.
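A rough sketch of the level arithmetic behind the 100-watt figure above (the 88 dB sensitivity is an assumed typical value, not from the text): the on-axis SPL at 1 m is roughly the 1 W/1 m sensitivity plus 10·log10 of the amplifier power.

```python
import math

# Rough on-axis SPL at 1 m: sensitivity (dB SPL for 1 W at 1 m) + 10*log10(power).
# The 88 dB sensitivity is an assumed typical value, not taken from the text.
sensitivity_db = 88.0   # assumed dB SPL at 1 W, 1 m
amp_power_w = 100.0     # the "typical 100-watt power amplifier" mentioned above

peak_spl = sensitivity_db + 10 * math.log10(amp_power_w)
print(round(peak_spl, 1))   # 108.0 dB SPL -- the same order as the ~105 dB quoted
```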
Colouration analysis
Loudspeakers differ from most other items of audio equipment in suffering from colouration, the tendency of various parts of the speaker — the cone, its surround, the cabinet, the enclosed space — to carry on moving when the signal ceases. All forms of resonance cause this, by storing energy, and resonances with high Q factor are especially audible. Much of the work that has gone into improving speakers in recent years has been about reducing colouration, and Fast Fourier Transform, or FFT, measuring equipment was introduced in order to measure the delayed output from speakers and display it as a time vs. frequency waterfall plot or spectrogram plot. Initially, analysis was performed using impulse response testing, but this 'spike' suffers from having very low energy content if the stimulus is to remain within the peak ability of the speaker. Later equipment uses correlation on other stimuli such as a maximum length sequence system analyser (MLSSA). Using multiple sine wave tones as a stimulus signal and analyzing the resultant output, spectral contamination testing provides a measure of a loudspeaker's 'self-noise' distortion component. This 'picket fence' type of signal can be optimized for any frequency range, and the results correlate exceptionally well with sound quality listening tests.
See also
Audio power
Audio noise measurement
Audio quality measurement
Bandwidth extension
Directional sound
Isobaric loudspeaker
Loudspeaker acoustics
Parabolic loudspeaker
Programme levels
Speaker driver
Spherical coordinate system
Studio monitor
References
External links
MLSSA site
Praxis loudspeaker measurement system
CONEQ loudspeaker measurement and correction system
TTC Tetrahedral Test Chambers
Audio & Loudspeaker Technologies International (ALTI)
Loudspeaker technology
Sound measurements |
The PACE Award is an annual award from Automotive News. The focus of the award is an innovation (i) developed primarily by a supplier, (ii) that is new to the automotive industry, (iii) that is in use (e.g., used on a vehicle in production), and (iv) that "changes the rules of the game". Awards have been given for products, materials, processes, capital equipment, software and services. A panel of independent judges from industry, finance, research, and academia choose finalists from the initial applicants, make site visits to evaluate the innovation, and then gather to select winners, independent of the sponsors. Winners to date include suppliers from Japan, Korea, China, the US, Canada, Brazil, Germany, France, Italy, Poland and other European countries. Among the most awarded companies over the years are BorgWarner, Delphi Automotive, Federal-Mogul (acquired in 2018 by Tenneco), Valeo and PPG Industries as well as Robert Bosch GmbH, Gentex Corporation, and Siemens.
Automotive News and Ernst & Young founded the program, which gave out the first awards in 1995 to celebrate innovation, technological advancement and business performance among automotive suppliers. The black-tie awards ceremony has been held just prior to the annual SAE (Society of Automotive Engineers) convention in Detroit. The costs of the program (administration, judges' out-of-pocket expenses, and the award ceremony) are paid by a combination of application fees and money from sponsors. For the 2019-20 cycle the sponsors were Deloitte, APMA (the Ontario, Canada Automotive Modernization Program) and Invest Canada. Historically the annual award cycle began with gathering applications (through the end of summer, with online submissions); the selection of 30-35 finalists, who were announced at the auto industry's annual Global Leadership Conference in October at the Greenbrier; site visits by judges in November and December; the selection of winners in late January or early February; and the announcement of winners at a black-tie event in Detroit in March or April.
For the 2019-2020 award cycle, PACE added recognition as a PACE Pilot, which targeted pre-commercial innovations, to try to capture the rise of innovations tied to safety, fuel efficiency, vehicle electrification and driving assist/driving automation. This award attempts to capture innovations earlier in the development cycle, e.g. when they show up in announcements at the Consumer Electronics Show.
Due to the coronavirus pandemic, the 2020 awards were presented online. No decisions have been made on how the awards process will adapt in 2020-21 in the face of the pandemic.
Awards programs such as PACE are potential sources for studying the nature of technological change. One such attempt is in Smitka and Warrian (2017). In Chapter 6, "Automotive Innovation Model and the Supply Chain: PACE Awards", they find that:
"The results indicate that technology-pull is dominant, not technology-push. We do observe some innovation that seems to represent using new materials to implement a well-understood approach that was not previously cost-effective or otherwise was not used. We also find the occasional bright idea that in some cases could have been implemented decades ago, had anyone thought to try. However, overall we find that new vehicle technologies are responses to regulatory pressure to improve safety, limit emissions, and improve fuel efficiency."
Similarly, David Andrea notes in Forbes that:
"While the popular press is having an interesting debate on whether Detroit or Silicon Valley will have the greatest influence over the mobility revolution, the 2019 PACE awards are dominated by the suppliers with the deepest R&D capabilities and longest histories of commercializing innovation. Reviewing the finalists, it is interesting that the vast majority, 26 of the 31 come from what would be considered "traditional suppliers." And only 6 of the firms are located outside 100 miles of industry's Detroit epicenter. Perhaps this is the result of selection bias as the process to submit applications, meet with judges, produce promotional materials and the like is not small in budget of executive time and funds, two resources always in short supply in smaller firms or start-ups."
Winners
From 1997, citations describing the winning innovations are available on the Automotive News PACE Award web site. Winners for years not listed below, as well as citations that provide a one-paragraph overview of the innovation, can be found on the PACE Awards web site.
1995
AP Technoglass Company
Dana
Gentex Corporation
Johnson Controls
Philips Service Industries
1996
Cherry GmbH
Delco Electronics
Fayette Tubular Products
Gage Products Company
Progressive Tool and Industries Company (PICO)
1997
Robert Bosch GmbH
Dana - Auto Mate 2
Gentex Corporation - Gentex Metal Reflector
Johnson Controls - HomeLink
Rapid Design Services
1998
Benteler International - Thermally Efficient, Air-Gap Manifold and Exhaust Tube Applications, Parts and Systems
Cooper Automotive/Wagner Lighting - Chrysler Dakota and Durango Front Lamp Assembly and Related Manufacturing and Assembly Process
Dürr AG - Radiant Floor Construction (RFC) Paint Oven
Eaton Corporation - Eaton Spicer Solo
Gentex Corporation - Aspheric Auto-Dimming Exterior Mirror
Johnson Controls - CorteX
1999
ASHA Corporation - GERODISC (limited slip, hydro-mechanical coupling device)
Benteler International - WIN88 Rear Twist Beam Axle
Delphi Corporation - Stabilitrak
Goodyear - run-flat tires
Meritor - RHP Highway Parallelogram Trailer Air Suspension System
Motorola - 32-bit engine management controller
Stackpole Limited - high load-bearing Powdered Metal Parts
Teleflex Automotive Group - Adjustable Pedal System
2000
Autoliv - ASH-2 inflationary device
Delphi Delco Electronics Systems - Adaptive Cruise Control
Gentex Corporation - Binary, Complementary Synthetic-White LED Illuminators
The Gleason Works - Power Dry Cutting/UMC Ultima Axle Gear Manufacturing
Lumileds - SnapLED
PPG Industries - Powder Clearcoat Paint
Rieter - Ultra Light acoustic vehicle treatment
Siemens - keyless entry system
2001
Product Innovation:
Hendrickson International - Integrated Front Air Suspension and Steer Axle System
PPG Industries - Acoustic Coating
Raytheon - Night Vision
Tenneco - ASD (Acceleration Sensitive Damping)
Information Technology/Internet:
Delphi Automotive - Math Based Metal Removal (MBMR) software
Quality Measurement Control - CM4D Analyze software system
Management Practice:
ZF Friedrichshafen - Ergonomically based job assignment employee rotation process in its Tuscaloosa, Alabama plant
Manufacturing Process:
Nucap Industries - NUCAP Retention System
Europe:
Robert Bosch GmbH - High Pressure Common Rail
Open Category - Enduring Innovations:
Shape Corporation - Tubular High Strength Swept Bumpers
Open Category - Environmental:
BASF - Integrated Process
2002
Product Innovation:
Delphi Automotive - Quadrasteer
Delphi Delco Electronics Systems - Passive Occupant Detection System, Generation II (PODS II)
Goodyear - Wrangler maximum traction/reinforced, off-road tire (MT/R)
PPG Industries - Transportation Coating - FrameCoat Electrocoating
The POM Group - Direct Metal Deposition process
Europe:
Robert Bosch GmbH - Aerotwin windshield wipers
ZF Getriebe GmbH
Information Technology:
Engineous Software - iSight software for process integration and design optimization
2003
Product Innovation:
3M - Solar Reflecting Film
Delphi Automotive - MagneRide variable suspension damping
Federal-Mogul Corporation - Wagner ThermoQuiet Brake Pads and Shoes
Material Sciences Corporation - Acoustically engineered steel laminate Quiet Steel
PPG Industries - Ceramic clearcoat paint
Product Europe:
Siemens VDO Automotive - Piezo Common Rail Diesel Direct Injection System
Manufacturing Process & Capital Equipment:
Bishop Steering Technology - Warm Forging Die and Integrated Automatic Precision Forging Cell
Dürr AG - RoDip 3 electro-coating
The POM Group - RapiDIES foam forming process
Robert Bosch GmbH - Cassette Chrome Plating Process
Information Technology & Services:
Perceptron Inc. - AutoGauge FMS In-Process Measuring System
2004
Product Innovation:
Delphi Delco Electronics Systems - Delphi Forewarn Back-up Aid and Side Alert
Denso - Very High Pressure Solenoid Fuel Injection System
Johnson Controls - Overhead Rail Vehicle Personalization System
Visteon - Long Life Filtration Systems
Product Europe:
TRW Automotive - Active Control seatbelt retractor
Process:
BASF - ColorCARE software for controlling and comparing paint color
DuPont - Wet on Wet Two Tone Products
Filter Specialists - FERRX 5000 magnetic separation device to remove ferrous particles from the initial e-coat prior to the application of base coat paint
Information Technology:
AutoForm Engineering - DieDesigner Stamping FEA Simulation Geometry Generation
Delphi Automotive - horizontal modeling and digital process design for CAD/CAM
Motorola - VIAMOTO navigation system
2005
Product:
Dura Automotive Systems - Racklift Window Lift System
Gentex Corporation - SmartBeam Headlamp Dimming Microelectronics Solution
Illinois Tool Works - Direct Fuel System (DFS)
Multimatic - I-Beam Control Arm
Tenneco - Kinetic RFS (Reverse Function Stabilizer) technology
Valeo - Lane Departure Warning System
Product Europe:
Advanced Automotive Antennas - Fractal Antennas
BorgWarner - DualTronic dual clutch transmission - more commonly known as Volkswagen Group's Direct-Shift Gearbox (DSG)
Siemens VDO Automotive - Information Systems Passenger Cars reconfigurable color head-up display
Manufacturing Process & Capital Equipment:
Siemens VDO Automotive - DEKA VII Low Pressure Electronic Gasoline Fuel Injectors
Information Technology & Services:
i2 Technologies - Optimal Scheduler software for auto assembly plants
Innovative OEM Collaborator Awards:
Chrysler with Dura Automotive Systems - Improved Window-Lift Systems
Mercedes-Benz with Sick AG - Entry/Exit Light Curtain
2006
Product:
Federal-Mogul Corporation - Monosteel Diesel Piston
Illinois Tool Works - BosScrew Fastener
Magneti Marelli - Software Flexfuel Sensor (SFS) for Flexible-fuel vehicles
Osram Opto Semiconductors - Color on Demand interior lighting
SKF - X-Tracker Asymmetric Hub Bearing Unit
Product Europe:
Preh Automotive - Windshield Defogging Sensor
Tenneco - Low Cost, Low Weight Muffler
Valeo - StARS Micro-Hybrid system
Manufacturing Process & Capital Equipment:
Dow Automotive - Betamate LESA Adhesive System
PosiCharge - Battery Charging System
Information Technology:
CogniTens - OptiCell Non-Contact Measuring System for quality control inspection
Innovative OEM Collaborator Awards:
General Motors with PPG Industries - Color Harmony Process
Ford with PosiCharge - Fast Charging Battery Technology
2007
Product:
Alcoa - Dura-Bright Wheels with XBR Technology
Autoliv - Safety Vent Airbag
Federal-Mogul Corporation - HTA (High Temperature Alloy) Exhaust Gaskets
Halla Climate Control Corp - Wave Blade Fan and Saw Tooth Shroud
Valeo - Multi-Beam Radar (MBR) Blind-Zone Radar Sensor
Product Europe:
BorgWarner Turbo & Emissions Systems - BorgWarner Turbo & Emissions Systems Gasoline Turbocharger with Variable Turbine Geometry
Federal-Mogul Corporation Goetze Diamond Coating (GDC) (Piston Ring Coating)
Manufacturing Process & Capital Equipment:
Behr GmbH & Co. KG - BehrOxal surface treatment for corrosion protection
DuPont - DuPont EcoConcept paint and process
Hirotec - E3 Hemming Press
Information Technology:
RTT USA - RTT DeltaGen visualization toolset
Tenneco - Diesel Aftertreatment Predictive Development Process
Innovative OEM Collaborator Awards:
Porsche with BorgWarner Turbo & Emissions Systems - BorgWarner Turbo & Emissions Systems Gasoline Turbocharger with Variable Turbine Geometry
Porsche with BorgWarner TorqTransfer Systems - BorgWarner High Energy ITM3e AWD System
Volkswagen with DuPont - DuPont EcoConcept paint and process
2008
Product:
Cummins - Cummins 6.7-liter Turbo diesel
Dow Automotive - IMPAXX Energy Absorbing Foam
Eaton Corporation - CRUTONITE Valve Alloy
Gentex Corporation - Rear Camera Display (RCD) Mirror
Magneti Marelli - Tetrafuel System for use with Gasoline, Ethanol or Compressed Natural Gas
Xanavi Informatics Corporation and Sony Corporation - Around View Monitor (AVM)
Product Europe:
BorgWarner Turbo & Emissions Systems - Turbocharger with R2S Regulated Two-Stage Technology
Continental AG - Direct Injection System for Gasoline Applications
Valeo - Park 4U Semi-Automatic Parallel Parking
Manufacturing Process & Capital Equipment:
PPG Industries - Green Logic Paint Detackification Process
Webasto - Panoramic Polycarbonate Roof Module
Information Technology and Services:
Delphi Automotive - Sirius Backseat TV
Innovation Partnership Awards:
Chrysler with Mahle GmbH - CamInCam Variable valve timing (VVT) camshaft
Honda with Takata Corporation - Motorcycle Airbag System
Nissan with Xanavi Informatics Corporation and Sony Corporation - Around View Monitor (AVM)
2009
Product:
BorgWarner Morse TEC - Morse TEC CTA Camshaft Phasing System
Eaton Corporation - Eaton Twin Vortices Supercharger - TVS
Futuris Automotive - Tufted PET Carpet
Magna Mirrors - BlindZone Mirror
Product Europe:
BorgWarner BERU Systems - Pressure Sensor Glow Plug (PSG) for Diesel Engines
LuK GmbH & Co. - LuK Double Clutch for Double Clutch Transmissions
TI Automotive - Saddle-Shaped PZEV Plastic Fuel Tank
Manufacturing Process & Capital Equipment:
Alcoa - Alcoa's Vacuum Die Casting (AVDC) for Lightweight Door Assemblies
Henkel - Bonderite TecTalis Pre-treatment Process
Information Technology & Services:
Dassault Systèmes - DELMIA Automation digital manufacturing and production software solution
Microsoft - Microsoft Auto
Innovation Partnership Awards:
Ford with BorgWarner Morse TEC
General Motors with Futuris Automotive
2010
Product:
Delphi Automotive - Electronically Scanning Radar
Dura Automotive Systems - Horizontal Sliding Rear Window with Defrost
Meridian Lightweight Technologies - Single Piece Cast Magnesium Liftgate Inner Panel
PPG Industries - Super High Power Electrocoat
TI Automotive - Dual Channel Single Stage (DCSS 39-50) Electric fuel Pump
WABCO Vehicle Control Systems - OptiDrive Transmission Automation System
Product Europe:
Continental/NGK Insulators - Smart NOx Sensor
Delphi Corporation Powertrain Systems Division - Delphi direct Acting Piezo Injector
Federal-Mogul Corporation - Bayonet Connection System for Profile Wiper Blades
ZF Getriebe GmbH - ZF 8HP 8 Automatic Transmission
Manufacturing Process & Capital Equipment:
Henkel - Aquence Autodeposition and Co-Cure Paint Process
Dürr AG - EcoDryScrubber paint overspray retrieval system
Federal-Mogul Corporation - DuraBowl Piston Reinforcement Process
Federal-Mogul Corporation - High Precision Electro-Erosion Machining
Johnson Controls/Nordenia Deutschland - molded polypropylene (PP) Thin Film
Information Technology & Services:
Siemens PLM Software - Teamcenter In-Vehicle Software (IVS) Management System
Innovation Partnership Awards:
Bombardier Recreational Products with Robert Bosch GmbH - Vehicle Stability System (VSS) for a 3-Wheeled Vehicle
Ford with Clarion Corporation of America - Next Generation Navigation System
Ford with Dura Automotive Systems - Horizontal Sliding Rear window with Defrost
Ford with Meridian Lightweight Technologies - Single Piece Cast Magnesium Liftgate Inner Panel
2011
Product:
Delphi Automotive - Delphi Multec GDi Fuel Injector
Federal-Mogul Corporation - EcoTough Piston Coating for Gasoline Engines
Federal-Mogul Corporation - Low-Friction LKZ Oil Control Ring (Innovative Two-piece Oil Ring for Direct-Injection Gasoline Engines)
Henkel - Terophon High Damping Foam
Honeywell Turbo Technologies - Honeywell DualBoost Turbocharger for Medium Duty Diesel Engines
Janesville Acoustics - Molded Fiber IP Closeouts with Integrated Lighting and Ducts
Key Safety Systems - Inflatable Seat Belt System
Mahle GmbH - Electrical Waste Gate Actuator
Osram Opto Semiconductors GmbH - LED Headlamp
Robert Bosch GmbH - Bosch P2 Parallel Full Hybrid Electric Vehicle System
Schaeffler Technologies - Lightweight Balance Shaft with Roller Bearings
Manufacturing Process:
Takata Corporation - Vacuum Folding Technology
Innovative Partnership Awards:
Chrysler with Janesville Acoustics - Molded Fiber IP Closeouts with Integrated Lighting and Ducts
Ford with Dassault Systèmes - Powertrain Digital Integration and Automation (PDIA)
Ford with Key Safety Systems - Inflatable Seat Belt System
2012
Product:
BorgWarner Turbo Systems - Turbocharger for Internal Combustion Engines with Low-Pressure Exhaust Gas Recirculation
Delphi Automotive - Delphi L-Shape Crimp for 0.13 mm2 wire size
Hendrickson Auxiliary Axle Systems - Compliant Tie Rod (CTR) Assembly and Dampening System with PerfecTrak Technology
Honeywell Turbo Technologies - High Temperature, Ball Bearing (HTBB) VNT Turbo
Lear Corporation - Lear Solid State Smart Junction Box (S3JB)
Magna Mirrors - Infinity Mirror with touch screen technology
Methode Electronics - Innovative TouchSensor Controls for Ford's MyFord Touch User Interface System
Schaeffler Technologies - UniAir Fully Variable Valve Lift System
Valeo - VisioBlade System (high-efficiency adaptive windshield washer system)
Manufacturing Process:
Delphi Automotive - Delphi Thermal Multi Port Folded Tube Condenser
Federal-Mogul Corporation - Two-Dimensional Ultrasonic Testing for Raised Gallery Diesel Pistons (Manufacturing Process)
Nalco Company - APEX Program-Sustainable Technology for Paint Detackification
PPG Industries - B1 and B2 Compact Process Paint Technology
3M/Esys Automation - Robotic Production System with Wheel Weights for Precision Tire and Wheel Balancing
Innovation Partnership Awards:
Fiat Powertrain and Chrysler with Schaeffler Technologies - UniAir Fully Variable Valve Lift System
Ford with Dana - Active Warm-up Heat Exchanger with Integrated Thermal Bypass Valve
2013
Product
BorgWarner Turbo Systems - Regulated 3-turbocharger System (R3S)
Brose North America - Hands-free Liftgate Opener
Continental Interior Division, Body and Security - Tire Pressure Monitoring System (LocSync)
Continental Chassis & Safety Division, ADAS Business Unit - 24GHZ ISM Band Short Range Radar
Dana - Diamond Series Driveshafts
Delphi Automotive - F2E Distributed Pump Common Rail System
Federal-Mogul - Coating for Engine Bearings
GPM GmbH - Electro-Hydraulic Controlled Flow (ECF) Water Pump
Halla Visteon Climate Control Corporation - Metal Seal Fitting
PPG Industries - Andaro Tint Dispersion
Valeo - Air Intake Module with integrated Water Charge Air Cooler
Manufacturing Process and Capital Equipment
Federal-Mogul - Injection Molding of High Modulus Bonded Pistons used in High Pressure Transmissions
Schuler Hydroforming Division - Hydroforming and Global Die Standardization Process
Information Technology
Hughes Telematics - Automotive Software Remote Update Technology
Innovation Partnership Winners
BMW with BorgWarner Turbo Systems - Regulated 3-turbocharger System (R3S)
General Motors with Takata Corporation - Front Center Airbag
Mercedes-Benz with Hughes Telematics - Automotive Software Remote Update Technology
Toyota with Continental Chassis & Safety Division, ADAS Business Unit - 24GHZ ISM Band Short Range Radar
Volkswagen with Valeo - Air Intake Module
2014
Product
Autoliv Inc. - Vårgårda Sweden - "Green" Airbag Inflator
BASF Corp. - Wyandotte. Mich. - Mold in Color High Touch, High Gloss Black Interior Door Switch Bezels
BorgWarner Transmission Systems - Auburn Hills, Mich. - BorgWarner Stop/Start Accumulator Solenoid Valve (Eco-Launch™ Solenoid Valve)
Continental Automotive - Chassis and Safety Business Unit - Auburn Hills, Mich. - Pressure Sensor for Pedestrian Protection (PPS pSAT)
Delphi Automotive - Warren, Ohio - ErgoMate™ Mechanical Assist System
Dow Automotive Systems - Auburn Hills, Mich. - BETAMATE™ Epoxy Structural Adhesive for Durable Bonding of Untreated Aluminum
Federal-Mogul – Wiesbaden, Germany - High Performance Bearings Without Lead
HELLA KGaA Hueck & Co. - Lippstadt, Germany - LED Matrix Beam Head Lights
Robert Bosch LLC - Farmington Hills, Mich. - Spray Enhancements in Gasoline Direct Injection Enabled by Laser Drilling
Schaeffler Group - Wooster, Ohio - Torque Converter with Centrifugal Pendulum Absorber
Valeo - Driving Assistance Product Group - Bietigheim-Bissingen, Germany - Back-over Protection System
ZF Friedrichshafen - Saarbrucken, Germany - Car Powertrain Technology Division - ZF's 9-speed Automatic Transmission
Manufacturing Process and Capital Equipment
ArcelorMittal and Magna-Cosma International - Chicago - Laser Ablation Process
Henkel Corporation - Madison Heights, Mich. - BONDERITE® 2798™ Process for High Aluminum
TI Automotive - Auburn Hills, Mich. - Adaptable Plastic Fuel Tank Advanced Process Technology (TAPT) for all vehicle powertrains
Innovation Partnership Winners
Ford for partnership on the high-gloss black interior door switch bezels with BASF Corporation
General Motors for partnership on the Eco-Launch™ solenoid valve with BorgWarner Transmission Systems
Honda R&D Americas for partnership on the laser ablation process with ArcelorMittal and Magna-Cosma International
Paccar for partnership on the BETAMATE™ structural adhesive for untreated aluminum with Dow Automotive Systems
Tesla Motors for partnership on the Tegra® Visual Computing Module (VCM) with NVIDIA Corporation
Volvo Car Corporation for partnership on the pedestrian protection airbag with Autoliv Inc.
2015
BorgWarner - Limited-slip differential for front-wheel drive
Continental AG - Printed circuit board for transmission control units
Continental Automotive Systems Inc. - Multiapplication unified sensor element
Denso - Standardized HVAC unit
Federal-Mogul - DuroGlide piston ring coating
Federal-Mogul - MicroTorq seal for rotating shafts
FTE automotive - 2Polymer hydraulic gear shift actuator
GKN Driveline - Two-speed gearbox for electrified vehicles
Magna Closures - PureView seamless sliding window
Mahle - Evotec 2 lightweight piston
Nvidia - Tegra visual computing module
Osram Opto Semiconductors - Oslon black flat multichip family
Sika Automotive - Adhesive for mixed material bonding
Valeo Electrical Systems - Efficient alternator
2020
American Axle & Manufacturing, Detroit - Electric driveline
Continental Structural Plastics, Auburn Hills, Mich. (subsidiary of Teijin) - CarbonPro pickup box
Delphi Technologies, Kokomo, Ind. - DIFlex-integrated circuit
EJOT Fastening Systems, Wixom, Mich. - EJOWELD friction element welding
Gentex Corp., Zeeland, Mich. - Integrated toll module
Lear Corp., Southfield, Mich. - Xevo commerce and service platform
Magna Exteriors, Troy, Mich. - Composite space frame
Marelli, Auburn Hills, Mich. - h-Digi lighting module
Mobileye REM Division, Jerusalem - Road Experience Management
Schaeffler Technologies, Herzogenaurach, Germany - Compact coaxial transmission for e-axle
Stoneridge, Novi, Mich. - MirrorEye camera monitor system
Tenneco, Southfield, Mich. - IROX2 bearing coating
Valeo, Bietigheim-Bissingen, Germany - XtraVue trailer
See also
List of motor vehicle awards
International Engine of the Year
Progressive Insurance Automotive X Prize
RJC Car of the Year
Ward's 10 Best Engines
References
External links
PACE Award official site(1)
PACE Award official site(2)
PACE Awards at autonews.com
Descriptions of Innovations of PACE Finalists and Award Winners
Automotive accessories
Motor vehicle awards |
In a hierarchical telecommunications network, the backhaul portion of the network comprises the intermediate links between the core network, or backbone network, and the small subnetworks at the edge of the network.
The most common network type in which backhaul is implemented is a mobile network. The backhaul of a mobile network, also referred to as mobile backhaul, connects a cell site to the core network. The two main methods of mobile backhaul implementation are fiber-based backhaul and wireless point-to-point backhaul. Other methods, such as copper-based wireline, satellite communications and point-to-multipoint wireless technologies, are being phased out as capacity and latency requirements become higher in 4G and 5G networks.
In both the technical and commercial definitions, backhaul generally refers to the side of the network that communicates with the global Internet, paid for at wholesale commercial access rates to or at an Internet exchange point or other core network access location. Sometimes middle mile networks exist between the customer's own LAN and those exchanges. This can be a local WAN connection.
Cell phones communicating with a single cell tower constitute a local subnetwork; the connection between the cell tower and the rest of the world begins with a backhaul link to the core of the internet service provider's network (via a point of presence). A backhaul may include wired, fiber optic and wireless components. Wireless sections may include using microwave bands and mesh and edge network topologies that may use a high-capacity wireless channel to get packets to the microwave or fiber links.
Definition
Visualizing the entire hierarchical network as a human skeleton, the core network would represent the spine, the backhaul links would be the limbs, the edge networks would be the hands and feet, and the individual links within those edge networks would be the fingers and toes.
Other examples include:
Connecting wireless base stations to the corresponding base station controllers.
Connecting DSLAMs to the nearest ATM or Ethernet aggregation node.
Connecting a large company's site to a metro Ethernet network.
Connecting a submarine communications cable system landing point (which is usually in a remote location) with the main terrestrial telecommunications network of the country that the cable serves.
National broadband plans
A telephone company is very often the internet service provider providing backhaul, although for academic research and education networks, large commercial networks or municipal networks, it is increasingly common to connect to public broadband backhaul. See national broadband plans from around the world, many of which were motivated by the perceived need to break the monopoly of incumbent commercial providers. The US plan for instance, specifies that all community anchor institutions should be connected by gigabit fiber optics before the end of 2020.
Available backhaul technologies
The choice of backhaul technology must take account of such parameters as capacity, cost, reach, and the need for such resources as frequency spectrum, optical fiber, wiring, or rights of way.
Generally, backhaul solutions can largely be categorized into wired (leased lines or copper/fiber) or wireless (point-to-point or point-to-multipoint over high-capacity radio links). Wired solutions are usually very expensive and often impossible to deploy in remote areas, making wireless a more suitable or viable option. Multi-hop wireless architectures can overcome the hurdles of wired solutions to create efficient large coverage areas, and with growing demand in emerging markets, where cost is often a major factor in choosing a technology, a wireless backhaul solution can offer 'carrier-grade' services, which is not easily feasible with wired backhaul connectivity.
Backhaul technologies include:
Free-space optical (FSO)
Point-to-point microwave radio relay transmission (terrestrial or, in some cases, by satellite)
Point-to-multipoint microwave-access technologies, such as LMDS, Wi-Fi, WiMAX, etc., can also function for backhauling purposes
DSL variants, such as ADSL, VDSL and SHDSL
PDH and SDH/SONET interfaces, such as (fractional) E1/T1, E3, T3, STM-1/OC-3, etc.
Ethernet
VoIP telephony over dedicated and public IP networks
Backhaul capacity can also be leased from another network operator, in which case that other network operator generally selects the technology being used, though this can be limited to fewer technologies if the requirement is very specific such as short-term links for emergency/disaster relief or for public events, where cost and time would be major factors and would immediately rule out wired solutions, unless pre-existing infrastructure was readily accessible or available.
Wireless vs. wireline backhaul
Wireless backhaul is easy to deploy, cost-efficient, and can provide high-capacity connectivity, e.g., multiple gigabits per second and even tens of Gbps. Wireline fiber backhaul, on the other hand, can provide practically endless capacity, but requires investment in deploying fiber as well as in optical equipment.
This tradeoff is considered when planning. The type of backhaul for each site is determined by taking into consideration the capacity requirement (current and future), the deployment timeline, fiber availability and feasibility, and budget constraints.
WiFi mesh networks for wireless backhaul
As data rates increase, the range of wireless network coverage is reduced, raising investment costs for building infrastructure with access points to cover service areas. Mesh networks are unique enablers that can reduce this cost due to their flexible architecture.
With mesh networking, access points are connected wirelessly and exchange data frames with each other to forward to/from a gateway point.
Since a mesh requires no costly cable construction for its backhaul network, it reduces total investment cost. Mesh technology also makes it easy and flexible to extend the coverage of a service area.
For further cost reduction, a large-scale high-capacity mesh is desirable. For instance, Kyushu University's Mimo-Mesh Project, based in Fukuoka City, Fukuoka Prefecture, Japan, has developed and put into use new technology for building high capacity mesh infrastructure. A key component is called IPT, intermittent periodic transmit, a proprietary packet-forwarding scheme that is designed to reduce radio interference in the forwarding path of mesh networks. In 2010, hundreds of wireless LAN access points incorporating the technology were installed in the commercial shopping and entertainment complex, Canal City Hakata, resulting in the successful operation of one of the world's largest indoor wireless multi-hop backhauls. That network uses a wireless multi-hop relay of up to 11 access points while delivering high bandwidth to end users. Actual throughput is double that of standard mesh network systems using conventional packet forwarding. Latency, as in all multi-hop relays, suffers, but not to the degree that it compromises voice over IP communications.
Open solutions: using many connections as a backhaul
Many common wireless mesh network hotspot solutions are supported in open source router firmware including DD-WRT, OpenWRT and derivatives. The IEEE 802.21 standard specifies basic capabilities for such systems including 802.11u unknown user authentication and 802.11s ad hoc wireless mesh networking support. Effectively these allow arbitrary wired net connections to be teamed or ganged into what appears to be a single backhaul – a "virtual private cloud". Proprietary networks from Meraki follow similar principles. The use of the term backhaul to describe this type of connectivity may be controversial technically. They invert the business definition, as it is the customer who is providing the connectivity to the open Internet while the vendor is providing authentication and management services.
Very long range (including submarine) networks
On very large scale long range networks, including transcontinental, submarine telecommunications cables are used. Sometimes these are laid alongside HVDC cables on the same route. Several companies, including Prysmian, run both HVDC power cables and telecommunications cables as far as FTTx. This reflects the fact that telecommunications backhaul and long range high voltage electricity transmission have many technologies in common, and are almost identical in terms of route clearing, liability in outages, and other legal aspects.
See also
Access network
Free Space Optics (FSO)
Last mile
Middle mile
Optical fiber
Point-to-multipoint
Point-to-point
Return channel
Wireless LAN
References
Bibliography
Hilt, Attila (2022). Throughput Estimation of K-zone Gbps Radio Links Operating in the E-band, Informacije MIDEM, Journal of Microelectronics, Electronic Components and Materials, Vol. 52, No. 1, pp. 29–39, Slovenia, 2022. DOI: 10.33180/InfMIDEM2022.104.
Telecommunications infrastructure
Network architecture
Wireless networking |
Validation is the process of establishing documentary evidence demonstrating that a procedure, process, or activity carried out in testing and then in production maintains the desired level of compliance at all stages. In the pharmaceutical industry, it is very important that, in addition to final testing and compliance of products, it is also assured that the process will consistently produce the expected results. The desired results are established in terms of specifications for the outcome of the process. Qualification of systems and equipment is therefore a part of the process of validation. Validation is a requirement of food, drug and pharmaceutical regulating agencies such as the US FDA and their good manufacturing practice guidelines. Since a wide variety of procedures, processes, and activities need to be validated, the field of validation is divided into a number of subsections including the following:
Equipment validation
Facilities validation
HVAC system validation
Cleaning validation
Process Validation
Analytical method validation
Computer system validation
Similarly, the activity of qualifying systems and equipment is divided into a number of subsections including the following:
Design qualification (DQ)
Component qualification (CQ)
Installation qualification (IQ)
Operational qualification (OQ)
Performance qualification (PQ)
History
The concept of validation was first proposed by two Food and Drug Administration (FDA) officials, Ted Byers and Bud Loftus, in 1979 in the USA, to improve the quality of pharmaceuticals. It was proposed in direct response to several problems with the sterility of the large-volume parenteral market. The first validation activities were focused on the processes involved in making these products, but quickly spread to associated processes including environmental control, media fill, equipment sanitization and purified water production.
The concept of validation was first developed for equipment and processes, and derived from the engineering practices used in the delivery of large pieces of equipment that would be manufactured, tested, delivered and accepted according to a contract.
The use of validation spread to other areas of industry after several large-scale problems highlighted the potential risks in the design of products. The most notable is the Therac-25 incident. Here, the software for a large radiotherapy device was poorly designed and tested. In use, several interconnected problems led to several devices giving doses of radiation several thousands of times higher than intended, which resulted in the death of three patients and several more being permanently injured.
In 2005 an individual wrote a standard by which the transportation process for cold chain products could be validated. This standard was written for a biological manufacturing company and was later incorporated into the PDA's Technical Report No. 39, thus establishing the industry standard for cold chain validation. This was critical for the industry because of the sensitivity of drug substances, biologics and vaccines to various temperature conditions. The FDA has also been very focused on this final area of distribution and on the potential for a drug substance's quality to be impacted by extreme temperature exposure.
4.6. Accuracy:
The accuracy of an analytical procedure is the closeness of test results obtained by that procedure to the true value. The accuracy of an analytical procedure shall be established across its range.
4.7. Precision:
The precision of an analytical procedure expresses the closeness of agreement between a series of measurements obtained from multiple sampling of the same homogeneous sample under the prescribed conditions.
4.8. Method precision (Repeatability):
Method precision is carried out on different test preparations of a homogeneous sample within a short interval of time under the same experimental conditions.
4.9. Intermediate precision (Ruggedness):
Intermediate precision (ruggedness) expresses within-laboratory variations, i.e. different days, different analysts, different equipment, etc.
4.10. Range:
The range of an analytical procedure is the interval between the upper and lower concentrations of analyte in the sample for which it has been demonstrated that the analytical procedure has a suitable level of precision, accuracy and linearity. (A minimal numerical sketch of the accuracy and repeatability calculations follows below.)
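As referenced above, the accuracy and repeatability figures are typically computed from replicate assay data. The following minimal Python sketch calculates percent recovery and percent relative standard deviation; the function names, replicate values and nominal concentration are illustrative assumptions, not taken from any specific guideline.

```python
# Illustrative sketch only: common accuracy and repeatability statistics for
# analytical method validation. Names and data are assumptions.
from statistics import mean, stdev

def percent_recovery(measured, true_value):
    """Accuracy expressed as the mean result relative to the known (true) value."""
    return 100.0 * mean(measured) / true_value

def percent_rsd(measured):
    """Repeatability (method precision) expressed as % relative standard deviation."""
    return 100.0 * stdev(measured) / mean(measured)

# Six replicate assays of a sample with a nominal concentration of 10.0 mg/mL
replicates = [9.98, 10.02, 9.95, 10.05, 10.01, 9.97]
print(f"Accuracy:  {percent_recovery(replicates, 10.0):.1f}% recovery")
print(f"Precision: {percent_rsd(replicates):.2f}% RSD")
```

In practice the computed recovery and %RSD would be compared against the acceptance criteria defined in the validation protocol.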
Reasons for validation
The FDA, like other food and drug regulatory agencies around the globe, not only asks for a product that meets its specification, but also requires that the processes, procedures, intermediate stages of inspection, and testing adopted during manufacturing are designed such that, when followed, they consistently produce similar, reproducible results that meet the quality standard of the product being manufactured and comply with regulatory and security requirements. Such procedures are developed through the process of validation. This is done to maintain and assure a higher degree of quality of food and drug products.
"Process validation is defined as the collection and evaluation of
data, from the process design stage through commercial production, which establishes scientific
evidence that a process is capable of consistently delivering quality product. Process validation
involves a series of activities taking place over the lifecycle of the product and process.". A properly designed system will provide a high degree of assurance that every step, process, and change has been properly evaluated before its implementation. Testing a sample of a final product is not considered sufficient evidence that every product within a batch meets the required specification.
Validation Master Plan
The Validation Master Plan is a document that describes how and when the validation program will be executed in a facility. Even though it is not mandatory, it is the document that outlines the principles involved in the qualification of a facility, defines the areas and systems to be validated and provides a written program for achieving and maintaining a qualified facility with validated processes. It is the foundation for the validation program and should include process validation, facility and utility qualification and validation, equipment qualification, cleaning and computer validation. The regulations also set out an expectation that the different parts of the production process are well defined and controlled, such that the results of that production will not substantially change over time.
The validation process
The validation scope, boundaries and responsibilities for each process or group of similar processes or similar equipment must be documented and approved in a validation plan (VP). These documents, terms and references are used by the protocol authors to set the scope of their protocols. The plan must be based on a Validation Risk Assessment (VRA) to ensure that the scope of the validation being authorised is appropriate for the complexity and importance of the equipment or process under validation. Within the references given in the VP, the protocol authors must ensure that all aspects of the process or equipment under qualification that may affect the efficacy, quality and/or records of the product are properly qualified. Qualification includes the following steps:
Design qualification (DQ) – Demonstrates that the proposed design (or the existing design for an off-the-shelf item) will satisfy all the requirements that are defined and detailed in the User Requirements Specification (URS). Satisfactory execution of the DQ is a mandatory requirement before construction (or procurement) of the new design can be authorised.
Installation qualification (IQ) – Demonstrates that the process or equipment meets all specifications, is installed correctly, and all required components and documentation needed for continued operation are installed and in place.
Operational qualification (OQ) – Demonstrates that all facets of the process or equipment are operating correctly.
Performance qualification (PQ) – Demonstrates that the process or equipment performs as intended in a consistent manner over time.
Component qualification (CQ) – A relatively new term developed in 2005. It refers to the manufacturing of auxiliary components to ensure that they are manufactured to the correct design criteria. This could include packaging components such as folding cartons, shipping cases, labels or even phase change material. All of these components must undergo some type of random inspection to ensure that the third-party manufacturer's process consistently produces components that are fit for use in GMP manufacturing at the drug or biologic manufacturer.
There are instances when it is more expedient and efficient to transfer some tests or inspections from the IQ to the OQ, or from the OQ to the PQ. This is allowed for in the regulations, provided that a clear and approved justification is documented in the Validation Plan (VP).
This combined testing of the OQ and PQ phases is sanctioned by the European Commission Enterprise Directorate-General within Annex 15 to the EU Guide to Good Manufacturing Practice (2001, p. 6), which states that:
"Although PQ is described as a separate activity, it may in some cases be appropriate to perform it in conjunction with OQ."
Computer System Validation
This requirement has naturally expanded to encompass computer systems used both in the development and production of, and as a part of, pharmaceutical products, medical devices, food, blood establishments, tissue establishments, and clinical trials. In 1983 the FDA published a guide to the inspection of Computerized Systems in Pharmaceutical Processing, also known as the 'bluebook'. More recently, both the American FDA and the UK Medicines and Healthcare products Regulatory Agency have added sections to the regulations specifically for the use of computer systems. In the UK, computer validation is covered in Annex 11 of the EU GMP regulations (EMEA 2011). The FDA introduced 21 CFR Part 11 for rules on the use of electronic records and electronic signatures (FDA 1997).
The FDA regulation is harmonized with ISO 8402:1994, which treats "verification" and "validation" as separate and distinct terms. On the other hand, many software engineering journal articles and textbooks use the terms "verification" and "validation" interchangeably, or in some cases refer to software "verification, validation, and testing (VV&T)" as if it is a single concept, with no distinction among the three terms.
The General Principles of Software Validation (FDA 2002) defines verification as
"Software verification provides objective evidence that the design outputs of a particular phase of the software development life cycle meet all of the specified requirements for that phase."
It also defines Validation as
"Confirmation by examination and provision of objective evidence that software specifications conform to user needs and intended uses, and that the particular requirements implemented through software can be consistently fulfilled". The software validation guideline states: “The software development process should be sufficiently well planned, controlled, and documented to detect and correct unexpected results from software changes." Annex 11 states "The validation documentation and reports should cover the relevant steps of the life
cycle."
Weichel (2004) found that over twenty warning letters issued by the FDA to pharmaceutical companies between 1997 and 2001 specifically cited problems in computer system validation.
Probably the best-known industry guidance available is the GAMP Guide, now in its fifth edition and known as GAMP 5, published by ISPE (2008). This guidance gives practical advice on how to satisfy regulatory requirements.
Scope of Computer Validation
The definition of validation above discusses the production of evidence that a system will meet its specification. This definition does not refer to a computer application or a computer system but to a process. The main implication of this is that validation should cover all aspects of the process, including the application, any hardware that the application uses, any interfaces to other systems, the users, training and documentation, as well as the management of the system and of the validation itself after the system is put into use. The PIC/S guideline (PIC/S 2004) defines this as a 'computer related system'.
Much effort is expended within the industry upon validation activities, and several journals are dedicated to both the process and methodology around validation, and the science behind it.
Risk Based Approach To Computer Validation
In recent years, a risk-based approach has been adopted within the industry, in which the testing of computer systems (with an emphasis on finding problems) is wide-ranging and documented but not heavily evidenced (i.e. hundreds of screen prints are not gathered during testing). Annex 11 states: "Risk management should be applied throughout the lifecycle of the computerised system taking into account patient safety, data integrity and product quality. As part of a risk management system, decisions on the extent of validation and data integrity controls should be based on a justified and documented risk assessment of the computerised system."
The subsequent validation or verification of computer systems targets only the "GxP critical" requirements of computer systems. Evidence (e.g. screen prints) is gathered to document the validation exercise. In this way it is assured that systems are thoroughly tested, and that validation and documentation of the "GxP critical" aspects is performed in a risk-based manner, optimizing effort and ensuring that the computer system's fitness for purpose is demonstrated.
The overall risk posed by a computer system is now generally considered to be a function of system complexity, patient/product impact, and pedigree (configurable off-the-shelf, or custom-written for a certain purpose). A lower-risk system should merit a less in-depth specification/testing/validation approach (e.g. the documentation surrounding a spreadsheet containing a simple but GxP-critical calculation should not match that of a chromatography data system with 20 instruments).
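To make the idea of risk as a function of complexity, impact and pedigree concrete, the following toy Python sketch shows one way such a qualitative scoring could be expressed. The categories, weights and example scores are hypothetical illustrations, not values taken from GAMP 5 or any regulation.

```python
# Hypothetical illustration: a toy relative risk score combining system
# complexity, patient/product impact and pedigree. All values are invented.
PEDIGREE_WEIGHT = {"configurable-off-the-shelf": 1, "custom-written": 2}

def system_risk(complexity: int, impact: int, pedigree: str) -> int:
    """Higher scores suggest a deeper specification/testing/validation effort."""
    return complexity * impact * PEDIGREE_WEIGHT[pedigree]

# A simple GxP-critical spreadsheet vs. a large chromatography data system
print(system_risk(complexity=1, impact=3, pedigree="configurable-off-the-shelf"))  # 3
print(system_risk(complexity=3, impact=3, pedigree="configurable-off-the-shelf"))  # 9
```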
Determination of a "GxP critical" requirement for a computer system is subjective, and the definition needs to be tailored to the organisation involved. However, in general a "GxP" requirement may be considered to be a requirement which leads to the development/configuration of a computer function which has a direct impact on patient safety,
the pharmaceutical product being processed, or has been developed/configured to meet a regulatory requirement. In addition if a function has a direct impact on GxP data (security or integrity) it may be considered "GxP critical".
Product life cycle approach in validation
Validation efforts must account for the complete product life cycle, including the developmental procedures adapted for qualification of a drug product commencing with its research and development phase, the rationale for adopting a best-fit formula that represents the relationship between required outputs and specified inputs, and the procedure for manufacturing. Each step is required to be justified and monitored in order to provide a good-quality food or drug product. The FDA also emphasizes the product life cycle approach in its evaluation of manufacturer regulatory compliance.
See also
Good Automated Manufacturing Practice (GAMP)
Verification and Validation
Pharmaceutical Inspection Convention and Pharmaceutical Inspection Co-operation Scheme
Regulation of therapeutic goods
United States Pharmacopeia
References
Bibliography
Health Canada Validation Guidelines
Akers, J. (1993), 'Simplifying and improving Process Validation', Journal of Parenteral Science and Technology, vol. 47, no. 6, pp. 281–284.
ASTM E2537 Guide for Application of Continuous Quality Verification for Pharmaceutical and Biopharmaceutical Manufacturing
EMEA (1998), EUDRALEX Volume 4 – Medicinal Products for Human and Veterinary Use : Good Manufacturing Practice, European Medicines Agency, London
European Commission Enterprise Directorate-General (2001), Final Version of Annex 15 to the EU Guide to Good Manufacturing Practice, Qualification and Validation, Brussels. European Commission Enterprise Directorate-General.
US FDA: Guideline on general principles of Process Validation
Part 11: Electronic Records; Electronic Signatures, Code of Federal Regulations
Garston Smith, H. (2001), 'Considerations for Improving Software Validation', Journal of Validation Technology, vol. 7, no. 2, pp. 150–157.
IT Pharma Validation Europe: News and Updates on Computer System Validation and Infrastructure Qualification – e.g. EudraLex Volume 4 – Annex 11 computerised systems – revision January 2011
Lopez, Orlando (2002), “21 CFR Part 11 – A Complete Guide to International Compliance,” published by Sue Horwood Publishing Limited.
McDowall, R. D. (2005), 'Effective and practical risk management options for computerised system validation', The Quality Assurance Journal, vol. 9, no. 3, pp. 196–227.
Parker G, (2005) ‘Developing Appropriate Validation and Testing Strategies’ Presented for Scimcon Ltd at the Thermo Informatics World Conference. North America.
Powell-Evans, K. (1998), 'Streamlining Validation', Pharmaceutical Technology Europe, vol. 10, no. 12, pp. 48–52.
Segalstad, S.H. (2008), 'International IT Regulations and Compliance: Quality Standards in the Pharmaceutical and Regulated Industries', John Wiley & Sons, pp. 157–178.
Swartz, M. (2006) ‘Analytical Instrument Qualification’, Avanstar [online], available at: http://www.advanstar.com/test/pharmascience/pha-sci_supp-promos/phasci_reg_guidance/articles/Instrumentation1_Swartz_rv.pdf (Accessed 29 March 2009).
Validating Software used for the Pharmaceutical Industry. (2007). Retrieved July 6, 2009, from http://www.plainsite.net/validation/validation.htm
WHO Technical Report Series, No. 937, 2006. Annex 4. Appendix 5. 2006
Wingate, G.A.S. (2004), 'Computer Systems Validation: Quality Assurance, Risk Management, and Regulatory Compliance for the Pharmaceutical and Healthcare Industry', Interpharm Press.
Guidance for Industry. Process Validation: General Principles and Practices. U.S. Department of Health and Human Services Food and Drug Administration. January 2011.
Clinical research
Pharmaceutical industry
Quality
Clinical data management |
The Faculty of Science is one of six faculties at the University of Waterloo.
History
In the fall of 1959, the first students were enrolled in the Faculty of Science. As of 2015, there are 5,021 full-time undergraduates and 547 full-time graduate students.
In 2004/05, Science attracted almost $42.5 million in research funding in areas such as aquatic ecology, microbiology, solid state chemistry, environmental biology and groundwater contamination clean-up. In 2013/14, the faculty brought in $65 million in research funds, accounting for 38% of the University's total research income.
In October 2002, the Institute for Quantum Computing was established with the assistance of Mike Lazaridis, as was the Perimeter Institute for Theoretical Physics.
The current Dean of the faculty is Dr. Bob Lemieux, who began his appointment July 1, 2015. He is the 9th Dean of Science at the University of Waterloo.
Others who have held the title of Dean of Science include:
Dr. Terry McMahon, who was appointed in 2007.
Departments
There are currently four departments in the Faculty of Science. They are the Departments of Biology, Chemistry, Earth & Environmental Sciences, and Physics & Astronomy. The Faculty of Science also runs the School of Optometry and the School of Pharmacy. The School of Optometry is Canada's only English-language school of optometry, renowned for its outreach programs and vision research. The internationally renowned Perimeter Institute for Theoretical Physics is associated with the Department of Physics, and the Institute for Quantum Computing is also run through the Faculty of Science.
Student life
Students in the Faculty of Science are represented by the Science Society (SciSoc) which hosts social events, represents student interests to the university, and operates the Science C&D student coffee shop in the Biology 1 building.
Answerable to SciSoc are the departmental clubs, which host social events for students in a particular program. These clubs include the Biochem Student Association, Biology Undergraduate Student Society, Chemistry Club, Materials & Nanosciences Society, Physics Undergraduate Society, Science and Business Students' Association, and WATROX. Due to the wide range of departments in the faculty, the clubs are based out of the many different buildings that are part of the Faculty of Science.
Additionally, all students pay into the Waterloo Science Endowment Fund (WatSEF), which provides funding for updating lab equipment and ensuring students have access to the latest technologies.
The mascot for the Faculty of Science was Arriba the Amoeba. In 2023, it was changed to a friendly, intelligent dinosaur named Cobalt.
References
Faculty of Science
1959 establishments in Ontario |
Micromechanics (or, more precisely, micromechanics of materials) is the analysis of composite or heterogeneous materials at the level of the individual constituents that make up these materials.
Aims of micromechanics of materials
Heterogeneous materials, such as composites, solid foams, polycrystals, or bone, consist of clearly distinguishable constituents (or phases) that show different mechanical and physical material properties. While the constituents can often be modeled as having isotropic behaviour, the microstructural characteristics (shape, orientation, varying volume fraction, ...) of heterogeneous materials often lead to anisotropic behaviour.
Anisotropic material models are available for linear elasticity. In the nonlinear regime, the modeling is often restricted to orthotropic material models which do not capture the physics for all heterogeneous materials. An important goal of micromechanics is predicting the anisotropic response of the heterogeneous material on the basis of the geometries and properties of the individual phases, a task known as homogenization.
Micromechanics allows predicting multi-axial responses that are often difficult to measure experimentally. A typical example is the out-of-plane properties for unidirectional composites.
The main advantage of micromechanics is the ability to perform virtual testing in order to reduce the cost of an experimental campaign. Indeed, an experimental campaign on a heterogeneous material is often expensive and involves a large number of permutations: constituent material combinations; fiber and particle volume fractions; fiber and particle arrangements; and processing histories. Once the constituent properties are known, all these permutations can be simulated through virtual testing using micromechanics.
There are several ways to obtain the material properties of each constituent: by identifying the behaviour from molecular dynamics simulation results; by identifying the behaviour through an experimental campaign on each constituent; or by reverse engineering the properties through a reduced experimental campaign on the heterogeneous material. The latter option is typically used since some constituents are difficult to test, there are always some uncertainties about the real microstructure, and it allows the weaknesses of the micromechanics approach to be absorbed into the constituent material properties. The obtained material models need to be validated through comparison with a different set of experimental data than the one used for the reverse engineering.
Generality on micromechanics
A key point of micromechanics of materials is the localization, which aims at evaluating the local (stress and strain) fields in the phases for given macroscopic load states, phase properties, and phase geometries. Such knowledge is especially important in understanding and describing material damage and failure.
Because most heterogeneous materials show a statistical rather than a deterministic arrangement of the constituents, the methods of micromechanics are typically based on the concept of the representative volume element (RVE). An RVE is understood to be a sub-volume of an inhomogeneous medium that is of sufficient size for providing all geometrical information necessary for obtaining an appropriate homogenized behavior.
Most methods in micromechanics of materials are based on continuum mechanics rather than on atomistic approaches such as nanomechanics or molecular dynamics. In addition to the mechanical responses of inhomogeneous materials, their thermal conduction behavior and related problems can be studied with analytical and numerical continuum methods. All these approaches may be subsumed under the name of "continuum micromechanics".
Analytical methods of continuum micromechanics
Voigt (1887) - Strains constant in composite, rule of mixtures for stiffness components.
Reuss (1929) - Stresses constant in composite, rule of mixtures for compliance components. (A minimal numerical sketch of the Voigt and Reuss rules appears after this list.)
Strength of Materials (SOM) - Longitudinally: strains constant in composite, stresses volume-additive. Transversely: stresses constant in composite, strains volume-additive.
Vanishing Fiber Diameter (VFD) - Combination of average stress and strain assumptions that can be visualized as each fiber having a vanishing diameter yet finite volume.
Composite Cylinder Assemblage (CCA) - Composite composed of cylindrical fibers surrounded by cylindrical matrix layer, cylindrical elasticity solution. Analogous method for macroscopically isotropic inhomogeneous materials: Composite Sphere Assemblage (CSA)
Hashin-Shtrikman Bounds - Provide bounds on the elastic moduli and tensors of transversely isotropic composites (reinforced, e.g., by aligned continuous fibers) and isotropic composites (reinforced, e.g., by randomly positioned particles).
Self-Consistent Schemes - Effective medium approximations based on Eshelby's elasticity solution for an inhomogeneity embedded in an infinite medium. Uses the material properties of the composite for the infinite medium.
Mori-Tanaka Method - Effective field approximation based on Eshelby's elasticity solution for inhomogeneity in infinite medium. As is typical for mean field micromechanics models, fourth-order concentration tensors relate the average stress or average strain tensors in inhomogeneities and matrix to the average macroscopic stress or strain tensor, respectively; inhomogeneity "feels" effective matrix fields, accounting for phase interaction effects in a collective, approximate way.
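As referenced in the list above, the Voigt and Reuss rules of mixtures can be evaluated in a few lines of code. The sketch below is a minimal Python illustration; the fiber and matrix moduli and the volume fraction are assumed example values, not data from any particular study.

```python
# Minimal sketch of the Voigt (iso-strain) and Reuss (iso-stress) rules of
# mixtures for a two-phase composite. Input values are illustrative assumptions.
def voigt_modulus(e_fiber, e_matrix, vf):
    """Upper-bound (iso-strain) estimate of the effective modulus."""
    return vf * e_fiber + (1.0 - vf) * e_matrix

def reuss_modulus(e_fiber, e_matrix, vf):
    """Lower-bound (iso-stress) estimate of the effective modulus."""
    return 1.0 / (vf / e_fiber + (1.0 - vf) / e_matrix)

e_fiber, e_matrix, vf = 230.0, 3.5, 0.6  # GPa, GPa, fiber volume fraction
print(f"Voigt estimate: {voigt_modulus(e_fiber, e_matrix, vf):.1f} GPa")   # ~139.4 GPa
print(f"Reuss estimate: {reuss_modulus(e_fiber, e_matrix, vf):.2f} GPa")   # ~8.55 GPa
```

The large gap between the two estimates illustrates why the Voigt value is usually associated with the longitudinal response and the Reuss value with the transverse response of unidirectional composites.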
Numerical approaches to continuum micromechanics
Methods based on Finite Element Analysis (FEA)
Most such micromechanical methods use periodic homogenization, which approximates composites by periodic phase arrangements. A single repeating volume element is studied, appropriate boundary conditions being applied to extract the composite's macroscopic properties or responses. The Method of Macroscopic Degrees of Freedom can be used with commercial FE codes, whereas analysis based on asymptotic homogenization typically requires special-purpose codes.
The Variational Asymptotic Method for Unit Cell Homogenization (VAMUCH) and its development, Mechanics of Structural Genome (see below), are recent Finite Element based approaches for periodic homogenization.
In addition to studying periodic microstructures, embedding models and analysis using macro-homogeneous or mixed uniform boundary conditions can be carried out on the basis of FE models. Due to its high flexibility and efficiency, FEA at present is the most widely used numerical tool in continuum micromechanics, allowing, e.g., the handling of viscoelastic, elastoplastic and damage behavior.
Mechanics of Structure Genome (MSG)
A unified theory called mechanics of structure genome (MSG) has been introduced to treat structural modeling of anisotropic heterogeneous structures as special applications of micromechanics. Using MSG, it is possible to directly compute structural properties of a beam, plate, shell or 3D solid in terms of its microstructural details.
Generalized Method of Cells (GMC)
Explicitly considers fiber and matrix subcells from periodic repeating unit cell. Assumes 1st-order displacement field in subcells and imposes traction and displacement continuity. It was developed into the High-Fidelity GMC (HFGMC), which uses quadratic approximation for the displacement fields in the subcells.
Fast Fourier Transforms (FFT)
A further group of periodic homogenization models make use of Fast Fourier Transforms (FFT), e.g., for solving an equivalent to the Lippmann–Schwinger equation. FFT-based methods at present appear to provide the numerically most efficient approach to periodic homogenization of elastic materials.
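One common strain-based form of this fixed-point scheme, written here as a sketch under standard notation assumptions (following the usual Moulinec–Suquet formulation rather than any specific implementation), is

\[
\boldsymbol{\varepsilon}^{(k+1)}(\mathbf{x}) \;=\; \mathbf{E} \;-\; \Gamma^{0} \ast \left[ \left( \mathbb{C}(\mathbf{x}) - \mathbb{C}^{0} \right) : \boldsymbol{\varepsilon}^{(k)} \right](\mathbf{x}),
\]

where \(\mathbf{E}\) is the prescribed macroscopic strain, \(\mathbb{C}(\mathbf{x})\) the local stiffness, \(\mathbb{C}^{0}\) a homogeneous reference stiffness, and \(\Gamma^{0}\) the associated periodic Green operator; the convolution is evaluated as a pointwise product in Fourier space, which is where the FFT enters.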
Volume Elements
Ideally, the volume elements used in numerical approaches to continuum micromechanics should be sufficiently big to fully describe the statistics of the phase arrangement of the material considered, i.e., they should be Representative Volume Elements (RVEs).
In practice, smaller volume elements must typically be used due to limitations in available computational power. Such volume elements are often referred to as Statistical Volume Elements (SVEs). Ensemble averaging over a number of SVEs may be used for improving the approximations to the macroscopic responses.
See also
Micromechanics of Failure
Eshelby's inclusion
Representative elementary volume
Composite material
Metamaterial
Negative index metamaterials
John Eshelby
Rodney Hill
Zvi Hashin
References
External links
Micromechanics of Composites (Wikiversity learning project)
Further reading
Composite materials |
Fluorosilicate glass (FSG) is a glass material composed primarily of fluorine, silicon and oxygen. It has a number of uses in industry and manufacturing, especially in semiconductor fabrication where it forms an insulating dielectric. The related fluorosilicate glass-ceramics have good mechanical and chemical properties.
Semiconductor fabrication
FSG has a small relative dielectric constant (low-κ dielectric) and is used between copper metal interconnect layers during the silicon integrated circuit fabrication process. It is widely used by semiconductor fabrication plants at geometries under 0.25 microns (μm). FSG is effectively a fluorine-containing silicon dioxide (κ = 3.5, while the κ of undoped silicon dioxide is 3.9). FSG is used by IBM. Intel started using Cu metal layers and FSG on its 1.2 GHz Pentium processor, built on a 130 nm complementary metal–oxide–semiconductor (CMOS) process. Taiwan Semiconductor Manufacturing Company (TSMC) combined FSG and copper in the Altera APEX.
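A rough worked estimate of the benefit (assuming, purely for illustration, a fixed interconnect geometry so that capacitance scales linearly with the dielectric constant):

\[
\frac{C_{\mathrm{FSG}}}{C_{\mathrm{SiO_2}}} \approx \frac{\kappa_{\mathrm{FSG}}}{\kappa_{\mathrm{SiO_2}}} = \frac{3.5}{3.9} \approx 0.90,
\]

i.e. roughly a 10% reduction in interline capacitance, and hence in the capacitive contribution to RC interconnect delay.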
Fluorosilicate glass-ceramics
Fluorosilicate glass-ceramics are crystalline or semi-crystalline solids formed by careful cooling of molten fluorosilicate glass. They have good mechanical properties.
Potassium fluororichterite-based materials are composed of tiny interlocked rod-shaped amphibole crystals; they have good resistance to chemicals and can be used in microwave ovens. Richterite glass-ceramics are used for high-performance tableware.
Fluorosilicate glass-ceramics with a sheet structure, derived from mica, are strong and machinable. They find a number of uses, for example in high vacuum, as dielectrics, and as precision ceramic components. A number of mica and mica-fluoroapatite glass-ceramics have been studied as biomaterials.
See also
Fluoride glass
Glass
Silicate
References
Silicates
Glass compositions
Integrated circuits
Semiconductor fabrication materials
Biomaterials |
Nanochemistry is an emerging sub-discipline of the chemical and material sciences that deals with the development of new methods for creating nanoscale materials. The term "nanochemistry" was first used by Ozin in 1992 as 'the uses of chemical synthesis to reproducibly afford nanomaterials from the atom "up", contrary to the nanoengineering and nanophysics approach that operates from the bulk "down"'. Nanochemistry focuses on solid-state chemistry that emphasizes synthesis of building blocks that are dependent on size, surface, shape, and defect properties, rather than the actual production of matter. Atomic and molecular properties mainly deal with the degrees of freedom of atoms in the periodic table. However, nanochemistry introduces additional degrees of freedom that control a material's behaviour, for example through transformation into solution. Nanoscale objects exhibit novel material properties, largely as a consequence of their finite small size. Several chemical modifications of nanometer-scale structures confirm these size-dependent effects.
Nanochemistry is used in chemical, materials and physical science as well as engineering, biological, and medical applications. Silica, gold, polydimethylsiloxane, cadmium selenide, iron oxide, and carbon are materials that show its transformative power. Nanochemistry can turn iron oxide (rust) into a highly effective MRI contrast agent that can help detect cancers and destroy them at their initial stages. Silica (glass) can be used to bend or stop light in its tracks. Developing countries also use silicone to make circuits for the fluids used in pathogen detection. Nano-construct synthesis leads to the self-assembly of the building blocks into functional structures that may be useful for electronic, photonic, medical, or bioanalytical problems. Nanochemical methods can be used to create carbon nanomaterials such as carbon nanotubes, graphene, and fullerenes, which have gained attention in recent years due to their remarkable mechanical and electrical properties.
Applications
Medicine
Magnetic resonance imaging (MRI) detection
Over the past two decades, the biomedical use of iron oxide nanoparticles has increased dramatically, largely due to their ability to support non-invasive imaging, targeting and triggering of drug release, and cancer therapy. Stem or immune cells can be labeled with iron oxide nanoparticles so that they can be detected by magnetic resonance imaging (MRI). However, the concentration of iron oxide nanoparticles needs to be high enough to enable significant detection by MRI. Owing to the limited understanding of the physicochemical behaviour of iron oxide nanoparticles in biological systems, more research is needed to ensure that the nanoparticles can be controlled under the relevant conditions for medical use without posing harm to humans.
Drug delivery
Emerging methods of drug delivery involving nanotechnological methods can be useful by improving bodily response, specific targeting, and non-toxic metabolism. Many nanotechnological methods and materials can be functionalized for drug delivery. Ideal materials employ a controlled-activation nanomaterial to carry a drug cargo into the body. Mesoporous silica nanoparticles (MSN) have increased in research popularity due to their large surface area and flexibility for various individual modifications while maintaining high-resolution performance under imaging techniques. Activation methods vary greatly across nanoscale drug delivery molecules, but the most commonly used activation method uses specific wavelengths of light to release the cargo. Nanovalve-controlled cargo release uses low-intensity light and plasmonic heating to release the cargo in a variation of MSN containing gold molecules. The two-photon activated photo-transducer (2-NPT) uses near-infrared wavelengths of light to induce the breaking of a disulfide bond to release the cargo. Recently, nanodiamonds have demonstrated potential in drug delivery due to their non-toxicity, spontaneous absorption through the skin, and ability to cross the blood–brain barrier.
The unique structure of carbon nanotubes has also given rise to many innovative new medical methods. As more medicine is engineered at the nano level to revolutionize the ways in which diseases are detected and treated, carbon nanotubes have become a strong candidate for new detection methods and therapeutic strategies. Specifically, carbon nanotubes can be functionalized with sophisticated biomolecules and allow their detection through changes in the carbon nanotube fluorescence spectra. Carbon nanotubes can also be designed to match the size of small drugs and be endocytosed by a target cell, hence acting as a delivery agent.
Tissue engineering
Cells are very sensitive to nanotopographical features, so optimization of surfaces in tissue engineering has pushed research towards implantation. Under appropriate conditions, a carefully crafted three-dimensional scaffold is used to direct seeded cells toward artificial organ growth. The 3-D scaffold incorporates various nanoscale factors that control the environment for optimal and appropriate functionality. The scaffold is an in vitro analog of the in vivo extracellular matrix, allowing for successful artificial organ growth by providing the necessary, complex biological factors in vitro.
Wound healing
For abrasions and wounds, nanochemistry has demonstrated applications in improving the healing process. Electrospinning is a polymerization method used biologically in tissue engineering, but it can also be used for wound dressing and drug delivery. This produces nanofibers that encourage cell proliferation and have antibacterial properties in a controlled environment. These properties appear macroscopically; however, nanoscale versions may show improved efficiency due to nanotopographical features. Targeted interfaces between nanofibers and wounds have a higher surface area for interaction and are advantageous in vivo. There is evidence that certain silver nanoparticles are useful for inhibiting some viruses and bacteria.
Cosmetics
Materials in certain cosmetics such as sunscreen, moisturizer, and deodorant may benefit from the use of nanochemistry. Manufacturers are working to increase the effectiveness of various cosmetics by facilitating oil nanoemulsions. These particles have extended the possibilities for managing the wrinkled, dehydrated, and inelastic skin associated with aging. In sunscreen, titanium dioxide and zinc oxide nanoparticles prove to be effective UV filters but can also penetrate the skin. These chemicals protect the skin against harmful UV light by absorbing or reflecting it, preventing the skin from sustaining full damage through photoexcitation of electrons in the nanoparticle.
Electrics
Nanowire compositions
Scientists have devised a large number of nanowire compositions with controlled length, diameter, doping, and surface structure by using vapor and solution phase strategies. These oriented single crystals are being used in semiconductor nanowire devices such as diodes, transistors, logic circuits, lasers, and sensors. Since nanowires have a one-dimensional structure, meaning a large surface-to-volume ratio, the diffusion resistance decreases. In addition, their efficiency in electron transport, which is due to the quantum confinement effect, makes their electrical properties sensitive to minor perturbations. Therefore, the use of these nanowires in nanosensor elements increases the sensitivity of the electrode response. As mentioned above, the one-dimensionality and chemical flexibility of semiconductor nanowires make them applicable to nanolasers. Peidong Yang and his co-workers have done research on room-temperature ultraviolet nanowire nanolasers. They concluded that short-wavelength nanolasers have applications in different fields such as optical computing, information storage, and microanalysis.
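As a simple geometric illustration of the surface-to-volume argument (an idealized cylinder, ignoring the end faces), a nanowire of radius r and length L has

\[
\frac{A}{V} = \frac{2\pi r L}{\pi r^{2} L} = \frac{2}{r},
\]

so halving the wire radius doubles the surface-to-volume ratio, which is why shrinking the diameter makes the electrical response increasingly surface-dominated.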
Catalysis
Nanoenzymes (or nanozymes)
The small size of nanoenzymes (or nanozymes) (1–100 nm) has provided them with unique optical, magnetic, electronic, and catalytic properties. Moreover, the control of the surface functionality of nanoparticles and the predictable nanostructure of these small-sized enzymes have allowed them to create a complex structure on their surface that can meet the needs of specific applications.
Research areas
Nanodiamonds
Synthesis
Fluorescent nanoparticles are highly sought after. They have broad applications, and their use in macroscopic arrays makes them efficient for applications in plasmonics, photonics, and quantum communications. While there are many methods of assembling nanoparticle arrays, especially of gold nanoparticles, they tend to be weakly bonded to their substrate, so they cannot be used for wet chemistry processing steps or lithography. Nanodiamonds allow for greater variability of access and can subsequently be used to couple plasmonic waveguides to realize quantum plasmonic circuitry.
Nanodiamonds can be synthesized by employing nanoscale carbonaceous seeds created in a single step using a mask-free electron-beam-induced deposition technique to add amine groups, which assembles the nanodiamonds into an array. The presence of dangling bonds at the nanodiamond surface allows them to be functionalized with a variety of ligands. The surfaces of these nanodiamonds are terminated with carboxylic acid groups, enabling their attachment to amine-terminated surfaces through carbodiimide coupling chemistry. This process affords a high yield and relies on covalent bonding between the amine and carboxyl functional groups on the amorphous carbon and nanodiamond surfaces in the presence of EDC. Thus, unlike gold nanoparticles, they can withstand processing and treatment for many device applications.
Fluorescent (nitrogen vacancy)
Fluorescent properties in nanodiamonds arise from the presence of nitrogen-vacancy (NV) centers, i.e. nitrogen atoms adjacent to a lattice vacancy. Fluorescent nanodiamond (FND) was invented in 2005 and has since been used in various fields of study. The invention received a US patent in 2008, and a subsequent patent in 2012. NV centers can be created by irradiating nanodiamonds with high-energy particles (electrons, protons, helium ions), followed by vacuum annealing at 600–800 °C. Irradiation forms vacancies in the diamond structure, and vacuum annealing causes these vacancies to migrate until they are trapped by nitrogen atoms within the nanodiamond. Two types of NV centers are formed, neutral (NV0) and negatively charged (NV–), and these have different emission spectra. The NV– center is of particular interest because it has an S = 1 spin ground state that can be spin-polarized by optical pumping and manipulated using electron paramagnetic resonance. Fluorescent nanodiamonds combine the advantages of semiconductor quantum dots (small size, high photostability, bright multicolor fluorescence) with biocompatibility, non-toxicity, and rich surface chemistry, which means that they have the potential to revolutionize in vivo imaging applications.
Drug-delivery and biological compatibility
Nanodiamonds can self-assemble, and a wide range of small molecules, proteins, antibodies, therapeutics, and nucleic acids can bind to their surface, allowing for drug delivery, protein mimicking, and surgical implants. Other potential biomedical applications are the use of nanodiamonds as supports for solid-phase peptide synthesis, as sorbents for detoxification and separation, and the use of fluorescent nanodiamonds for biomedical imaging. Nanodiamonds offer biocompatibility, the ability to carry a broad range of therapeutics, dispersibility in water, scalability, and the potential for targeted therapy, all properties needed for a drug delivery platform. The small size, stable core, rich surface chemistry, ability to self-assemble, and low cytotoxicity of nanodiamonds have led to suggestions that they could be used to mimic globular proteins. Nanodiamonds have mostly been studied as potential injectable therapeutic agents for generalized drug delivery, but it has also been shown that films of parylene-nanodiamond composites can be used for localized sustained release of drugs over periods ranging from two days to one month.
Nanolithography
Nanolithography is the technique of patterning materials and building devices at the nanoscale. Nanolithography is often used together with thin-film deposition, self-assembly, and self-organization techniques for various nanofabrication purposes. Many practical applications make use of nanolithography, including the fabrication of semiconductor chips for computers. There are many types of nanolithography, including:
Photolithography
Electron-beam lithography
X-ray lithography
Extreme ultraviolet lithography
Light coupling nanolithography
Scanning probe microscope
Nanoimprint lithography
Dip-Pen nanolithography
Soft lithography
Each nanolithography technique has varying resolution, time consumption, and cost. There are three basic methods used in nanolithography. The first involves using a resist material that acts as a "mask", known as a photoresist, to cover and protect the areas of the surface that are intended to be smooth. The uncovered portions can then be etched away, with the protective material acting as a stencil. The second method involves directly carving the desired pattern. Etching may involve using a beam of quantum particles, such as electrons or light, or chemical methods such as oxidation or self-assembled monolayers. The third method places the desired pattern directly on the surface, producing a final product that is ultimately a few nanometers thicker than the original surface. The surface to be fabricated must be visualized with a nano-resolution microscope, such as a scanning probe microscope or an atomic force microscope. Both types of microscope can also be engaged in processing the final product.
Photoresists
Photoresists are light-sensitive materials, composed of a polymer, a sensitizer, and a solvent. Each element has a particular function. The polymer changes its structure when it is exposed to radiation. The solvent allows the photoresist to be spun and to form thin layers over the wafer surface. Finally, the sensitizer, or inhibitor, controls the photochemical reaction in the polymer phase.
Photoresists can be classified as positive or negative. In positive photoresists, the photochemical reaction that occurs during exposure weakens the polymer, making it more soluble in the developer, so the positive pattern is achieved. The mask therefore contains an exact copy of the pattern which is to remain on the wafer, acting as a stencil for subsequent processing. In the case of negative photoresists, exposure to light causes polymerization of the photoresist, so the negative resist remains on the surface of the substrate where it was exposed, and the developer solution removes only the unexposed areas. Masks used for negative photoresists contain the inverse or photographic “negative” of the pattern to be transferred. Both negative and positive photoresists have their own advantages. The advantages of negative photoresists are good adhesion to silicon, lower cost, and a shorter processing time. The advantages of positive photoresists are better resolution and thermal stability.
Nanometer-size clusters
Monodisperse, nanometer-size clusters (also known as nanoclusters) are synthetically grown crystals whose size and structure influence their properties through the effects of quantum confinement. One method of growing these crystals is through inverse micellar cages in non-aqueous solvents. Research conducted on the optical properties of MoS2 nanoclusters compared them to their bulk crystal counterparts and analyzed their absorbance spectra. The analysis reveals that the size dependence of the absorbance spectrum of bulk crystals is continuous, whereas the absorbance spectrum of nanoclusters takes on discrete energy levels. This indicates a shift from solid-like to molecular-like behavior, which occurs at a reported cluster size of 4.5–3.0 nm.
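The appearance of discrete levels can be rationalized with the textbook particle-in-a-box picture of quantum confinement (a generic illustration, not the specific model used in the study cited above): for a carrier of effective mass m confined to a length L,

\[
E_{n} = \frac{n^{2} h^{2}}{8 m L^{2}}, \qquad n = 1, 2, 3, \ldots
\]

so as the cluster size L shrinks, the levels spread apart and the spectrum becomes molecule-like rather than continuous.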
Interest in the magnetic properties of nanoclusters exists due to their potential use in magnetic recording, magnetic fluids, permanent magnets, and catalysis. Analysis of Fe clusters shows behavior consistent with ferromagnetic or superparamagnetic behavior due to strong magnetic interactions within clusters.
Dielectric properties of nanoclusters are also a subject of interest due to their possible applications in catalysis, photocatalysis, micro capacitors, microelectronics, and nonlinear optics.
Nanothermodynamics
The idea of nanothermodynamics was initially proposed by T. L. Hill in 1960, theorizing about the differences between differential and integral forms of properties due to small sizes. The size, shape, and environment of a nanoparticle affect the power law, or proportionality, between nanoscale and macroscopic properties. Transitioning from macro to nano changes the proportionality from exponential to a power law. Therefore, nanothermodynamics and the theory of statistical mechanics are related in concept.
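A schematic illustration of such power-law size corrections (an assumed generic form, not Hill's full formalism) is the surface term for a roughly spherical cluster of N atoms:

\[
X(N) \approx x_{\mathrm{bulk}}\, N + a\, N^{2/3},
\]

where the first term is the bulk (extensive) contribution and the second accounts for the fraction of atoms at the surface, which scales as N^{2/3}; for small N this correction is no longer negligible, which is the regime nanothermodynamics addresses.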
Notable researchers
There are several researchers in nanochemistry that have been credited with the development of the field. Geoffrey A. Ozin, from the University of Toronto, is known as one of the "founding fathers of Nanochemistry" due to his four and a half decades of research on this subject. This research includes the study of matrix isolation laser Raman spectroscopy, naked metal clusters chemistry and photochemistry, nanoporous materials, hybrid nanomaterials, mesoscopic materials, and ultrathin inorganic nanowires.
Another chemist who is also viewed as one of nanochemistry's pioneers is Charles M. Lieber at Harvard University. He is known for his contributions to the development of nanoscale technologies, particularly in the fields of biology and medicine. These technologies include nanowires, a new class of quasi-one-dimensional materials that have demonstrated superior electrical, optical, mechanical, and thermal properties and can potentially be used as biological sensors. Research under Lieber has delved into the use of nanowires for mapping brain activity.
Shimon Weiss, a professor at the University of California, Los Angeles, is known for his research of fluorescent semiconductor nanocrystals, a subclass of quantum dots, for biological labeling.
Paul Alivisatos, from the University of California, Berkeley, is also notable for his research on the fabrication and use of nanocrystals. This research has the potential to develop insight into the mechanisms of small-scale particles such as the process of nucleation, cation exchange, and branching. A notable application of these crystals is the development of quantum dots.
Peidong Yang, another researcher from the University of California, Berkeley, is also notable for his contributions to the development of 1-dimensional nanostructures. The Yang group has active research projects in the areas of nanowire photonics, nanowire-based solar cells, nanowires for solar to fuel conversion, nanowire thermoelectrics, nanowire-cell interface, nanocrystal catalysis, nanotube nanofluidics, and plasmonics.
References
Selected books
J.W. Steed, D.R. Turner, K. Wallace Core Concepts in Supramolecular Chemistry and Nanochemistry (Wiley, 2007) 315p.
Brechignac C., Houdy P., Lahmani M. (Eds.) Nanomaterials and Nanochemistry (Springer, 2007) 748p.
H. Watarai, N. Teramae, T. Sawada Interfacial Nanochemistry: Molecular Science and Engineering at Liquid-Liquid Interfaces (Nanostructure Science and Technology) 2005. 321p.
Ozin G., Arsenault A.C., Cademartiri L. Nanochemistry: A Chemical Approach to Nanomaterials 2nd Eds. (Royal Society of Chemistry, 2008) 820p.
Nanotechnology
Chemistry
Nanomaterials
Nanoparticles |
Regis McKenna (born 1939?) is an American marketer who worked in Silicon Valley and introduced techniques that are today commonplace among advertisers. He and his firm helped market the first microprocessor (Intel Corporation), Apple's first personal computer (Apple Computer), the first recombinant DNA genetically engineered product (Genentech, Inc.), and the first retail computer store (The Byte Shop).
Among the entrepreneurial start-ups with which he worked during their formative years are America Online, Apple, Compaq, Electronic Arts, Genentech, Intel, Linear Technology, Lotus, Microsoft, National Semiconductor, Silicon Graphics, and 3COM. He has been described as the man who put Silicon Valley on the map. He has been called “Silicon Valley's preeminent public relations man,” a “guru,” a “czar,” a “philosopher king,” a “legendary marketer,” Apple's “marketing guru,” “the fellow that put Intel and Apple on the map,” and “a pioneer in the semiconductor business in terms of the marketing side of things.” Newsweek called him “the Silicon Valley Svengali” and Business Week has called him “one of high-tech's ace trendspotters” and a “marketing wizard in Silicon Valley.”
A 1985 Los Angeles Times article remarked, "McKenna is best known for taking the story of Apple Computer's founding in a Los Altos garage by two young entrepreneurs and weaving it into part of our national folklore."
Education and early career
Born and raised in Pittsburgh, Pennsylvania, McKenna attended Saint Vincent College and was a liberal arts graduate of Duquesne University. He later said that he "had a dispute with the university over credits" and that Duquesne "eventually sent me my degree." However "I went to four different universities to get that degree." He ended up receiving an honorary Ph.D. from Duquesne in 1990.
He first went to Silicon Valley in 1962, where he worked in the marketing department of General Microelectronics, a spinoff of Fairchild that started developing MOS technology. He then worked as marketing services manager for National Semiconductor in 1967, a firm that proliferated. He spent "half of my time on the road... in Europe and other places around the world... helping set up operations in Scotland." He said that he learned a great deal about marketing there simply by doing it.
McKenna wrote a 2001 article entitled "Silicon Valley Isn't a Place as Much as It Is an Attitude." Describing the Valley as a "near-mythical garden" that "became the place where anyone could pursue and achieve his or her heart's delight," he said that its early "inventors and entrepreneurs...didn't set out to achieve wealth or even happiness" but "sought the freedom to exercise their talents free of economic, cultural, or tenure constraints." The result was the unplanned evolution of a "new, egalitarian culture."
Regis McKenna, Inc.
In late 1969, McKenna left National and began to seek work as a marketing freelancer, helping Silicon Valley startups "with everything from research to training." He put together a "marketing plan," including a list of "the top ten companies" he wanted to work with, and ended up having them all as clients. The list included Intel, Spectra-Physics, Teledyne, and Systron-Donner.
McKenna founded Regis McKenna, Inc. in 1970. He went on to work for Intel and then Apple. He later recalled that "Apple wasn't happy with the name Apple after they started growing. They looked at IBM and said, 'We don't look like IBM. We're not, you know, dignified.'" McKenna made a two-hour presentation to Apple's employees in which he said: “That's exactly what you do want. You want to be different from IBM. You don't want to be the same. You don't want to emulate them. You want to do all of the things that distinguish you from them.”
He began working with Apple in 1976. That year, Steve Jobs and Steve Wozniak "approached him and asked for help in launching what was to be the world's first personal computer." He agreed because he "liked Apple's vision." A 2012 article explains, "When a young Steve Jobs needed a marketing expert, he called Intel to ask who made their sharp-looking ads and was told 'Regis McKenna.'"
In addition to marketing consultancy, McKenna also owned an advertising agency and a public relations company. "So not only did we write their first business plan, we also designed the Apple logo and put together their advertising campaigns."
McKenna has said that the biggest mistake of his career was turning down an offer of 20% of Apple stock in lieu of payment for his services. "I was looking at my cash flow. And that's one of the reasons why I turned down Apple's offer." His letter turning down the offer is on display at Apple's headquarters.
McKenna sold his advertising business to Jay Chiat in 1981 and his PR business in 1995.
McKenna came out of retirement to work on the iPhone 4 antenna crisis. "Steve called me from Hawaii and told me he had a big problem," McKenna later explained. "He asked if I would meet him at Apple the next day...I thought it was a media-cycle issue and that they should address it with the data they had and be confident about the outcome rather than be apologetic. That's what Steve did. The issue vanished within probably ten days."
McKenna felt that Walter Isaacson's book about Jobs was "very negative...I never once had any of those confrontations that people talk about, and I knew him since he was 22 years old."
Aside from Intel and Apple, the startups that the firm assisted in their formative years included America Online, Electronic Arts, Genentech, National Semiconductor, Silicon Graphics, and 3Com Corporation. Over the years, the firm evolved from a high-tech outsourced marketing business focusing on startups to a broad-based marketing strategy firm serving international clients in many industries. McKenna sold his interest in the firm in 2000.
Andrea Cunningham, the firm's group account manager for Apple, told the Los Angeles Times in 1985, "This agency knows more about Apple Computer than Barbara Krause (Apple's in-house public relations chief)."
McKenna pioneered many of the theories and practices of technology marketing that have become integrated into the marketing mainstream. Some of these include:
The process of diffusing technology across various classes of users ranging from innovators to early adopters to late adopters and laggards and the corresponding evolution of the “whole product.”
The development of industry infrastructure modeling whereby a relatively small number of “influencers” establish and sustain standards. The focus is on “intangibles” as the benefits of technology products.
The development of “other” as a major, growing segment of market share with the result of “choice becoming a higher value than brand.”
The development of the concept of "Real Time," whereby technology compresses time (from want or need to zero), creating "the never-satisfied consumer".
McKenna wrote in 1990, "Technology is transforming choice, and choice is transforming the marketplace. As a result, we are witnessing the emergence of a new marketing paradigm.” In a 2002 article, he declared that “branding (as currently practiced) is dead.”
A 2012 article entitled “How Regis McKenna Defined Real-Time Marketing” explained that real-time marketing “is a way of thinking and philosophy that requires businesses to meet the demands of an always-on digital world” and that “includes the convergence of search, social, and real-time content production and distribution, with an expanded definition of publishing that makes social conversation and interaction as important as actual writing and digital media development.” McKenna, it was explained, “laid the groundwork for real-time marketing back in 1995” in a paper for the Harvard Business Review, and fleshed out the concept in the 1997 book Real Time. Among his influential observations:
“Companies must keep the dialogue flowing and also maintain conversations with suppliers, distributors, and others in the marketplace.”
“[Real-time marketing must replace] the broadcast mentality.”
"[Real-time marketing must focus] on real-time customer satisfaction, providing the support, help, guidance, and information necessary to win customers’ loyalty.”
"Real-time marketing requires...being willing to learn how information technology is changing both customer behavior in marketing and to think in new ways about marketing within the organization.”
"The customer still does all the work, hunting and pecking for information. But a real-time marketer would bring the information to the customer."
Kleiner Perkins Caufield and Byers
In 1986 McKenna became a partner of the venture capital firm Kleiner Perkins Caufield & Byers.
Memberships
McKenna is an investor and board member of several Silicon Valley firms, including BroadWare Technologies, Golden Gate Software, and Nanosys. He is on the advisory board of Xloom. He is also on the International Advisory Board of Toyota Motor Company, and the Advisory Board of the Economic Strategies Institute. He is a founding member and chairman of the board of Advisors for the Santa Clara University Center for Science, Technology, and Society and a trustee of the University. He and his wife, Dianne, are founders and trustees of the Children's Fund of Silicon Valley. He is also on the advisory boards of the Technology, Innovation & New Economy Project of the Progressive Policy Institute and the Tech Museum.
Retirement
Since his retirement from active consulting in 2000, McKenna has lectured about many topics, such as “the social and market effects of technological change.”
Books
Total Access, Giving Customers What They Want in an Anytime, Anywhere World, Harvard Business School Press, 2002. The book “addresses the future of marketing as computers and the network do most of the work, from data gathering to customer care and response.”
Real Time, Preparing for the Age of the Never Satisfied Customer, Harvard Business School Press, 1997. This book “analyzes the effects of technology on the marketplace and describes how high-speed electronics enables ready access to information, products, and services and, in the process, generates increased expectations for immediate satisfaction.” The New Yorker wrote, “McKenna never ceases to challenge the conventional wisdom. The notion of eliminating hierarchy and long-term planning, and creating realtime management that focuses on delivery, results, and customer needs is a key revelation for companies; large and small.” The Wall Street Journal called the book a “magnificent tour through an exciting if uncertain future in which everybody's connected to everybody.”
Relationship Marketing, Addison-Wesley, 1991. Publishers Weekly described it as a “spirited recap of the 1980s” that “traces the rise and occasional fall of many start-up companies in the turbulent, proliferating computer and software industry, including the competition between Apple Inc. and IBM.”
Who's Afraid of Big Blue, Addison-Wesley, 1989. This book “chronicles the strategies for success against industry giants and offers advice to those who want to challenge IBM, and to those who face similar competition in other industries.” Library Journal called it “an interesting three-pronged look at the behemoth of the computer industry.”
The Regis Touch, New Marketing Strategies for Uncertain Times, Addison-Wesley, 1985. In this book, McKenna “shares for the first time his proven strategies for creating new markets, positioning products, gaining recognition, serving customers, and keeping pace with fast-changing environments.”
Articles
McKenna has written many articles for Forbes, Ink, Fortune, and the Harvard Business Review. He has also written poetry.
Honors and awards
McKenna won the Joseph Wharton Award in 1986. He received honorary Ph.D.s from Duquesne University (1990), Saint Vincent College (1991), Santa Clara University (2002), and Stevens College of Engineering (2002).
In 1991, he won the International Computers & Communications World Leaders Award.
The San Jose Mercury News included McKenna on its Millennium 100 list, a roster of the 100 people who made Silicon Valley what it is today.
References
Year of birth missing (living people)
Living people
American computer businesspeople
Silicon Valley people
Marketing people
American marketing people
Kleiner Perkins people |
Alfred E. Mann (1925 – February 25, 2016), also known as Al Mann, was an American physicist, inventor, entrepreneur, and philanthropist.
Early life and education
Mann was born to a Jewish family and raised in Portland, Oregon. His father was a grocer who emigrated from England; his mother was a pianist and singer who immigrated from Poland.
Business
In 1956, Mann founded Spectrolab, the first of his aerospace companies. While at Spectrolab, an electro-optical systems company, he also founded Heliotek, a semiconductor company that became a major supplier of solar cells for spacecraft. Among other accomplishments during his tenure, Mann's companies provided the electric power for over 100 spacecraft and constructed one of the lunar experiments. Although he sold both companies to Textron in 1960 (merged into one, Spectrolab is now a subsidiary of Boeing Satellite Systems), he continued to manage them until 1972. He then left those companies to found Pacesetter Systems, which focused on cardiac pacemakers; he sold that company in 1985 but continued to manage it until 1992. It is now a part of St. Jude Medical. Mann then went on to establish MiniMed (insulin pumps and continuous glucose devices, now owned by Medtronic) and Advanced Bionics (neuroprosthetics, now focused on cochlear implants and owned by Sonova, while its pain management and other neural stimulation products are now owned by Boston Scientific).
At the time of his death, Mann was involved in several companies, including:
founder and chairman of Second Sight Medical Products, a biomedical company which produces the Argus retinal prosthesis;
founder and chairman of Bioness, a company devoted to applying electrostimulation for functional neural defects such as paralysis;
founder and chairman of the Board of Quallion, LLC, a company producing high reliability batteries for medical products and for the military and aerospace industries;
Chairman of Stellar Microelectronics, an electronic circuit manufacturer for the medical, military and aerospace industries;
Mann also chaired the Southern California Biomedical Council (SCBC or SoCalBio), the trade association that has represented and promoted the growth of biotech, medtech and digital health industries in the Greater Los Angeles region.
In June 2014, the US Food and Drug Administration approved MannKind Corporation's application for a unique inhalable insulin (Afrezza) for the treatment of diabetes. MannKind subsequently licensed the device to the French pharmaceutical company Sanofi for US$925 million. Mann was chairman of the board of MannKind Corporation, a biomedical company, where he also served as chief executive officer until January 12, 2015. In November 2015, Hakan Edstrom stepped down as CEO and president, remaining with the company until July 2017 to provide other services, and Mann again stepped in as interim CEO.
Mann also served on the board of directors of, and was the largest investor in, Eclipse Aviation.
Mann was one of the main investors in the development of Mulholland Estates, a gated community in Los Angeles.
Philanthropy
Mann established the Alfred E. Mann Institutes for Biomedical Engineering at the University of Southern California (USC), known as AMI/USC ($162 million); at Purdue University, known as AMI/Purdue ($100 million); and at the Technion, known as AMIT ($104 million). These institutes are business incubators for medical device development in preparation for commercialization. The institutes are essentially fully funded, and three other universities were in late-stage discussions as of 2006. AMI was founded in 1998 when Alfred Mann made his first $100 million gift to USC, a major private research university in Los Angeles; the total gifted endowment for AMI/USC has since grown to $162 million.
The Alfred Mann Foundation for Biomedical Engineering is charged with selecting, establishing and overseeing the institutes, similar to AMI at USC and at other research universities.
Mann was a Life Trustee of the University of Southern California.
Founded in 1985, the Alfred Mann Foundation has several core aims. It aims to work with scientists and research organizations to find bionic solutions for people suffering from debilitating medical impairments.
As an alumnus of UCLA, he tried to make a substantial monetary gift to his alma mater to fund a bioengineering institute. However, the donation failed over Mann's desire to retain control over patents and patent revenues generated by the institute. The $162 million gift eventually went to USC, a private institution that agreed to his terms.
On March 16, 2007, Purdue University received a $100 million endowment from the Mann Foundation for Biomedical Engineering. The endowment was the largest research gift ever at the university and created the Alfred Mann Institute at Purdue. However, AMI Purdue was closed and the unspent portion of the $100 million endowment from the Mann Foundation was rescinded in early 2012.
Personal life
Mann was married four times and had seven children. His first wife was Beverly Mann; they divorced in 1957. His second wife was Linda Mann; they divorced in 1973. His third wife was Susan Kendall; they divorced in 1997. In 2004, he married his fourth wife, Claude Mann.
Mann died on February 25, 2016, of natural causes in Las Vegas, Nevada at the age of 90.
Recognition
2000, Golden Plate Award of the American Academy of Achievement
2003, Business Journal's Los Angeles Business Person of the Year
2011, MDEA Lifetime Achievement Award
References
External links
Alfred E. Mann Foundation
1925 births
2016 deaths
American billionaires
American manufacturing businesspeople
Philanthropists from Oregon
21st-century American philanthropists
Members of the United States National Academy of Engineering
American people of English-Jewish descent
American people of Polish-Jewish descent
American company founders
20th-century American businesspeople
21st-century American businesspeople
University of California, Los Angeles alumni
Businesspeople from Portland, Oregon
20th-century American philanthropists
21st-century American Jews
Inventors from Oregon |
Shadows on the Sun is the second studio album by American rapper Brother Ali. It was released on May 27, 2003 on Rhymesayers Entertainment. It was produced entirely by Ant of Atmosphere. The album received almost unanimous critical acclaim.
Track listing
All tracks by Brother Ali and Ant, except tracks 7 and 14, which are by Brother Ali, Ant, and Slug.
Personnel
Credits for Shadows on the Sun adapted from Allmusic.
Ant – Scratching, Engineer, Executive Producer, Mixing, Beats
Brother Ali – Vocals, Engineer, Executive Producer, Mixing
Sean Daley – Executive Producer
Emily Lazar – Mastering
Joe Mabbott – Engineer, Mixing
Dan Monick – Photography
Brent "Abu Siddiq" Sayers – Executive Producer
Siddiq – Design, Layout Design
References
External links
Shadows on the Sun at Discogs
Rhymesayers Entertainment
2003 albums
Brother Ali albums
Rhymesayers Entertainment albums
Albums produced by Ant (producer) |
In 2005, the McMaster School of Computational Engineering and Science was the first program launched in Canada dedicated to developing expertise in the third wave of scientific research involving simulation, modeling and optimization. The new school brings together 50 faculty from engineering, science, business and health science to collaboratively conduct research and advance education.
The three major research thrusts of the school are:
Computational Physical Sciences
Computational Optimization, Design and Control
Computational Biosciences
Activities cover a broad spectrum of research areas representing current interdisciplinary activities, future areas that involve existing, but as yet untapped potential, as well as areas to be developed by future appointments.
Programs
The school offers programs at the master's (M.Eng., M.Sc., M.A.Sc., with coursework, project, or thesis options), Ph.D., and postdoctoral levels. These are interdisciplinary in nature, including core courses at the master's level and module-based topic courses at all levels. Programs are currently being reviewed by the Ontario Council on Graduate Studies and approval is pending; approval is anticipated in early 2006, with programs beginning in September.
Facilities and Departmental Affiliations
McMaster University is a partner in the recently formed Shared Hierarchical Academic Research Computing Network (SHARCNET), currently among the leading high-performance computing centers in the world.
High-Performance Research Computing Support Group (HPRCS) – HPRCS supports HPC installations as a central part of the university infrastructure and provides various levels of computing support to the research and high-performance computing communities at McMaster.
Affiliated Laboratories
Advanced Optimization Group and Laboratory
Advanced Signal Processing Laboratory
Applied and Industrial Mathematical Sciences Laboratory
Bioinformatics Research Laboratory
Communications Research Laboratory
McMaster Advanced Control Consortium
Neural Computation Laboratory
Simulation Optimization Systems Research Laboratory
Notes and references
Program Structure
McMaster University
Computer science departments in Canada
Educational institutions established in 2005
2005 establishments in Canada |
In engineering, a process is a series of interrelated tasks that, together, transform inputs into a given output. These tasks may be carried out by people, nature or machines using various resources; an engineering process must be considered in the context of the agents carrying out the tasks and the resource attributes involved. Systems engineering normative documents and those related to maturity models are typically based on processes; examples include the systems engineering processes of EIA-632 and the processes involved in the Capability Maturity Model Integration (CMMI) institutionalization and improvement approach. Constraints imposed on the tasks and the resources required to implement them are essential for executing the tasks mentioned.
Semiconductor industry
Semiconductor process engineers face the unique challenge of transforming raw materials into high-tech devices. Common semiconductor devices include Integrated Circuits (ICs), Light-Emitting Diodes (LEDs), solar cells, and solid-state lasers. To produce these and other semiconductor devices, semiconductor process engineers rely heavily on interconnected physical and chemical processes.
A prominent example of these combined processes is the use of ultra-violet photolithography which is then followed by wet etching, the process of creating an IC pattern that is transferred onto an organic coating and etched onto the underlying semiconductor chip. Other examples include the ion implantation of dopant species to tailor the electrical properties of a semiconductor chip and the electrochemical deposition of metallic interconnects (e.g. electroplating). Process Engineers are generally involved in the development, scaling, and quality control of new semiconductor processes from lab bench to manufacturing floor.
Chemical engineering
A chemical process is a series of unit operations used to produce a material in large quantities.
In the chemical industry, chemical engineers will use the following to define or illustrate a process:
Process flow diagram (PFD)
Piping and instrumentation diagram (P&ID)
Simplified process description
Detailed process description
Project management
Process simulation
CPRET
The Association Française d'Ingénierie Système has developed a process definition dedicated to Systems engineering (SE), but open to all domains.
The CPRET representation integrates the process Mission and Environment in order to offer an external standpoint. Several models may correspond to a single definition depending on the language used (UML or another language).
Note: process definition and modeling are interdependent notions, but distinct from one another.
Process
A process is a set of transformations of input elements into products:
respecting constraints,
requiring resources,
meeting a defined mission corresponding to a specific purpose adapted to a given environment.
Environment
Natural conditions and external factors impacting a process.
Mission
Purpose of the process tailored to a given environment.
This definition requires a process description to include the Constraints, Products, Resources, Input Elements and Transformations. This leads to the CPRET acronym to be used as name and mnemonic for this definition.
Constraints
Imposed conditions, rules or regulations.
Products
Everything generated by the transformations. Products can be desired or undesired (e.g., the software system and its bugs, the defined products and waste).
Resources
Human resources, energy, time and other means required to carry out the transformations.
Elements as inputs
Elements submitted to transformations for producing the products.
Transformations
Operations organized according to a logic aimed at optimizing the attainment of specific products from the input elements, with the allocated resources and in compliance with the imposed constraints.
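These definitions lend themselves to a simple structured representation. The following is a minimal illustrative sketch in Python; the class and field names are assumptions chosen for readability, not part of any CPRET or AFIS standard:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Process:
    """A CPRET-style description of a process (illustrative only)."""
    mission: str            # purpose of the process, tailored to its environment
    environment: str        # external conditions and factors impacting the process
    constraints: List[str]  # imposed conditions, rules or regulations
    resources: List[str]    # people, energy, time and other means
    inputs: List[str]       # elements submitted to the transformations
    transformations: List[Callable[[List[str]], List[str]]] = field(default_factory=list)

    def run(self) -> List[str]:
        """Apply the transformations in order, turning input elements into products."""
        products = self.inputs
        for transform in self.transformations:
            products = transform(products)
        return products

# Toy instance loosely based on the engineering process examples below.
engineering = Process(
    mission="Supply better quality products",
    environment="A given level of maturity, technicality and equipment",
    constraints=["Imposed technologies", "A reference model (ISO, CMMI, etc.)"],
    resources=["Development teams"],
    inputs=["Specifications"],
    transformations=[lambda elems: [f"Architecture defined from: {e}" for e in elems]],
)
print(engineering.run())  # -> ['Architecture defined from: Specifications']
```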
CPRET through examples
The purpose of the following examples is to illustrate the definitions with concrete cases. These examples come from the Engineering field but also from other fields to show that the CPRET definition of processes is not limited to the System Engineering context.
Examples of processes
An engineering process (EIA-632, ISO/IEC 15288, etc.)
A concert
A polling campaign
A certification
Examples of environment
Various levels of maturity, technicality, equipment
An audience
A political system
Practices
Examples of mission
Supply better quality products
Satisfy the public, critics
Have candidates elected
Obtain the desired approval
Examples of constraints
Imposed technologies
Correct acoustics
Speaking times
A reference model (ISO, CMMI, etc.)
Examples of products
A mobile telephone network
A show
Vote results
A quality label
Examples of resources
Development teams
An orchestra and its instruments
An organization
An assessment team
Examples of elements as inputs
Specifications
Scores
Candidates
A company and its practices
Examples of transformations
Define an architecture
Play the scores
Make people vote for a candidate
Audit the organization
Conclusions
The CPRET formalized definition systematically addresses the input Elements, Transformations, and Products, but also the other essential components of a Process, namely the Constraints and Resources. Among the resources, time is specific: it passes inexorably and irreversibly, raising problems of synchronization and sequencing.
This definition states that environment is an external factor which cannot be avoided: as a matter of fact, a process is always interdependent with other phenomena including other processes.
References
Bibliography
Process engineering |
Shimoga district, officially known as Shivamogga district, is a district in the Karnataka state of India. A major part of Shimoga district lies in the Malnad region or the Sahyadri. Shimoga city is its administrative centre. Jog Falls view point is a major tourist attraction. As of 2011 Shimoga district has a population of 1,752,753. There are seven taluks: Soraba, Sagara, Hosanagar, Shimoga, Shikaripura, Thirthahalli, and Bhadravathi. Channagiri and Honnali were part of Shimoga district until 1997 when they became part of the newly formed Davanagere district.
Origin of name
Shivamogga was previously known as Mandli. There are legends about how the name Shivamogga has evolved. According to one, the name Shivamogga is related to the Hindu God Shiva. Shiva-Mukha (Face of Shiva), Shivana-Moogu (Nose of Shiva) or Shivana-Mogge (Flowers to be offered to Shiva) can be the origins of the name "Shivamogga". Another legend indicates that the name Shimoga is derived from the word Sihi-Moge which means sweet pot. According to this legend, Shivamogga once had the ashram of the sage Durvasa. He used to boil sweet herbs in an earthen pot. Some cowherds found this pot and, after tasting the sweet beverage, named this place Sihi-Moge.
History
During Treta Yuga, Lord Rama killed Maricha, who was in the disguise of a deer at Mrugavadhe near Thirthahalli. The Shimoga region formed a part of the Mauryan empire during the 3rd century. The district came into the control of Satavahanas. The Satakarni inscription has been found in the Shikaripur taluk. After the fall of the Shatavahana empire around 200 CE, the area came under the control of the Kadambas of Banavasi around 345 CE. The Kadambas were the earliest kingdom to give administrative status to the Kannada language. Later the Kadambas became feudatories of the Badami Chalukyas around 540 CE.
In the 8th century Rashtrakutas ruled this district. The Kalyani Chalukyas overthrew the Rashtrakutas, and the district came into their rule. Balligavi was a prominent city during their rule. In the 12th century, with the weakening of the Kalyani Chalukyas, the Hoysalas annexed this area. After the fall of the Hoysalas, the entire region came under the Vijayanagar Empire. When the Vijayanagar empire was defeated in 1565 CE in the battle of Tallikota, the Keladi Nayakas who were originally feudatory of the Vijayanagar empire took control, declared sovereignty, and ruled as an independent kingdom for about two centuries. In 1763 Haider Ali captured the capital of Keladi Nayakas and as a result the district came into the rule of the Kingdom of Mysore and remained a part of it until India acquired independence from the British.
Geography
Shimoga district is a part of the Malnad region of Karnataka and is also known as the 'Gateway to Malnad' or 'Malenaada Hebbagilu' in Kannada. The district is landlocked and bounded by Haveri, Davanagere, Chikmagalur, Udupi and Uttara Kannada districts. The district ranks 9th in terms of the total area among the districts of Karnataka. It is spread over an area of 8465 km2.
Shimoga lies between the latitudes 13°27' and 14°39' N and between the longitudes 74°38' and 76°04' E at a mean altitude of 640 metres above sea level. The peak Kodachadri hill at an altitude of 1343 metres above sea level is the highest point in this district. Rivers Kali, Gangavati, Sharavati and Tadadi originate in this district. The two major rivers that flow through this district are Tunga and Bhadra which meet at Koodli near Shimoga city to gain the name of Tungabhadra, which later joins River Krishna.
Climate
As the district lies in the tropical region, rainy season occurs from June to October. In the years 1901–1970, Shimoga received an average annual rainfall of 1813.9 mm with an average of 86 days in the year being rainy days. The average annual temperature of Shimoga district is around 26 °C. The average temperature has increased substantially over the years. In some regions of the district, the day temperature can reach 40 °C during summer. This has led to water crisis and other problems.
Geology
The major soil forms found in the Shimoga district are red gravelly clay soil; red clay soil; lateritic gravelly clay soil; lateritic clay soil; medium deep black soil; non-saline and saline alluvio-colluvial soil; brown forest soil.
The major minerals found in the district are limestone; white quartz; kaolin; kyanite; manganese.
The plain land of the district is suitable for agriculture.
Economy
Foundry, agriculture and animal husbandry are the major contributors to the economy of Shimoga district. The crops cultivated in this district are paddy, arecanut, cotton, maize, oil seeds, cashewnut, pepper, chili, ginger and ragi. Karnataka is the largest producer of arecanut in India, the majority of which is cultivated in the Shimoga district. Farmers have cultivated crops like vanilla and jatropha that have yielded high monetary benefits. Spices like clove, pepper, cinnamon and cardamom are grown along with areca plants. This multi-cropping can help in maximum utilisation of land space and improve soil fertility. As spices have high commercial value, they provide additional income to farmers.
Industries
Iron, agriculture, textiles and engineering are the major industries in Shimoga district. Foundry activity has a long history there and Pearlite Liners (P) Ltd., one of the oldest industries of Karnataka (earlier known as Bharath Foundry), is the largest private-sector employer in the district. , there were about 9800 industrial units in Shimoga District (small, medium and large), with more than 41,000 employees.
Major investments are made in food, beverages, engineering, and mechanical goods. Other rural industries in this district are carpentry, blacksmith, leather, pottery, beekeeping, stone cutting, handlooms, agarbathi, and sandalwood carving.
The Karnataka government has created industrial regions to encourage industrialisation of the district: KIADB Nidige Industrial Area in Bhadravathi taluk; Machinahaali Industrial Area; Mandli-Kallur Industrial Area in Shimoga taluk; Shimoga Industrial Estate in Shimoga; Kallahalli Industrial Estate in Shimoga; KIADB Devakathikoppa Industrial Area; and KSSIDC Siddlipura Industrial Area. Major industries in Shimoga district are VISL and MPM.
Administrative divisions
Shimoga district is divided into seven taluks: Soraba, Bhadravathi, Thirthahalli, Sagara, Shikaripura, Shimoga and Hosanagara.
The district administration is headed by the deputy commissioner who has the additional role of a district magistrate. Assistant commissioners, tahsildars, shirastedars, revenue inspectors and village accountants help the deputy commissioner in the administration of the district. The headquarters is Shimoga city.
The Shimoga Lok Sabha constituency comprises the entire Shimoga district and also covers parts of the Nalluru and Ubrani hoblis of Channagiri taluk of Davanagere district. As of 2005 it had 1,286,181 voters: Scheduled Castes and Scheduled Tribes account for 2.2 lakh; Lingayats account for two lakh; Deevaru (Idiga) account for 1.8 lakh; Madivala account for 1.2 lakh; Muslims account for 1.6 lakh; Brahmins and Vokkaligas account for 1.25 lakh each. Seven members are elected to the Legislative Assembly of the state of Karnataka. The assembly constituencies in Shimoga district are:
Soraba
Sagara
Shimoga
Shimoga Rural
Shikaripura
Bhadravathi
Thirthahalli
Demographics
According to the 2011 census Shimoga district has a population of 1,752,753, which is roughly equal to the population of the nation of Gambia or the US state of Nebraska. The district ranks 275th in India out of a total of 640 districts. The district has a population density of . Its population growth rate over the decade 2001–2011 was 6.88%. Shimoga has a sex ratio of 995 females per 1000 males and a literacy rate of 80.5%. 35.59% of the population lives in urban areas. Scheduled Castes and Scheduled Tribes make up 17.58% and 3.73% of the population respectively.
Shimoga taluk has the highest population and Hosanagara taluk the lowest. The district has a sex ratio of 977 females to 1000 males; Shimoga taluk, with 991 females to 1000 males, has the lowest sex ratio.
At the time of the 2011 census, 70.20% of the population spoke Kannada, 12.71% Urdu, 4.17% Tamil, 4.07% Telugu, 2.95% Lambadi, 2.10% Marathi and 1.47% Konkani as their first language.
Culture
Heritage and architecture
Ballegavi, also known as 'Dakshina Kedara', was the capital of the Banavasi rulers during the 12th century CE. There are many temples in Ballegavi, some constructed as per Late Chalukyan architecture: Kedareshvara temple, Tripurantakeshvara temple, and Prabhudeva temple. They are known for architecture and sculpture. Shivappa Naik palace is located in Shimoga on the banks of river Tunga; it was constructed by Shivappa Nayaka of Keladi. The Lakshminarsimha temple in Bhadravathi was built as per Hoysala architecture. Keladi and Ikkeri were the capital cities during the time of the Keladi Nayakas. There are three temples in Keladi: Rameshvara temple, Veerbhadreshvara temple, and Parvati temple. The Aghoreshvara temple is in Ikkeri. The Sacred Heart church, constructed in the 1990s, is the second-largest church in Asia.
Poetry and literature
Shimoga district has produced several Kannada writers and poets:
Kuvempu was born in the village Kuppalli in Thirthahalli Taluk.
G. S. Shivarudrappa born in Shikaripur.
U.R. Ananthamurthy was born in Melige village in Thirthahalli Taluk.
P. Lankesh born in Konagavalli.
K. V. Subbanna from Sagara
M. K. Indira
Na D'Souza from Sagara
H. M. Nayak from Thirthahalli
Poornachandra Tejaswi, the son of Kuvempu.
In December 2006, the 73rd Kannada Sahitya Sammelana took place in Shimoga. K.S.Nissar Ahmed was the president of the event. This was the third Kannada Sahiya Sammelana held at Shimoga: The first one was held in 1946 (president: Da.Ra.Bendre) and second one in 1976 (president: S.V.Ranganna).
Ninasam
Nilakanteshwara Natya Seva Sangha is located in a village called Heggodu in Sagara. It was established by K. V. Subbanna in 1958. Ninasam is a drama institute. The headquarters is in Heggodu. It has a library, rehearsal hall, guesthouse and theatre. Shivarama Karantha Rangamandira is an auditorium for Ninasam. It was opened in 1972. Ninasam started a theatre-in-education project called Shalaranga with help from the government of India during 1991–1993. The Ford Foundation helped establish a rural theatre and film culture project called Janaspandana. Ninasam conducts a summer workshop for youngsters. Ninasam Chitrasamaja is an organisation to encourage film culture and to hold film festivals.
Handicrafts and sculpture
Gudigars are a clan of craftsmen who are specialised in carving intricate designs on wood, mainly sandalwood. They are concentrated in the Sagara and Soraba taluks. The articles they make are sold at government emporiums. Ashok Gudigar is one of the sculptors from this clan. A 41-foot Bahubali statue is one of his works. He has won the Vishwakarma award for his Chalukyan-style Ganesha sculpture. He has won the National award in 1992 for his Hoysala-styled Venugopala sculpture.
Dance
Dollu Kunitha and Yakshagana are some of the dance forms which are prevalent in this district. Yakshagana has a long history in the district and Dr. Kota Shivarama Karantha suggests that origin of the 'badaguthittu' form of Yakshagana took place in the region between Ikkeri of Shimoga district and Udupi.
Fairs
Dasara is celebrated every year in Shimoga. Many cultural programmes are held during this time. A folk fair was organised in Shimoga in 2006. Marikamba festival is celebrated in Sagara once in 3 years.
Cinema
The tele-serial Malgudi days which was based on a novel written by R K Narayan was shot in Agumbe. It was directed by the Kannada actor and director Shankar Nag. The film Kanoora heggadathi which was based on the novel written by Kuvempu was shot in Thirthahalli taluk. It was directed by Girish Karnad. B. V. Karanth composed music for this film. The film Samskara, based on the novel written by U. R. Anantha Murthy, was shot in a village in the Shimoga district.
Cinema personalities born in Shimoga district:
Girish Kasaravalli: Film director who has won several Swarna Kamal awards for Kannada art movies.
P. Lankesh: Editor of the tabloid Lankesh Patrike and director of a few films.
Ashok Pai: Psychiatrist, script writer and film producer who produced Kannada film Kadina Benki and others.
Sudeep, Kannada actor born in Shimoga
Arun Sagar, a Kannada actor from Sagara
Cuisine
Rice is the staple food for the majority of the people in Shimoga district. The food in this district is somewhat similar to Udupi cuisine; however, dishes specific to the Malenadu region are also part of the district's cuisine.
The cooking in the Malnad region of Shimoga district includes items like midigayi-uppinakai (tender-mango pickle), sandige (similar to pappadum), avalakki (beaten rice) and akki rotti. Havyaka people have their own cuisine consisting of such varied items like genesale (sweet made of jaggery, rice and coconut), thotadevvu (sweet made of rice and sugarcane juice) and thambli (a curd preparation containing other ingredients like ginger, turmeric root, jasmine and rose sprouts).
Flora and fauna
The Malnad region is a biodiversity hotspot with a rich diversity of flora and fauna. The region has protected areas classified as wildlife sanctuaries to ensure the protection of these species:
Gudavi Bird Sanctuary is in Sorab Taluk. The sanctuary is spread over an area of
There are many species of flora found here: Vitex leucoxylon, Phyllanthus polyphyllus, Terminalia bellerica, Terminalia paniculata, Terminalia chebula, Lagerstroemia lanceolata, Dalbergia latifolia, Haldina cordifolia, Xylia xylocarpa, Caryota urens, Ficus benghalensis, Ficus religiosa, Butea monosperma, Santalum album, Diospyros melanoxylon; Madhuca longifolia, Kirganelia reticulata.
191 species of fauna have been recorded here: 63 are water-dependent; 20 species are known to breed here. Water birds in the sanctuary include black-headed ibis, Darters, Little Cormorant, Indian shag, Cattle Egret, Little Egret, Large Egret, Spoonbill, Grey Heron, purple heron, Pond Heron, night heron, Coot, Pheasant-tailed Jacana, Purple Swamphen, Common Sandpiper, Little ringed plover, Little Grebe, Cotton Teal. An average of about 8000 White Ibis visit the sanctuary every year.
Sharavathi Valley Wildlife Sanctuary is in Sagar Taluk. It has evergreen and semi-evergreen forests with its eastern portion adjoining the Linganamakki reservoir.
The species of flora found here: Dipterocarpus indicus, Calophyllum tomentosum, Persea macrantha, Caryota urens, Aporosa lindleyana, Calycopteris floribunda, Entada scandens, Acacia concinna, Gnetum scandens. In the semi-evergreen and moist deciduous forests, common species include: Lagerstroemia microcarpa, Hopea parviflora, Dalbergia latifolia, Dillenia pentagyna, Careya arborea, Emblica officinalis, Randia, Terminalia, Vitex altissima.
The animals found here: gaurs, Lion-tailed Macaque, Tiger, leopard (black panther), Wild Dog, jackal, Sloth Bear, Spotted Deer, sambar deer, barking deer, Mouse Deer, Wild Boar, common langur, bonnet macaque, Malabar giant squirrel, giant flying squirrel, Porcupine, Otters; Pangolins. Reptiles include King Cobra, python, rat snake, Crocodile, Monitor Lizard. Avian species found here: three species of hornbills; Asian paradise flycatcher; Racket-tailed Drongo; lories and lorikeets.
Shettihalli Wildlife Sanctuary lies adjacent to Shimoga city and has forests ranging from dry deciduous to semi-evergreen and is spread over an area of . Large areas of forests have been destroyed due to fire.
Trees of the dry deciduous parts include Terminalia tomentosa, Terminalia bellerica, Gmelina arborea, Tectona grandis, Anogeissus latifolia, Lagerstroemia lanceolata, Wrightia tinctoria, Cassia fistula; and Emblica officinalis. In the moist deciduous forest species like Adina cordifolia, Xylia xylocarpa, Grewia tilaefolia, Kydia calycina, bamboo Dendrocalamus strictus, Bambusa arundinacea. The semi-evergreen forests are represented by Dipterocarpus, Michelia, Hopea, Schleichera, and Bambusa. Plants like Acacia auriculiformis, Tectona grandis, and Grevillea robusta also exist in the sanctuary.
Mammals in the sanctuary include Tigers, Leopards, Wild Dogs, Jackals, Gaurs, Elephants; Sloth Bears, Sambar Deer, Spotted Deer, Wild Boar, Common Langurs; Bonnet macaques, Common Mongoose, Striped-necked Mongoose, Porcupine, Malabar giant squirrel, giant flying squirrel, Pangolin.
Python, cobra, king cobra, rat snake, marsh crocodile are among the reptiles found in the sanctuary.
Birds include Hornbills, Kingfishers, Bulbuls, Parakeets, Doves, Pigeons, babblers, Flycatchers, munias, Swallows, Woodpeckers, Peafowl, Jungle fowl, Partridges. A tiger and lion safari at Tyavarekoppa was created in the northeastern part of the sanctuary in 1988.
Bhadra Wildlife Sanctuary was started in 1951 as Jagara valley game sanctuary covering an area of about . It was combined with the surrounding Lakkavalli forests in 1972 and given its present name of Bhadra Wildlife Sanctuary. It now spans an area of . Some of the wild animals found in this sanctuary are Tiger, Leopard, Wild Dog, Jackal, Elephant, Gaur, Sloth Bear, Sambar Deer, Spotted Deer, Monitor Lizard, Barking Deer, Wild Boar, Common Langur, bonnet macaque, Slender Loris, the Malabar giant squirrel.
Some of the bird species found here are Malabar whistling thrush; species of Bulbuls; Woodpeckers, Hornbills, pigeons, Drongos, Asian paradise flycatcher. The sanctuary has been recently adopted under a tiger-conservation project called Project Tiger which is an initiative from the Indian government.
Mandagadde Bird Sanctuary is a sanctuary from Shimoga town on the way to Thirthahalli. This is a small island surrounded by Tunga river. The birds found here are median egret, cormorant, darter, snakebird.
Sakrebailu Elephant Camp lies 14 km. from Shimoga town on the way to Thirthahalli. This is a training camp where elephants undergo training from mahouts.
Tyavarekoppa Lion and Tiger Safari lies about from Shimoga town on the way to Sagar. The safari has lions, tigers and deer.
Education
Shimoga district has a literacy rate of 80.2%. The district has two engineering colleges, two medical colleges, an ayurvedic medical college, a dental college, a veterinary college and an agricultural college. There are 116 pre-university colleges in the district, of which 51 are government pre-university colleges. There are 41 educational institutions managed by the National Education Society. There are 1106 lower primary schools and 1185 higher primary schools.
Primary and high school education
There are 1106 lower primary schools, 1185 higher primary schools and 393 high schools in Shimoga district. There are 1323 anganawadis. The National Education Society has 31 educational institutions, including pre-university and first grade colleges. There are five CBSE schools, including Jnanadeepa School. National Residential School is another CBSE school in Thirthahalli. Hongirana School of Excellence is a CBSE school in Sagar, Karnataka. B G S Central School, which is affiliated with CBSE, is at Karehalli, Bhadravathi.
Government High School, Jade
Government High School, Jade is one of the top three high schools in Soraba Taluk. The school has the biggest playground, and more than 500 students from Jade and surrounding villages up to 10 km away study there. GHS Jade has won several competitions organised by the Department of Education, such as sports and Prathiba Karanji, every year. In 2015 the school started to offer English-medium classes for 8th, 9th and 10th standard students.
Pre-university education
There are 116 pre-university colleges in the district. There are 51 government colleges, 3 bifurcated colleges, 47 unaided colleges and 15 aided colleges. In the 2012 second year pre-university examination, the district ranked 5th with 54.31% of passed candidates.
Diploma courses
There are 8 Polytechnics in the district. Major polytechnics among them are Government Polytechnic - Bhadravathi, Government Women's Polytechnic - Gopala, Sahyadri Polytechnic, Sanjay Memorial Polytechnic-Sagara, DVS Polytechnic.
Undergraduate education
There are 12 colleges affiliated to Kuvempu University, 5 B.Ed and B.P.Ed colleges and 3 constituent colleges. Sahyadri Science College is located in Shimoga city. It was established in 1940 and was upgraded to a first grade college in 1956. It offers two undergraduate courses: B.Sc. and B.C.A. There are two engineering colleges in the district: Jawaharlal Nehru National College of Engineering and P.E.S. Institute of Technology and Management. Jawaharlal Nehru National College of Engineering was established in 1980 by the National Education Society. The college offers 7 courses in B.E. PES Institute of Technology and Management was established in 2007. The college offers 5 undergraduate programmes in B.E. National College of Pharmacy, in the center of the city, is one of the oldest colleges in Karnataka state, and students from across the nation have studied there.
Shimoga Institute of Medical Sciences was started in 2005. It is on the premises of the McGann Hospital in Shimoga, established in memory of the British surgeon Dr. T. G. McGann. The college is affiliated to Rajiv Gandhi University of Health Sciences, Karnataka, and has 21 departments. Bapuji Ayurvedic Medical College, established in 1996, is in Shimoga and offers the B.A.M.S. (Ayurvedacharya) degree. T.M.A.E. Society's Ayurved College, established in 1992, is also located in Shimoga and offers the same degree. Both colleges are affiliated to Rajiv Gandhi University of Health Sciences. Sharavathi Dental College, established in 1992, is in Shimoga and has been approved by the DCI. It offers a B.D.S. in dental surgery and is affiliated to Rajiv Gandhi University of Health Sciences.
Postgraduate education
Sahyadri science college offers two post graduate programmes: M.Sc. and MTA. Jawaharlal Nehru national college of engineering has 7 post-graduate programmes: Master of computer applications; Master of business administration; M.Tech. in Computer Science and Engineering; M.Tech. in Network & Internet Engineering; M.Tech. in Design Engineering; M.Tech. in Transportation Engineering and Management; M.Tech. in Digital Electronics and Communication Systems. PES Shimoga offers post-graduation in business studies, Master of Business Administration. The Kuvempu University offers courses in Languages, Literature and Fine Arts; Social Sciences; Economic and Business studies; Physical Sciences; Chemical sciences; Bio Sciences; Earth and Environmental Science; Law; Education; M.Tech. in Nanoscience and Technology.
Sports
Shimoga district has three cricket stadiums: Nehru stadium, Jawaharlal Nehru college of engineering cricket ground and PES Institute of Technology Cricket ground. The first match played on the Nehru stadium was in 1974. Since then 13 matches have been played out of which 3 are Ranji matches. The Ranji match between Karnataka and Uttar Pradesh was hosted on the Jawaharlal Nehru cricket ground.
Sagara has a good cricket stadium called Gopalagowda Stadium; it is the district's only leather-pitch stadium.
The work on an international cricket stadium has started near Navule. The VISL cricket stadium is located in Bhadravathi. Shivamogga Lions represents the Shimoga zone in the KPL. Shimoga, Hassan and Chickmagalur districts come under the Shimoga zone in the Karnataka premier league.
Gundappa Viswanath is a cricketer from Bhadravathi. He has played test cricket for India from 1969 to 1983 making 91 appearances. Bharat Chipli is a cricketer from Sagar who plays for Deccan Chargers. The 18th Junior National Athletic Championship was held in Shimoga.
State-level kho kho and volleyball competitions are held in the district. The volleyball tournaments are held on the Kuvempu University campus and Nehru stadium. VTU inter-collegiate cricket, football, volleyball and handball tournaments are held in the districts. The district football team has won inter-district football tournaments. Shimoga was the host for the CBSE National Handball Championship in 2009. City-level basketball tournaments are conducted in Sahyadri College premises. Other sports tournaments held in Shimoga are table tennis; badminton; kabaddi; chess. There are proposals to upgrade the Nehru stadium in Shimoga. The upgraded stadium would contain a swimming pool of international standards, an indoor stadium, basketball court and a synthetic track. There are proposals to build sports stadium at Thirthahalli, Shikaripura and Soraba.
Tourism
Waterfalls
Jog Falls is the highest waterfall in India and second highest in Asia. The river Sharavathi falls into the gorge in four distinct flows which are termed Raja, Rani, Rover, and Rocket. Jog falls lies in Sagar taluk and is 30 km. from the city of Sagar.
Kunchikal Falls is the 11th highest waterfall in India and 313th highest in the world with a height of 455 meters and ranks 116 in the list of highest waterfalls in the world. This waterfall is near Mastikatte and is formed by the Varahi River.
Barkana Falls is near Agumbe and 80 km from Thirthahalli town. Barkana Falls is the 10th highest waterfall in India and ranks 308 in the world.
Achakanya Falls is located near a village called Aralsuruli, 10 km from Thirthahalli on the way to Hosanagara. The falls is formed by the Sharavathi river.
Vanake-Abbey Falls is in the heart of Malnad forests, 4 km from Agumbe.
Hidlamane Falls is near Nittur in Hosanagara Taluk. The only way to reach it is by trekking.
Dabbe Falls, Sagara is located near Hosagadde in Sagar taluk. On the road from Sagar to Bhatkal, Hosagadde lies about 20 km from the town of Kargal. From Hosagadde a walk of 6–8 km into the forest leads to Dabbe Falls.
Dams
Linganamakki dam is built across the Sharavathi river in Sagar taluk and is 6 km from Jog Falls. It is the main feeder reservoir for the Mahatma Gandhi hydro-electric project. It has two power generating units of 27.5 MW. It is the biggest dam in Karnataka, with a capacity of 151.75 tmcft.
Bhadra river dam is built across the Bhadra river at Lakkavalli, at a distance of 20 km from Bhadravathi city. The dam was constructed by Sir M. Vishweshwaraiah, the then chief engineer of Karnataka state. The dam mainly serves the purpose of irrigation in and around Bhadravathi taluk and Tarikere taluk of Chikkamagaluru district.
Gajanur dam is built across the river Tunga in a village called Gajanur 12 km from Shimoga city.
Rivers
Tunga and Bhadra originate in the Varaha mountains. They meet at Koodli and become the Tungabhadra river. Koodli is 16 km from Shimoga city, and the Smartha monastery in Koodli was founded in 1576 CE by Jagadguru Narsimha Bharathi swami of Sringeri.
Ambuteertha is located 10 km from Thirthahalli on the Thirthahalli-Hosanagara road. River Sharavathi originates at this place.
Varadamoola is 6 km from Sagar town. River Varada originates at this place. Varada flows through the town of Banavasi before joining Tungabhadra.
Hill stations
Agumbe is 90 km west of Shimoga city. It is known as the Cherrapunji of South India. Agumbe is 650 meters above sea level. The place is famous for its sunset view.
Kavaledurga is a fort on a hill above sea level.
Kodachadri hills are 115 km from Shimoga city. The hills are 1343 m above sea level.
Kundadri is a hill near Thirthahalli. It is famous for its rock formations.
Cultural heritage
Shivappa Nayaka palace and museum is in the city of Shimoga. The palace was built by Shivappa Nayaka during the 17th century CE. Kote Seetharamanjaneya temple is beside it.
Sacred Heart Church, built in the 1990s and the second-largest church in Asia, is in the city of Shimoga. It has features of the Roman and Gothic styles of architecture.
The Lakshminarasimha temple is located in the Bhadravathi city. It has been built in the Hoysala style called 'trikutachala'.
Chandragutti fort is near Balligavi which was built by Banavasi Kadambas. The Renukamba temple is in this village.
Humcha is a Jain pilgrimage place with the Panchakuta Basadi, which was built during the 10th and 11th centuries CE.
The Kedareshvara temple is located in Kubetoor. It has been built in the Chalukyan style.
Nagara, which was earlier called Bidarur, was the last capital of the Keladi kings and was taken by Hyder Ali in 1763. The Hyder Ali tank, Neelakanteshwara temple and Venkataramana temple are located in this city.
Keladi and Ikkeri were the capitals of Keladi Nayakas. The places are near Sagar.
Talagunda is a village in the Shikaripura taluk. The Talagunda inscription on a stone pillar is in Prakrit language. The author of the inscription was Kubja, court poet of Shantivarman.
Notable people
U. R. Ananthamurthy, a Jnanapeeta Awardee
Sarekoppa Bangarappa, an Indian politician who was the 12th Chief Minister of Karnataka from 1990 to 1992
Diganth, Kannada actor
Shantaveri Gopala Gowda, a Socialist Leader
Dattatreya Hosabale, Indian social worker
M. K. Indira, writer and poet
Justice M. Rama Jois, a former Chief Justice of the Punjab and Haryana High Court, a former member of Rajya Sabha, a former Governor of Jharkhand and Bihar states and a senior advocate in the Supreme Court of India
Kaviraj (lyricist), poet, lyricist and director in Kannada film industry
Kuvempu, a Jnanpeeta Awardee
P. Lankesh, poet, journalist
Akka Mahadevi, poet, social reformer
Kadidal Manjappa, a veteran freedom fighter and a former Chief minister of Karnataka
Anupama Niranjana, noted writer
J. H. Patel, former chief minister
Allama Prabhu, social reformer
S. Rudregowda, industrialist and Member of Legislative Council
Khadi Shankarappa, a veteran freedom fighter.
Abhilash Shetty, a film director in Kannada film industry
G. S. Shivarudrappa, poet, one of the three Rashtrakavis in Kannada
K. V. Subbanna, artist and writer
Shimoga Subbanna, a playback singer
Sudeep, actor and director of Kannada cinema
Poornachandra Tejaswi, writer
Gundappa Viswanath, a former cricketer
B. S. Yeddyurappa, politician and Chief Minister of Karnataka
Notes
External links
Shimoga 2011 census report
Shimoga Zilla Panchayat
Shimoga district official website
Map of Shimoga District
Districts of Karnataka |
Sun Microsystems' UltraSPARC T2 microprocessor is a multithreading, multi-core CPU. It is a member of the SPARC family, and the successor to the UltraSPARC T1. The chip is sometimes referred to by its codename, Niagara 2. Sun started selling servers with the T2 processor in October 2007.
New features
The T2 is a commodity derivative of the UltraSPARC series of microprocessors, targeting Internet workloads in computers, storage and networking devices. The processor, manufactured in 65 nm, is available with eight CPU cores, and each core is able to handle eight threads concurrently. Thus the processor is capable of processing up to 64 concurrent threads. Other new features include:
Speed bump for each thread, which increased the frequency from 1.2 GHz to 1.6 GHz
One PCI Express port (x8 1.0) vs. the T1's JBus interface
Two Sun Neptune 10 Gigabit Ethernet ports (embedded into the T2 processor) with packet classification and filtering
L2 cache size increased to 4 MB (8-banks, 16-way associative) from 3 MB
Improved thread scheduling and instruction prefetching to achieve higher single-threaded performance
Two integer ALUs per core instead of one, each one being shared by a group of four threads
One floating point unit per core, up from just one FPU for the entire chip
Eight encryption engines, with each supporting DES, Triple DES, AES, RC4, SHA1, SHA256, MD5, RSA-2048, ECC, CRC32
Hardware random number generator
Four dual-channel FBDIMM memory controllers
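Exploiting that many hardware threads is largely the software's job. As a generic, hedged sketch (not Sun code, and not specific to Solaris), a throughput-oriented server might simply size its worker pool to whatever logical CPU count the operating system reports, which would be 64 on a fully enabled T2:

```python
import os
from concurrent.futures import ThreadPoolExecutor

# The OS exposes each hardware strand as a logical CPU; on a fully enabled
# UltraSPARC T2 (8 cores x 8 threads) os.cpu_count() would report 64.
hw_threads = os.cpu_count() or 1

def handle_request(i: int) -> str:
    # Stand-in for an I/O- or memory-bound task of the kind the T2 targets.
    return f"request {i} handled"

# Size the worker pool to the reported thread count so every strand can stay busy.
with ThreadPoolExecutor(max_workers=hw_threads) as pool:
    results = list(pool.map(handle_request, range(hw_threads * 4)))

print(f"{hw_threads} logical CPUs reported; {len(results)} requests handled")
```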
Core pipeline
There are 8 stages for integer operations, instead of 6 in the T1.
Systems
The T2 processor can be found in the following products from Sun and Fujitsu Computer Systems:
Sun/Fujitsu/Fujitsu Siemens SPARC Enterprise T5120 and T5220 servers
Sun Blade T6320 Server Module
Sun Netra CP3260 Blade
Sun Netra T5220 Rackmount Server
Sun also licensed the T2 processor to Themis Computer, which introduced the first non-Sun T2-based servers in 2008:
Themis T2BC Blade Server, which supports the entire family of IBM BladeCenter chassis
UltraSPARC T2 Plus
In April 2008, Sun released servers based on the UltraSPARC T2 Plus processor, an SMP capable version of UltraSPARC T2.
Sun released the UltraSPARC T2 Plus processor with the following changes:
Ability to be used in 2- or 4-processor configurations (the first CoolThreads processor with multiprocessor capability)
Loss of on-chip embedded 10 Gigabit Ethernet controller
T2 Plus systems
UltraSPARC T2 Plus processors can be found in the following products from Sun and Fujitsu Computer Systems:
Two-way SMP servers:
Sun/Fujitsu/Fujitsu Siemens SPARC Enterprise T5140
Sun/Fujitsu/Fujitsu Siemens SPARC Enterprise T5240
Four-way SMP server:
Sun/Fujitsu/Fujitsu Siemens SPARC Enterprise T5440
Compute cluster
The High Performance Computing Virtual Laboratory in Canada built a compute cluster using 78 Sun SPARC Enterprise T5140 servers. With two 1.2 GHz T2 Plus chips in each T5140 server, the cluster has close to 10,000 compute threads, making it ideal for high-throughput workloads.
Virtualization
Like the T1, the T2 supports the Hyper-Privileged execution mode. The SPARC Hypervisor runs in this mode and can partition a T2 system into 64 Logical Domains, and a two-way SMP T2 Plus system into 128 Logical Domains, each of which can run an independent operating system instance.
Performance improvement versus T1
The UltraSPARC T2 offers a variety of performance improvements over the former UltraSPARC T1 processor:
Integer throughput and throughput/watt (>2x improvement)
Integer single-thread performance (>1.4x improvement)
Better floating-point throughput (>10x improvement)
Better floating-point single-thread performance (>5x improvement)
Increased performance of cryptography through additional ciphers included in the embedded crypto cores
Two world-record single-chip SPEC CPU results, based on tests that delivered 78.5 SPECint_rate2006 and 62.3 SPECfp_rate2006
Other UltraSPARC T2 performance related tunings are documented on Oracle engineers' blogs.
Power consumption
Peak power consumption can go as high as 123 watts, but the T2 typically consumes 95 watts during nominal system operation. This is up from 72 watts from the T1. Sun explains that this is due to a higher degree of system integration onto the chip.
Release history
On April 12, 2006, Sun announced the tape-out of the UltraSPARC T2.
Sun announced the T2's release on 7 August 2007, billing it as "the world's fastest microprocessor".
On April 9, 2008, Sun announced the UltraSPARC T2 Plus.
Open design
On December 11, 2007, Sun made the UltraSPARC T2 processor design publicly available under the GNU General Public License via the OpenSPARC project. The release includes:
Verilog RTL source code of the design
Verification environment
Diagnostics tests
Open source tools, scripts and Sun internal tools needed to simulate the design
ISA specification (UltraSPARC Architecture 2007)
Solaris 10 OS simulation images
References
External links
OpenSPARC T2 and Specifications
OpenSPARC Overview
CMT Comes Of Age: Sun engineers give the inside scoop on the new UltraSPARC T2 systems
CoolThreads Overview
Niagara II: The Hydra Returns
Sun microprocessors
Open microprocessors
SPARC microprocessors
64-bit microprocessors |
Nicholas DeWolf (July 12, 1928 – April 16, 2006) was co-founder of Teradyne, a Boston, Massachusetts-based manufacturer of automatic test equipment. He founded the company in 1960 with Alex d'Arbeloff, a classmate at MIT.
Early life and education
DeWolf was born in Philadelphia, Pennsylvania and graduated with an S.B. in EECS from MIT in 1948.
Career
During his eleven years as CEO of Teradyne, DeWolf is credited with designing more than 300 semiconductor and other test systems, including the J259, the world's first computer-operated integrated circuit tester.
After leaving Teradyne in 1971, DeWolf moved to Aspen, Colorado, where in 1979 he teamed with artist Travis Fulton to create Aspen's "dancing fountain". DeWolf also designed a computer system without hard disks or fans; this system (the ON! computer) booted up in seconds, much faster than even the computers of today.
Awards
1979: Semiconductor Equipment and Materials International SEMI Award for North America.
2001: Telluride Tech Festival Award of Technology, Boulder, CO.
2005: inducted into the Aspen Hall of Fame with wife Maggie DeWolf.
Photography
DeWolf was also a keen and prolific photographer. His son-in-law and archivist, Steve Lundeen, is scanning DeWolf's complete archive and making it available on Flickr.
Death
DeWolf died in Aspen, Colorado at the age of 77.
Quotes
"What the customer demands is last year's model, cheaper. To find out what the customer needs you have to understand what the customer is doing as well as he understands it. Then you build what he needs and you educate him to the fact that he needs it."
"To select a component, size a product, design a system or plan a new company, first test the extremes and then have the courage to resist what is popular and the wisdom to choose what is best".
References
External links
The photographic archive of Nick DeWolf on Flickr
'Nicholas DeWolf: The Father of ATE (Automatic Test Equipment)' biography at The Chip History Center
'SEMI Oral History Interview - Nicholas DeWolf - September 24, 2005, Aspen Colorado - Interviewed by Craig Addison'
Nick DeWolf A/V Artifacts
1928 births
2006 deaths
Computer hardware engineers
American company founders |
Bernard Widrow (born December 24, 1929) is a U.S. professor of electrical engineering at Stanford University. He is the co-inventor of the Widrow–Hoff least mean squares filter (LMS) adaptive algorithm with his then doctoral student Ted Hoff. The LMS algorithm led to the ADALINE and MADALINE artificial neural networks and to the backpropagation technique. He made other fundamental contributions to the development of signal processing in the fields of geophysics, adaptive antennas, and adaptive filtering.
Publications
1965 "A critical comparison of two kinds of adaptive classification networks", K. Steinbuch and B. Widrow, IEEE Transactions on Electronic Computers, pp. 737–740.
1985 B. Widrow and S. D. Stearns. Adaptive Signal Processing. New Jersey: Prentice-Hall, Inc., 1985.
1994 B. Widrow and E. Walach. Adaptive Inverse Control. New Jersey: Prentice-Hall, Inc., 1994.
2008 B. Widrow and I. Kollar. Quantization Noise: Roundoff Error in Digital Computation, Signal Processing, Control, and Communications. Cambridge University Press, 2008.
Honors
Elected Fellow IEEE, 1976
Elected Fellow AAAS, 1980
IEEE Centennial Medal, 1984
IEEE Alexander Graham Bell Medal, 1986
IEEE Neural Networks Pioneer Medal, 1991
Inducted into the National Academy of Engineering, 1995
IEEE Signal Processing Society Award, 1999
IEEE Millennium Medal, 2000
Benjamin Franklin Medal, 2001
Member of the Board of Governors of the International Neural Network Society, 2003–2004
References
1929 births
Artificial intelligence researchers
IEEE Centennial Medal laureates
Living people
Members of the United States National Academy of Engineering
Place of birth missing (living people)
Stanford University faculty
Benjamin Franklin Medal (Franklin Institute) laureates
Massachusetts Institute of Technology alumni |
The Broadcast Protection Discussion Group (BPDG) is a working group of content providers, television broadcasters, consumer electronics manufacturers, information technology companies, interested individuals and consumer activists. The group was formed specifically to evaluate the suitability of the broadcast flag for preventing unauthorized redistribution of unencrypted digital terrestrial broadcast television (DTV), including redistribution over the Internet, and to determine whether there was substantial support for the broadcast flag. The group completed its mission with the release of the BPDG Report.
The BPDG reached a consensus on the use of a technical broadcast flag standard for digital broadcast copy protection. The broadcast flag is an electronic marker embedded in over-the-air digital broadcast signals that would block or limit the ability of consumer electronics devices to make copies of the programs, and would also prevent the redistribution of such programs over the Internet. Despite reaching a consensus on this standard, the BPDG did not reach any agreement on how the use of the flag would be implemented or enforced.
Digital TV programs protection using broadcast flags
The group proposed that digital TV programs be embedded with a "broadcast flag." All digital devices would be required to recognize the flag, which would prevent the protected content from being distributed on the Internet. The report states, "The proposed technical solution does not interfere with the ability of consumers to both make copies of DTV content, and to exchange such content among devices connected within a digital home network."
BPDG publications
After several meetings, the BPDG published several reports in support of the broadcast flag. Those publications are:
BPDG Final Report
Summary of EFF Report on BPDG
Full EFF Report on BPDG
Table A
What is EFF?
The Electronic Frontier Foundation (EFF) is a nonprofit group of lawyers, technologists, volunteers, and visionaries working to protect digital rights.
Blending the expertise of lawyers, policy analysts, activists, and technologists, EFF works on behalf of consumers and the general public. EFF fights for freedom primarily in the courts, bringing and defending lawsuits even when that means taking on the US government or large corporations. By mobilizing more than 50,000 concerned citizens through its Action Center, EFF opposes legislation it considers harmful. In addition to advising policymakers, EFF educates the press and public. Because defending existing technologies is sometimes not enough, EFF also supports the development of freedom-enhancing inventions.
Policy group is not a CPTWG sub-group?
Several CPTWG (Copy Protection Technical Working Group) participants indicated at CPTWG's June 5 meeting that the "parallel group" or "policy group" is "not a sub-group of CPTWG" or "not part of CPTWG".
Broadcast flag is not a watermark
Some recent press coverage of BPDG refers to the BPDG proposal as recommending a watermark in digital TV broadcasts. This is a misperception of the nature of the broadcast flag. (There is a distinct proposal called the broadcast watermark which was not discussed extensively within BPDG and is not part of the BPDG's published recommendations.)
A watermark is commingled directly with the signal it marks, and thereby alters the signal (ideally, in an imperceptible way). By contrast, the broadcast flag exists side-by-side with the video content it marks.
Terms to describe the broadcast flag, rather than watermark, might include "bit", "indicator", "flag", "descriptor", "tag", "header field", or "notice". But use of "watermark" is sure to generate confusion, especially because watermark proposals distinct from BPDG do exist. Watermarking is likely to be a big issue soon in a public forum—but not as a part of BPDG's proposal.
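To make the side-by-side distinction concrete, the following is a purely illustrative sketch in Python; the data structure, field names, and function names are invented for this example and do not reflect the actual ATSC descriptor syntax or any format specified by BPDG.

    # Purely illustrative: NOT the real ATSC descriptor syntax or BPDG format.
    # It contrasts a side-band "flag" with an in-band watermark.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Program:
        samples: List[int]                    # the video content itself
        redistribution_control: bool = False  # hypothetical side-band "broadcast flag"

    def set_broadcast_flag(p: Program) -> Program:
        # Flag approach: only metadata travelling alongside the content changes.
        p.redistribution_control = True
        return p

    def embed_watermark(p: Program, mark: int = 1) -> Program:
        # Watermark approach: the mark is commingled with the signal, crudely
        # modelled here by overwriting each sample's least significant bit.
        p.samples = [(s & ~1) | mark for s in p.samples]
        return p

    original = [128, 130, 129, 127]
    flagged = set_broadcast_flag(Program(samples=list(original)))
    marked = embed_watermark(Program(samples=list(original)))
    print(flagged.samples == original)  # True: the flag leaves the content untouched
    print(marked.samples == original)   # False: the watermark alters the samples

The point of the toy example is only that the flag travels next to the content, while a watermark changes the content itself.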
Misconceptions about BPDG
An article by John Dvorak seems to contain a misconception: that the result of BPDG's work will be the obsolescence of current digital TV receivers. As Dvorak writes:
"it appears that the new copy-protection schemes being dreamed up by Hollywood will make every single HDTV set sold to date obsolete. And buyers of new sets are not being told about this situation in a dubious attempt to dump very expensive inventory."
"What happened was that the Hollywood folks, who are just freaked over the possibility that we'll be copying HDTV movies, have promoted copy protection that requires the decode circuit to be built into the display, not into the set-top box. This requires the set-top box to send a signal to a connector that new HDTV sets will have. If you're thinking of buying an HDTV, don't, unless it has this connector and circuit, whenever they are finalized."
One view is that Dvorak has got the situation backwards. Old equipment will continue to work. This is because BPDG is not planning to encrypt broadcasts at all—merely to cause them to include a "broadcast flag", and to obtain legislation forcing all manufacturers to comply with its rules.
The result of this would be that old equipment would be better and more useful than new equipment. Not only would it work properly, but it wouldn't have been crippled by having to comply with the Compliance and Robustness Rules. This is to say that old equipment would be more functional, not less functional, than new equipment.
Alphabet soup
"BPDG wants the Federal Communications Commission (FCC) to mandate Digital Rights Management (DRM) for ATSC DTB receivers
In the body of this article, there are expansions for about 80 of the most common acronyms used in discussions about this issue. (The acronyms expanded include every acronym which appears in the BPDG's Draft Compliance and Robustness Rules, among others.)
Of course, this is not enough to appreciate the context behind these acronyms. For example, knowing that PCMCIA stands for Personal Computer Memory Card International Association gives no clue that the Association in question published a standard for tiny removable cards used in laptops. Hearing that 8VSB means "8-level vestigial side band" explains nothing about 8VSB's role in digital television broadcasting (that is, DTB for the initiated).
The following list contains some of the most important acronyms related to this subject:
4C 4 companies
5C 5 companies
8VSB 8-level vestigial side band
AC3 audio coding 3
ADC analog to digital converter, analog to digital conversion
AGP accelerated graphics port
AHRA audio home recording act
ASIC application-specific integrated circuit
ATSC advanced television systems committee
BF broadcast flag
BPDG broadcast protection discussion group
BW broadcast watermark
CA conditional access
CBDTPA consumer broadband and digital television promotion act
CE consumer electronics
CEA consumer electronics association
CIG computer industry group
CMI copyright management information
CP copy protection, content protection
CPRM content protection for recordable media
CPTWG copy protection technical working group
CRT cathode ray tube
CSS content scramble system
D-VHS digital VHS
DAC digital to analog converter, digital to analog conversion
DMCA digital millennium copyright act
DRM digital rights management
DT digital terrestrial
DTB digital terrestrial broadcasting, digital terrestrial broadcast
DTCP digital transmission content protection
DTLA digital transmission licensing administrator
DTV digital television
DVD digital versatile disc
DVDCCA DVD copy control association
DVI digital visual interface
ECM entitlement control message
EEPROM electrically erasable programmable read-only memory
EFF electronic frontier foundation
EIT event information table
EPN encryption plus non-assertion
FCC federal communications commission
FPGA field-programmable gate array
HD high definition
HDCP high-bandwidth digital content protection
HDTV high-definition television
HRRC home recording rights coalition
IEC international electrotechnical commission
IF intermediate frequency
ISO International Organization for Standardization
IP intellectual property
IP internet protocol
IT information technology
LAN local-area network
LMI license management incorporated
MEI Matsushita Electric Industrial Co., Ltd.
MPAA motion picture association of America
MPEG moving picture experts group
NAB national association of broadcasters
NCTA national cable and telecommunications association
NTSC national television system committee
OOB out of band
OTA over the air
PAL phase alternating line
PC personal computer
PC printed circuit
PCI peripheral component interconnect
PCM pulse code modulation
PCMCIA personal computer memory card international association
PMT program map table
POD point of deployment
PSIP program and system information protocol
PVR personal video recorder
QAM quadrature amplitude modulation
RC redistribution control [descriptor]
RD redistribution descriptor
RF radiofrequency
SCMS serial copy management system
SCR software-controlled radio
SD standard definition
SDR software-defined radio
SI system information
SPDIF Sony/Philips digital interface
SSSCA security systems standards and certification act
STB set-top box
TPM technological protection measure
TS transport stream
TSP transport stream processor, transport stream processing
TV television
VCR videocassette recorder
VHDL VHSIC hardware description language
VOD video on demand
What is Table A?
Many of the practical consequences of the BPDG proposal for consumers (and for competition in the marketplace) lie in a yet-to-be-written appendix to the specification. This appendix, called Table A, enumerates the kinds of digital outputs which are allowed on devices which can receive digital television signals.
The idea is that a device which receives a TV program with the broadcast flag set is not allowed to output the content of that program in digital form, except via a technology specifically mentioned on Table A.
This raises three questions: first, why should this be so? (What's wrong with letting device manufacturers choose for themselves what kinds of outputs their devices will have? If consumers want a particular kind of output, why shouldn't they have it? Why should legislation determine the capabilities of future digital televisions?) Second, what technologies will be permitted? Third, how is that decision going to be made?
The first question goes to the heart of the BPDG proposal and is addressed elsewhere (at least, by skeptics of BPDG; there has not been much in the way of a public defense of this mandate, which is being represented as a fait accompli in most circles).
The second and third questions are empirical matters. An earlier draft of the BPDG Compliance and Robustness Rules divided Table A into Authorized Digital Outputs and Authorized Digital Removable Media Recording Methods. The two Authorized Outputs mentioned were Digital Transmission Content Protection (DTCP) and High-bandwidth Digital Content Protection (HDCP); the two Recording Methods mentioned were Content Protection for Recordable Media (CPRM) and D-VHS.
DTCP is a copy-control scheme for digital video devised by five companies (called the "5C consortium"). HDCP is a similar copy-control scheme devised by only four companies (the "4C consortium"). Both of these schemes restrict what a consumer can do with digital video; both require a license before a device manufacturer can implement them; both constrain the functionality of products in which they are incorporated. Both cost money to implement; the licenses are not free. DTCP encrypts video transmitted over a digital bus called IEEE 1394 (or "FireWire"). HDCP encrypts video transmitted over a different, video-specific bus called Digital Visual Interface ("DVI"). The encryption, in both cases, is meant to "protect" the content against the consumer, and to restrict playback of the content to "authorized", licensed devices.
Content Protection for Recordable Media (CPRM) is an encryption scheme for recordable media which is also meant to prevent media from being played back in devices other than those licensed by the 4C consortium. D-VHS is a newer digital videotape specification which likewise prevents media from being played back except in licensed devices.
So here the suggestion was that four particular copy-control technologies, all closed standards and all of which have "compliance and robustness rules" of their own, were to be permitted as outputs from digital television receivers; all other video standards, and all other recording media, were to be banned by default.
Since the BPDG was formed by companies from the 5C and 4C consortia, it is difficult to imagine that it would recommend that their technologies not be permitted. Subsequently, the specific technology list was removed from Table A; the current discussion draft from BPDG does not contain any specific technologies at all, though it still bans "unauthorized" technologies by default. But now Table A has been left blank, and a discussion has begun about a proper procedure for choosing technologies to be added. (This shift took place as a result of a discussion at the last BPDG in-person meeting in Los Angeles.)
All current proposals for filling in Table A seem to involve agreement by some number of major movie studios (that is, members of the Motion Picture Association of America, or MPAA) and, perhaps, agreement by some number of major electronics companies or other corporations. No agreement has been reached within BPDG, but various "vehicles" or "methods" for approving technologies have been suggested. These typically employ a formula such as "n% of Major Studios and m% of manufacturers". No studio proposal has yet contemplated the possibility that technologies could be approved without any Hollywood sign-off. Thus, the discussion appears to be centered on choosing values for the percentages to be plugged into these formulas.
See also
Watermarking
Broadcast flag
References
External links
Broadcast Protection Discussion Group home page
EFF home page
Communications and media organizations based in the United States
Organizations established in 1977
1977 establishments in the United States |
Richard Bruce Farleigh (born Richard Buckland Smith, 9 November 1960) is an Australian private investor and reality television personality. He is currently a member of the Business Review Weekly Rich 200 list, a list of the 200 wealthiest Australian individuals. In 2012, he took on the role as Chancellor of London South Bank University. Farleigh featured in series 3 and 4 of BBC's Dragons' Den. He currently resides in London, United Kingdom and previously lived in Monte Carlo, Monaco.
Early life
Farleigh was born Richard Buckland Smith in Kyabram, Victoria, Australia, one of eleven children; his father was a labourer and sheep shearer, and the family is sixth-generation Australian. His parents sent him and his siblings to foster homes when he was aged two, and Richard was taken in by a family from Peakhurst, Sydney, who gave him the surname Farleigh. He attended Narwee Boys' High School, excelled at maths and competitive chess, and then won a scholarship to study economics at the University of New South Wales.
After graduating with honours in the early 1980s, he worked at the Reserve Bank of Australia, then joined Bankers Trust Australia in Sydney at the age of 23 as an investment banker and trader, where he stayed for ten years.
Business
Farleigh left Australia in the 1990s, when he was hired to run a hedge fund in Bermuda and moved there with his wife and baby son. There he became friends with David Norwood, a chess grandmaster, and three years later, aged 34, he decided to retire and moved to Monte Carlo. He then spent much time with Norwood investigating research from Oxford University in the UK that had potential commercial applications. IndexIT was the company formed to fund some of these ventures; it was later sold to Beeson Gregory for £20m. At this time he invested his own capital in British technology companies.
In 1999, Farleigh invested £2m in the renovation of the old French Embassy mansion in London's Portman Square, turning it into the private members club Home House.
In 2005, he published a guide to personal investing entitled Taming the Lion: 100 Secret Strategies for Investing.
The Rich 200 list estimated his personal wealth at around A$160 million. He was ranked 876th on the Sunday Times Rich List 2006, with an estimated net worth of £66 million.
Companies Farleigh has invested in include ClearSpeed, Evolution Group, IP2IPO, Proximagen, Home House and Wolfson Microelectronics.
In 2010 Farleigh launched H2O Markets, an advisory firm.
Dragons' Den
Farleigh was selected in 2006 to appear as an investor on the British version of the business-related TV programme Dragons' Den for the show's third series. Farleigh said he would be seeking further investments through the show, saying he was looking to "hopefully uncover the next big thing". It was announced on 21 May 2007 that Richard Farleigh had been dropped from the series. He was replaced by James Caan.
Chess
Farleigh played for Bermuda in the 31st Chess Olympiad in Moscow in 1994 and for Monaco in the 34th Chess Olympiad in Istanbul in 2000. While Farleigh's chess results are relatively modest, he is well known in the international chess community for sponsoring and running the Bermuda Party at Chess Olympiads.
References
External links
The Telegraph (2006). Business profile: From swagman to sapphires. Retrieved 2006-04-23.
The Times (2007). Double blow for UK nanotech company.
The Telegraph (2007). Farleigh's Oxonica in legal fight with supplier.
The Financial Times (2008). ‘Dragons’ Den’ chief feels heat in court.
The Guardian (2008). Court rules against firm backed by former Dragons' Den star.
Business Matters (2010). Interview with Richard Farleigh on success & failure in business .
1960 births
Living people
Australian chess players
Australian financial analysts
Australian hedge fund managers
Australian investors
Stock and commodity market managers
University of New South Wales alumni
Chess Olympiad competitors
Australian expatriates in Monaco
Australian expatriates in England
People associated with London South Bank University |
The Université Grenoble Alpes (UGA), meaning "Grenoble Alps University" in English, is a public research university in Grenoble, France. Founded in 1339, it is the third largest university in France, with about 60,000 students and over 3,000 researchers.
Established as the University of Grenoble by Humbert II of Viennois, it split in 1970 following the widespread civil unrest of May 1968. Three of the University of Grenoble's successors—Joseph Fourier University, Pierre Mendès-France University, and Stendhal University—merged in 2016 to restore the original institution under the name Université Grenoble Alpes. In 2020, the Grenoble Institute of Technology, the Grenoble Institute of Political Studies, and the Grenoble School of Architecture also merged with the original university.
The university is organized around two closely located urban campuses: Domaine Universitaire, which straddles Saint-Martin-d'Hères and Gières, and Campus GIANT in Grenoble. UGA also owns and operates facilities in Valence, Chambéry, Les Houches, Villar-d'Arêne, Mirabel, Échirolles, and La Tronche.
The city of Grenoble is one of the largest scientific centers in Europe, hosting facilities of every existing public research institution in France. This enables UGA to maintain hundreds of research and teaching partnerships, including close collaboration with the French National Centre for Scientific Research (CNRS) and the French Alternative Energies and Atomic Energy Commission (CEA). After Paris, Grenoble is the largest research center in France, with 22,800 researchers. In April 2019, UGA was selected to host one of the four French institutes in artificial intelligence.
UGA is traditionally known for its research and education in the natural sciences and engineering, but also law, institutional economics, linguistics, and psychology. It has been cited among the best and most innovative universities in Europe. It is also renowned for its academic research in the humanities and political sciences, hosting some of the largest research centers in France in the fields of political science, urban planning and the sociology of organizations.
History
Early history (1339–1800)
The University of Grenoble was founded on 12 May 1339 by Humbert II of Viennois, the last independent ruler of Dauphiné, a state of the Holy Roman Empire. Its purpose was to teach civil and canon law, medicine, and the liberal arts. It was considered a leader in the Renaissance revival of the classics and development of liberal arts.
Humbert's actions were inspired by his granduncle Robert, King of Naples, at whose royal court Humbert spent his youth. King Robert, known as the Wise, skillfully developed Naples from a small port into a lavish city and had a reputation as a cultured man and a generous patron of the arts, counting among his friends such great minds as Petrarch, Boccaccio, and Giotto.
This experience contributed to Humbert's intention to create a university in his own state, and to do so he visited Pope Benedict XII to obtain a papal bull of approval.
Humbert cared deeply about his students, offering generous aid and protection, and even providing a hundred of them with free housing. Humbert's financial losses during the Smyrniote crusades, the Black Death, and Dauphiné's attachment to France greatly decreased the activity of the university and eventually led to its closure, since a small mountainous town could not support it on its own.
The university was reopened by Louis XI of France in 1475 in Valence under the name University of Valence, while the original university was restored in Grenoble in 1542 by Francis de Bourbon, Count of St. Pol. The two universities were finally reunited in 1565. At that point Grenoble was an important center of legal practice in France, and law was accordingly at the center of the university's teaching.
The French Revolution, with its focus on ending inherited privilege, led to the suppression of most universities in France. To revolutionaries, universities were bastions of corporatism and established interests. Moreover, lands owned by the universities represented a source of wealth and were therefore confiscated, just as property owned by the Church was.
Modern period (1800–1968)
In 1805–1808, Napoleon reestablished faculties of law, letters, and science. The Bourbon Restoration had temporarily suppressed the Faculty of Letters and the Faculty of Law, but by the 1850s the university's activity had begun rapidly developing again.
The development of the sciences at the university was spearheaded by the transformation of Grenoble from a regional center into a major supplier of industrial motors and electrical equipment in the 1880s. The faculties were formally inaugurated as the University of Grenoble in 1879 in the newly constructed Place de Verdun. There were around 3,000 students in 1930. Significant enrollment growth in the 1960s created pressures on the academic infrastructure of the university; the Suzanne Dobelmann library helped expand facilities, especially those relating to science and medicine.
Recent history (1968–present)
Following riots among university students in May 1968, a reform of French education occurred. The Orientation Act (Loi d’Orientation de l’Enseignement Superieur) of 1968 divided the old faculties into smaller subject departments, decreased the power of the Ministry of National Education, and created smaller universities, with strengthened administrations.
Thus, sharing the fate of all French universities in the 1970s, the University of Grenoble was split into four institutions. Each had a different area of concentration, and the faculties were divided as follows:
The Scientific and Medical University of Grenoble, which in 1987 was renamed Joseph Fourier University (UJF), for sciences, health, and technology
The University of Economics and Law, which in 1987 was renamed Pierre Mendès-France University (UPMF), for social sciences and humanities
The Grenoble Institute of Political Studies, affiliated with UPMF and focusing on political science
The University of Languages and Letters, which in 1987 was renamed Stendhal University, for arts and languages
The Grenoble Institute of Technology (Grenoble-INP) for engineering
On 1 January 2016, the first three institutions reunited to restore the original common institution under the name Université Grenoble Alpes. Although Grenoble-INP remained separate at that time, it was an active member of the Community Université Grenoble Alpes and cooperated closely with the university, not only on research projects but also by sharing labs and offering joint courses and training for students and researchers.
On 1 January 2020, the Grenoble Institute of Technology (Grenoble-INP), together with the Grenoble Institute of Political Studies, the ENSAG School of Architecture, and the Community Université Grenoble Alpes merged with the University Grenoble Alpes.
Campus
UGA facilities are mainly located in the Grenoble Agglomeration, centered around the Domaine Universitaire campus, GIANT campus, and La Tronche medical campus. However, there are many facilities that are located in other places in and outside of Grenoble, including the Valence campus and an important number of laboratories and research centres.
Domaine Universitaire (Grenoble)
The Domaine Universitaire, also known as the University Campus and Campus de Saint-Martin-d'Hères, is the main UGA campus covering an area of 175 hectares. It is an autonomous part of the Grenoble-Alpes Métropole agglomeration and a part of Saint-Martin-d'Hères commune. The Domaine Universitaire hosts a major part of educational facilities and an important part of research laboratories of the university.
The Domaine Universitaire campus has the distinct feature of being an isolated part of the agglomeration dedicated solely to academic and student activities. This is an exception to the typical model of French universities, in which university facilities are scattered throughout the city. This organization was an experimental model applied in the 1960s to accommodate the rapidly growing university, and over the years it earned the campus the reputation of an "American campus". Another French university that follows this model is Paris-Saclay University, although it is located about 20 km from Paris and not in direct proximity to the city.
The campus boasts 3,000 trees, including the Arboretum Robert Ruffier-Lanche with over 250 different species of trees and shrubs from around the world. Thanks to its rich vegetation, its position along the Isère river near three mountain chains, and its immediate adjacency to the city, the campus is known for student quality of life. The university is ranked among the most beautiful universities and campuses in France and Europe. The campus has a rich public transport network, including the Grenoble tramway and several bus lines, easy access to the main highway, and a network of bike lanes. Grenoble is traditionally recognized as one of the best student cities in France.
La Tronche campus is located one tramway stop away from the Domaine Universitaire campus. It is primarily specialized in medical studies and is home to the Grenoble Alpes University Hospital.
Campus GIANT (Grenoble)
Campus GIANT (Grenoble Innovation for Advanced New Technologies) is an inter-organizational campus located on the former military grounds of a peninsula between the Isère and the Drac, known as the Polygone Scientifique. The campus hosts several educational institutions, primarily UGA (particularly the INPG) and the Grenoble School of Management. Other members of the campus include the large state research organizations CNRS and CEA. The GIANT campus hosts Minatec, as well as several European large-scale instruments, including the European Synchrotron Radiation Facility, the European Molecular Biology Laboratory, and the Institut Laue–Langevin. Major industrial companies have facilities on campus, including bioMérieux, Schneider Electric, Siemens, and STMicroelectronics.
Unlike the Domaine Universitaire campus, which hosts UGA and serves both educational and research roles across a wide variety of disciplines, the GIANT campus is inter-organizational and leans heavily towards research-industry collaboration in the natural and applied sciences.
Valence Campus
The Valence campus is home to over 4000 students in undergraduate and post-graduate programs. It is located in the department of Drôme, 90 km away from Grenoble.
The Valence campus is the successor of the Université de Valence founded in 1452 by Dauphin Louis, future King Louis XI. The University of Valence was closed in 1792 sharing the fate of most French universities during the French Revolution.
Other locations
University facilities are also located outside the main campuses, including Grenoble INP facilities, the Grenoble IUT, and multiple laboratories and research centers. The alpine botanical garden Jardin botanique alpin du Lautaret spans about 2 hectares at the Col du Lautaret.
Governance
The Université Grenoble Alpes is a Public Institution of Scientific, Cultural, and Professional Relevance (French: Établissement public à caractère scientifique, culturel et professionnel). It is governed by a board of directors and an academic council elected every four years. The president of the university is elected by the board of directors after each renewal and is eligible for re-election once. On 3 December 2015, staff and students from Joseph Fourier University, Pierre Mendès-France University, and Stendhal University voted to elect representatives to the central councils of the new university. On 7 January 2016, the board of directors of the Université Grenoble Alpes elected Lise Dumasy as president; it was the first time a woman had been elected to head a merged university in France.
The university was one of the central members of the Community Université Grenoble Alpes, a COMUE under the presidency of Patrick Lévy. The association allowed the humanities and social sciences and natural and formal sciences to be represented in the governance of the entire university system of Grenoble.
On 1 January 2020 the ComUE merged with the university, together with the Grenoble Institute of Technology, the Grenoble Institute of Political Studies, and the Grenoble School of Architecture (ENSAG). The merger was organized using the legal form of établissement expérimental, newly created by the French government to promote the development of leading national universities. Yassine Lakhnech became the president of the newly merged university.
Academics
The Université Grenoble Alpes is made up of multiple departments, schools and institutes.
Faculty of sciences
Department of Chemistry and Biology
IM2AG - Department of Computer Science, Mathematics and Applied Mathematics of Grenoble
PhITEM - Department of Physics, Engineering, Earth & Environmental Sciences, Mechanics
OSUG - Grenoble Observatory for Sciences of the Universe
DLST - Department for Undergraduate Degree of Sciences and Technology
Grenoble INP
Ense3 - Engineering school of Energy, Water and Environmental sciences
Ensimag - Engineering school of Applied mathematics and Computer Science
Esisar - Engineering school of Advanced Systems and Networks
Génie industriel - School of Industrial engineering and Management
Pagora - Engineering school of Paper, Print media and Biomaterials
Phelma - Engineering school of Physics, Electronics and Materials Science
Grenoble IAE - Graduate School of Management
Polytech Grenoble - Polytechnic Engineering School
Faculty of humanities, health, sports, society (H3S)
ARSH - Department of Arts and Humanities
LE - Department of foreign languages
LLASIC - Department of Languages, Literature, Performing Arts, Information and Communication
SHS - Department of Humanities and Social Sciences
STAPS - Department of physical and sports activities
Faculty of Medicine
Faculty of Pharmacology
Faculties and departments outside of regrouping
Institute of Urban Planning and Alpine Geography (IUGA)
Grenoble Law School
Grenoble Faculty of Economics
Sciences Po Grenoble - Grenoble Institute of Political Studies
ENSAG - Grenoble School of Architecture
University Institutes of Technology
IUT Grenoble 1 - University Institutes of Technology 1
IUT Grenoble 2 - University Institutes of Technology 2
IUT de Valence - Valence University Institutes of Technology
Transverse structures
DSDA - Drôme Ardèche Department of Sciences
CUEF - University Centre for French Studies
INSPE - Institute of Education and Teaching
SDL - Languages Office
Doctoral College
Research
Covering all disciplinary fields, the Université Grenoble Alpes has 106 research departments spread out in six centres bringing together different types of organizations (joint research departments, host teams, platforms, etc.) in the same scientific field.
Humanities and Social Science Centre (Pôle SHS)
Chemistry, Biology and Health Centre (Pôle CBS)
Mathematics, Information and Communication Sciences and Technologies Centre (Pôle MSTIC)
Particle Physics, Astrophysics, Geosciences, the Environment and Ecology Centre (Pôle PAGE)
Physics, Engineering and Materials Centre (Pôle PEM)
Social Sciences Centre (Pôle SS)
Multiple research labs are attached to the university.
Université Grenoble Alpes, through Grenoble INP, cofounded Minatec, an international center for micro- and nanotechnologies, uniting over 3,000 researchers and 1,200 students.
The university hosts one of the four French national institutes of artificial intelligence.
PhD training is administered and governed by the Doctoral College, which creates rules and standards for UGA's 13 doctoral schools.
Notable people
UGA has a considerable number of notable alumni in several different fields, ranging from academics to political leaders, executives, and artists.
Politics
Many European politicians have studied law, economics, and languages at UGA, including:
Reinhold Maier, Helene Weber, Walther Schreiber, Michel Destot, Louis Besson, Bernard Accoyer, Marlène Schiappa, Thierry Repentin, André Vallini and Geoffrey Acland.
Other political leaders include: Gaétan Barrette, Minister of Health and Social Services of Canada; Paul Kaba Thieba, Prime Minister of Burkina Faso; Abderrahmane Benkhalfa, Minister of Finance of Algeria; Hazem El Beblawi, Prime Minister of Egypt; Richard E. Hoagland, US Ambassador; Abdoulaye Wade, President of Senegal; Driss Basri, Interior Minister of Morocco; Ahmedou Ould-Abdallah, Ambassador for Mauritania; Şenkal Atasagun, Chief of the National Intelligence Organization of Turkey; Ignas Jonynas, Lithuanian diplomat; Bill Morneau, Canadian Minister of Finance; Souvanna Phouma, Prime Minister of Laos; Ali Al Shami, Minister of Foreign Affairs of Lebanon; Fathallah Sijilmassi, Moroccan politician and economist; Mohammed al-Dairi Minister of Foreign Affairs of Libya.
UGA alumni also include American journalist Warren D. Leary; French journalists Éric Conan, Olivier Galzi, Mélissa Theuriau, Françoise Joly, Laurent Mauduit, Marc Dugain, Philippe Robinet and Caroline Roux; British journalists Joanna Gosling and Safia Shah; and German Jona von Ustinov, who worked for MI5 during the time of the Nazi regime.
Social activists who attended UGA include Léo-Paul Lauzon, Léa Roback, Austin Mardon, and James Compton, former CEO of the Chicago Urban League.
Mathematics and sciences
Numerous prominent scientists have studied at the Université Grenoble Alpes since the development of hydropower in the region in the 1880s. Prominent fields include physics, materials science, and computer science, with alumni such as Yves Bréchet, member of the French Academy of Sciences; Rajaâ Cherkaoui El Moursli, who worked on the Higgs boson discovery; Patrick Cousot, French computer scientist; Joseph Sifakis, Turing Award laureate; Claude Boutron, French glaciologist; Jean-Louis Coatrieux, French researcher in medical imaging; Michel Cosnard, French computer scientist; Paul Trendelenburg, German pharmacologist; Yousef Saad, computer scientist; Gérard Mourou, Nobel Prize laureate; Maurice Nivat; Catherine Ritz, French Antarctic researcher; Eric Goles, Chilean mathematician; Pierre Colmez, French mathematician; René Alphonse Higonnet, French engineer; Marlon Dumas, Honduran computer scientist; Claire Berger, French physicist; and Michel Campillo, French seismologist.
References
External links
Universities and colleges in Grenoble
Public universities in France
Science and technology in Grenoble
1339 establishments in Europe
1330s establishments in France
Educational institutions established in the 14th century |
James Donald Meindl (April 20, 1933 – June 7, 2020) was director of the Joseph M. Pettit Microelectronics Research Center and the Marcus Nanotechnology Research Center and Pettit Chair Professor of Microelectronics at the Georgia Institute of Technology in Atlanta, Georgia. He won the 2006 IEEE Medal of Honor "for pioneering contributions to microelectronics, including low power, biomedical, physical limits and on-chip interconnect networks".
Education
He received his Bachelor of Science, Master of Science and Doctor of Philosophy degrees in Electrical Engineering from Carnegie-Mellon University in 1955, 1956 and 1958 respectively.
Career
From 1965 to 1967, he was the founding Director of the Integrated Electronics Division at the Fort Monmouth, New Jersey, US Army Electronics Laboratories. In 1967 he was appointed John M. Fluke Professor of Electrical Engineering at Stanford University before becoming vice provost of research.
He went on to serve as Associate Dean for Research in the School of Engineering; Director of the Center for Integrated Systems; and founding Director of the Integrated Circuits Laboratory. He was appointed Senior Vice President for Academic Affairs and Provost of Rensselaer Polytechnic Institute in 1986 and served there until 1993.
Meindl was a fellow of the IEEE and the AAAS, and he was elected a member of the National Academy of Engineering in 1978.
He was also a co-founder of Telesensory Systems, Inc., a manufacturer of electronic reading aids for the blind. Meindl served on the boards of directors of SanDisk Corporation and Zoran Corporation, and previously of Stratex Networks.
Notable students
His more than 80 doctoral students include T. J. Rodgers, founder of Cypress Semiconductor; William R. Brody, president of Johns Hopkins University; Levy Gerzberg, founder of Zoran Corporation; Roger Melen, founder of Cromemco; Jim Plummer, dean of engineering at Stanford University; L. Rafael Reif, president of MIT; Richard Swanson, founder of SunPower Corporation; Steve Combs, founder of Maxim Integrated Products; Nicky Lu, founder of Etron Technology; and Krishna Saraswat, professor at Stanford University.
References
External links
Georgia Institute of Technology profile
Rensselaer Polytechnic Institute profile
James D. Meindl, "The Wizard of Watts," IEEE Spectrum, May 31, 2006. https://spectrum.ieee.org/semiconductors/design/wizard-of-watts
1933 births
2020 deaths
IEEE Medal of Honor recipients
Carnegie Mellon University College of Engineering alumni
Georgia Tech faculty
Rensselaer Polytechnic Institute faculty
Members of the United States National Academy of Engineering
Stanford University faculty |
George Campbell School of Technology is a public high school specialising in technical education, located in Durban, KwaZulu-Natal, South Africa. The school was founded as George Campbell Technical High School in 1963 and today has a co-educational student body of over 1100 pupils. The curriculum includes the compulsory subjects of Mathematics, Physical Science & Chemistry, Engineering Graphics and Design, English and Afrikaans or IsiZulu.
Electives offered are:
Woodworking
Civil Construction
Civil Services
Fitting and Machining
Automotive
Welding
Electrical Technology (Light Current)
Electrical Technology (Heavy Current)
Digital Electronics
Facilities
The Media Centre is available to all students to use during breaks and after school. Besides books, there are computers connected to the Internet, printers and photocopy facilities. The school employs a full-time librarian.
The swimming pool is 25m long and is used extensively by the school swimming and water polo teams.
The Information Technology Centre is divided into two sections so that two classes can be accommodated at the same time. One section has 32 computers and the other 34. In 2006 a third computer room was added with 34 computers. The computers in the centre are networked, linked to high speed printers and the Internet. All students do ICDL or Computer Literacy classes in grades 8 and 9 where they learn about the parts, construction and development of the computer, the Internet, and programs.
In Grade 10 learners have the choice of doing either IT, where they learn programming, such as Java, or CAT (Computer Applications Technology) where they learn computer applications such as databases, word processors and webpage design. These subjects are done through to grade 12.
The school is a registered ICDL (International Computer Driving Licence) centre where learners and staff can be trained and tested to obtain an ICDL, an internationally recognised computer qualification.
Electrical Technology Centre
The Electrical Technology Centre is well equipped to accommodate the range of disciplines outlined in the curriculum, including Electrical (Heavy Current), Electronics (Light Current), Digital Systems, and Communication Systems. The Electrical Department places significant emphasis on the practical component, recognising its essential role in reinforcing theoretical knowledge.
Civil Technology Centre
The school offers dedicated facilities for the Civil Technology fields in its curriculum: Woodworking, Plumbing, and Construction. These well-equipped workshops give students hands-on opportunities to acquire practical skills in each of these specialised areas.
Within the FET curriculum, students receive comprehensive exposure to Civil Technology, studying architectural design and construction and exploring the various materials used in building different types of structures.
Mechanical Technology Centre
The introduction of the NCS document added a new subject, Mechanical Technology, to the syllabus. The subject centres on technological processes, encompassing design, systematic problem-solving, and the practical application of scientific principles. Mechanical Technology consolidated the previously distinct subjects of Motor Mechanics, Fitting and Turning, and Technica Mechanica into a unified curriculum.
More recently, the school has diversified its Mechanical Technology offering, reintroducing practical technical subjects within the Mechanical Technology framework, including Automotive Technology, Welding, and Fitting and Machining.
AutoCAD Centre
The CAD centre operates a network of 35 computers, all equipped with AutoCAD software. The facility is staffed by four qualified educators who teach AutoCAD to Grade 10 through Grade 12 students. Grade 12 learners also receive training in 3D modelling techniques.
Sport
On Saturdays, a total of 14 teams participate in competitive matches, with notable success in the Under 14 and Under 15 divisions, which have recorded victories against some of the top three ranked schools in KwaZulu-Natal.
In 2007, the school's rugby programme enjoyed a resurgence, with the first team winning 11 of its 15 games.
The school has also hosted the annual FNB KZN-GAUTENG Tournament and served as a training venue for the Springboks rugby team, which used the school's rugby fields as a training ground before departing for France.
George Campbell Technical High School has also earned a strong reputation for football within the KwaZulu-Natal region. The First XI, coached by Nhlanhla Bulose and captained by Ntokozo Madela, remained undefeated throughout the KZN Inter-Schools League from 2017 to 2019.
The educational institution further provides a range of sporting pursuits, including but not limited to:
Swimming
Rugby
Rugby Sevens
Football
Water Polo
Hockey
Cricket
Surfing & Bodyboarding
Netball
Chess
The educational institution further provides a range of extracurricular pursuits, including but not limited to:
Poetry Club
Drama Club
Choir Club
Revolutions Club
Interact Club
Durban Youth Council
External links
Education in Durban
Schools of technology in KwaZulu-Natal
Educational institutions established in 1963
1963 establishments in South Africa |
The history of mass spectrometry has its roots in physical and chemical studies regarding the nature of matter. The study of gas discharges in the mid 19th century led to the discovery of anode and cathode rays, which turned out to be positive ions and electrons. Improved capabilities in the separation of these positive ions enabled the discovery of stable isotopes of the elements. The first such discovery was with the element neon, which was shown by mass spectrometry to have at least two stable isotopes: 20Ne (neon with 10 protons and 10 neutrons) and 22Ne (neon with 10 protons and 12 neutrons). Mass spectrometers were used in the Manhattan Project for the separation of isotopes of uranium necessary to create the atomic bomb.
Prout's Hypothesis
Prout's hypothesis was an early 19th-century attempt to explain the properties of the chemical elements using the internal structure of the atom. In 1815, the English chemist William Prout observed that the atomic weights that had been measured were integer multiples of the atomic weight of hydrogen. Prout's hypothesis remained influential in chemistry throughout the 1820s. However, more careful measurements of the atomic weights, such as those compiled by Jöns Jakob Berzelius in 1828 or Edward Turner in 1832, appeared to disprove it. In particular the atomic weight of chlorine, which is 35.45 times that of hydrogen, could not at the time be explained in terms of Prout's hypothesis. It would take the better part of a century for this problem to be resolved.
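The later discovery of isotopes resolved this difficulty: the chemically measured atomic weight is the abundance-weighted average of the masses of an element's stable isotopes. As a worked illustration (using modern isotopic masses and abundances, not figures available in Prout's era), chlorine's two stable isotopes give

\[ 0.7576 \times 34.97 + 0.2424 \times 36.97 \approx 35.45, \]

which reproduces the non-integer atomic weight that Prout's hypothesis could not explain.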
Canal rays
In the mid-nineteenth century, Julius Plücker investigated the light emitted in discharge tubes and the influence of magnetic fields on the glow. Later, in 1869, Johann Wilhelm Hittorf studied discharge tubes with energy rays extending from a negative electrode, the cathode. These rays produced a fluorescence when they hit a tube's glass walls, and when interrupted by a solid object they cast a shadow.
Canal rays, also called anode rays, were observed by Eugen Goldstein in 1886. Goldstein used a gas discharge tube which had a perforated cathode. The rays are produced in the holes (canals) in the cathode and travel in a direction opposite to the "cathode rays", which are streams of electrons. Goldstein called these positive rays "Kanalstrahlen" (canal rays).
Discovery of isotopes
In 1913, as part of his exploration into the composition of canal rays, J. J. Thomson channeled a stream of ionized neon through a magnetic and an electric field and measured its deflection by placing a photographic plate in its path. Thomson observed two patches of light on the photographic plate, which suggested two different parabolas of deflection. Thomson concluded that the neon gas was composed of atoms of two different atomic masses (neon-20 and neon-22).
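A brief sketch of why each mass-to-charge ratio traces its own parabola (a standard textbook treatment, not Thomson's original notation): if parallel electric and magnetic fields E and B act on an ion of mass m, charge q and speed v over a region of length L, the small-angle deflections in the two perpendicular directions are approximately

\[ y \approx \frac{qEL^2}{2mv^2}, \qquad z \approx \frac{qBL^2}{2mv}. \]

Eliminating the unknown speed v gives

\[ y = \frac{2E}{B^2 L^2}\,\frac{m}{q}\, z^2 , \]

so ions sharing a given m/q fall on a single parabola regardless of their velocity, and neon's two isotopes therefore produce two distinct parabolas.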
Thomson's student Francis William Aston continued the research at the Cavendish Laboratory in Cambridge, building the first fully functional mass spectrometer, which he reported in 1919. He was able to identify isotopes of chlorine (35 and 37), bromine (79 and 81), and krypton (78, 80, 82, 83, 84 and 86), proving that these naturally occurring elements are composed of a combination of isotopes. His use of electromagnetic focusing in the mass spectrograph rapidly allowed him to identify no fewer than 212 of the 287 naturally occurring isotopes. In 1921, F. W. Aston became a fellow of the Royal Society and received a Nobel Prize in Chemistry the following year.
His work on isotopes also led to his formulation of the Whole Number Rule, which states that "the mass of the oxygen isotope being defined [as 16], all the other isotopes have masses that are very nearly whole numbers," a rule that was used extensively in the development of nuclear energy. The exact mass of many isotopes was measured, leading to the result that hydrogen has a mass about 1% higher than expected from the average mass of the other elements. Aston speculated about subatomic energy and its use in 1936.
In 1918, Arthur Jeffrey Dempster reported on his mass spectrometer and established the basic theory and design of mass spectrometers that is still used to this day. Dempster's research over his career centered around the mass spectrometer and its applications, leading in 1935 to his discovery of the uranium isotope 235U. This isotope's ability to cause a rapidly expanding fission nuclear chain reaction allowed the development of the atom bomb and nuclear power.
In 1932, Kenneth Bainbridge developed a mass spectrometer with a resolving power of 600 and a relative precision of one part in 10,000. He used this instrument to verify the equivalence of mass and energy, E = mc².
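A worked illustration of the kind of check involved (using modern mass values rather than Bainbridge's original data): in the reaction ⁷Li + ¹H → 2 ⁴He, the mass difference between reactants and products is

\[ \Delta m = (7.0160 + 1.0078) - 2 \times 4.0026 = 0.0186\ \mathrm{u}, \]

which corresponds to \( \Delta m\, c^2 \approx 0.0186 \times 931.5\ \mathrm{MeV} \approx 17.3\ \mathrm{MeV} \), in agreement with the measured kinetic energy released by the reaction.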
Manhattan Project
A Calutron is a sector mass spectrometer that was used for separating the isotopes of uranium developed by Ernest O. Lawrence during the Manhattan Project and was similar to the Cyclotron invented by Lawrence. Its name is a concatenation of Cal. U.-tron, in tribute to the University of California, Lawrence's institution and the contractor of the Los Alamos laboratory. They were implemented for industrial scale uranium enrichment at the Oak Ridge, Tennessee Y-12 plant established during the war and provided much of the uranium used for the "Little Boy" nuclear weapon, which was dropped onto Hiroshima in 1945.
Development of gas chromatography-mass spectrometry
The use of a mass spectrometer as the detector in gas chromatography was developed during the 1950s by Roland Gohlke and Fred McLafferty.
The development of affordable and miniaturized computers has helped in the simplification of the use of this instrument, as well as allowed great improvements in the amount of time it takes to analyze a sample.
Fourier transform mass spectrometry
Fourier transform ion cyclotron resonance mass spectrometry was developed by Alan G. Marshall and Melvin B. Comisarow at the University of British Columbia in 1974. The inspiration was earlier developments in conventional ICR and Fourier Transform Nuclear Magnetic Resonance (FT-NMR) spectroscopy.
Soft ionization methods
Field desorption ionization was first reported by Beckey in 1969. In field ionization, a high-potential electric field is applied to an emitter with a sharp surface, such as a razor blade, or more commonly, a filament from which tiny "whiskers" have been grown. This produces a very high electric field in which electron tunneling can result in ionization of gaseous analyte molecules. FI produces mass spectra with little or no fragmentation, dominated by molecular radical cations M+• and occasionally protonated molecules [M + H]+.
Chemical ionization was developed in the 1960s. Ionization of sample (analyte) is achieved by interaction of its molecules with reagent ions. The analyte is ionized by ion-molecule reactions during collisions in the source. The process may involve transfer of an electron, a proton or other charged species between the reactants. This is a less energetic procedure than electron ionization and the ions produced are, for example, protonated molecules: [M + H]+. These ions are often relatively stable, tending not to fragment as readily as ions produced by electron ionization.
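For example, with methane as the reagent gas (a common textbook illustration rather than a detail drawn from the text above), the reagent ions are formed and then protonate the analyte M:

\[ \mathrm{CH_4 + e^- \longrightarrow CH_4^{+\bullet} + 2e^-}, \qquad \mathrm{CH_4^{+\bullet} + CH_4 \longrightarrow CH_5^{+} + CH_3^{\bullet}}, \qquad \mathrm{CH_5^{+} + M \longrightarrow [M+H]^{+} + CH_4}. \]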
Matrix-assisted laser desorption/ionization (MALDI) is a soft ionization technique used in mass spectrometry, allowing the analysis of biomolecules (biopolymers such as proteins, peptides and sugars) and large organic molecules (such as polymers, dendrimers and other macromolecules), which tend to be fragile and fragment when ionized by more conventional ionization methods. It is most similar in character to electrospray ionization both in relative softness and the ions produced (although it causes far fewer multiply charged ions). The term was first used in 1985 by Franz Hillenkamp, Michael Karas and their colleagues. These researchers found that the amino acid alanine could be ionized more easily if it was mixed with the amino acid tryptophan and irradiated with a pulsed 266 nm laser. The tryptophan was absorbing the laser energy and helping to ionize the non-absorbing alanine. Peptides up to the 2843 Da peptide melittin could be ionized when mixed with this kind of "matrix".
The breakthrough for large molecule laser desorption ionization came in 1987 when Koichi Tanaka of Shimadzu Corp. and his co-workers used what they called the “ultra fine metal plus liquid matrix method” that combined 30 nm cobalt particles in glycerol with a 337 nm nitrogen laser for ionization. Using this laser and matrix combination, Tanaka was able to ionize biomolecules as large as the 34,472 Da protein carboxypeptidase-A. Tanaka received one-quarter of the 2002 Nobel Prize in Chemistry for demonstrating that, with the proper combination of laser wavelength and matrix, a protein can be ionized. Karas and Hillenkamp were subsequently able to ionize the 67 kDa protein albumin using a nicotinic acid matrix and a 266 nm laser. Further improvements were realized through the use of a 355 nm laser and the cinnamic acid derivatives ferulic acid, caffeic acid and sinapinic acid as the matrix. The availability of small and relatively inexpensive nitrogen lasers operating at 337 nm wavelength and the first commercial instruments introduced in the early 1990s brought MALDI to an increasing number of researchers. Today, mostly organic matrices are used for MALDI mass spectrometry.
Timeline
19th century
1886
Eugen Goldstein observes canal rays.
1898
Wilhelm Wien demonstrates that canal rays can be deflected using strong electric and magnetic fields. He shows that the particles carry a charge of polarity opposite to that of the electron and that their mass-to-charge ratio is much larger than the electron's. He also realizes that the particle mass is similar to that of the hydrogen atom.
1898
J. J. Thomson measures the mass-to-charge ratio of electrons.
20th century
1901
Walter Kaufmann uses a mass spectrometer to measure the relativistic mass increase of electrons.
1905
J. J. Thomson begins his study of positive rays.
1906
Thomson is awarded the Nobel Prize in Physics "in recognition of the great merits of his theoretical and experimental investigations on the conduction of electricity by gases"
1913
Thomson is able to separate particles of different mass-to-charge ratios. He separates the 20Ne and the 22Ne isotopes, and he correctly identifies the m/z = 11 signal as a doubly charged 22Ne particle.
1919
Francis Aston constructs the first velocity focusing mass spectrograph with mass resolving power of 130.
1922
Aston is awarded the Nobel Prize in chemistry "for his discovery, by means of his mass spectrograph, of isotopes, in a large number of non-radioactive elements, and for his enunciation of the whole-number rule."
1931
Ernest O. Lawrence invents the cyclotron.
1934
Josef Mattauch and Richard Herzog develop the double-focusing mass spectrograph.
1936
Arthur J. Dempster develops the spark ionization source.
1937
Aston constructs a mass spectrograph with resolving power of 2000.
1939
Lawrence receives the Nobel Prize in Physics for the cyclotron.
1942
Lawrence develops the Calutron for uranium isotope separation.
1943
Westinghouse markets its mass spectrometer and proclaims it to be "A New Electronic Method for fast, accurate gas analysis".
1946
William Stephens presents the concept of a time-of-flight mass spectrometer.
1953
Wolfgang Paul and Helmut Steinwedel introduce the quadrupole mass filter.
1954
A. J. C. Nicholson (Australia) proposes a hydrogen transfer reaction that will come to be known as the McLafferty rearrangement.
1959
Researchers at Dow Chemical interface a gas chromatograph to a mass spectrometer.
1964
The British Mass Spectrometry Society is established as the first dedicated mass spectrometry society. It holds its first meeting in 1965 in London.
1966
F. H. Field and M. S. B. Munson develop chemical ionization.
1968
Malcolm Dole develops electrospray ionization.
1969
H. D. Beckey develops field desorption.
1974
Comisarow and Marshall develop Fourier Transform Ion Cyclotron Resonance mass spectrometry.
1976
Ronald MacFarlane and co-workers develop plasma desorption mass spectrometry.
1984
John Bennett Fenn and co-workers use electrospray to ionize biomolecules.
1985
Franz Hillenkamp, Michael Karas and co-workers describe and coin the term matrix-assisted laser desorption ionization (MALDI).
1987
Koichi Tanaka uses the “ultra fine metal plus liquid matrix method” to ionize intact proteins.
1989
Wolfgang Paul receives the Nobel Prize in Physics "for the development of the ion trap technique".
1999
Alexander Makarov presents the Orbitrap mass spectrometer.
21st century
2002
John Bennett Fenn and Koichi Tanaka are awarded one-quarter of the Nobel Prize in chemistry each "for the development of soft desorption ionisation methods ... for mass spectrometric analyses of biological macromolecules."
2005
Commercialization of Orbitrap MS
2008
ASMS Distinguished Contribution in Mass Spectrometry Award
See also
Mass spectrometry
History of chemistry
History of physics
References
Bibliography
Measuring Mass: From Positive Rays to Proteins by Michael A. Grayson (Editor)
External links
History of Mass Spectrometry - Pioneers - University of New South Wales Sydney
Five Mass Spectrometry Nobel Prize Pioneers - Bristol University
History of Mass Spectrometry - Scripps Institute
Mass spectrometry
Corner detection is an approach used within computer vision systems to extract certain kinds of features and infer the contents of an image. Corner detection is frequently used in motion detection, image registration, video tracking, image mosaicing, panorama stitching, 3D reconstruction and object recognition. Corner detection overlaps with the topic of interest point detection.
Formalization
A corner can be defined as the intersection of two edges. A corner can also be defined as a point for which there are two dominant and different edge directions in a local neighbourhood of the point.
An interest point is a point in an image which has a well-defined position and can be robustly detected. This means that an interest point can be a corner but it can also be, for example, an isolated point of local intensity maximum or minimum, line endings, or a point on a curve where the curvature is locally maximal.
In practice, most so-called corner detection methods detect interest points in general, and in fact the terms "corner" and "interest point" are used more or less interchangeably throughout the literature. As a consequence, if only corners are to be detected it is necessary to do a local analysis of detected interest points to determine which of these are real corners. Examples of edge detection that can be used with post-processing to detect corners are the Kirsch operator and the Frei-Chen masking set.
"Corner", "interest point" and "feature" are used interchangeably in literature, confusing the issue. Specifically, there are several blob detectors that can be referred to as "interest point operators", but which are sometimes erroneously referred to as "corner detectors". Moreover, there exists a notion of ridge detection to capture the presence of elongated objects.
Corner detectors are not usually very robust and often require the introduction of large redundancies to prevent the effect of individual errors from dominating the recognition task.
One determination of the quality of a corner detector is its ability to detect the same corner in multiple similar images, under conditions of different lighting, translation, rotation and other transforms.
A simple approach to corner detection in images is using correlation, but this becomes very computationally expensive and gives suboptimal results. An alternative approach used frequently is based on a method proposed by Harris and Stephens (below), which in turn is an improvement of a method by Moravec.
Moravec corner detection algorithm
This is one of the earliest corner detection algorithms and defines a corner to be a point with low self-similarity. The algorithm tests each pixel in the image to see if a corner is present, by considering how similar a patch centered on the pixel is to nearby, largely overlapping patches. The similarity is measured by taking the sum of squared differences (SSD) between the corresponding pixels of two patches. A lower number indicates more similarity.
If the pixel is in a region of uniform intensity, then the nearby patches will look similar. If the pixel is on an edge, then nearby patches in a direction perpendicular to the edge will look quite different, but nearby patches in a direction parallel to the edge will result in only a small change. If the pixel is on a feature with variation in all directions, then none of the nearby patches will look similar.
The corner strength is defined as the smallest SSD between the patch and its neighbours (horizontal, vertical and on the two diagonals). The reason is that if this number is high, then the variation along all shifts is either equal to it or larger than it, which captures the fact that all nearby patches look different.
Once the corner strength has been computed for all locations, a location at which it is locally maximal indicates that a feature of interest is present there.
As pointed out by Moravec, one of the main problems with this operator is that it is not isotropic: if an edge is present that is not in the direction of the neighbours (horizontal, vertical, or diagonal), then the smallest SSD will be large and the edge will be incorrectly chosen as an interest point.
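The corner strength computation can be written down compactly. The following is a minimal sketch, not Moravec's original implementation: it assumes a grayscale image held in a NumPy array, a hypothetical function name moravec_corner_strength, a 3×3 patch, and the four shift directions mentioned above (horizontal, vertical and the two diagonals). Thresholding and non-maximum suppression of the returned map are left to the caller.

```python
import numpy as np

def moravec_corner_strength(img, window=3,
                            shifts=((1, 0), (0, 1), (1, 1), (1, -1))):
    """Corner strength at each pixel: the smallest SSD between the window
    centred on the pixel and the same window shifted in each test direction."""
    img = img.astype(np.float64)
    h, w = img.shape
    r = window // 2
    strength = np.zeros_like(img)
    for y in range(r + 1, h - r - 1):
        for x in range(r + 1, w - r - 1):
            patch = img[y - r:y + r + 1, x - r:x + r + 1]
            ssds = []
            for dy, dx in shifts:
                shifted = img[y + dy - r:y + dy + r + 1,
                              x + dx - r:x + dx + r + 1]
                ssds.append(np.sum((patch - shifted) ** 2))
            # low self-similarity in *all* tested directions => corner candidate
            strength[y, x] = min(ssds)
    return strength
```

A practical implementation would vectorize the patch comparisons rather than looping over every pixel, but the per-pixel loop mirrors the description above most directly.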
The Harris & Stephens / Shi–Tomasi corner detection algorithms
Harris and Stephens improved upon Moravec's corner detector by considering the differential of the corner score with respect to direction directly, instead of using shifted patches. (This corner score is often referred to as autocorrelation, since the term is used in the paper in which this detector is described. However, the mathematics in the paper clearly indicate that the sum of squared differences is used.)
Without loss of generality, we will assume a grayscale 2-dimensional image is used. Let this image be given by $I$. Consider taking an image patch $(x, y) \in W$ and shifting it by $(\Delta x, \Delta y)$. The weighted sum of squared differences (SSD) between these two patches, denoted $S$, is given by:
$S(\Delta x, \Delta y) = \sum_{(x, y) \in W} w(x, y)\, \bigl( I(x + \Delta x, y + \Delta y) - I(x, y) \bigr)^2$
$I(x + \Delta x, y + \Delta y)$ can be approximated by a Taylor expansion. Let $I_x$ and $I_y$ be the partial derivatives of $I$, such that
$I(x + \Delta x, y + \Delta y) \approx I(x, y) + I_x(x, y)\, \Delta x + I_y(x, y)\, \Delta y$
This produces the approximation
$S(\Delta x, \Delta y) \approx \sum_{(x, y) \in W} w(x, y)\, \bigl( I_x(x, y)\, \Delta x + I_y(x, y)\, \Delta y \bigr)^2,$
which can be written in matrix form:
$S(\Delta x, \Delta y) \approx \begin{pmatrix} \Delta x & \Delta y \end{pmatrix} A \begin{pmatrix} \Delta x \\ \Delta y \end{pmatrix},$
where $A$ is the structure tensor,
$A = \sum_{(x, y) \in W} w(x, y) \begin{pmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{pmatrix} = \begin{pmatrix} \langle I_x^2 \rangle & \langle I_x I_y \rangle \\ \langle I_x I_y \rangle & \langle I_y^2 \rangle \end{pmatrix}$
In words, we find the covariance of the partial derivatives of the image intensity $I$ with respect to the $x$ and $y$ axes.
Angle brackets denote averaging (i.e. summation over $(x, y) \in W$), and $w(x, y)$ denotes the type of window that slides over the image. If a Box filter is used the response will be anisotropic, but if a Gaussian is used, then the response will be isotropic.
A corner (or in general an interest point) is characterized by a large variation of $S$ in all directions of the vector $(\Delta x, \Delta y)$. By analyzing the eigenvalues of $A$, this characterization can be expressed in the following way: $A$ should have two "large" eigenvalues for an interest point.
Based on the magnitudes of the eigenvalues $\lambda_1$ and $\lambda_2$ of $A$, the following inferences can be made from this argument:
If $\lambda_1 \approx 0$ and $\lambda_2 \approx 0$, then this pixel $(x, y)$ has no features of interest.
If $\lambda_1 \approx 0$ and $\lambda_2$ has some large positive value, then an edge is found.
If $\lambda_1$ and $\lambda_2$ have large positive values, then a corner is found.
Harris and Stephens note that exact computation of the eigenvalues is computationally expensive, since it requires the computation of a square root, and instead suggest the following function $M_c$, where $\kappa$ is a tunable sensitivity parameter:
$M_c = \lambda_1 \lambda_2 - \kappa\, (\lambda_1 + \lambda_2)^2 = \det(A) - \kappa\, \operatorname{trace}^2(A)$
Therefore, the algorithm does not have to actually compute the eigenvalue decomposition of the matrix $A$; instead it is sufficient to evaluate the determinant and trace of $A$ to find corners, or rather interest points in general.
The Shi–Tomasi corner detector directly computes $\min(\lambda_1, \lambda_2)$, because under certain assumptions the corners are more stable for tracking. Note that this method is also sometimes referred to as the Kanade–Tomasi corner detector.
The value of $\kappa$ has to be determined empirically, and in the literature values in the range 0.04–0.15 have been reported as feasible.
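As a rough illustration of how the determinant/trace formulation avoids an explicit eigendecomposition, the sketch below computes both the Harris measure and the Shi–Tomasi measure from a Gaussian-windowed structure tensor. It is an assumed implementation for illustration, not the authors' code: the function name, the use of SciPy's Sobel and Gaussian filters, and the defaults sigma=1.0 and k=0.05 are choices made here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def harris_response(img, sigma=1.0, k=0.05):
    """Harris measure det(A) - k*trace(A)^2 and Shi-Tomasi measure
    min(eigenvalues of A) from the Gaussian-windowed structure tensor A."""
    img = img.astype(np.float64)
    Ix = sobel(img, axis=1)              # image derivative along x
    Iy = sobel(img, axis=0)              # image derivative along y
    # elements of the structure tensor, averaged with a Gaussian window
    Sxx = gaussian_filter(Ix * Ix, sigma)
    Syy = gaussian_filter(Iy * Iy, sigma)
    Sxy = gaussian_filter(Ix * Iy, sigma)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    harris = det - k * trace ** 2
    # smaller eigenvalue of the 2x2 tensor at every pixel (Shi-Tomasi)
    shi_tomasi = 0.5 * (trace - np.sqrt(np.maximum(trace ** 2 - 4.0 * det, 0.0)))
    return harris, shi_tomasi
```

Corner candidates would then be taken as local maxima of either response map above a threshold.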
One can avoid setting the parameter $\kappa$ by using Noble's corner measure $M_c'$, which amounts to the harmonic mean of the eigenvalues:
$M_c' = 2\, \frac{\det(A)}{\operatorname{trace}(A) + \epsilon},$
$\epsilon$ being a small positive constant.
If $A$ can be interpreted as the precision matrix for the corner position, the covariance matrix for the corner position is $A^{-1}$, i.e.
$A^{-1} = \frac{1}{\langle I_x^2 \rangle \langle I_y^2 \rangle - \langle I_x I_y \rangle^2} \begin{pmatrix} \langle I_y^2 \rangle & -\langle I_x I_y \rangle \\ -\langle I_x I_y \rangle & \langle I_x^2 \rangle \end{pmatrix}$
The sum of the eigenvalues of $A^{-1}$, which in that case can be interpreted as a generalized variance (or a "total uncertainty") of the corner position, is related to Noble's corner measure $M_c'$ by the following equation:
$\lambda_1(A^{-1}) + \lambda_2(A^{-1}) = \operatorname{trace}(A^{-1}) = \frac{\operatorname{trace}(A)}{\det(A)} \approx \frac{2}{M_c'}$
The Förstner corner detector
In some cases, one may wish to compute the location of a corner with subpixel accuracy. To achieve an approximate solution, the Förstner algorithm solves for the point closest to all the tangent lines of the corner in a given window, as a least-squares solution. The algorithm relies on the fact that for an ideal corner, tangent lines cross at a single point.
The equation of a tangent line at pixel is given by:
where is the gradient vector of the image at .
The point closest to all the tangent lines in the window is:
The distance from to the tangent lines is weighted by the gradient magnitude, thus giving more importance to tangents passing through pixels with strong gradients.
Solving for :
are defined as:
Minimizing this equation can be done by differentiating with respect to and setting it equal to 0:
Note that $A$ is the structure tensor. For the equation to have a solution, $A$ must be invertible, which implies that $A$ must be of full rank (rank 2). Thus, the solution only exists where an actual corner exists in the window $W$.
A methodology for performing automatic scale selection for this corner localization method has been presented by Lindeberg by minimizing the normalized residual
over scales. Thereby, the method has the ability to automatically adapt the scale levels for computing the image gradients to the noise level in the image data, by choosing coarser scale levels for noisy image data and finer scale levels for near ideal corner-like structures.
Notes:
The minimized quantity can be viewed as a residual in the least-squares solution computation: if it is zero, then the tangent lines intersect exactly and there was no error.
This algorithm can be modified to compute centers of circular features by changing tangent lines to normal lines.
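The least-squares construction above reduces to accumulating a small 2×2 linear system over the window and solving it. The following is one possible reading of that construction, stated as a sketch rather than the original method: the function name, the window radius, and the use of Sobel gradients are assumptions made here, and the window is assumed to lie entirely inside the image.

```python
import numpy as np
from scipy.ndimage import sobel

def foerstner_subpixel(img, y0, x0, radius=3):
    """Sub-pixel corner location in a (2*radius+1)^2 window around (y0, x0):
    the weighted least-squares intersection of the tangent lines,
    obtained by solving A @ x_hat = b with A = sum(g g^T), b = sum(g g^T p)."""
    img = img.astype(np.float64)
    gx = sobel(img, axis=1)
    gy = sobel(img, axis=0)
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for y in range(y0 - radius, y0 + radius + 1):
        for x in range(x0 - radius, x0 + radius + 1):
            g = np.array([gx[y, x], gy[y, x]])
            p = np.array([x, y], dtype=np.float64)
            G = np.outer(g, g)           # gradient magnitude acts as the weight
            A += G
            b += G @ p
    if np.linalg.matrix_rank(A) < 2:     # A not full rank: no corner in the window
        return None
    return np.linalg.solve(A, b)         # (x, y) with sub-pixel precision
```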
The multi-scale Harris operator
The computation of the second moment matrix (sometimes also referred to as the structure tensor) in the Harris operator, requires the computation of image derivatives in the image domain as well as the summation of non-linear combinations of these derivatives over local neighbourhoods. Since the computation of derivatives usually involves a stage of scale-space smoothing, an operational definition of the Harris operator requires two scale parameters: (i) a local scale for smoothing prior to the computation of image derivatives, and (ii) an integration scale for accumulating the non-linear operations on derivative operators into an integrated image descriptor.
With denoting the original image intensity, let denote the scale space representation of obtained by convolution with a Gaussian kernel
with local scale parameter :
and let and denote the partial derivatives of .
Moreover, introduce a Gaussian window function with integration scale parameter . Then, the multi-scale second-moment matrix can be defined as
Then, we can compute eigenvalues of in a similar way as the eigenvalues of and define the multi-scale Harris corner measure as
Concerning the choice of the local scale parameter $t$ and the integration scale parameter $s$, these scale parameters are usually coupled by a relative integration scale parameter $\gamma$ such that $s = \gamma^2 t$, where $\gamma$ is usually chosen in the interval $[1, 2]$. Thus, we can compute the multi-scale Harris corner measure at any scale $t$ in scale-space to obtain a multi-scale corner detector, which responds to corner structures of varying sizes in the image domain.
In practice, this multi-scale corner detector is often complemented by a scale selection step, where the scale-normalized Laplacian operator
$\nabla^2_{\mathrm{norm}} L(x, y; t) = t\, \bigl( L_{xx}(x, y; t) + L_{yy}(x, y; t) \bigr)$
is computed at every scale in scale-space, and scale-adapted corner points with automatic scale selection (the "Harris-Laplace operator") are computed from the points that are simultaneously:
spatial maxima of the multi-scale corner measure
local maxima or minima over scales of the scale-normalized Laplacian operator $\nabla^2_{\mathrm{norm}} L$ (a minimal sketch of this two-step procedure is given below).
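The two-step procedure can be sketched as follows, under several assumptions made here for illustration: derivatives are taken as Gaussian derivatives at standard deviation sqrt(t), the integration scale is coupled as s = gamma^2 * t, the Harris measure is normalised with a t^2 factor, and the scale-normalised Laplacian is t*(Lxx + Lyy). The function returns two scale-space volumes; selecting points that are spatial maxima of the first and scale extrema of the second is left to the caller.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def harris_laplace_volumes(img, scales=(1.0, 2.0, 4.0, 8.0, 16.0),
                           gamma=1.4, k=0.05):
    """Multi-scale Harris measure and scale-normalised Laplacian over a list
    of local scales t (Harris-Laplace style scale selection, sketch only)."""
    img = img.astype(np.float64)
    harris_vol, laplace_vol = [], []
    for t in scales:
        sigma = np.sqrt(t)
        Lx = gaussian_filter(img, sigma, order=[0, 1])   # Gaussian derivative along x
        Ly = gaussian_filter(img, sigma, order=[1, 0])   # Gaussian derivative along y
        si = gamma * sigma                               # sqrt(s) with s = gamma^2 * t
        Sxx = gaussian_filter(Lx * Lx, si)
        Syy = gaussian_filter(Ly * Ly, si)
        Sxy = gaussian_filter(Lx * Ly, si)
        harris = (Sxx * Syy - Sxy ** 2) - k * (Sxx + Syy) ** 2
        harris_vol.append(t ** 2 * harris)               # t^2 factor: assumed normalisation
        Lxx = gaussian_filter(img, sigma, order=[0, 2])
        Lyy = gaussian_filter(img, sigma, order=[2, 0])
        laplace_vol.append(t * (Lxx + Lyy))              # scale-normalised Laplacian
    # Harris-Laplace points: spatial maxima of harris_vol at each scale that are
    # also extrema of laplace_vol over the scale index (selection left to caller).
    return np.stack(harris_vol), np.stack(laplace_vol)
```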
The level curve curvature approach
An earlier approach to corner detection is to detect points where the curvature of level curves and the gradient magnitude are simultaneously high.
A differential way to detect such points is by computing the rescaled level curve curvature (the product of the level curve curvature and the gradient magnitude raised to the power of three)
and to detect positive maxima and negative minima of this differential expression at some scale in the scale space representation of the original image.
A main problem when computing the rescaled level curve curvature entity at a single scale however, is that it may be sensitive to noise and to the choice of the scale level. A better method is to compute the -normalized rescaled level curve curvature
with and to detect signed scale-space extrema of this expression, that are points and scales that are positive maxima and negative minima with respect to both space and scale
in combination with a complementary localization step to handle the increase in localization error at coarser scales. In this way, larger scale values will be associated with rounded corners of large spatial extent while smaller scale values will be associated with sharp corners with small spatial extent. This approach is the first corner detector with automatic scale selection (prior to the "Harris-Laplace operator" above) and has been used for tracking corners under large scale variations in the image domain and for matching corner responses to edges to compute structural image features for geon-based object recognition.
Laplacian of Gaussian, differences of Gaussians and determinant of the Hessian scale-space interest points
LoG is an acronym standing for Laplacian of Gaussian, DoG is an acronym standing for difference of Gaussians (DoG is an approximation of LoG), and DoH is an acronym standing for determinant of the Hessian. These scale-invariant interest points are all extracted by detecting scale-space extrema of scale-normalized differential expressions, i.e., points in scale-space where the corresponding scale-normalized differential expressions assume local extrema with respect to both space and scale
where denotes the appropriate scale-normalized differential entity (defined below).
These detectors are more completely described in blob detection. The scale-normalized Laplacian of the Gaussian and difference-of-Gaussian features (Lindeberg 1994, 1998; Lowe 2004)
do not necessarily make highly selective features, since these operators may also lead to responses near edges. To improve the corner detection ability of the differences of Gaussians detector, the feature detector used in the SIFT system therefore uses an additional post-processing stage, where the eigenvalues of the Hessian of the image at the detection scale are examined in a similar way as in the Harris operator. If the ratio of the eigenvalues is too high, then the local image is regarded as too edge-like, so the feature is rejected. Also Lindeberg's Laplacian of the Gaussian feature detector can be defined to comprise complementary thresholding on a complementary differential invariant to suppress responses near edges.
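A common form of this edge-rejection test, following the description in Lowe's SIFT paper, compares the squared trace of the 2×2 Hessian to its determinant so that no explicit eigenvalues are needed. The sketch below is illustrative only; the function name and the default principal-curvature ratio r=10 are assumptions.

```python
def is_edge_like(Dxx, Dyy, Dxy, r=10.0):
    """Reject a candidate feature if the ratio of the principal curvatures of
    the 2x2 Hessian [[Dxx, Dxy], [Dxy, Dyy]] is too large, tested via
    trace^2 / det >= (r + 1)^2 / r without computing eigenvalues."""
    tr = Dxx + Dyy
    det = Dxx * Dyy - Dxy ** 2
    if det <= 0:          # curvatures of opposite sign: not a stable extremum
        return True
    return tr ** 2 / det >= (r + 1.0) ** 2 / r
```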
The scale-normalized determinant of the Hessian operator (Lindeberg 1994, 1998)
is, on the other hand, highly selective to well-localized image features: it only responds when there are significant grey-level variations in two image directions, and is in this and other respects a better interest point detector than the Laplacian of the Gaussian. The determinant of the Hessian is an affine covariant differential expression and has better scale selection properties under affine image transformations than the Laplacian operator (Lindeberg 2013, 2015). Experimentally this implies that determinant-of-the-Hessian interest points have better repeatability properties under local image deformation than Laplacian interest points, which in turn leads to better performance of image-based matching in terms of higher efficiency scores and lower 1−precision scores.
The scale selection properties, affine transformation properties and experimental properties of these and other scale-space interest point detectors are analyzed in detail in (Lindeberg 2013, 2015).
Scale-space interest points based on the Lindeberg Hessian feature strength measures
Inspired by the structurally similar properties of the Hessian matrix of a function and the second-moment matrix (structure tensor), as can e.g. be manifested in terms of their similar transformation properties under affine image deformations, Lindeberg (2013, 2015) proposed to define four feature strength measures from the Hessian matrix in a way related to how the Harris and Shi-and-Tomasi operators are defined from the structure tensor (second-moment matrix).
Specifically, he defined the following unsigned and signed Hessian feature strength measures:
the unsigned Hessian feature strength measure I:
the signed Hessian feature strength measure I:
the unsigned Hessian feature strength measure II:
the signed Hessian feature strength measure II:
where $\operatorname{trace} \mathcal{H}L$ and $\det \mathcal{H}L$ denote the trace and the determinant of the Hessian matrix $\mathcal{H}L$ of the scale-space representation $L$ at any scale $t$, whereas $\lambda_1$ and $\lambda_2$ denote the eigenvalues of the Hessian matrix.
The unsigned Hessian feature strength measure responds to local extrema by positive values and is not sensitive to saddle points, whereas the signed Hessian feature strength measure does additionally respond to saddle points by negative values. The unsigned Hessian feature strength measure is insensitive to the local polarity of the signal, whereas the signed Hessian feature strength measure responds to the local polarity of the signal by the sign of its output.
In Lindeberg (2015) these four differential entities were combined with local scale selection based on either scale-space extrema detection
or scale linking. Furthermore, the signed and unsigned Hessian feature strength measures and were combined with complementary thresholding on .
In experiments on image matching under scaling transformations, using a poster dataset with 12 posters, multi-view matching over scaling transformations up to a factor of 6 and viewing direction variations up to a slant angle of 45 degrees, with local image descriptors defined from reformulations of the pure image descriptors in the SIFT and SURF operators to image measurements in terms of Gaussian derivative operators (Gauss-SIFT and Gauss-SURF) instead of original SIFT as defined from an image pyramid or original SURF as defined from Haar wavelets, it was shown that scale-space interest point detection based on the unsigned Hessian feature strength measure gave the best performance, better than scale-space interest points obtained from the determinant of the Hessian. The unsigned Hessian feature strength measure, the signed Hessian feature strength measure and the determinant of the Hessian all gave better performance than the Laplacian of the Gaussian. When combined with scale linking and complementary thresholding, the signed Hessian feature strength measure additionally gave better performance than the Laplacian of the Gaussian.
Furthermore, it was shown that all these differential scale-space interest point detectors defined from the Hessian matrix allow for the detection of a larger number of interest points and better matching performance compared to the Harris and Shi-and-Tomasi operators defined from the structure tensor (second-moment matrix).
A theoretical analysis of the scale selection properties of these four Hessian feature strength measures and other differential entities for detecting scale-space interest points, including the Laplacian of the Gaussian and the determinant of the Hessian, is given in Lindeberg (2013) and an analysis of their affine transformation properties as well as experimental properties in Lindeberg (2015).
Affine-adapted interest point operators
The interest points obtained from the multi-scale Harris operator with automatic scale selection are invariant to translations, rotations and uniform rescalings in the spatial domain. The images that constitute the input to a computer vision system are, however, also subject to perspective distortions. To obtain an interest point operator that is more robust to perspective transformations, a natural approach is to devise a feature detector that is invariant to affine transformations. In practice, affine invariant interest points can be obtained by applying affine shape adaptation where the shape of the smoothing kernel is iteratively warped to match the local image structure around the interest point or equivalently a local image patch is iteratively warped while the shape of the smoothing kernel remains rotationally symmetric (Lindeberg 1993, 2008; Lindeberg and Garding 1997; Mikolajczyk and Schmid 2004). Hence, besides the commonly used multi-scale Harris operator, affine shape adaptation can be applied to other corner detectors as listed in this article as well as to differential blob detectors such as the Laplacian/difference of Gaussian operator, the determinant of the Hessian and the Hessian–Laplace operator.
The Wang and Brady corner detection algorithm
The Wang and Brady detector considers the image to be a surface, and looks for places where there is large curvature along an image edge. In other words, the algorithm looks for places where the edge changes direction rapidly. The corner score, , is given by:
where is the unit vector perpendicular to the gradient, and determines how edge-phobic the detector is. The authors also note that smoothing (Gaussian is suggested) is required to reduce noise.
Smoothing also causes displacement of corners, so the authors derive an expression for the displacement of a 90 degree corner, and apply this as a correction factor to the detected corners.
The SUSAN corner detector
SUSAN is an acronym standing for smallest univalue segment assimilating nucleus. This method is the subject of a 1994 UK patent which is no longer in force.
For feature detection, SUSAN places a circular mask over the pixel to be tested (the nucleus). The region of the mask is $M$, and a pixel in this mask is represented by $\vec{m} \in M$. The nucleus is at $\vec{m}_0$. Every pixel is compared to the nucleus using the comparison function:
$c(\vec{m}) = e^{-\left( \frac{I(\vec{m}) - I(\vec{m}_0)}{t} \right)^6}$
where $t$ is the brightness difference threshold, $I$ is the brightness of the pixel and the power of the exponent has been determined empirically. This function has the appearance of a smoothed top-hat or rectangular function. The area of the SUSAN is given by:
$n(M) = \sum_{\vec{m} \in M} c(\vec{m})$
If $c$ is the rectangular function, then $n$ is the number of pixels in the mask which are within $t$ of the nucleus. The response of the SUSAN operator is given by:
$R(M) = \begin{cases} g - n(M) & \text{if } n(M) < g \\ 0 & \text{otherwise,} \end{cases}$
where $g$ is named the 'geometric threshold'. In other words, the SUSAN operator only has a positive score if the area is small enough. The smallest SUSAN locally can be found using non-maximal suppression, and this is the complete SUSAN operator.
The value of $t$ determines how similar points have to be to the nucleus before they are considered to be part of the univalue segment. The value of $g$ determines the minimum size of the univalue segment. If $g$ is large enough, then this becomes an edge detector.
For corner detection, two further steps are used. Firstly, the centroid of the SUSAN is found. A proper corner will have the centroid far from the nucleus. The second step insists that all points on the line from the nucleus through the centroid out to the edge of the mask are in the SUSAN.
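A direct, unoptimised reading of the USAN-area computation is sketched below. The function name, the mask radius, the brightness threshold and the default geometric threshold of half the mask area are assumptions made here for illustration, and the additional corner-specific centroid checks described above are omitted.

```python
import numpy as np

def susan_response(img, t=10.0, radius=3, g=None):
    """SUSAN response: for each nucleus, sum the similarity
    c = exp(-((I(m) - I(m0)) / t)^6) over a circular mask and score
    max(g - n, 0), where n is the USAN area and g the geometric threshold."""
    img = img.astype(np.float64)
    h, w = img.shape
    # offsets of the pixels inside the circular mask (nucleus included)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    inside = ys ** 2 + xs ** 2 <= radius ** 2
    offsets = list(zip(ys[inside], xs[inside]))
    if g is None:
        g = 0.5 * len(offsets)        # half the maximum USAN area, a common choice
    resp = np.zeros_like(img)
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            n = sum(np.exp(-(((img[y + dy, x + dx] - img[y, x]) / t) ** 6))
                    for dy, dx in offsets)
            resp[y, x] = max(g - n, 0.0)
    return resp
```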
The Trajkovic and Hedley corner detector
In a manner similar to SUSAN, this detector directly tests whether a patch under a pixel is self-similar by examining nearby pixels. $c$ is the pixel to be considered, and $p$ is a point on a circle $P$ centered around $c$. The point $p'$ is the point opposite to $p$ along the diameter.
The response function is defined as:
$r(c) = \min_{p \in P} \left[ \bigl( I(p) - I(c) \bigr)^2 + \bigl( I(p') - I(c) \bigr)^2 \right]$
This will be large when there is no direction in which the centre pixel is similar to two nearby pixels along a diameter. $P$ is a discretised circle (a Bresenham circle), so interpolation is used for intermediate diameters to give a more isotropic response. Since any such computation gives an upper bound on the minimum, the horizontal and vertical directions are checked first to see if it is worth proceeding with the complete computation of $r(c)$.
AST-based feature detectors
AST is an acronym standing for accelerated segment test. This test is a relaxed version of the SUSAN corner criterion. Instead of evaluating the circular disc, only the pixels in a Bresenham circle of radius $r$ around the candidate point are considered. If $n$ contiguous pixels are all brighter than the nucleus by at least $t$ or all darker than the nucleus by $t$, then the pixel under the nucleus is considered to be a feature. This test is reported to produce very stable features. The choice of the order in which the pixels are tested is a so-called Twenty Questions problem. Building short decision trees for this problem results in the most computationally efficient feature detectors available.
The first corner detection algorithm based on the AST is FAST (features from accelerated segment test). Although $r$ can in principle take any value, FAST uses only a value of 3 (corresponding to a circle of 16 pixels circumference), and tests show that the best results are achieved with $n$ being 9. This value of $n$ is the lowest at which edges are not detected. The order in which pixels are tested is determined by the ID3 algorithm from a training set of images. Confusingly, the name of the detector is somewhat similar to the name of the paper describing Trajkovic and Hedley's detector.
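A plain segment test on the 16-pixel Bresenham circle can be sketched as below. This illustrates only the test itself, using a simple wrap-around scan for a run of n contiguous pixels; the ID3-trained decision tree that makes FAST fast, as well as non-maximum suppression, are not shown, and the function name and default threshold are assumptions.

```python
# offsets (dy, dx) of the radius-3 Bresenham circle, in circular order
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def is_fast_corner(img, y, x, t=20, n=9):
    """Segment test at (y, x): the pixel is a corner if at least n contiguous
    pixels on the 16-pixel circle are all brighter than I(y, x) + t or all
    darker than I(y, x) - t."""
    centre = int(img[y, x])
    ring = [int(img[y + dy, x + dx]) for dy, dx in CIRCLE]
    brighter = [p > centre + t for p in ring]
    darker = [p < centre - t for p in ring]
    for flags in (brighter, darker):
        doubled = flags + flags          # duplicate the ring to handle wrap-around
        run = 0
        for f in doubled:
            run = run + 1 if f else 0
            if run >= n:
                return True
    return False
```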
Automatic synthesis of detectors
Trujillo and Olague introduced a method by which genetic programming is used to automatically synthesize image operators that can detect interest points. The terminal and function sets contain primitive operations that are common in many previously proposed man-made designs. Fitness measures the stability of each operator through the repeatability rate, and promotes a uniform dispersion of detected points across the image plane. The performance of the evolved operators has been confirmed experimentally using training and testing sequences of progressively transformed images. Hence, the proposed GP algorithm is considered to be human-competitive for the problem of interest point detection.
Spatio-temporal interest point detectors
The Harris operator has been extended to space-time by Laptev and Lindeberg.
Let denote the spatio-temporal second-moment matrix defined by
Then, for a suitable choice of ,
spatio-temporal interest points are detected from spatio-temporal extrema of the following spatio-temporal Harris measure:
The determinant of the Hessian operator has been extended to joint space-time by Willems et al and Lindeberg, leading to the following scale-normalized differential expression:
In the work by Willems et al, a simpler expression corresponding to and was used. In Lindeberg, it was shown that and implies better scale selection properties in the sense that the selected scale levels obtained from a spatio-temporal Gaussian blob with spatial extent and temporal extent will perfectly match the spatial extent and the temporal duration of the blob, with scale selection performed by detecting spatio-temporal scale-space extrema of the differential expression.
The Laplacian operator has been extended to spatio-temporal video data by Lindeberg, leading to the following two spatio-temporal operators, which also constitute models of receptive fields of non-lagged vs. lagged neurons in the LGN:
For the first operator, scale selection properties call for using and , if we want this operator to assume its maximum value over spatio-temporal scales at a spatio-temporal scale level reflecting the spatial extent and the temporal duration of an onset Gaussian blob. For the second operator, scale selection properties call for using and , if we want this operator to assume its maximum value over spatio-temporal scales at a spatio-temporal scale level reflecting the spatial extent and the temporal duration of a blinking Gaussian blob.
Colour extensions of spatio-temporal interest point detectors have been investigated by Everts et al.
Bibliography
Reference implementations
This section provides external links to reference implementations of some of the detectors described above. These reference implementations are provided by the authors of the paper in which the detector is first described. These may contain details not present or explicit in the papers describing the features.
DoG detection (as part of the SIFT system), Windows and x86 Linux executables
Harris-Laplace, static Linux executables. Also contains DoG and LoG detectors and affine adaptation for all detectors included.
FAST detector, C, C++, MATLAB source code and executables for various operating systems and architectures.
lip-vireo, [LoG, DoG, Harris-Laplacian, Hessian and Hessian-Laplacian], [SIFT, flip invariant SIFT, PCA-SIFT, PSIFT, Steerable Filters, SPIN][Linux, Windows and SunOS] executables.
SUSAN Low Level Image Processing, C source code.
Online Implementation of the Harris Corner Detector - IPOL
See also
blob detection
affine shape adaptation
scale space
ridge detection
interest point detection
feature detection (computer vision)
Image derivative
External links
Brostow, "Corner Detection -- UCL Computer Science"
Feature detection (computer vision)
Masayoshi Esashi is a Japanese engineer. He is a global authority on microelectromechanical systems (MEMS) and serves as a professor at the Graduate School of Engineering, Tohoku University.
Born in Sendai, Japan, in 1949, Masayoshi Esashi received his B.E. degree in electronic engineering in 1971 and a Doctor of Engineering degree in 1976 at Tohoku University.
Esashi served as a research associate from 1976 and as an associate professor from 1981 at the Department of Electronic Engineering, Tohoku University. Since 1990 he has been a professor. Currently, he is the director of the micro/nanomachining research and education center at Tohoku University. He is an associate director of the Semiconductor Research Institute.
He was a director of the Venture Business Laboratory in Tohoku University (1995–1998), and was a President of Sensor-Micromachine Society in Institute of Electrical Engineers in Japan (2002–2003). He has been a collaboration coordinator for Sendai city since 2004.
He served as general co-chairman of the 4th IEEE Micro Electro Mechanical Workshop held in Nara, Japan, in 1991, as general chairman of the 10th International Conference on Solid-State Sensors and Actuators (Transducers 99) held in Sendai, Japan, in 1999, and as Technical Program Chairman of IEEE Sensors 2006, held in Daegu, Korea. He has also been studying microsensors and micromachined integrated systems (MEMS).
He was awarded the 2016 IEEE Jun-ichi Nishizawa Medal.
Main works
References
Japanese electronics engineers
1949 births
Living people
People from Sendai
Tohoku University alumni
Academic staff of Tohoku University
Northwestern Polytechnical University (NPU, also abbreviated NWPU) is a national public science and engineering university located in Xi'an, China. The university is affiliated with the Ministry of Industry and Information Technology. It is part of Project 985, Project 211, and the Double First Class University Plan.
NPU specializes in education and research in the fields of aeronautical, astronautical and marine engineering. As of 2012, NPU had 13,736 graduate students (3,063 full-time doctorate candidates, 7,087 master candidates, 3,586 professional degree candidates) and 14,395 undergraduate students.
As of 2023, Northwestern Polytechnical University was listed as one of the top 350 universities in the Academic Ranking of World Universities and the U.S. News & World Report Best Global University Ranking.
History
NPU builds upon the legacies of its three major predecessors.
Northwestern Engineering Institute
In 1938, due to the Japanese invasion of China, many universities in the occupied east evacuated to "Free China" in the western hinterland. Among those that fled to Shaanxi, the National Beiyang Engineering Institute, the Engineering School of Beiping University, the Engineering School of the National Northeastern University, and the (private) Jiaozuo Engineering Institute were combined to form the National Northwestern Engineering Institute in Hanzhong, a city surrounded by mountains. In 1946, after the surrender of Japan, Northwestern Engineering Institute was relocated to the city of Xianyang.
East China Aeronautics Institute
In 1952, to meet the demand for concentrated aeronautics research, the departments of aeronautical engineering of the former National Central University (later known as Nanjing University), Jiaotong University and Zhejiang University were relocated to Nanjing and combined to form the East China Aeronautics Institute. This institute was relocated to Xi'an and renamed the Xi'an Aeronautics Institute, which is the second predecessor of the NPU, in 1956.
In 1957, the Northwestern Engineering Institute and the Xi'an Aeronautics Institute were merged to form NPU, which concentrated its research efforts on defense technology for aeronautics, astronautics and marine engineering.
People's Liberation Army Military Engineering Institute
In 1970, due to deteriorating relations with the former USSR, the People's Liberation Army Military Engineering Institute, which was located in Harbin, was disassembled. Many of its departments were relocated from the city near the China-Soviet border; among these, its Department of Aeronautical Engineering (the third predecessor of NPU) was moved to Xi'an and merged with the other two predecessors to form NPU.
Campuses
NPU's campuses comprise about 4.58 km2, with the Youyi Campus, located in Beilin District, Xi'an, comprising about 0.8 km2, the Chang'an Campus, located in Chang'an District, Xi'an, comprising about 2.6 km2, and the Taicang Campus, located in Suzhou, Jiangsu, comprising about 1.18 km2.
Youyi Campus
The Youyi Campus, often referred to as the 'old campus', is divided by West Youyi Road and South Laodong Road into three parts: the South, the West and the North. The campus contains education facilities, apartments for teachers and students, stadiums, logistics facilities, a kindergarten and the primary and middle school affiliated with NPU.
Chang'an Campus
The Chang'an Campus, often referred to as the 'new campus', is divided by Dongxiang Road into two parts: the East and the West. This campus contains many newly built administration buildings, school buildings, experimental facilities, sports facilities and so on. It serves as the main base for undergraduate education at NPU.
Taicang Campus
The Taicang Campus, located at the border between Suzhou and Shanghai, is under construction and is scheduled to open in September 2021. This campus will contain 10 schools, including the School of Artificial Intelligence, the School of Flexible Electronics and the School of Business, and will serve around 10,000 students.
Infrastructure development
NPU is building or plans to build more facilities on both campuses. Supported by the Ministry of Industry and Information Technology and the former Commission for Science, Technology and Industry for National Defense, NPU has received a 1.58 billion CNY (US$252.8 million) investment in infrastructure construction from the central government. More than 20 construction projects are currently under way, including the Material Science Building, the Innovation Science and Technology Building, the reconstruction of dangerous and old apartments after the earthquake, and the #1 Student Apartment on the Youyi Campus, as well as the New Library, Infrastructure Plans I & II, the High-tech Experimental Research Center and others on the Chang'an Campus.
Academics
NPU has a strong research capacity in engineering. It was confirmed as one of the National Key Universities by the State Council in 1960. In the seventh and eighth Five-Year Plans, NPU was listed as one of the 15 National Key Developing Universities. In the ninth Five-Year Plan, NPU joined Project 211, and in the tenth Five-Year Plan, it joined Project 985.
Since its establishment, NPU has educated more than 150,000 high-level technicians and researchers for China's defense industry and national economic development. The first PhDs in six disciplines in China graduated from NPU. Among NPU's alumni are more than 30 fellows of the CAS and the CAE, 30 generals of the PLA, and 6 recipients of China's Top 10 Outstanding Youth Elite award.
Accreditation and memberships
NPU is one of the Seven Sons of National Defence and member of SAP University Alliances.
Funding
The university's research funding has risen continually every year. It reached 1.67 billion RMB (US$0.26 billion) in 2010, ranking fifth among all universities in China and first in funding per faculty member. In 2011, it reached 1.91 billion RMB.
Rankings and reputation
As of 2023, Northwestern Polytechnical University was listed among the world's top 350 universities by the Academic Ranking of World Universities and the U.S. News & World Report Best Global University Ranking.
As of 2022, NPU ranked 25th in the Best Chinese Universities Ranking compiled by Shanghai Ruanke, and its engineering programs ranked tenth among China's engineering universities.
According to the results of the third Evaluation of Disciplines by the Ministry of Education of the People's Republic of China, the ranks of parts of NPU's disciplines are shown below.
Laboratories
The university has seven State Key Laboratories and 28 Province/Ministry-level Key Laboratories. Only the State Key Laboratories are listed below. These labs often specialize in particular areas of academic research and receive government funding:
Materials science
Chemistry
Mathematics and Physics
Geography
Biotechnology
Information technology
Engineering
Medicine
Organization
Education Experimental School
The Education Experimental School is the college with special honors in NPU. Its predecessor is the Education Reform Class established in 1985 which was upgraded to the current school in 2001. The aim of this school is to provide the most elite students of NPU with the best resources so that they can become future leaders in their fields.
Academic schools
NPU has 15 academic schools, 1 educational experimental school (known as the Honors College), 1 independent school, 1 joint school and some other administrative schools. The university offers 58 undergraduate programs, 117 master's programs, 67 doctorate programs and 14 postdoctoral programs. Currently, there are 2 First-level National Key Disciplines, 7 Second-level National Key Disciplines, 2 National Key (to cultivate) Disciplines, 21 First-level Disciplines for Doctorate Degree Granting and 31 First-level Disciplines for Master's Degree Granting. Additionally, NPU has 7 State Key Laboratories, 28 Province/Ministry-level Key Laboratories and 19 Province/Ministry-level Engineering Research Centers.
The 15 academic schools of NPU are listed here in the official order.
School of Aeronautics
School of Astronautics
School of Marine Science and Technology
School of Materials Science and Engineering
School of Mechanical Engineering
School of Mechanics, Civil Engineering and Architecture
School of Power and Energy
School of Electronics and Information
School of Automation
School of Computer Science and Technology
School of Science
School of Management
School of Humanities, Economics and Law
School of Software and Microelectronics
School of Life Science
School of Foreign Languages
Queen Mary University of London Engineering School, NPU
Other schools
Besides the 15 major academic schools and the education experimental school, NPU has other schools or institutes. They are:
Engineering Practice Exercise Center
Physical Education Department
Continuing Education School
Internet Education School
International College
National Secrecy School
Mingde College (independent)
Student life
Innovation Study Base
The Innovation Study Base, directed by the Dean's Office of NPU, consists of multiple student competition programs, including NPU's university teams for football robotics, dancing robotics, Model United Nations, mathematical modeling, model airplanes and so on. During the 11th Five-Year Plan, NPU students won more than 1,400 awards at the international, state and provincial levels, including 54 first- or second-place awards at the international level and 147 at the national level.
Student Club Center
The Student Club Center, directed by the Communist Youth League of NPU, serves all university-level student clubs. The center helps to organize and supervise student activities on campus and acts as a bridge between the administration and student clubs.
Affiliated secondary school
Notable alumni
Wu Yi – Former Vice Premier of the State Council of the People's Republic of China
Hao Peng – Party Committee Secretary of the State-owned Assets Supervision and Administration Commission (SASAC); former Governor of Qinghai province
Zhang Qingwei – Communist Party Secretary of Heilongjiang province; former Governor of Hebei province
Yang Wei – president of Zhejiang University, aircraft designer in 611
Yuan-Cheng Fung – graduate of the Department of Engineering of National Central University (now Nanjing University), one of the predecessors of Northwestern Polytechnical University
Jasen Wang - Founder and CEO of Makeblock
Notable faculty members
Hu Peiquan, Founder of the Department of Engineering Mechanics and the Journal of Northwestern Polytechnical University
Chuah Hean Teik, Consultant Professor to Northwestern Polytechnical University and Former President cum CEO of Universiti Tunku Abdul Rahman
References
External links
Official website
Non-Official Communities of Northwestern Polytechnical University
Project 211
Project 985
Vice-ministerial universities in China
1938 establishments in China
Universities and colleges established in 1938
The University of Computer Studies, Yangon (UCSY), located in the outskirts of Yangon in Hlawga, is the leading IT and computer science university of Myanmar. The university, administered by the Ministry of Education, offers undergraduate and graduate degree programs in computer science and technology. The language of instruction at UCSY is English. Along with the University of Computer Studies, Mandalay, UCSY is one of two premier universities specializing in computer studies, and also one of the most selective universities in the country.
Many of the country's middle and upper level personnel in government and industry are graduates of UCSY.
History
UCSY's origins trace back to the founding of the Universities' Computer Center (UCC) in 1971 at the Hlaing Campus of Yangon University. Equipped with an ICL 1902S and with the help of distinguished visiting professors from the US, UK and Europe, UCC provided computer education and training to university and government employees. In 1973, it began offering a master's degree program (MSc in Computer Science), and a graduate diploma program (Diploma in Automated Computing) in cooperation with the Mathematics Department of Yangon University. The center added DEC PDP-11/70 mini-computers in 1983, and personal computers in 1990. In 1986, the center added B.C.Sc. (Bachelor of Computer Science) and B.C.Tech. (Bachelor of Computer Technology) degree programs.
In March 1988, the Institute of Computer Science and Technology (ICST) was established, and began offering bachelor's degree programs in Computer Science. In 1993, it started an internationally accepted International Diploma in Computer Studies (IDCS) program with the help of the UK's National Computing Centre (NCC). On 1 January 1997, the university's control was transferred from the Ministry of Education to the Ministry of Science and Technology. On 1 July 1998, it was renamed the University of Computer Studies, Yangon. A graduate school with master's and PhD degree programs was established in May 2001.
Programs
UCSY offers five-year bachelor's and two-year master's degree programs in computer science and computer technology. The school also offers a two-year post-graduate diploma and a three-year Ph.D. program in computer science and information technology. The school's language of instruction is English.
Faculties
Faculty of Computer Systems and Technologies
Faculty of Computer Science
Faculty of Information Science
Faculty of Computing
Supporting departments
Department of Japanese
Department of English
Department of Natural Science
Department of Information Technology Operations
Research labs
Natural Language Processing Lab
Geographic Information System Lab
Image Processing Lab
Mobile and Wireless Computing Lab
Embedded System Lab
Cyber Security Research Lab
Cisco Network Lab
Cloud Computing Lab
Artificial Intelligence Lab
Computer Graphics and Visualization
Database System Lab
Software Engineering Lab
Numerical Analysis Lab
Operation Research Lab
UCSY-Ishibashi Lab
UCSY Forensics Lab
International Collaboration
The university is known for working with international universities, research institutes and international governmental organizations.
Affiliations
Universities
Keio University of Japan
Nagoya Institute of Technology
University of Miyazaki
Handong Global University
National Institute of Information and Communications Technology
University of Computer Studies, Mandalay (UCSM)
Other universities
The following universities of computer studies are officially affiliated with UCSY. Their qualified graduates can continue their advanced studies at UCSY.
Computer University, Bamaw
Computer University, Dawei
Computer University, Hinthada
Computer University, Kalay
Computer University, Kyaingtong
Computer University, Loikaw
Computer University, Lashio
University of Computer Studies (Maubin)
Computer University, Magway
Computer University, Thaton
University of Computer Studies, Mandalay
Computer University, Mandalay
Computer University, Monywa
Computer University, Myeik
Computer University, Meiktila
Computer University, Myitkyina
Computer University, Pathein
Computer University, Pakokku
Computer University, Hpa-An
Computer University, Pyay
Computer University, Pinlon
Computer University, Sittwe
University of Computer Studies (Taungoo)
University of Computer Studies, Taunggyi
Alumni
UCSY is estimated to have more than 8,000 alumni. Its graduates include leading educators, developers, project management professionals, politicians, businesspeople, writers, architects, athletes, actors and musicians, some of whom have gained national and international recognition. Notable alumni include Dr. Mie Mie Thet Thwin, Rector of UCSY; Dr. Saw Sandar Aye, Rector of UIT; Dr. Moe Pwint of the University of Computer Studies, Mandalay; Dr. Win Aye of the Myanmar Institute of Information Technology; Dr. Thinn Thu Naing of the University of Computer Studies; and Min Maw Kun, a Myanmar Academy Award-winning film actor.
References
External links
School site: Official website of UCSY
Educational institutions established in 1988
Universities and colleges in Yangon
Technological universities in Myanmar
1988 establishments in Burma
Joseph Ó Ruanaidh (aka Joseph Rooney) is a scientist and frequently cited author in the field of digital watermarking.
Early life
He was born in London, England in 1967 and raised in Ballyfermot, Dublin. He attended the O'Connell School in Dublin.
He studied on his own, without supervision, to obtain the necessary qualifications to enter Trinity College Dublin in 1986. In 1988 he was awarded a Trinity College Foundation Scholarship, where he was also jointly awarded the St Patrick's Benevolent Society of Toronto prize for obtaining the highest overall marks in the scholarship examinations in the university that year. In 1990, he graduated with a degree in Engineering from Trinity College Dublin.
He was then awarded three scholarships to go to the University of Cambridge for his PhD where he studied the applications of Bayesian Methods to digital signal processing. The work included novel algorithms for Audio Restoration as well as more general methods for analysing and detecting changes in data. His doctoral dissertation was published in book form by Springer Verlag.
Early career
His postdoctoral work at Trinity College Dublin and at the University of Geneva concentrated on the then-emerging field of digital watermarking. He published seminal papers on image transform domain watermarks and, in particular, rotation and translation invariant watermarks based on the Fourier transform.
Career
He moved to the United States in 1998, where he worked at Siemens Corporate Research in Princeton, New Jersey and where he played a key role in the development of the patented Siemens directional hearing aid.
In 2000, he joined Certus, an Internet start-up, dedicated to making Internet shopping safe.
In 2005 he joined GE Healthcare in Piscataway, New Jersey, where he published four patent applications on optical sectioning, line artifact removal, brightfield image segmentation and cell tracking in microscope images.
He was employed by the D. E. Shaw group, a proprietary trading firm based in New York City, from February 2008 until February 2010. He is currently employed by Apple in Cupertino, California.
His research is well cited as evidenced in CiteSeer, ISI and Google Scholar.
Awards and honours
Anglo-Irish Science Exchange Scholarship
Trinity College Cambridge Research Studentship
IEE Leslie H Paddle Scholarship
Trinity College Dublin Foundation Scholarship
St Patrick's Benevolent Society of Toronto prize
Victor W Graham Prize for Mathematics 1988
Selected publications
J.J.K. Ó Ruanaidh and W. Fitzgerald, Numerical Bayesian Methods Applied to Signal Processing, Springer, New York, 1996.
J.J.K. Ó Ruanaidh and T. Pun, "Rotation, scale and translation invariant spread spectrum digital image watermarking," Signal Processing, vol. 66, no. 5, pp. 303–317, May 1998.
J.J.K. Ó Ruanaidh, R.R. McKay, Y. Zhang, M. Briggs, J. George and Z. Masoumi, "The application of Bayesian spectral analysis to optical sectioning using structured light imaging", Journal of Microscopy, Volume 232 Issue 1, Pages 177–185, Published Online: 25 Sep 2008.
References
1967 births
20th-century Irish scientists
21st-century Irish scientists
Living people
Engineers from Dublin (city)
English emigrants to Ireland
Academics of Trinity College Dublin
Alumni of Trinity College, Cambridge
Alumni of Trinity College Dublin
Irish Air Corps personnel
Irish computer scientists
People educated at O'Connell School
Scholars of Trinity College Dublin