Learning Objectives
Upon successful completion of this chapter, you will be able to:
• Describe future trends in information systems.
This final chapter presents an overview of some new or recently introduced technologies. From wearable technology, virtual reality, the Internet of Things, and quantum computing to artificial intelligence, this chapter provides a look forward to what the next few years may bring to potentially transform how we learn, communicate, do business, work, and play.
• 13.1: Introduction
This chapter discusses the future trends enabled by new and improved technologies in many industries, including social media, personalization, mobile, wearable, collaborative, communication, virtual reality, artificial intelligence, and quantum computers.
• 13.2: Collaborative
Discusses collaborative efforts among consumers, including free content, telecommunication, virtual environments, and 3D printing.
• 13.3: Internet of Things (IoT)
This section discusses the future trends of Internet of Things and autonomous devices.
• 13.4: Future of Information Systems
This section discusses potential disruptive innovations that could fundamentally change the current paradigm of how information systems are built and how we work, entertain, learn, and do business.
• 13.5: Study Questions
13: Future Trends in Information Systems
Introduction
Information systems have evolved at a rapid pace ever since their introduction in the 1950s. Today, devices that we can hold in one hand are more powerful than the computers used to land a man on the moon. The Internet has made the entire world accessible to people, allowing us to communicate and collaborate like never before. In this chapter, we will examine current trends and look ahead to what is coming next.
Global
The first trend to note is the continuing expansion of globalization due to the commercialization of the internet. The use of the Internet is growing worldwide, and with it, the use of digital devices. All regions are forecast to grow significantly, with some, such as Asia and Latin America, growing faster than others.
The United Nations’ June 2020 “Report of the Secretary-General: Roadmap for Digital Cooperation” reports that 86.6% of people in developed countries are online, while only 19% of people are online in the least developed countries, with Europe being the region with the highest usage rates and Africa the lowest.
Chapter 11 discussed that by Q3 of 2020, approximately 4.9 billion people, or more than half of the world’s population, used the internet, representing growth of 1,266% for the world total since 2000, with Asia at 2,136% and Latin America at 2,489%. Even the smallest regional growth is still over 200%. For more details, please view the data at https://internetworldstats.com/stats.htm.
Social Media
Social media is one of the most popular internet activities worldwide. Statista.com reports that as of January 2020, the global usage rate for social media was 49%, and people spent about 144 minutes per day on social media. Even so, billions of people remain unconnected, according to datareportal.com. For more details, please read the entire Digital 2020 report.
As of October 2020, Statista.com also reports that Facebook remains the most popular social network globally with about 2.7B monthly active users, followed by YouTube and WhatsApp with 2B each, WeChat at 1.2B, Instagram at 1.1B, TikTok at 689M, and Twitter at 353M. For more details, please view this report at Statista.com.
Personalization
With the continued increase in internet and e-commerce usage, users have moved beyond simple, unique ringtones on mobile phones. They now expect a more personalized experience in products, services, entertainment, and learning, such as highly targeted, just-in-time recommendations finely tuned to their preferences based on vendors' data. For example, Netflix recommends shows its users might want to watch, and wearable devices from vendors such as Apple, Google, and Amazon make personalized recommendations for exercise, meditation, and diet, among others, based on your current health conditions.
Mobile
Perhaps the most impactful trend in digital technologies in the last decade has been the advent of mobile technologies. Beginning with the simple cell phone in the 1990s and evolving into the smartphones and tablets of today, mobile growth has been overwhelming.
Smartphones were introduced in the 1990s. This new industry has exploded into a trillion-dollar economy, with \$484B spent on smartphones, \$176B on mobile advertising, \$118B on apps, \$77B on accessories, and \$25B on wearables (Statista, 2020). For more details, please view The Trillion-Dollar Smartphone Economy.
Wearables
The wearable market, which is now a \$25B economy, includes specific-purpose products such as fitness bands, smart socks, eyewear, and hearing aids. We are now seeing a convergence between general-purpose devices, such as computers and televisions, and portable devices, such as smartwatches and smartphones. It is also anticipated that wearable products will touch many aspects of consumers’ lives, for example smart clothing such as Neviano smart swimsuits and Levi's Jacquard jacket (Lifewire, 2020).
Advances in artificial intelligence, sensors, and robotics will extend wearables to front-line workers. Exoskeletons such as Ekso’s EVO assist workers who have to carry heavy items, such as firefighters and warehouse workers, and can be used in health care to provide mobility for people with limited mobility.
Collaborative
Collaborators as free content-providers
Internet usage has continued to give rise to collaborative efforts among consumers and businesses worldwide. Consumers have gained influence by sharing reviews of products and services. It is common for people to look up other people’s reviews before buying a product or visiting a restaurant, via sites such as Yelp, instead of relying on information from vendors directly. Businesses have also leveraged consumers’ collaboration to contribute content to a product. For example, the smartphone app Waze is a community-based tool that keeps track of the route you are traveling and how fast you are making your way to your destination. In return for providing your data, you benefit from the data sent by all other app users: Waze routes you around traffic and accidents based upon real-time reports. In these examples, businesses rely on users spending their free time to create content that is shared with others; in essence, they monetize people’s time and content.
Shared economy collaborators
New types of companies such as Airbnb and Uber incorporate consumers into their business model and share a fraction of the revenues with them. These companies monetize assets owned by everyday people. For example, Airbnb uses its technology platform to let people rent out rooms and houses they own, while Uber popularized the gig economy by having people drive their own cars. This trend is expected to continue and expand into other industries, such as advertising.
Telecommunication
Personal communication
Communication technologies such as Voice-over-IP (VoIP) have given consumers a way to communicate with each other for free, instead of paying for expensive traditional phone lines, through services such as Microsoft Skype and WhatsApp. The combined use of smartphones, VoIP, and more powerful servers, among other factors, has made landlines outdated and expensive. By 2019, the share of households with landlines had decreased to less than 40%, from 90% in 2004 (Statista.com, 2019).
Entertainment
The above trend continues to affect other industries, such as consumers’ exodus from cable or pay-TV services to streaming services, a phenomenon called ‘cutting the cord,’ driven by the rise of companies such as Netflix and Hulu. By 2022, the number of households in North America not paying for TV services is estimated to grow to around 55.1 million (Statista.com, 2019). The convergence of TV, computers, and entertainment will continue as technologies become easier to use and the infrastructure to deliver data, such as 5G networks, becomes faster.
Virtual environment
Tele-work
Telecommuting has been a trend that ebbs and flows as companies experiment with technologies to allow their workers to work from home. However, with the Covid-19 pandemic, telecommuting became essential as people worldwide worked from home to comply with national or regional stay-at-home orders. The debate over the merit of telework has been set aside, and its adoption has spread to many industries that had previously eschewed this use of technology. For example, therapy counseling and medical visits with primary care providers can now be done remotely. The post-pandemic work environment may not necessarily be the same as before, now that organizations have gained valuable insights from having most, if not all, of their workforce work from home. In one year, Zoom, a relatively unknown company providing video communications, became a household name, gaining a 37% usage rate, with Microsoft Teams trailing at 19%, Skype at 17%, Google Hangouts at 9%, and Slack at 7% (Statista, 2020).
Immersion - virtual reality
Tele-work allows us to see other people while we remain in our physical world. Virtual reality (VR) gives us the perception of being physically in another world. Research in building VR has been going on since the 1990s or even earlier. One example is CAVE2, also known as the Next-Generation CAVE (NG-CAVE), a research project funded by the National Science Foundation in 1992 that allows researchers to ‘walk around in a human brain or fly on Mars.’ Please watch this video on YouTube or search for the keyword ‘CAVE2’ for more details.
Technologies are not yet mature enough to give us a 100% immersive experience, but they are already good enough for some smaller-scale products in gaming and training. For example, if we use VR goggles to play a game, we become a character in that game; the same technology can be used to train police officers.
3D Printing
3D printing completely changes our current thinking of what a printer is, or the notion of printing itself. We typically use printers to print reports, letters, or pictures on paper. A 3-D printer allows you to print virtually any 3-D object based on a model of that object designed on a computer. 3-D printers work by building up the model layer upon layer using malleable materials, such as different types of glass, metals, wax, or even food ingredients.
3-D printing is quite useful for prototyping product designs to determine their feasibility and marketability. It has also been used to create working prosthetic legs and handguns. Icon can print a 500-square-foot home in 48 hours for \$10,000. NASA wants to print pizzas for astronauts, and we can now print cakes too. In 2020, the US Air Force produced its first 3D-printed metal part for aircraft engines.
This technology can potentially affect the global value chain for developing products: entrepreneurs can build prototypes in their garages or provide solutions to social challenges. For example, producing a prototype of a 3D object for research and engineering can now be done in-house using a 3D printer, which speeds up development time. Tiny homes can be provided at a fraction of the cost of a traditional home.
With the rising need from consumers for more personalization (as discussed earlier), this technology may help businesses deliver on this need through shoes, clothing, and even 3D-printed cars.
Internet of Things (IoT)
Rouse (2019) explains that IoT is implemented as a set of web-enabled physical objects, or things, embedded with software, hardware, sensors, and processors that collect data from their environments and send it over the network. A ‘thing’ could be just about anything (a machine, an object, an animal, or even a person), as long as it has an embedded unique ID and is web-enabled.
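The definition above can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the `SmartThermostat` class and its sample readings are invented for this example, not a real device API): a "thing" carries an embedded unique ID and packages its sensor readings as JSON, the kind of payload that would be sent to a smart-home hub or cloud service.

```python
import json
import uuid

class SmartThermostat:
    """A hypothetical IoT 'thing': a unique ID plus sensors whose
    readings are packaged for transmission over the web."""

    def __init__(self):
        # Every 'thing' carries an embedded unique identifier.
        self.device_id = str(uuid.uuid4())

    def read_sensors(self):
        # A real device would poll hardware here; we return fixed
        # sample values purely for illustration.
        return {"temperature_c": 21.5, "humidity_pct": 40}

    def to_message(self, timestamp):
        # Package the ID and readings as JSON, a typical payload
        # format for web-enabled devices reporting to a hub.
        payload = {"device_id": self.device_id,
                   "timestamp": timestamp,
                   "readings": self.read_sensors()}
        return json.dumps(payload)

thermostat = SmartThermostat()
message = thermostat.to_message("2020-10-01T12:00:00Z")
print(message)
```

The key point is the pairing of a unique identifier with environmental data: that combination is what turns an ordinary object into a "thing" on the Internet of Things.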
A report by McKinsey & Company on the Internet of Things (Chui et al., 2010) identifies six broad categories of applications.
IoT has evolved since the 1970s, and by 2020 it was most associated with smart homes, through products such as smart thermostats, smart door locks, lights, home security systems, and home appliances. Smart home hubs such as Amazon Echo, Google Home, and Apple’s HomePod manage all the smart IoT devices in the home. More and more IoT devices will continue to be offered as vendors seek to make everything ‘smart.’
Autonomous
A trend that is emerging is autonomous robots and vehicles. By combining software, sensors, and location technologies, devices are being developed that can operate themselves to perform specific functions. These take the form of creations such as medical nanotechnology robots (nanobots), self-driving cars, self-driving trucks, and uncrewed aerial vehicles (UAVs), or drones.
A nanobot is a robot whose components are on a nanometer scale, which is one-billionth of a meter. While still an emerging field, it is showing promise for applications in medicine. For example, a set of nanobots could be introduced into the human body to combat cancer or a specific disease. In March of 2012, Google introduced the world to its driverless car by releasing a video on YouTube showing a blind man driving the car around the San Francisco area (search for "Self-Driving Car Test: Steve Mahan"). The car combines several technologies, including a laser radar system worth about \$150,000.
By 2020, 38 states had enacted legislation allowing various activities, from conducting studies and limited pilot testing to full deployment of commercial motor vehicles without a human operator. The details can be found at ghsa.org.
The Society of Automotive Engineers (SAE, 2018) has designed a zero to five rating system detailing the varying levels of automation — the higher the level, the more automated the vehicle is.
Consumers have begun seeing features from the lower automation levels integrated into today’s non-autonomous cars, and this trend is expected to continue.
A UAV, often referred to as a “drone,” is a small airplane or helicopter that can fly without a pilot. Instead of a pilot, drones are either run autonomously by onboard computers or operated by a person using a remote control. While most drones today are used for military or civil applications, there is a growing market for personal drones: for a few hundred dollars, a consumer can purchase one for personal use.
Commercial use of UAVs is beginning to emerge. Companies such as Amazon plan to deliver packages to customers using drones, and Walmart plans to use drones to move items in its stores. This sector is forecasted to become a \$12.6B worldwide market by 2025 (Statista.com, 2019).
Future of Information Systems
Quantum computer
Today’s computers use bits as data units. A bit can only be either 0 or 1, as we discussed in Chapter 2. Quantum computers use qubits, which can represent a combination of both 0 and 1 simultaneously by leveraging the principles of quantum physics. This is a game-changer for computing and will disrupt all aspects of information technology. The benefits include a significant speed increase in calculations, enabling solutions to problems that are unsolvable today. However, many technical problems remain to be solved, since all the elements of an information system will need to be re-imagined. Google announced the first real proof of a working quantum computer in 2019 (Menard et al., 2020). Menard et al. also indicated that the industries that would benefit from this new type of computer are those with complex problems to solve, such as pharmaceuticals, autonomous vehicles, and cybersecurity, or those requiring intense mathematical modeling, such as finance and energy. For a full report, please visit McKinsey.com.
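The difference between a bit and a qubit can be sketched with ordinary arithmetic, no quantum hardware required. In this minimal illustration (a simplified simulation, not how real quantum computers are programmed), a qubit is a pair of amplitudes for the states 0 and 1; the Hadamard gate puts a definite 0 into an equal superposition, and measurement yields each outcome with a probability given by the squared amplitude.

```python
import math

# A qubit's state is a pair of amplitudes (a, b) for the basis
# states 0 and 1, with a**2 + b**2 = 1.  A classical bit is the
# special case (1, 0) or (0, 1): definitely 0 or definitely 1.
zero = (1.0, 0.0)

def hadamard(state):
    """Apply the Hadamard gate, which turns a definite basis state
    into an equal superposition of 0 and 1."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def measure_probabilities(state):
    """Measurement collapses the qubit: it yields 0 with probability
    a**2 and 1 with probability b**2."""
    a, b = state
    return (abs(a) ** 2, abs(b) ** 2)

superposed = hadamard(zero)
p0, p1 = measure_probabilities(superposed)
print(p0, p1)  # each is 0.5: the qubit is 'both' 0 and 1 until measured
```

The power of quantum computing comes from the fact that n qubits hold 2^n amplitudes at once, whereas n classical bits hold a single n-digit value; this sketch shows only the single-qubit case.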
Blockchain
A blockchain is a list of records, or blocks, linked using cryptography to record transactions and track assets in a network. Anything of value can be considered an asset and be tracked; examples include a house, cash, patents, or a brand. Once a transaction is recorded, it cannot be changed retroactively. Hence, blockchain is considered highly secure.
Blockchain has many applications, but bitcoin is mostly associated with it because it was the first application using blockchain technology. Sometimes bitcoin and blockchain are mistakenly meant to be the same thing, but they are not.
Bitcoin is digital money, or a cryptocurrency. It is an open-source application built using blockchain technology. It is meant to eliminate the need for a central bank, since people can send bitcoins to each other directly. Simply put, bitcoin keeps track of who sends how many bitcoins to whom. One difference from today’s money is that a bitcoin's value fluctuates, since it trades like a stock. Anyone can buy bitcoin or other cryptocurrencies on exchanges such as Coinbase. Bitcoin and other cryptocurrencies are accepted by a few organizations such as Wikimedia, Microsoft, and Whole Foods. However, bitcoin’s adoption is still uncertain. If adoption by major companies accelerates, then banking locally and globally will change significantly.
Some early businesses have begun to use blockchain as part of their operations. Kroger uses IBM blockchain to trace food from farms to its shelves so it can respond to food recalls quickly (IBM.com). Amazon Managed Blockchain is a fully managed service that makes it easy to create and manage scalable blockchain networks.
Artificial Intelligence (AI)
Artificial intelligence (AI) comprises many technologies that aim to duplicate the functions of the human brain. It has been researched since the 1950s and has seen an ebb and flow of interest. Understanding and duplicating the human brain is a complex interdisciplinary effort that involves multiple fields, such as computer science, linguistics, mathematics, neuroscience, biology, philosophy, and psychology. One approach is to organize these technologies into categories, and commercial solutions have been introduced for many of them.
Consumer products such as the smart vacuum iRobot Roomba are now widely available. The adoption of certain types of robots has accelerated in some industries due to the pandemic: Spot, the dog-like robot from Boston Dynamics, is used to patrol for social distancing.
The goal of fully duplicating a human brain has not been achieved yet, since no AI system has passed the Turing test, devised by Alan Turing to answer the question 'Can a machine think?' Turing is widely considered a founder of the AI field, and his test assesses a machine's ability to exhibit intelligent behavior equivalent to that of a human. The test does not look for correct answers, but rather for answers that closely resemble those a human would give.
Even though AI has not yet been able to duplicate a human brain, its advances have introduced many AI-based technologies, such as AI bots and robotics, into many industries. AI progress has contributed to many practical business information systems that we discussed throughout this book, such as voice recognition, cameras, robots, and autonomous cars. It has also raised concerns over the ethics of developing some AI technologies, as we discussed in previous chapters.
Further advances in artificial intelligence depend on the continuous effort to collect vast amounts of data, information, and knowledge; advances in hardware; and sophisticated methods to analyze large datasets, both connected and unconnected, to make inferences and create new knowledge, all supported by secure, fast networks.
13.05: Study Questions
Summary
Information systems have changed how we work, play, and learn since the internet was introduced to the masses. We may be at a tipping point now, with many significant technologies that have been in research for decades converging at roughly the same time, as described in the trends above.
The adoption of many technologies has also been accelerated due to the 2020 Covid-19 Pandemic. Organizations will need to determine how they want to move forward to leverage opportunities and manage risks should any of the above trends become a reality.
As the world of information technology moves forward, we will be constantly challenged by new capabilities and innovations that will both amaze and disgust us. As we learned in chapter 12, many times, the new capabilities and powers that come with these new technologies will test us and require a new way of thinking about the world. Businesses and individuals alike need to be aware of these coming changes and prepare for them. | textbooks/workforce/Information_Technology/Information_Systems/Information_Systems_for_Business/03%3A_Information_Systems_Beyond_the_Organization/13%3A_Future_Trends_in_Information_Systems/13.04%3A_Future_of_Information_Systems.txt |
• 1: What Is an Information System?
The first day of class I ask my students to tell me what they think an information system is. I generally get answers such as “computers,” “databases,” or “Excel.” These are good answers, but definitely incomplete ones. The study of information systems goes far beyond understanding some technologies. Let’s begin our study by defining information systems.
• 2: Hardware
An information system is made up of five components: hardware, software, data, people, and process. The physical parts of computing devices – those that you can actually touch – are referred to as hardware. In this chapter, we will take a look at this component of information systems, learn a little bit about how it works, and discuss some of the current trends surrounding it.
• 3: Software
The second component of an information system is software. Simply put: Software is the set of instructions that tell the hardware what to do. Software is created through the process of programming. Without software, the hardware would not be functional.
• 4: Data and Databases
Imagine if you turned on a computer, started the word processor, but could not save a document. Imagine if you opened a music player but there was no music to play. Imagine opening a web browser but there were no web pages. Without data, hardware and software are not very useful! Data is the third component of an information system.
• 5: Networking and Communication
This ability for computers to communicate with one another and, maybe more importantly, to facilitate communication between individuals and groups, has been an important factor in the growth of computing over the past several decades. In the 1990s, when the Internet came of age, Internet technologies began to pervade all areas of the organization. Now, with the Internet a global phenomenon, it would be unthinkable to have a computer that did not include communications capabilities.
• 6: Information Systems Security
In this chapter, we will review the fundamental concepts of information systems security and discuss some of the measures that can be taken to mitigate security threats. We will begin with an overview focusing on how organizations can stay secure. Several different measures that a company can take to improve security will be discussed. We will then follow up by reviewing security precautions that individuals can take in order to secure their personal computing environment.
Unit 1: What Is an Information System
Learning Objectives
Upon successful completion of this chapter, you will be able to:
• define what an information system is by identifying its major components;
• describe the basic history of information systems; and
• describe the basic argument behind the article “Does IT Matter?” by Nicholas Carr.
Introduction
If you are reading this, you are most likely taking a course in information systems, but do you even know what the course is going to cover? When you tell your friends or your family that you are taking a course in information systems, can you explain what it is about? For the past several years, I have taught an Introduction to Information Systems course. The first day of class I ask my students to tell me what they think an information system is. I generally get answers such as “computers,” “databases,” or “Excel.” These are good answers, but definitely incomplete ones. The study of information systems goes far beyond understanding some technologies. Let’s begin our study by defining information systems.
Defining Information Systems
Almost all programs in business require students to take a course in something called information systems. But what exactly does that term mean? Let’s take a look at some of the more popular definitions, first from Wikipedia and then from a couple of textbooks:
• “Information systems (IS) is the study of complementary networks of hardware and software that people and organizations use to collect, filter, process, create, and distribute data.”[1]
• “Information systems are combinations of hardware, software, and telecommunications networks that people build and use to collect, create, and distribute useful data, typically in organizational settings.”[2]
• “Information systems are interrelated components working together to collect, process, store, and disseminate information to support decision making, coordination, control, analysis, and visualization in an organization.”[3]
As you can see, these definitions focus on two different ways of describing information systems: the components that make up an information system and the role that those components play in an organization. Let’s take a look at each of these.
The Components of Information Systems
As I stated earlier, I spend the first day of my information systems class discussing exactly what the term means. Many students understand that an information system has something to do with databases or spreadsheets. Others mention computers and e-commerce. And they are all right, at least in part: information systems are made up of different components that work together to provide value to an organization.
The first way I describe information systems to students is to tell them that they are made up of five components: hardware, software, data, people, and process. The first three, fitting under the category of technology, are generally what most students think of when asked to define information systems. But the last two, people and process, are really what separate the idea of information systems from more technical fields, such as computer science. In order to fully understand information systems, students must understand how all of these components work together to bring value to an organization.
Technology
Technology can be thought of as the application of scientific knowledge for practical purposes. From the invention of the wheel to the harnessing of electricity for artificial lighting, technology is a part of our lives in so many ways that we tend to take it for granted. As discussed before, the first three components of information systems – hardware, software, and data – all fall under the category of technology. Each of these will get its own chapter and a much lengthier discussion, but we will take a moment here to introduce them so we can get a full understanding of what an information system is.
Hardware
Information systems hardware is the part of an information system you can touch – the physical components of the technology. Computers, keyboards, disk drives, iPads, and flash drives are all examples of information systems hardware. We will spend some time going over these components and how they all work together in chapter 2.
Software
Software is a set of instructions that tells the hardware what to do. Software is not tangible – it cannot be touched. When programmers create software programs, what they are really doing is simply typing out lists of instructions that tell the hardware what to do. There are several categories of software, with the two main categories being operating-system software, which makes the hardware usable, and application software, which does something useful. Examples of operating systems include Microsoft Windows on a personal computer and Google’s Android on a mobile phone. Examples of application software are Microsoft Excel and Angry Birds. Software will be explored more thoroughly in chapter 3.
Data
The third component is data. You can think of data as a collection of facts. For example, your street address, the city you live in, and your phone number are all pieces of data. Like software, data is also intangible. By themselves, pieces of data are not really very useful. But aggregated, indexed, and organized together into a database, data can become a powerful tool for businesses. In fact, all of the definitions presented at the beginning of this chapter focused on how information systems manage data. Organizations collect all kinds of data and use it to make decisions. These decisions can then be analyzed as to their effectiveness and the organization can be improved. Chapter 4 will focus on data and databases, and their uses in organizations.
Networking Communication: A Fourth Technology Piece?
Besides the components of hardware, software, and data, which have long been considered the core technology of information systems, it has been suggested that one other component should be added: communication. An information system can exist without the ability to communicate – the first personal computers were stand-alone machines that did not access the Internet. However, in today’s hyper-connected world, it is an extremely rare computer that does not connect to another device or to a network. Technically, the networking communication component is made up of hardware and software, but it is such a core feature of today’s information systems that it has become its own category. We will be covering networking in chapter 5.
People
When thinking about information systems, it is easy to get focused on the technology components and forget that we must look beyond these tools to fully understand how they integrate into an organization. A focus on the people involved in information systems is the next step. From the front-line help-desk workers, to systems analysts, to programmers, all the way up to the chief information officer (CIO), the people involved with information systems are an essential element that must not be overlooked. The people component will be covered in chapter 9.
Process
The last component of information systems is process. A process is a series of steps undertaken to achieve a desired outcome or goal. Information systems are becoming more and more integrated with organizational processes, bringing more productivity and better control to those processes. But simply automating activities using technology is not enough – businesses looking to effectively utilize information systems do more. Using technology to manage and improve processes, both within a company and externally with suppliers and customers, is the ultimate goal. Technology buzzwords such as “business process reengineering,” “business process management,” and “enterprise resource planning” all have to do with the continued improvement of these business procedures and the integration of technology with them. Businesses hoping to gain an advantage over their competitors are highly focused on this component of information systems. We will discuss processes in chapter 8.
The Role of Information Systems
Now that we have explored the different components of information systems, we need to turn our attention to the role that information systems play in an organization. So far we have looked at what the components of an information system are, but what do these components actually do for an organization? From our definitions above, we see that these components collect, store, organize, and distribute data throughout the organization. In fact, we might say that one of the roles of information systems is to take data and turn it into information, and then transform that into organizational knowledge. As technology has developed, this role has evolved into the backbone of the organization. To get a full appreciation of the role information systems play, we will review how they have changed over the years.
IBM 704 Mainframe (Copyright: Lawrence Livermore National Laboratory)
The Mainframe Era
From the late 1950s through the 1960s, computers were seen as a way to more efficiently do calculations. These first business computers were room-sized monsters, with several refrigerator-sized machines linked together. The primary work of these devices was to organize and store large volumes of information that were tedious to manage by hand. Only large businesses, universities, and government agencies could afford them, and they took a crew of specialized personnel and specialized facilities to maintain. These devices served dozens to hundreds of users at a time through a process called time-sharing. Typical functions included scientific calculations and accounting, under the broader umbrella of “data processing.”
Registered trademark of International Business Machines
In the late 1960s, Material Requirements Planning (MRP) systems were introduced. This software, running on a mainframe computer, gave companies the ability to manage the manufacturing process, making it more efficient. From tracking inventory to creating bills of materials to scheduling production, the MRP systems (and later the MRP II, or Manufacturing Resource Planning, systems) gave more businesses a reason to want to integrate computing into their processes. IBM became the dominant mainframe company. Nicknamed “Big Blue,” the company became synonymous with business computing. Continued improvement in software and the availability of cheaper hardware eventually brought mainframe computers (and their little sibling, the minicomputer) into most large businesses.
The PC Revolution
In 1975, the first microcomputer was announced on the cover of Popular Electronics: the Altair 8800. Its immediate popularity sparked the imagination of entrepreneurs everywhere, and there were quickly dozens of companies making these “personal computers.” Though at first just a niche product for computer hobbyists, improvements in usability and the availability of practical software led to growing sales. The most prominent of these early personal computer makers was a little company known as Apple Computer, headed by Steve Jobs and Steve Wozniak, with the hugely successful “Apple II.” Not wanting to be left out of the revolution, in 1981 IBM (teaming with a little company called Microsoft for their operating-system software) hurriedly released their own version of the personal computer, simply called the “PC.” Businesses, which had used IBM mainframes for years to run their operations, finally had the permission they needed to bring personal computers into their companies, and the IBM PC took off. The computer was named Time magazine’s “Machine of the Year” for 1982.
Because of the IBM PC’s open architecture, it was easy for other companies to copy, or “clone” it. During the 1980s, many new computer companies sprang up, offering less expensive versions of the PC. This drove prices down and spurred innovation. Microsoft developed its Windows operating system and made the PC even easier to use. Common uses for the PC during this period included word processing, spreadsheets, and databases. These early PCs were not connected to any sort of network; for the most part they stood alone as islands of innovation within the larger organization.
Client-Server
In the mid-1980s, businesses began to see the need to connect their computers together as a way to collaborate and share resources. This networking architecture was referred to as “client-server” because users would log in to the local area network (LAN) from their PC (the “client”) by connecting to a powerful computer called a “server,” which would then grant them rights to different resources on the network (such as shared file areas and a printer). Software companies began developing applications that allowed multiple users to access the same data at the same time. This evolved into software applications for communicating, with the first real popular use of electronic mail appearing at this time.
Registered trademark of SAP
This networking and data sharing all stayed within the confines of each business, for the most part. While there was sharing of electronic data between companies, this was a very specialized function. Computers were now seen as tools to collaborate internally, within an organization. In fact, these networks of computers were becoming so powerful that they were replacing many of the functions previously performed by the larger mainframe computers at a fraction of the cost. It was during this era that the first Enterprise Resource Planning (ERP) systems were developed and run on the client-server architecture. An ERP system is a software application with a centralized database that can be used to run a company’s entire business. With separate modules for accounting, finance, inventory, human resources, and many, many more, ERP systems, with Germany’s SAP leading the way, represented the state of the art in information systems integration. We will discuss ERP systems as part of the chapter on process (chapter 8).
The World Wide Web and E-Commerce
First invented in 1969, the Internet was confined to use by universities, government agencies, and researchers for many years. Its rather arcane commands and user applications made it unsuitable for mainstream use in business. One exception to this was the ability to expand electronic mail outside the confines of a single organization. While the first e-mail messages on the Internet were sent in the early 1970s, companies who wanted to expand their LAN-based e-mail started hooking up to the Internet in the 1980s. Companies began connecting their internal networks to the Internet in order to allow communication between their employees and employees at other companies. It was with these early Internet connections that the computer truly began to evolve from a computational device to a communications device.
In 1989, Tim Berners-Lee developed a simpler way for researchers to share information over the network at CERN laboratories, a concept he called the World Wide Web.[4] This invention became the launching point of the growth of the Internet as a way for businesses to share information about themselves. As web browsers and Internet connections became the norm, companies rushed to grab domain names and create websites.
Registered trademark of Amazon Technologies, Inc.
In 1991, the National Science Foundation, which governed how the Internet was used, lifted restrictions on its commercial use. Amazon.com was founded in 1994 and eBay in 1995, two true pioneers in the use of the new digital marketplace. A mad rush of investment in Internet-based businesses led to the dot-com boom through the late 1990s, and then the dot-com bust in 2000. While much can be learned from the speculation and crazy economic theories espoused during that bubble, one important outcome for businesses was that thousands of miles of Internet connections were laid around the world during that time. The world became truly “wired” heading into the new millennium, ushering in the era of globalization, which we will discuss in chapter 11.
As it became more expected for companies to be connected to the Internet, the digital world also became a more dangerous place. Computer viruses and worms, once slowly propagated through the sharing of computer disks, could now grow with tremendous speed via the Internet. Software written for a disconnected world found it very difficult to defend against these sorts of threats. A whole new industry of computer and Internet security arose. We will study information security in chapter 6.
Web 2.0
As the world recovered from the dot-com bust, the use of technology in business continued to evolve at a frantic pace. Websites became interactive; instead of just visiting a site to find out about a business and purchase its products, customers wanted to be able to customize their experience and interact with the business. This new type of interactive website, where you did not have to know how to create a web page or do any programming in order to put information online, became known as web 2.0. Web 2.0 is exemplified by blogging, social networking, and interactive comments being available on many websites. This new web-2.0 world, in which online interaction became expected, had a big impact on many businesses and even whole industries. Some industries, such as bookstores, found themselves relegated to a niche status. Others, such as video rental chains and travel agencies, simply began going out of business as they were replaced by online technologies. This process of technology replacing a middleman in a transaction is called disintermediation.
As the world became more connected, new questions arose. Should access to the Internet be considered a right? Can I copy a song that I downloaded from the Internet? How can I keep information that I have put on a website private? What information is acceptable to collect from children? Technology moved so fast that policymakers did not have enough time to enact appropriate laws, making for a Wild West–type atmosphere. Ethical issues surrounding information systems will be covered in chapter 12.
The Post-PC World
After thirty years as the primary computing device used in most businesses, sales of the PC are now beginning to decline as sales of tablets and smartphones are taking off. Just as the mainframe before it, the PC will continue to play a key role in business, but will no longer be the primary way that people interact and do business. The limited storage and processing power of these devices is being offset by a move to “cloud” computing, which allows for storage, sharing, and backup of information on a massive scale. This will require new rounds of thinking and innovation on the part of businesses as technology continues to advance.
The Eras of Business Computing
Era | Hardware | Operating System | Applications
Mainframe (1970s) | Terminals connected to mainframe computer | Time-sharing (TSO) on MVS | Custom-written MRP software
PC (mid-1980s) | IBM PC or compatible; sometimes connected to mainframe computer via expansion card | MS-DOS | WordPerfect, Lotus 1-2-3
Client-Server (late 80s to early 90s) | IBM PC “clone” on a Novell Network | Windows for Workgroups | Microsoft Word, Microsoft Excel
World Wide Web (mid-90s to early 2000s) | IBM PC “clone” connected to company intranet | Windows XP | Microsoft Office, Internet Explorer
Web 2.0 (mid-2000s to present) | Laptop connected to company Wi-Fi | Windows 7 | Microsoft Office, Firefox
Post-PC (today and beyond) | Apple iPad | iOS | Mobile-friendly websites, mobile apps
Can Information Systems Bring Competitive Advantage?
It has always been the assumption that the implementation of information systems will, in and of itself, bring a business competitive advantage. After all, if installing one computer to manage inventory can make a company more efficient, won’t installing several computers to handle even more of the business continue to improve it?
In 2003, Nicholas Carr wrote an article in the Harvard Business Review that questioned this assumption. The article, entitled “IT Doesn’t Matter,” raised the idea that information technology has become just a commodity. Instead of viewing technology as an investment that will make a company stand out, it should be seen as something like electricity: It should be managed to reduce costs, ensure that it is always running, and be as risk-free as possible.
As you might imagine, this article was both hailed and scorned. Can IT bring a competitive advantage? It sure did for Walmart (see sidebar). We will discuss this topic further in chapter 7.
Sidebar: Walmart Uses Information Systems to Become the World’s Leading Retailer
Walmart is the world’s largest retailer, earning \$15.2 billion on sales of \$443.9 billion in the fiscal year that ended on January 31, 2012. Walmart currently serves over 200 million customers every week, worldwide.[5] Walmart’s rise to prominence is due in no small part to their use of information systems.
Registered trademark of Wal-Mart Stores, Inc. (CC BY SA 3.0 Unported; Jared C. Benedict via Wikipedia)
One of the keys to this success was the implementation of Retail Link, a supply-chain management system. This system, unique when initially implemented in the mid-1980s, allowed Walmart’s suppliers to directly access the inventory levels and sales information of their products at any of Walmart’s more than ten thousand stores. Using Retail Link, suppliers can analyze how well their products are selling at one or more Walmart stores, with a range of reporting options. Further, Walmart requires the suppliers to use Retail Link to manage their own inventory levels. If a supplier feels that their products are selling out too quickly, they can use Retail Link to petition Walmart to raise the levels of inventory for their products. This has essentially allowed Walmart to “hire” thousands of product managers, all of whom have a vested interest in the products they are managing. This revolutionary approach to managing inventory has allowed Walmart to continue to drive prices down and respond to market forces quickly.
Today, Walmart continues to innovate with information technology. Using its tremendous market presence, any technology that Walmart requires its suppliers to implement immediately becomes a business standard.
Summary
In this chapter, you have been introduced to the concept of information systems. We have reviewed several definitions, with a focus on the components of information systems: technology, people, and process. We have reviewed how the business use of information systems has evolved over the years, from the use of large mainframe computers for number crunching, through the introduction of the PC and networks, all the way to the era of mobile computing. During each of these phases, new innovations in software and technology allowed businesses to integrate technology more deeply.
We are now at a point where every company is using information systems and asking the question: Does it bring a competitive advantage? In the end, that is really what this book is about. Every businessperson should understand what an information system is and how it can be used to bring a competitive advantage. And that is the task we have before us.
Study Questions
1. What are the five components that make up an information system?
2. What are three examples of information system hardware?
3. Microsoft Windows is an example of which component of information systems?
4. What is application software?
5. What roles do people play in information systems?
6. What is the definition of a process?
7. What was invented first, the personal computer or the Internet (ARPANET)?
8. In what year were restrictions on commercial use of the Internet first lifted? When were eBay and Amazon founded?
9. What does it mean to say we are in a “post-PC world”?
10. What is Carr’s main argument about information technology?
Exercises
1. Suppose that you had to explain to a member of your family or one of your closest friends the concept of an information system. How would you define it? Write a one-paragraph description in your own words that you feel would best describe an information system to your friends or family.
2. Of the five primary components of an information system (hardware, software, data, people, process), which do you think is the most important to the success of a business organization? Write a one-paragraph answer to this question that includes an example from your personal experience to support your answer.
3. We all interact with various information systems every day: at the grocery store, at work, at school, even in our cars (at least some of us). Make a list of the different information systems you interact with every day. See if you can identify the technologies, people, and processes involved in making these systems work.
4. Do you agree that we are in a post-PC stage in the evolution of information systems? Some people argue that we will always need the personal computer, but that it will not be the primary device used for manipulating information. Others think that a whole new era of mobile and biological computing is coming. Do some original research and make your prediction about what business computing will look like in the next generation.
5. The Walmart case study introduced you to how that company used information systems to become the world’s leading retailer. Walmart has continued to innovate and is still looked to as a leader in the use of technology. Do some original research and write a one-page report detailing a new technology that Walmart has recently implemented or is pioneering.
1. Wikipedia entry on "Information Systems," as displayed on August 19, 2012. Wikipedia: The Free Encyclopedia. San Francisco: Wikimedia Foundation. http://en.Wikipedia.org/wiki/Informa...s_(discipline).
2. Excerpted from Information Systems Today - Managing in the Digital World, fourth edition. Prentice-Hall, 2010.
3. Excerpted from Management Information Systems, twelfth edition, Prentice-Hall, 2012.
4. CERN's "The Birth of the Web." http://public.web.cern.ch/public/en/about/web-en.html
5. Walmart 2012 Annual Report.
Learning Objectives
Upon successful completion of this chapter, you will be able to:
• describe information systems hardware;
• identify the primary components of a computer and the functions they perform; and
• explain the effect of the commoditization of the personal computer.
Introduction
As we learned in the first chapter, an information system is made up of five components: hardware, software, data, people, and process. The physical parts of computing devices – those that you can actually touch – are referred to as hardware. In this chapter, we will take a look at this component of information systems, learn a little bit about how it works, and discuss some of the current trends surrounding it.
As stated above, computer hardware encompasses digital devices that you can physically touch. This includes devices such as the following:
• desktop computers
• laptop computers
• mobile phones
• tablet computers
• e-readers
• storage devices, such as flash drives
• input devices, such as keyboards, mice, and scanners
• output devices, such as printers and speakers.
Besides these more traditional computer hardware devices, many items that were once not considered digital devices are now becoming computerized themselves. Digital technologies are now being integrated into many everyday objects, so the days of a device being labeled categorically as computer hardware may be ending. Examples of these types of digital devices include automobiles, refrigerators, and even soft-drink dispensers. In this chapter, we will also explore digital devices, beginning with defining what we mean by the term itself.
Digital Devices
A digital device processes electronic signals that represent either a one (“on”) or a zero (“off”). The on state is represented by the presence of an electronic signal; the off state is represented by the absence of an electronic signal. Each one or zero is referred to as a bit (a contraction of binary digit); a group of eight bits is a byte. The first personal computers could process 8 bits of data at once; modern PCs can now process 64 bits of data at a time, which is where the term 64-bit processor comes from.
Sidebar: Understanding Binary
As you know, the system of numbering we are most familiar with is base-ten numbering. In base-ten numbering, each column in the number represents a power of ten, with the far-right column representing 10^0 (ones), the next column from the right representing 10^1 (tens), then 10^2 (hundreds), then 10^3 (thousands), etc. For example, the number 1010 in decimal represents: (1 x 1000) + (0 x 100) + (1 x 10) + (0 x 1).
Computers use the base-two numbering system, also known as binary. In this system, each column in the number represents a power of two, with the far-right column representing 2^0 (ones), the next column from the right representing 2^1 (twos), then 2^2 (fours), then 2^3 (eights), etc. For example, the number 1010 in binary represents (1 x 8) + (0 x 4) + (1 x 2) + (0 x 1). In base ten, this evaluates to 10.
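The column-by-column expansion above can be expressed as a short program. This is a minimal sketch for illustration; the function name `binary_to_decimal` is our own, and Python's built-in `int(..., 2)` does the same conversion.

```python
# Convert a binary string to decimal by summing powers of two,
# mirroring the column-by-column expansion described above.
def binary_to_decimal(bits: str) -> int:
    total = 0
    # Walk the digits right-to-left; each column is the next power of two.
    for position, digit in enumerate(reversed(bits)):
        total += int(digit) * (2 ** position)
    return total

print(binary_to_decimal("1010"))  # (1 x 8) + (0 x 4) + (1 x 2) + (0 x 1) = 10
print(int("1010", 2))             # Python's built-in conversion agrees: 10
```

Running this confirms the worked example: binary 1010 evaluates to 10 in base ten.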
As the capacities of digital devices grew, new terms were developed to identify the capacities of processors, memory, and disk storage space. Prefixes were applied to the word byte to represent different orders of magnitude. Since these are digital specifications, the prefixes were originally meant to represent multiples of 1024 (which is 2^10), but have more recently been rounded to mean multiples of 1000.
A Listing of Binary Prefixes
Prefix | Represents | Example
kilo | one thousand | kilobyte = one thousand bytes
mega | one million | megabyte = one million bytes
giga | one billion | gigabyte = one billion bytes
tera | one trillion | terabyte = one trillion bytes
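The difference between the original binary sense of these prefixes (multiples of 1024) and the rounded decimal sense (multiples of 1000) can be seen by computing both. This is a small illustrative sketch; the dictionary names are our own.

```python
# Decimal prefixes (rounded, powers of 1000) vs. the original
# binary sense (powers of 1024, i.e. powers of 2^10).
DECIMAL = {"kilo": 10**3, "mega": 10**6, "giga": 10**9, "tera": 10**12}
BINARY = {"kilo": 2**10, "mega": 2**20, "giga": 2**30, "tera": 2**40}

for prefix in DECIMAL:
    print(f"{prefix}byte: {DECIMAL[prefix]:,} (decimal) vs {BINARY[prefix]:,} (binary)")
```

The gap widens as the prefixes grow: a binary kilobyte is only 2.4% larger than a decimal one, but a binary terabyte is nearly 10% larger. This is also why a hard disk advertised in decimal gigabytes reports a smaller capacity when measured in binary units.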
Tour of a PC
All personal computers consist of the same basic components: a CPU, memory, circuit board, storage, and input/output devices. It also turns out that almost every digital device uses the same set of components, so examining the personal computer will give us insight into the structure of a variety of digital devices. So let’s take a “tour” of a personal computer and see what makes them function.
Processing Data: The CPU
As stated above, most computing devices have a similar architecture. The core of this architecture is the central processing unit, or CPU. The CPU can be thought of as the “brains” of the device. The CPU carries out the commands sent to it by the software and returns results to be acted upon.
The earliest CPUs were large circuit boards with limited functionality. Today, a CPU is generally on one chip and can perform a large variety of functions. There are two primary manufacturers of CPUs for personal computers: Intel and Advanced Micro Devices (AMD).
The speed (“clock time”) of a CPU is measured in hertz. A hertz is defined as one cycle per second. Using the prefixes mentioned above, we can see that a kilohertz (abbreviated kHz) is one thousand cycles per second, a megahertz (MHz) is one million cycles per second, and a gigahertz (GHz) is one billion cycles per second. The CPU’s processing power is increasing at an amazing rate (see the sidebar about Moore’s Law). Besides a faster clock time, many CPU chips now contain multiple processors per chip. These chips, known as dual-core (two processors) or quad-core (four processors), increase the processing power of a computer by providing the capability of multiple CPUs.
Sidebar: Moore’s Law
We all know that computers get faster every year. Many times, we are not sure if we want to buy today’s model of smartphone, tablet, or PC because next week it won’t be the most advanced any more. Gordon Moore, one of the founders of Intel, recognized this phenomenon in 1965, noting that microprocessor transistor counts had been doubling every year.[1] His insight eventually evolved into Moore’s Law, which states that the number of transistors on a chip will double every two years. This has been generalized into the concept that computing power will double every two years for the same price point. Another way of looking at this is to think that the price for the same computing power will be cut in half every two years. Though many have predicted its demise, Moore’s Law has held true for over forty years (see figure below).
A graphical representation of Moore’s Law (CC-BY-SA: Wgsimon)
There will be a point, someday, where we reach the limits of Moore’s Law, where we cannot continue to shrink circuits any further. But engineers will continue to seek ways to increase performance.
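The compounding effect of Moore's Law is easy to underestimate, so it is worth computing. The sketch below projects transistor counts under a strict doubling every two years; the starting figure (2,300 transistors, roughly the Intel 4004 of 1971) is an illustrative assumption, not a value from the text.

```python
# Project transistor counts under Moore's Law: one doubling per two years.
# The 2,300-transistor / 1971 starting point is an illustrative assumption.
def projected_transistors(start_count: int, start_year: int, year: int) -> int:
    doublings = (year - start_year) // 2  # whole two-year periods elapsed
    return start_count * 2 ** doublings

# Forty years is twenty doublings: 2,300 grows past two billion.
print(projected_transistors(2300, 1971, 2011))  # 2,411,724,800
```

Twenty doublings multiply the starting count by 2^20, or roughly a million, which is why chips went from thousands of transistors to billions in four decades.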
Motherboard
The motherboard is the main circuit board on the computer. The CPU, memory, and storage components, among other things, all connect into the motherboard. Motherboards come in different shapes and sizes, depending upon how compact or expandable the computer is designed to be. Most modern motherboards have many integrated components, such as video and sound processing, which used to require separate components.
Motherboard
The motherboard provides much of the bus of the computer (the term bus refers to the electrical connection between different computer components). The bus is an important determiner of the computer’s speed: the combination of how fast the bus can transfer data and the number of data bits that can be moved at one time determine the speed.
Random-Access Memory
When a computer starts up, it begins to load information from the hard disk into its working memory. This working memory, called random-access memory (RAM), can transfer data much faster than the hard disk. Any program that you are running on the computer is loaded into RAM for processing. In order for a computer to work effectively, some minimal amount of RAM must be installed. In most cases, adding more RAM will allow the computer to run faster. Another characteristic of RAM is that it is “volatile.” This means that it can store data as long as it is receiving power; when the computer is turned off, any data stored in RAM is lost.
Memory DIMM
RAM is generally installed in a personal computer through the use of a dual-inline memory module (DIMM). The type of DIMM accepted into a computer is dependent upon the motherboard. As described by Moore’s Law, the amount of memory and speeds of DIMMs have increased dramatically over the years.
Hard Disk
Hard disk enclosure
While the RAM is used as working memory, the computer also needs a place to store data for the longer term. Most of today’s personal computers use a hard disk for long-term data storage. A hard disk is where data is stored when the computer is turned off and where it is retrieved from when the computer is turned on. Why is it called a hard disk? A hard disk consists of a stack of disks inside a hard metal case. A floppy disk (discussed below) was a removable disk that, in some cases at least, was flexible, or “floppy.”
Solid-State Drives
A relatively new component becoming more common in some personal computers is the solid-state drive (SSD). The SSD performs the same function as a hard disk: long-term storage. Instead of spinning disks, the SSD uses flash memory, which is much faster.
Solid-state drives are currently quite a bit more expensive than hard disks. However, the use of flash memory instead of disks makes them much lighter and faster than hard disks. SSDs are primarily utilized in portable computers, making them lighter and more efficient. Some computers combine the two storage technologies, using the SSD for the most accessed data (such as the operating system) while using the hard disk for data that is accessed less frequently. As with any technology, Moore’s Law is driving up capacity and speed and lowering prices of solid-state drives, which will allow them to proliferate in the years to come.
Removable Media
Besides fixed storage components, removable storage media are also used in most personal computers. Removable media allows you to take your data with you. And just as with all other digital technologies, these media have gotten smaller and more powerful as the years have gone by. Early computers used floppy disks, which could be inserted into a disk drive in the computer. Data was stored on a magnetic disk inside an enclosure. These disks ranged from 8″ in the earliest days down to 3 1/2″.
Floppy-disk evolution (8″ to 5 1/4″ to 3 1/2″) (Public Domain; via Wikipedia)
Around the turn of the century, a new portable storage technology was being developed: the USB flash drive (more about the USB port later in the chapter). This device attaches to the universal serial bus (USB) connector, which became standard on all personal computers beginning in the late 1990s. As with all other storage media, flash drive storage capacity has skyrocketed over the years, from initial capacities of eight megabytes to current capacities of 64 gigabytes and still growing.
Network Connection
When personal computers were first developed, they were stand-alone units, which meant that data was brought into the computer or removed from the computer via removable media, such as the floppy disk. Beginning in the mid-1980s, however, organizations began to see the value in connecting computers together via a digital network. Because of this, personal computers needed the ability to connect to these networks. Initially, this was done by adding an expansion card to the computer that enabled the network connection, but by the mid-1990s, a network port was standard on most personal computers. As wireless technologies began to dominate in the early 2000s, many personal computers also began including wireless networking capabilities. Digital communication technologies will be discussed further in chapter 5.
Input and Output
USB connector
In order for a personal computer to be useful, it must have channels for receiving input from the user and channels for delivering output to the user. These input and output devices connect to the computer via various connection ports, which generally are part of the motherboard and are accessible outside the computer case. In early personal computers, specific ports were designed for each type of output device. The configuration of these ports has evolved over the years, becoming more and more standardized over time. Today, almost all devices plug into a computer through the use of a USB port. This port type, first introduced in 1996, has increased in its capabilities, both in its data transfer rate and power supplied.
Bluetooth
Besides USB, some input and output devices connect to the computer via a wireless-technology standard called Bluetooth. Bluetooth was first invented in the 1990s and exchanges data over short distances using radio waves. Bluetooth generally has a range of 100 to 150 feet. For devices to communicate via Bluetooth, both the personal computer and the connecting device must have a Bluetooth communication chip installed.
Input Devices
All personal computers need components that allow the user to input data. Early computers used simply a keyboard to allow the user to enter data or select an item from a menu to run a program. With the advent of the graphical user interface, the mouse became a standard component of a computer. These two components are still the primary input devices to a personal computer, though variations of each have been introduced with varying levels of success over the years. For example, many new devices now use a touch screen as the primary way of entering data.
Besides the keyboard and mouse, additional input devices are becoming more common. Scanners allow users to input documents into a computer, either as images or as text. Microphones can be used to record audio or give voice commands. Webcams and other types of video cameras can be used to record video or participate in a video chat session.
Output Devices
Output devices are essential as well. The most obvious output device is a display, visually representing the state of the computer. In some cases, a personal computer can support multiple displays or be connected to larger-format displays such as a projector or large-screen television. Besides displays, other output devices include speakers for audio output and printers for printed output.
Sidebar: What Hardware Components Contribute to the Speed of My Computer?
The speed of a computer is determined by many elements, some related to hardware and some related to software. In hardware, speed is improved by giving the electrons shorter distances to traverse to complete a circuit. Since the first CPU was created in the early 1970s, engineers have constantly worked to figure out how to shrink these circuits and put more and more circuits onto the same chip. And this work has paid off – the speed of computing devices has been continuously improving ever since.
The hardware components that contribute to the speed of a personal computer are the CPU, the motherboard, RAM, and the hard disk. In most cases, these items can be replaced with newer, faster components. In the case of RAM, simply adding more RAM can also speed up the computer. The table below shows how each of these contributes to the speed of a computer. Besides upgrading hardware, there are many changes that can be made to the software of a computer to make it faster.
Component     Speed measured by     Units    Description
CPU           Clock speed           GHz      The number of instruction cycles completed per second.
Motherboard   Bus speed             MHz      How quickly data can move across the bus.
RAM           Data transfer rate    MB/s     How quickly data moves from memory to the system.
Hard disk     Access time           ms       The delay before the disk can begin transferring data.
Hard disk     Data transfer rate    MBit/s   How quickly data moves from the disk to the system.
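The relationships in the table can be made concrete with a short calculation. The sketch below (the 3.0 GHz and 100 MB/s figures are hypothetical examples, not taken from the text) converts a clock speed into the time for a single cycle and a transfer rate into an elapsed time:

```python
# Illustrative only: relating the speed metrics in the table to real quantities.
# The specific device numbers used here are hypothetical.

def cycle_time_ns(clock_ghz):
    """Time for one clock cycle, in nanoseconds (1 / cycles-per-second, scaled)."""
    return 1.0 / clock_ghz

def transfer_seconds(size_mb, rate_mb_per_s):
    """Time to move size_mb of data at a given transfer rate in MB/s."""
    return size_mb / rate_mb_per_s

print(cycle_time_ns(3.0))          # a 3.0 GHz CPU completes one cycle in ~0.33 ns
print(transfer_seconds(500, 100))  # moving 500 MB at 100 MB/s takes 5.0 seconds
```

This is why a faster clock or a faster transfer rate translates directly into a faster computer: each individual operation simply takes less time.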
Other Computing Devices
A personal computer is designed to be a general-purpose device. That is, it can be used to solve many different types of problems. As the technologies of the personal computer have become more commonplace, many of the components have been integrated into other devices that previously were purely mechanical. We have also seen an evolution in what defines a computer. Ever since the invention of the personal computer, users have clamored for a way to carry them around. Here we will examine several types of devices that represent the latest trends in personal computing.
Portable Computers
In 1983, Compaq Computer Corporation developed the first commercially successful portable personal computer. By today’s standards, the Compaq PC was not very portable: weighing in at 28 pounds, this computer was portable only in the most literal sense – it could be carried around. But this was no laptop; the computer was designed like a suitcase, to be lugged around and laid on its side to be used. Besides portability, the Compaq was successful because it was fully compatible with the software being run by the IBM PC, which was the standard for business.
A modern laptop
In the years that followed, portable computing continued to improve, giving us laptop and notebook computers. The “luggable” computer has given way to a much lighter clamshell computer that weighs from 4 to 6 pounds and runs on batteries. In fact, the most recent advances in technology give us a new class of laptop that is quickly becoming the standard: these laptops are extremely light and portable and use less power than their larger counterparts. The MacBook Air is a good example of this: it weighs less than three pounds and is only 0.68 inches thick!
Finally, as more and more organizations and individuals are moving much of their computing to the Internet, laptops are being developed that use “the cloud” for all of their data and application storage. These laptops are also extremely light because they have no need of a hard disk at all! A good example of this type of laptop (sometimes called a netbook) is Samsung’s Chromebook.
Smartphones
The first modern-day mobile phone was invented in 1973. Resembling a brick and weighing in at two pounds, it was priced out of reach for most consumers at nearly four thousand dollars. Since then, mobile phones have become smaller and less expensive; today mobile phones are a modern convenience available to all levels of society. As mobile phones evolved, they became more like small computers. These smartphones have many of the same characteristics as a personal computer, such as an operating system and memory. The first smartphone was the IBM Simon, introduced in 1994.
In January of 2007, Apple introduced the iPhone. Its ease of use and intuitive interface made it an immediate success and solidified the future of smartphones. Running on an operating system called iOS, the iPhone was really a small computer with a touch-screen interface. In 2008, the first Android phone was released, with similar functionality.
Tablet Computers
A tablet computer is one that uses a touch screen as its primary input and is small enough and light enough to be carried around easily. They generally have no keyboard and are self-contained inside a rectangular case. The first tablet computers appeared in the early 2000s and used an attached pen as a writing device for input. These tablets ranged in size from small personal digital assistants (PDAs), which were handheld, to full-sized, 14-inch devices. Most early tablets used a version of an existing computer operating system, such as Windows or Linux.
These early tablet devices were, for the most part, commercial failures. In January, 2010, Apple introduced the iPad, which ushered in a new era of tablet computing. Instead of a pen, the iPad used the finger as the primary input device. Instead of using the operating system of their desktop and laptop computers, Apple chose to use iOS, the operating system of the iPhone. Because the iPad had a user interface that was the same as the iPhone, consumers felt comfortable and sales took off. The iPad has set the standard for tablet computing. After the success of the iPad, computer manufacturers began to develop new tablets that utilized operating systems that were designed for mobile devices, such as Android.
The Rise of Mobile Computing
Mobile computing is having a huge impact on the business world today. The use of smartphones and tablet computers is rising at double-digit rates each year. The Gartner Group, in a report issued in April, 2013, estimates that over 1.7 billion mobile phones will ship worldwide in 2013, as compared to just over 340 million personal computers. Over half of these mobile phones are smartphones.[2] Almost 200 million tablet computers are predicted to ship in 2013. According to the report, PC shipments will continue to decline as phone and tablet shipments continue to increase.[3]
Integrated Computing
Along with advances in computers themselves, computing technology is being integrated into many everyday products. From automobiles to refrigerators to airplanes, computing technology is enhancing what these devices can do and is adding capabilities that would have been considered science fiction just a few years ago. Two of the most visible examples are the "smart" house, in which household appliances and systems are networked and automated, and the self-driving car.
The Commoditization of the Personal Computer
Over the past thirty years, as the personal computer has gone from technical marvel to part of our everyday lives, it has also become a commodity. The PC has become a commodity in the sense that there is very little differentiation between computers, and the primary factor that controls their sale is their price. Hundreds of manufacturers all over the world now create parts for personal computers. Dozens of companies buy these parts and assemble the computers. As commodities, there are essentially no differences between computers made by these different companies. Profit margins for personal computers are razor-thin, leading hardware developers to find the lowest-cost manufacturing.
There is one brand of computer for which this is not the case – Apple. Because Apple does not make computers that run on the same open standards as other manufacturers, they can make a unique product that no one can easily copy. By creating what many consider to be a superior product, Apple can charge more for their computers than other manufacturers. Just as with the iPad and iPhone, Apple has chosen a strategy of differentiation, which, at least at this time, seems to be paying off.
The Problem of Electronic Waste
Personal computers have been around for over thirty-five years. Millions of them have been used and discarded. Mobile phones are now available in even the remotest parts of the world and, after a few years of use, they are discarded. Where does this electronic debris end up?
Electronic waste (Public Domain)
Often, it gets routed to any country that will accept it. Many times, it ends up in dumps in developing nations. These dumps are beginning to be seen as health hazards for those living near them. Though many manufacturers have made strides in using materials that can be recycled, electronic waste is a problem with which we must all deal.
Summary
Information systems hardware consists of the components of digital technology that you can touch. In this chapter, we reviewed the components that make up a personal computer, with the understanding that the configuration of a personal computer is very similar to that of any type of digital computing device. A personal computer is made up of many components, most importantly the CPU, motherboard, RAM, hard disk, removable media, and input/output devices. We also reviewed some variations on the personal computer, such as the tablet computer and the smartphone. In accordance with Moore’s Law, these technologies have improved quickly over the years, making today’s computing devices much more powerful than devices just a few years ago. Finally, we discussed two of the consequences of this evolution: the commoditization of the personal computer and the problem of electronic waste.
Study Questions
1. Write your own description of what the term information systems hardware means.
2. What is the impact of Moore’s Law on the various hardware components described in this chapter?
3. Write a summary of one of the items linked to in the “Integrated Computing” section.
4. Explain why the personal computer is now considered a commodity.
5. The CPU can also be thought of as the _____________ of the computer.
6. List the following in increasing order (slowest to fastest): megahertz, kilohertz, gigahertz.
7. What is the bus of a computer?
8. Name two differences between RAM and a hard disk.
9. What are the advantages of solid-state drives over hard disks?
10. How heavy was the first commercially successful portable computer?
Exercises
1. Review the sidebar on the binary number system. How would you represent the number 16 in binary? How about the number 100? Besides decimal and binary, other number bases are used in computing and programming. One of the most used bases is hexadecimal, which is base-16. In base-16, the numerals 0 through 9 are supplemented with the letters A (10) through F (15). How would you represent the decimal number 100 in hexadecimal?
2. Review the timeline of computers at the Old Computers website. Pick one computer from the listing and write a brief summary. Include the specifications for CPU, memory, and screen size. Now find the specifications of a computer being offered for sale today and compare. Did Moore’s Law hold true?
3. The Homebrew Computer Club was one of the original clubs for enthusiasts of the first personal computer, the Altair 8800. Read some of their newsletters and then discuss some of the issues surrounding this early personal computer.
4. If you could build your own personal computer, what components would you purchase? Put together a list of the components you would use to create it, including a computer case, motherboard, CPU, hard disk, RAM, and DVD drive. How can you be sure they are all compatible with each other? How much would it cost? How does this compare to a similar computer purchased from a vendor such as Dell or HP?
5. Review the Wikipedia entry on electronic waste. Now find at least two more scholarly articles on this topic. Prepare a slideshow that summarizes the issue and then recommend a possible solution based on your research.
6. As with any technology text, there have been advances in technologies since publication. What technology that has been developed recently would you add to this chapter?
7. What is the current state of solid-state drives vs. hard disks? Do original research online where you can compare price on solid-state drives and hard disks. Be sure you note the differences in price, capacity, and speed.
1. Moore, Gordon E. (1965). "Cramming more components onto integrated circuits" (PDF). Electronics Magazine. p. 4. Retrieved 2012-10-18.
2. Smartphone shipments to surpass feature phones this year. CNet, June 4, 2013. http://news.cnet.com/8301-1035_3-575...nes-this-year/
3. Gartner Press Release. April 4, 2013. http://www.gartner.com/newsroom/id/2408515
Learning Objectives
Upon successful completion of this chapter, you will be able to:
• define the term software;
• describe the two primary categories of software;
• describe the role ERP software plays in an organization;
• describe cloud computing and its advantages and disadvantages for use in an organization; and
• define the term open-source and identify its primary characteristics.
Introduction
The second component of an information system is software. Simply put: Software is the set of instructions that tell the hardware what to do. Software is created through the process of programming (we will cover the creation of software in more detail in chapter 10). Without software, the hardware would not be functional.
Types of Software
Software can be broadly divided into two categories: operating systems and application software. Operating systems manage the hardware and create the interface between the hardware and the user. Application software is the category of programs that do something useful for the user.
Operating Systems
The operating system provides several essential functions, including:
1. managing the hardware resources of the computer;
2. providing the user-interface components;
3. providing a platform for software developers to write applications.
All computing devices run an operating system. For personal computers, the most popular operating systems are Microsoft’s Windows, Apple’s OS X, and different versions of Linux. Smartphones and tablets run operating systems as well, such as Apple’s iOS, Google’s Android, Microsoft’s Windows Mobile, and Blackberry.
Early personal-computer operating systems were simple by today’s standards; they did not provide multitasking and required the user to type commands to initiate an action. The amount of memory that early operating systems could handle was limited as well, making large programs impractical to run. The most popular of the early operating systems was IBM’s Disk Operating System, or DOS, which was actually developed for them by Microsoft.
In 1984, Apple introduced the Macintosh computer, featuring an operating system with a graphical user interface. Though not the first graphical operating system, it was the first one to find commercial success. In 1985, Microsoft released the first version of Windows. This version of Windows was not an operating system, but instead was an application that ran on top of the DOS operating system, providing a graphical environment. It was quite limited and had little commercial success. It was not until the 1990 release of Windows 3.0 that Microsoft found success with a graphical user interface. Because of the hold of IBM and IBM-compatible personal computers on business, it was not until Windows 3.0 was released that business users began using a graphical user interface, ushering us into the graphical-computing era. Since 1990, both Apple and Microsoft have released many new versions of their operating systems, with each release adding the ability to process more data at once and access more memory. Features such as multitasking, virtual memory, and voice input have become standard features of both operating systems.
Linux logo (Copyright: Larry Ewing)
A third personal-computer operating system family that is gaining in popularity is Linux (pronounced “linn-ex”). Linux is a version of the Unix operating system that runs on the personal computer. Unix is an operating system used primarily by scientists and engineers on larger minicomputers. These are very expensive computers, and software developer Linus Torvalds wanted to find a way to make Unix run on less expensive personal computers. Linux was the result. Linux has many variations and now powers a large percentage of web servers in the world. It is also an example of open-source software, a topic we will cover later in this chapter.
Sidebar: Mac vs. Windows
Are you a Mac? Are you a PC? Ever since its introduction in 1984, users of the Apple Macintosh have been quite biased about their preference for the Macintosh operating system (now called OS X) over Microsoft’s. When Microsoft introduced Windows, Apple sued Microsoft, claiming that they copied the “look and feel” of the Macintosh operating system. In the end, Microsoft successfully defended themselves.
Over the past few years, Microsoft and Apple have traded barbs with each other, each claiming to have a better operating system and software. While Microsoft has always had the larger market share (see sidebar), Apple has been the favorite of artists, musicians, and the technology elite. Apple also provides a lot of computers to elementary schools, thus gaining a following among the younger generation.
Sidebar: Why Is Microsoft Software So Dominant in the Business World?
If you’ve worked in the world of business, you may have noticed that almost all of the computers run a version of Microsoft’s Windows operating system. Why is this? On almost all college campuses, you see a preponderance of Apple Macintosh laptops. In elementary schools, Apple reigns as well. Why has this not extended into the business world?
As we learned in chapter 1, almost all businesses used IBM mainframe computers back in the 1960s and 1970s. These same businesses shied away from personal computers until IBM released the PC in 1981. When executives had to make a decision about purchasing personal computers for their employees, they would choose the safe route and purchase IBM. The saying then was: “No one ever got fired for buying IBM.” So over the next decade, companies bought IBM personal computers (or those compatible with them), which ran an operating system called DOS. DOS was created by Microsoft, so when Microsoft released Windows as the next iteration of DOS, companies took the safe route and started purchasing Windows.
Microsoft soon found itself with the dominant personal-computer operating system for businesses. As the networked personal computer began to replace the mainframe computer as the primary way of computing inside businesses, it became essential for Microsoft to give businesses the ability to administer and secure their networks. Microsoft developed business-level server products to go along with their personal computer products, thereby providing a complete business solution. And so now, the saying goes: “No one ever got fired for buying Microsoft.”
Application Software
The second major category of software is application software. Application software is, essentially, software that allows the user to accomplish some goal or purpose. For example, if you have to write a paper, you might use the application-software program Microsoft Word. If you want to listen to music, you might use iTunes. To surf the web, you might use Internet Explorer or Firefox. Even a computer game could be considered application software.
The “Killer” App
When a new type of digital device is invented, there are generally a small group of technology enthusiasts who will purchase it just for the joy of figuring out how it works. However, for most of us, until a device can actually do something useful we are not going to spend our hard-earned money on it. A “killer” application is one that becomes so essential that large numbers of people will buy a device just to run that application. For the personal computer, the killer application was the spreadsheet. In 1979, VisiCalc, the first personal-computer spreadsheet package, was introduced. It was an immediate hit and drove sales of the Apple II. It also solidified the value of the personal computer beyond the relatively small circle of technology geeks. When the IBM PC was released, another spreadsheet program, Lotus 1-2-3, was the killer app for business users.
VisiCalc running on an Apple II. (Public Domain)
Productivity Software
Along with the spreadsheet, several other software applications have become standard tools for the workplace. These applications, called productivity software, allow office employees to complete their daily work. Many times, these applications come packaged together, such as in Microsoft’s Office suite. Here is a list of these applications and their basic functions:
• Word processing: This class of software provides for the creation of written documents. Functions include the ability to type and edit text, format fonts and paragraphs, and add, move, and delete text throughout the document. Most modern word-processing programs also have the ability to add tables, images, and various layout and formatting features to the document. Word processors save their documents as electronic files in a variety of formats. By far, the most popular word-processing package is Microsoft Word, which saves its files in the DOCX format. This format can be read/written by many other word-processor packages.
• Spreadsheet: This class of software provides a way to do numeric calculations and analysis. The working area is divided into rows and columns, where users can enter numbers, text, or formulas. It is the formulas that make a spreadsheet powerful, allowing the user to develop complex calculations that can change based on the numbers entered. Most spreadsheets also include the ability to create charts based on the data entered. The most popular spreadsheet package is Microsoft Excel, which saves its files in the XLSX format. Just as with word processors, many other spreadsheet packages can read and write to this file format.
• Presentation: This class of software provides for the creation of slideshow presentations. Harkening back to the days of overhead projectors and transparencies, presentation software allows its users to create a set of slides that can be printed or projected on a screen. Users can add text, images, and other media elements to the slides. Microsoft’s PowerPoint is the most popular software right now, saving its files in PPTX format.
• Some office suites include other types of software. For example, Microsoft Office includes Outlook, its e-mail package, and OneNote, an information-gathering collaboration tool. The professional version of Office also includes Microsoft Access, a database package. (Databases are covered more in chapter 4.)
Microsoft popularized the idea of the office-software productivity bundle with their release of Microsoft Office. This package continues to dominate the market and most businesses expect employees to know how to use this software. However, many competitors to Microsoft Office do exist and are compatible with the file formats used by Microsoft (see table below). Recently, Microsoft has begun to offer a web version of their Office suite. Similar to Google Drive, this suite allows users to edit and share documents online utilizing cloud-computing technology. Cloud computing will be discussed later in this chapter.
Comparison of office application software suites
Utility Software and Programming Software
Two subcategories of application software worth mentioning are utility software and programming software. Utility software includes software that allows you to fix or modify your computer in some way. Examples include antivirus software and disk defragmentation software. These types of software packages were invented to fill shortcomings in operating systems. Many times, a subsequent release of an operating system will include these utility functions as part of the operating system itself.
Programming software is software whose purpose is to make more software. Most of these programs provide programmers with an environment in which they can write the code, test it, and convert it into the format that can then be run on a computer.
Sidebar: “PowerPointed” to Death
As presentation software, specifically Microsoft PowerPoint, has gained acceptance as the primary method to formally present information in a business setting, the art of giving an engaging presentation is becoming rare. Many presenters now just read the bullet points in the presentation and immediately bore those in attendance, who can already read it for themselves.
The real problem is not with PowerPoint as much as it is with the person creating and presenting. Author and thinker Seth Godin put it this way: “PowerPoint could be the most powerful tool on your computer. But it’s not. It’s actually a dismal failure. Almost every PowerPoint presentation sucks rotten eggs.”[1] The software used to help you communicate should not duplicate the presentation you want to give, but instead it should support it. I highly recommend the book Presentation Zen by Garr Reynolds to anyone who wants to improve their presentation skills.
Software developers are becoming aware of this problem as well. New digital presentation technologies are being developed, with the hopes of becoming “the next PowerPoint.” One innovative new presentation application is Prezi. Prezi is a presentation tool that uses a single canvas for the presentation, allowing presenters to place text, images, and other media on the canvas, and then navigate between these objects as they present. Just as with PowerPoint, Prezi should be used to supplement the presentation. And we must always remember that sometimes the best presentations are made with no digital tools.
Sidebar: I Own This Software, Right? Well . . .
When you purchase software and install it on your computer, are you the owner of that software? Technically, you are not! When you install software, you are actually just being given a license to use it. When you first install a software package, you are asked to agree to the terms of service or the license agreement. In that agreement, you will find that your rights to use the software are limited. For example, in the terms of the Microsoft Office Excel 2010 software license, you will find the following statement: “This software is licensed, not sold. This agreement only gives you some rights to use the features included in the software edition you licensed.”
For the most part, these restrictions are what you would expect: you cannot make illegal copies of the software and you may not use it to do anything illegal. However, there are other, more unexpected terms in these software agreements. For example, many software agreements ask you to agree to a limit on liability. Again, from Microsoft: “Limitation on and exclusion of damages. You can recover from Microsoft and its suppliers only direct damages up to the amount you paid for the software. You cannot recover any other damages, including consequential, lost profits, special, indirect or incidental damages.” What this means is that if a problem with the software causes harm to your business, you cannot hold Microsoft or the supplier responsible for damages.
Applications for the Enterprise
As the personal computer proliferated inside organizations, control over the information generated by the organization began splintering. Say the customer service department creates a customer database to keep track of calls and problem reports, and the sales department also creates a database to keep track of customer information. Which one should be used as the master list of customers? As another example, someone in sales might create a spreadsheet to calculate sales revenue, while someone in finance creates a different one that meets the needs of their department. However, it is likely that the two spreadsheets will come up with different totals for revenue. Which one is correct? And who is managing all of this information?
Enterprise Resource Planning
In the 1990s, the need to bring the organization’s information back under centralized control became more apparent. The enterprise resource planning (ERP) system (sometimes just called enterprise software) was developed to bring together an entire organization in one software application. Simply put, an ERP system is a software application utilizing a central database that is implemented throughout the entire organization. Let’s take a closer look at this definition:
• “A software application”: An ERP is a software application that is used by many of an organization’s employees.
• “utilizing a central database”: All users of the ERP edit and save their information to and from the same data source. What this means practically is that there is only one customer database, there is only one calculation for revenue, etc.
• “that is implemented throughout the entire organization”: ERP systems include functionality that covers all of the essential components of a business. Further, an organization can purchase modules for its ERP system that match specific needs, such as manufacturing or planning.
Registered trademark of SAP
ERP systems were originally marketed to large corporations. However, as more and more large companies began installing them, ERP vendors began targeting mid-sized and even smaller businesses. Some of the more well-known ERP systems include those from SAP, Oracle, and Microsoft.
In order to effectively implement an ERP system in an organization, the organization must be ready to make a full commitment. All aspects of the organization are affected as old systems are replaced by the ERP system. In general, implementing an ERP system can take two to three years and several million dollars. In most cases, the cost of the software is not the most expensive part of the implementation: it is the cost of the consultants!
So why implement an ERP system? If done properly, an ERP system can bring an organization a good return on their investment. By consolidating information systems across the enterprise and using the software to enforce best practices, most organizations see an overall improvement after implementing an ERP. Business processes as a form of competitive advantage will be covered in chapter 9.
Sidebar: Y2K and ERP
The initial wave of software-application development began in the 1960s, when applications were developed for mainframe computers. In those days, computing was expensive, so applications were designed to take as little space as possible. One shortcut that many programmers took was in the storage of dates, specifically the year. Instead of allocating four digits to hold the year, many programs allocated two digits, making the assumption that the first two digits were “19”. For example, to calculate how old someone was, the application would take the last two digits of the current year (for 1995, for example, that would be “95”) and then subtract the two digits stored for the birthday year (“65” for 1965). 95 minus 65 gives an age of 30, which is correct.
However, as the year 2000 approached, many of these “legacy” applications were still being used, and businesses were very concerned that any software applications they were using that needed to calculate dates would fail. To update our age-calculation example, the application would take the last two digits of the current year (for 2012, that would be “12”) and then subtract the two digits stored for the birthday year (“65” for 1965). 12 minus 65 gives an age of -53, which would cause an error. In order to solve this problem, applications would have to be updated to use four digits for years instead of two. Solving this would be a massive undertaking, as every line of code and every database would have to be examined.
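The two-digit-year bug described above can be sketched in a few lines. This is an illustration of the flaw, not code from any actual legacy system:

```python
# Sketch of the Y2K bug: years stored as their last two digits,
# as many legacy mainframe programs did.

def age_two_digit(current_yy, birth_yy):
    """Legacy-style age calculation using two-digit years."""
    return current_yy - birth_yy

def age_four_digit(current_year, birth_year):
    """Y2K-compliant calculation using full four-digit years."""
    return current_year - birth_year

print(age_two_digit(95, 65))       # 1995 - 1965 -> 30, correct
print(age_two_digit(12, 65))       # 2012 - 1965 -> -53, the Y2K failure
print(age_four_digit(2012, 1965))  # 47, correct once four digits are used
```

The fix looks trivial here, but finding and updating every such calculation across millions of lines of legacy code was the massive undertaking the text describes.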
This is where companies gained additional incentive to implement an ERP system. For many organizations that were considering upgrading to ERP systems in the late 1990s, this problem, known as Y2K (year 2000), gave them the extra push they needed to get their ERP installed before the year 2000. ERP vendors guaranteed that their systems had been designed to be Y2K compliant – which simply meant that they stored dates using four digits instead of two. This led to a massive increase in ERP installations in the years leading up to 2000, making the ERP a standard software application for businesses.
Customer Relationship Management
A customer relationship management (CRM) system is a software application designed to manage an organization’s customers. In today’s environment, it is important to develop relationships with your customers, and the use of a well-designed CRM can allow a business to personalize its relationship with each of its customers. Some ERP software systems include CRM modules. An example of a well-known CRM package is Salesforce.
Supply Chain Management
Many organizations must deal with the complex task of managing their supply chains. At its simplest, a supply chain is the linkage between an organization’s suppliers, its manufacturing facilities, and the distributors of its products. Each link in the chain has a multiplying effect on the complexity of the process: if there are two suppliers, one manufacturing facility, and two distributors, for example, then there are 2 x 1 x 2 = 4 links to handle. However, if you add two more suppliers, another manufacturing facility, and two more distributors, then you have 4 x 2 x 4 = 32 links to manage.
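The multiplying effect in this example can be expressed directly (a hypothetical illustration of the counting, not an SCM algorithm):

```python
def supply_chain_links(suppliers, facilities, distributors):
    """Count the supplier-facility-distributor paths an organization must manage."""
    return suppliers * facilities * distributors

# Two suppliers, one facility, two distributors:
print(supply_chain_links(2, 1, 2))  # 4

# Doubling each stage multiplies, rather than adds, the links:
print(supply_chain_links(4, 2, 4))  # 32
```

The point of the sketch is that complexity grows multiplicatively with each stage added to the chain, which is what motivates dedicated SCM software.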
A supply chain management (SCM) system manages the interconnection between these links, as well as the inventory of the products in their various stages of development. A full definition of a supply chain management system is provided by the Association for Operations Management: "The design, planning, execution, control, and monitoring of supply chain activities with the objective of creating net value, building a competitive infrastructure, leveraging worldwide logistics, synchronizing supply with demand, and measuring performance globally."[2] Most ERP systems include a supply chain management module.
Mobile Applications
Just as with the personal computer, mobile devices such as tablet computers and smartphones also have operating systems and application software. In fact, these mobile devices are in many ways just smaller versions of personal computers. A mobile app is a software application programmed to run specifically on a mobile device.
As we saw in chapter 2, smartphones and tablets are becoming a dominant form of computing, with many more smartphones being sold than personal computers. This means that organizations will have to get smart about developing software on mobile devices in order to stay relevant.
These days, most mobile devices run on one of two operating systems: Android or iOS. Android is an open-source operating system purchased and supported by Google; iOS is Apple’s mobile operating system. In the fourth quarter of 2012, Android was installed on 70.1% of all mobile phones shipped, followed by 21.0% for iOS. Other mobile operating systems of note are Blackberry (3.2%) and Windows (2.6%). [3]
As organizations consider making their digital presence compatible with mobile devices, they will have to decide whether to build a mobile app. A mobile app is an expensive proposition, and it will only run on one type of mobile device at a time. For example, if an organization creates an iPhone app, those with Android phones cannot run the application. Each app takes several thousand dollars to create, so this is not a trivial decision for many companies.
One option many companies have is to create a website that is mobile-friendly. A mobile website works on all mobile devices and costs about the same as creating an app. We will discuss the question of whether to build a mobile app more thoroughly in Chapter 10.
Cloud Computing
Historically, for software to run on a computer, an individual copy of the software had to be installed on the computer, either from a disk or, more recently, after being downloaded from the Internet. The concept of “cloud” computing changes this, however.
To understand cloud computing, we first have to understand what the cloud is. “The cloud” refers to applications, services, and data storage on the Internet. These service providers rely on giant server farms and massive storage devices that are connected via Internet protocols. Cloud computing is the use of these services by individuals and organizations.
You probably already use cloud computing in some forms. For example, if you access your e-mail via your web browser, you are using a form of cloud computing. If you use Google Drive’s applications, you are using cloud computing. While these are free versions of cloud computing, there is big business in providing applications and data storage over the web. Salesforce (see above) is a good example of cloud computing – its entire suite of CRM applications is offered via the cloud. Cloud computing is not limited to web applications: it can also be used for services such as phone or video streaming.
Advantages of Cloud Computing
• No software to install or upgrades to maintain.
• Available from any computer that has access to the Internet.
• Can scale to a large number of users easily.
• New applications can be up and running very quickly.
• Services can be leased for a limited time on an as-needed basis.
• Your information is not lost if your hard disk crashes or your laptop is stolen.
• You are not limited by the available memory or disk space on your computer.
Disadvantages of Cloud Computing
• Your information is stored on someone else’s computer – how safe is it?
• You must have Internet access to use it. If you do not have access, you’re out of luck.
• You are relying on a third party to provide these services.
Cloud computing has the ability to really impact how organizations manage technology. For example, why is an IT department needed to purchase, configure, and manage personal computers and software when all that is really needed is an Internet connection?
Using a Private Cloud
Many organizations are understandably nervous about giving up control of their data and some of their applications by using cloud computing. But they also see the value in reducing the need for installing software and adding disk storage to local computers. A solution to this problem lies in the concept of a private cloud. While there are various models of a private cloud, the basic idea is for the cloud service provider to section off web server space for a specific organization. The organization has full control over that server space while still gaining some of the benefits of cloud computing.
Virtualization
One technology that is utilized extensively as part of cloud computing is “virtualization.” Virtualization is the process of using software to simulate a computer or some other device. For example, using virtualization, a single computer can perform the functions of several computers. Companies such as EMC provide virtualization software that allows cloud service providers to provision web servers to their clients quickly and efficiently. Organizations are also implementing virtualization in order to reduce the number of servers needed to provide the necessary services. For more detail on how virtualization works, see this informational page from VMware.
Software Creation
How is software created? If software is the set of instructions that tells the hardware what to do, how are these instructions written? If a computer reads everything as ones and zeroes, do we have to learn how to write software that way?
Modern software applications are written using a programming language. A programming language consists of a set of commands and syntax that can be organized logically to execute specific functions. This language generally consists of a set of readable words combined with symbols. Using this language, a programmer writes a program (called the source code) that can then be compiled into machine-readable form, the ones and zeroes necessary to be executed by the CPU. Examples of well-known programming languages today include Java, PHP, and various flavors of C (Visual C, C++, C#). Languages such as HTML and JavaScript are used to develop web pages. Most of the time, programming is done inside a programming environment; when you purchase a copy of Visual Studio from Microsoft, it provides you with an editor, compiler, and help for many of Microsoft’s programming languages.
Software programming was originally an individual process, with each programmer working on an entire program, or several programmers each working on a portion of a larger program. However, newer methods of software development include a more collaborative approach, with teams of programmers working on code together. We will cover information-systems development more fully in chapter 10.
Open-Source Software
When the personal computer was first released, it did not serve any practical need. Early computers were difficult to program and required great attention to detail. However, many personal-computer enthusiasts immediately banded together to build applications and solve problems. These computer enthusiasts were happy to share any programs they built and solutions to problems they found; this collaboration enabled them to more quickly innovate and fix problems.
As software began to become a business, however, this idea of sharing everything fell out of favor, at least with some. When a software program takes hundreds of man-hours to develop, it is understandable that the programmers do not want to just give it away. This led to a new business model of restrictive software licensing, which required payment for software, a model that is still dominant today. This model is sometimes referred to as closed source, as the source code is not made available to others.
There are many, however, who feel that software should not be restricted. Just as with those early hobbyists in the 1970s, they feel that innovation and progress can be made much more rapidly if we share what we learn. In the 1990s, with Internet access connecting more and more people together, the open-source movement gained steam.
Open-source software is software that makes the source code available for anyone to copy and use. For most of us, having access to the source code of a program does us little good, as we are not programmers and won’t be able to do much with it. The good news is that open-source software is also available in a compiled format that we can simply download and install. The open-source movement has led to the development of some of the most-used software in the world, including the Firefox browser, the Linux operating system, and the Apache web server. Many also think open-source software is superior to closed-source software. Because the source code is freely available, many programmers have contributed to open-source software projects, adding features and fixing bugs.
Many businesses are wary of open-source software precisely because the code is available for anyone to see. They feel that this increases the risk of an attack. Others counter that this openness actually decreases the risk because the code is exposed to thousands of programmers who can incorporate code changes to quickly patch vulnerabilities.
There are many arguments on both sides of the aisle for the benefits of the two models. Some benefits of the open-source model are:
• The software is available for free.
• The software source-code is available; it can be examined and reviewed before it is installed.
• The large community of programmers who work on open-source projects leads to quick bug-fixing and feature additions.
Some benefits of the closed-source model are:
• By providing financial incentive for software development, some of the brightest minds have chosen software development as a career.
• Technical support from the company that developed the software.
Today there are thousands of open-source software applications available for download. For example, as we discussed previously in this chapter, you can get the productivity suite from Open Office. One good place to search for open-source software is sourceforge.net, where thousands of software applications are available for free download.
Summary
Software gives the instructions that tell the hardware what to do. There are two basic categories of software: operating systems and applications. Operating systems provide access to the computer hardware and make system resources available. Application software is designed to meet a specific goal. Productivity software is a subset of application software that provides basic business functionality to a personal computer: word processing, spreadsheets, and presentations. An ERP system is a software application with a centralized database that is implemented across the entire organization. Cloud computing is a method of software delivery that runs on any computer that has a web browser and access to the Internet. Software is developed through a process called programming, in which a programmer uses a programming language to put together the logic needed to create the program. While most software is developed using a closed-source model, the open-source movement is gaining more support today.
Study Questions
1. Come up with your own definition of software. Explain the key terms in your definition.
2. What are the functions of the operating system?
3. Which of the following are operating systems and which are applications: Microsoft Excel, Google Chrome, iTunes, Windows, Android, Angry Birds.
4. What is your favorite software application? What tasks does it help you accomplish?
5. What is a “killer” app? What was the killer app for the PC?
6. How would you categorize the software that runs on mobile devices? Break down these apps into at least three basic categories and give an example of each.
7. Explain what an ERP system does.
8. What is open-source software? How does it differ from closed-source software? Give an example of each.
9. What does a software license grant?
10. How did the Y2K (year 2000) problem affect the sales of ERP systems?
Exercises
1. Go online and find a case study about the implementation of an ERP system. Was it successful? How long did it take? Does the case study tell you how much money the organization spent?
2. What ERP system does your university or place of employment use? Find out which one they use and see how it compares to other ERP systems.
3. If you were running a small business with limited funds for information technology, would you consider using cloud computing? Find some web-based resources that support your decision.
4. Download and install Open Office. Use it to create a document or spreadsheet. How does it compare to Microsoft Office? Does the fact that you got it for free make it feel less valuable?
5. Go to sourceforge.net and review their most downloaded software applications. Report back on the variety of applications you find. Then pick one that interests you and report back on what it does, the kind of technical support offered, and the user reviews.
6. Review this article on the security risks of open-source software. Write a short analysis giving your opinion on the different risks discussed.
7. What are three examples of programming languages? What makes each of these languages useful to programmers?
1. From Why are your PowerPoints so bad? available for download at http://www.sethgodin.com/freeprize/reallybad-1.pdf.
2. http://www.apics.org/dictionary/dict...mation?ID=3984
3. Taken from IDC Worldwide Mobile Phone Tracker, February 14, 2013. Full report available at http://www.idc.com/getdoc.jsp?containerId=prUS23946013
Learning Objectives
Upon successful completion of this chapter, you will be able to:
• describe the differences between data, information, and knowledge;
• define the term database and identify the steps to creating one;
• describe the role of a database management system;
• describe the characteristics of a data warehouse; and
• define data mining and describe its role in an organization.
Introduction
You have already been introduced to the first two components of information systems: hardware and software. However, those two components by themselves do not make a computer useful. Imagine if you turned on a computer, started the word processor, but could not save a document. Imagine if you opened a music player but there was no music to play. Imagine opening a web browser but there were no web pages. Without data, hardware and software are not very useful! Data is the third component of an information system.
Data, Information, and Knowledge
Data are the raw bits and pieces of information with no context. If I told you, “15, 23, 14, 85,” you would not have learned anything. But I would have given you data.
Data can be quantitative or qualitative. Quantitative data is numeric, the result of a measurement, count, or some other mathematical calculation. Qualitative data is descriptive. “Ruby Red,” the color of a 2013 Ford Focus, is an example of qualitative data. A number can be qualitative too: if I tell you my favorite number is 5, that is qualitative data because it is descriptive, not the result of a measurement or mathematical calculation.
By itself, data is not that useful. To be useful, it needs to be given context. Returning to the example above, if I told you that “15, 23, 14, and 85″ are the numbers of students that had registered for upcoming classes, that would be information. By adding the context – that the numbers represent the count of students registering for specific classes – I have converted data into information.
Once we have put our data into context, aggregated and analyzed it, we can use it to make decisions for our organization. We can say that this consumption of information produces knowledge. This knowledge can be used to make decisions, set policies, and even spark innovation.
The final step up the information ladder is the step from knowledge (knowing a lot about a topic) to wisdom. We can say that someone has wisdom when they can combine their knowledge and experience to produce a deeper understanding of a topic. It often takes many years to develop wisdom on a particular topic, and requires patience.
Examples of Data
Almost all software programs require data to do anything useful. For example, if you are editing a document in a word processor such as Microsoft Word, the document you are working on is the data. The word-processing software can manipulate the data: create a new document, duplicate a document, or modify a document. Some other examples of data are: an MP3 music file, a video file, a spreadsheet, a web page, and an e-book. In some cases, such as with an e-book, you may only have the ability to read the data.
Databases
The goal of many information systems is to transform data into information in order to generate knowledge that can be used for decision making. In order to do this, the system must be able to take data, put the data into context, and provide tools for aggregation and analysis. A database is designed for just such a purpose.
A database is an organized collection of related information. It is an organized collection, because in a database, all data is described and associated with other data. All information in a database should be related as well; separate databases should be created to manage unrelated information. For example, a database that contains information about students should not also hold information about company stock prices. Databases are not always digital – a filing cabinet, for instance, might be considered a form of database. For the purposes of this text, we will only consider digital databases.
Relational Databases
Databases can be organized in many different ways, and thus take many forms. The most popular form of database today is the relational database. Popular examples of relational databases are Microsoft Access, MySQL, and Oracle. A relational database is one in which data is organized into one or more tables. Each table has a set of fields, which define the nature of the data stored in the table. A record is one instance of a set of fields in a table. To visualize this, think of the records as the rows of the table and the fields as the columns of the table. In the example below, we have a table of student information, with each row representing a student and each column representing one piece of information about the student.
Rows and columns in a table
In a relational database, all the tables are related by one or more fields, so that it is possible to connect all the tables in the database through the field(s) they have in common. For each table, one of the fields is identified as a primary key. This key is the unique identifier for each record in the table. To help you understand these terms further, let’s walk through the process of designing a database.
Designing a Database
Suppose a university wants to create an information system to track participation in student clubs. After interviewing several people, the design team learns that the goal of implementing the system is to give better insight into how the university funds clubs. This will be accomplished by tracking how many members each club has and how active the clubs are. From this, the team decides that the system must keep track of the clubs, their members, and their events. Using this information, the design team determines that the following tables need to be created:
• Clubs: this will track the club name, the club president, and a short description of the club.
• Students: student name, e-mail, and year of birth.
• Memberships: this table will correlate students with clubs, allowing us to have any given student join multiple clubs.
• Events: this table will track when the clubs meet and how many students showed up.
Now that the design team has determined which tables to create, they need to define the specific information that each table will hold. This requires identifying the fields that will be in each table. For example, Club Name would be one of the fields in the Clubs table. First Name and Last Name would be fields in the Students table. Finally, since this will be a relational database, every table should have a field in common with at least one other table (in other words: they should have a relationship with each other).
In order to properly create this relationship, a primary key must be selected for each table. This key is a unique identifier for each record in the table. For example, in the Students table, it might be possible to use students’ last name as a way to uniquely identify them. However, it is more than likely that some students will share a last name (like Rodriguez, Smith, or Lee), so a different field should be selected. A student’s e-mail address might be a good choice for a primary key, since e-mail addresses are unique. However, a primary key cannot change, so this would mean that if students changed their e-mail address we would have to remove them from the database and then re-insert them – not an attractive proposition. Our solution is to create a value for each student — a user ID — that will act as a primary key. We will also do this for each of the student clubs. This solution is quite common and is the reason you have so many user IDs!
You can see the final database design in the figure below:
Student Clubs database diagram
With this design, not only do we have a way to organize all of the information we need to meet the requirements, but we have also successfully related all the tables together. Here’s what the database tables might look like with some sample data. Note that the Memberships table has the sole purpose of allowing us to relate multiple students to multiple clubs.
Normalization
When designing a database, one important concept to understand is normalization. In simple terms, to normalize a database means to design it in a way that: 1) reduces duplication of data between tables and 2) gives the table as much flexibility as possible.
In the Student Clubs database design, the design team worked to achieve these objectives. For example, to track memberships, a simple solution might have been to create a Members field in the Clubs table and then just list the names of all of the members there. However, this design would mean that if a student joined two clubs, then his or her information would have to be entered a second time. Instead, the designers solved this problem by using two tables: Students and Memberships.
In this design, when a student joins their first club, we first must add the student to the Students table, where their first name, last name, e-mail address, and birth year are entered. This addition to the Students table will generate a student ID. Now we will add a new entry to denote that the student is a member of a specific club. This is accomplished by adding a record with the student ID and the club ID in the Memberships table. If this student joins a second club, we do not have to duplicate the entry of the student’s name, e-mail, and birth year; instead, we only need to make another entry in the Memberships table of the second club’s ID and the student’s ID.
The design of the Student Clubs database also makes it simple to change the design without major modifications to the existing structure. For example, if the design team were asked to add functionality to the system to track faculty advisors to the clubs, we could easily accomplish this by adding a Faculty Advisors table (similar to the Students table) and then adding a new field to the Clubs table to hold the Faculty Advisor ID.
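The normalized design described above can be sketched with Python's built-in sqlite3 module. This is a minimal illustration: table and field names follow the chapter's example, but the exact schema (column names, ID values) is an assumption, not the book's official design.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Each table gets a numeric primary key, as the chapter recommends.
cur.execute("""CREATE TABLE students (
    student_id INTEGER PRIMARY KEY,
    first_name TEXT, last_name TEXT, email TEXT, birth_year INTEGER)""")
cur.execute("""CREATE TABLE clubs (
    club_id INTEGER PRIMARY KEY,
    club_name TEXT,
    president_id INTEGER REFERENCES students(student_id))""")
# Memberships relates students to clubs without duplicating either.
cur.execute("""CREATE TABLE memberships (
    student_id INTEGER REFERENCES students(student_id),
    club_id INTEGER REFERENCES clubs(club_id))""")

cur.execute("INSERT INTO students VALUES (1, 'Ana', 'Lee', 'ana@example.edu', 1995)")
cur.execute("INSERT INTO clubs VALUES (10, 'Chess Club', 1)")
cur.execute("INSERT INTO clubs VALUES (11, 'Robotics Club', 1)")

# Joining a second club adds only one row to memberships --
# the student's name and e-mail are never entered twice.
cur.executemany("INSERT INTO memberships VALUES (?, ?)", [(1, 10), (1, 11)])

count = cur.execute("SELECT COUNT(*) FROM students").fetchone()[0]
print(count)  # 1 student row, even though Ana belongs to 2 clubs
```

Adding a Faculty Advisors table later would follow the same pattern: a new table with its own primary key, plus one new foreign-key field in the clubs table.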
Data Types
When defining the fields in a database table, we must give each field a data type. For example, the field Birth Year is a year, so it will be a number, while First Name will be text. Most modern databases allow for several different data types to be stored. Some of the more common data types are listed here:
• Text: for storing non-numeric data that is brief, generally under 256 characters. The database designer can identify the maximum length of the text.
• Number: for storing numbers. There are usually a few different number types that can be selected, depending on how large the largest number will be.
• Yes/No: a special form of the number data type that is (usually) one byte long, with a 0 for “No” or “False” and a 1 for “Yes” or “True”.
• Date/Time: a special form of the number data type that can be interpreted as a date and/or time.
• Currency: a special form of the number data type that formats all values with a currency indicator and two decimal places.
• Paragraph Text: this data type allows for text longer than 256 characters.
• Object: this data type allows for the storage of data that cannot be entered via keyboard, such as an image or a music file.
There are two important reasons that we must properly define the data type of a field. First, a data type tells the database what functions can be performed with the data. For example, if we wish to perform mathematical functions with one of the fields, we must be sure to tell the database that the field is a number data type. So if we have, say, a field storing birth year, we can subtract the number stored in that field from the current year to get age.
The second important reason to define data type is so that the proper amount of storage space is allocated for our data. For example, if the First Name field is defined as a text(50) data type, this means fifty characters are allocated for each first name we want to store. However, even if the first name is only five characters long, fifty characters (bytes) will be allocated. While this may not seem like a big deal, if our table ends up holding 50,000 names, we are allocating 50 * 50,000 = 2,500,000 bytes for storage of these values. It may be prudent to reduce the size of the field so we do not waste storage space.
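Both reasons for defining data types carefully can be illustrated with a few lines of Python (a back-of-the-envelope sketch; the values come from the chapter's examples):

```python
# Reason 1: the data type determines which functions work on a field.
# If birth year is stored as a number, age can be computed directly.
current_year = 2013
birth_year = 1965
age = current_year - birth_year
print(age)  # 48

# If the same value were stored as text, subtraction would fail
# until the text is explicitly converted to a number.
birth_year_text = "1965"
age_from_text = current_year - int(birth_year_text)

# Reason 2: the data type determines how much storage is allocated.
# A text(50) field across 50,000 records reserves 50 bytes per row,
# whether or not each name actually uses them.
bytes_allocated = 50 * 50_000
print(bytes_allocated)  # 2500000
```

Shrinking the field to the longest name actually expected is the kind of storage saving the chapter has in mind.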
Sidebar: The Difference between a Database and a Spreadsheet
When students are first introduced to the concept of databases, many quickly decide that a database is pretty much the same as a spreadsheet. After all, a spreadsheet stores data in an organized fashion, using rows and columns, and looks very similar to a database table. This misunderstanding extends beyond the classroom: spreadsheets are used as a substitute for databases in all types of situations every day, all over the world.
To be fair, for simple uses, a spreadsheet can substitute for a database quite well. If a simple listing of rows and columns (a single table) is all that is needed, then creating a database is probably overkill. In our Student Clubs example, if we only needed to track a listing of clubs, the number of members, and the contact information for the president, we could get away with a single spreadsheet. However, the need to include a listing of events and the names of members would be problematic if tracked with a spreadsheet.
When several types of data must be mixed together, or when the relationships between these types of data are complex, then a spreadsheet is not the best solution. A database allows data from several entities (such as students, clubs, memberships, and events) to all be related together into one whole. While a spreadsheet does allow you to define what kinds of values can be entered into its cells, a database provides more intuitive and powerful ways to define the types of data that go into each field, reducing possible errors and allowing for easier analysis.
Though not good for replacing databases, spreadsheets can be ideal tools for analyzing the data stored in a database. A spreadsheet package can be connected to a specific table or query in a database and used to create charts or perform analysis on that data.
Structured Query Language
Once you have a database designed and loaded with data, how will you do something useful with it? The primary way to work with a relational database is to use Structured Query Language, SQL (pronounced “sequel,” or simply stated as S-Q-L). Almost all applications that work with databases (such as database management systems, discussed below) make use of SQL as a way to analyze and manipulate relational data. As its name implies, SQL is a language that can be used to work with a relational database. From a simple request for data to a complex update operation, SQL is a mainstay of programmers and database administrators. To give you a taste of what SQL might look like, here are a couple of examples using our Student Clubs database.
• The following query will retrieve a list of the first and last names of the club presidents:
`SELECT "First Name", "Last Name" FROM "Students" JOIN "Clubs" ON "Students"."ID" = "Clubs"."President"`
• The following query will create a list of the number of students in each club, listing the club name and then the number of members:
`SELECT "Clubs"."Club Name", COUNT("Memberships"."Student ID") FROM "Clubs" LEFT JOIN "Memberships" ON "Clubs"."Club ID" = "Memberships"."Club ID" GROUP BY "Clubs"."Club Name"`
An in-depth description of how SQL works is beyond the scope of this introductory text, but these examples should give you an idea of the power of using SQL to manipulate relational data. Many database packages, such as Microsoft Access, allow you to visually create the query you want to construct and then generate the SQL query for you.
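The membership-count query above can be run end-to-end using Python's sqlite3 module. This is a self-contained sketch: the table and column names are simplified versions of the chapter's example, and the sample data is invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE clubs (club_id INTEGER PRIMARY KEY, club_name TEXT)")
cur.execute("CREATE TABLE memberships (student_id INTEGER, club_id INTEGER)")
cur.executemany("INSERT INTO clubs VALUES (?, ?)",
                [(1, "Chess Club"), (2, "Robotics Club")])
cur.executemany("INSERT INTO memberships VALUES (?, ?)",
                [(101, 1), (102, 1), (103, 1), (101, 2)])

# Club name plus member count; LEFT JOIN keeps clubs with zero members.
rows = cur.execute("""
    SELECT c.club_name, COUNT(m.student_id)
    FROM clubs c LEFT JOIN memberships m ON c.club_id = m.club_id
    GROUP BY c.club_name
    ORDER BY c.club_name
""").fetchall()
print(rows)  # [('Chess Club', 3), ('Robotics Club', 1)]
```

Note that student 101 appears in two clubs but is stored only once per membership row, which is the normalization payoff described earlier in the chapter.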
Other Types of Databases
The relational database model is the most used database model today. However, many other database models exist that provide different strengths than the relational model. The hierarchical database model, popular in the 1960s and 1970s, connected data together in a hierarchy, allowing for a parent/child relationship between data. The document-centric model allowed for a more unstructured data storage by placing data into “documents” that could then be manipulated.
Perhaps the most interesting new development is the concept of NoSQL (from the phrase “not only SQL”). NoSQL arose from the need to solve the problem of large-scale databases spread over several servers or even across the world. For a relational database to work properly, it is important that only one person be able to manipulate a piece of data at a time, a concept known as record-locking. But with today’s large-scale databases (think Google and Amazon), this is just not possible. A NoSQL database can work with data in a looser way, allowing for a more unstructured environment, communicating changes to the data over time to all the servers that are part of the database.
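To see what “looser” storage means in practice, consider the document model that many NoSQL databases use. The sketch below imitates a document store with plain Python dictionaries; it illustrates the idea only and is not the API of any particular NoSQL product:

```python
# A document store keeps free-form "documents" under a key; unlike rows in a
# relational table, two documents need not share the same fields.
store = {}

store["user:1"] = {"name": "Ada", "email": "ada@example.com"}
store["user:2"] = {"name": "Alan", "interests": ["chess", "codes"], "city": "London"}

# Adding a field to one document requires no schema change and does not
# affect any other document.
store["user:1"]["last_login"] = "2013-01-10"

print(sorted(store["user:2"]))  # ['city', 'interests', 'name']
print(store["user:1"]["last_login"])  # 2013-01-10
```

Because there is no fixed schema, each document can evolve independently, which is part of what makes this model easier to spread across many servers.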
Database Management Systems
Screen shot of the Open Office database management system
To the computer, a database looks like one or more files. In order for the data in the database to be read, changed, added, or removed, a software program must access it. Many software applications have this ability: iTunes can read its database to give you a listing of its songs (and play the songs); your mobile-phone software can interact with your list of contacts. But what about applications to create or manage a database? What software can you use to create a database, change a database’s structure, or simply do analysis? That is the purpose of a category of software applications called database management systems (DBMS).
DBMS packages generally provide an interface to view and change the design of the database, create queries, and develop reports. Most of these packages are designed to work with a specific type of database, but generally are compatible with a wide range of databases.
For example, Apache OpenOffice.org Base (see screen shot) can be used to create, modify, and analyze databases in open-database (ODB) format. Microsoft’s Access DBMS is used to work with databases in its own Microsoft Access Database format. Both Access and Base have the ability to read and write to other database formats as well.
Microsoft Access and OpenOffice Base are examples of personal database-management systems. These systems are primarily used to develop and analyze single-user databases. These databases are not meant to be shared across a network or the Internet, but are instead installed on a particular device and work with a single user at a time.
Enterprise Databases
A database that can only be used by a single user at a time is not going to meet the needs of most organizations. As computers have become networked and are now joined worldwide via the Internet, a class of database has emerged that can be accessed by two, ten, or even a million people. These databases are sometimes installed on a single computer to be accessed by a group of people at a single location. Other times, they are installed over several servers worldwide, meant to be accessed by millions. These relational enterprise database packages are built and supported by companies such as Oracle, Microsoft, and IBM. The open-source MySQL is also an enterprise database.
As stated earlier, the relational database model does not scale well. The term scale here refers to a database getting larger and larger, being distributed on a larger number of computers connected via a network. Some companies are looking to provide large-scale database solutions by moving away from the relational model to other, more flexible models. For example, Google now offers the App Engine Datastore, which is based on NoSQL. Developers can use the App Engine Datastore to develop applications that access data from anywhere in the world. Amazon.com offers several database services for enterprise use, including Amazon RDS, which is a relational database service, and Amazon DynamoDB, a NoSQL enterprise solution.
Big Data
A new buzzword that has been capturing the attention of businesses lately is big data. The term refers to such massively large data sets that conventional database tools do not have the processing power to analyze them. For example, Walmart must process over one million customer transactions every hour. Storing and analyzing that much data is beyond the power of traditional database-management tools. Understanding the best tools and techniques to manage and analyze these large data sets is a problem that governments and businesses alike are trying to solve.
Sidebar: What Is Metadata?
The term metadata can be understood as “data about data.” For example, when looking at one of the values of Year of Birth in the Students table, the data itself may be “1992”. The metadata about that value would be the field name Year of Birth, the time it was last updated, and the data type (integer). Another example of metadata could be for an MP3 music file, like the one shown in the image below; information such as the length of the song, the artist, the album, the file size, and even the album cover art, are classified as metadata. When a database is being designed, a “data dictionary” is created to hold the metadata, defining the fields and structure of the database.
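A data dictionary can be pictured as a simple mapping from each field to its metadata. The sketch below uses the chapter's Year of Birth example; the structure shown is illustrative rather than any standard format:

```python
# A minimal data dictionary: metadata describing each field of a table.
data_dictionary = {
    "Students": {
        "Year of Birth": {"type": "integer", "description": "Four-digit birth year"},
        "First Name": {"type": "text", "description": "Student's given name"},
    }
}

# Looking up the metadata for one field:
field = data_dictionary["Students"]["Year of Birth"]
print(field["type"])  # integer
```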
Data Warehouse
As organizations have begun to utilize databases as the centerpiece of their operations, the need to fully understand and leverage the data they are collecting has become more and more apparent. However, directly analyzing the data that is needed for day-to-day operations is not a good idea; we do not want to tax the operations of the company more than we need to. Further, organizations also want to analyze data in a historical sense: How does the data we have today compare with the same set of data this time last month, or last year? From these needs arose the concept of the data warehouse.
The concept of the data warehouse is simple: extract data from one or more of the organization’s databases and load it into the data warehouse (which is itself another database) for storage and analysis. However, the execution of this concept is not that simple. A data warehouse should be designed so that it meets the following criteria:
• It uses non-operational data. This means that the data warehouse is using a copy of data from the active databases that the company uses in its day-to-day operations, so the data warehouse must pull data from the existing databases on a regular, scheduled basis.
• The data is time-variant. This means that whenever data is loaded into the data warehouse, it receives a time stamp, which allows for comparisons between different time periods.
• The data is standardized. Because the data in a data warehouse usually comes from several different sources, it is possible that the data does not use the same definitions or units. For example, our Events table in our Student Clubs database lists the event dates using the mm/dd/yyyy format (e.g., 01/10/2013). A table in another database might use the format yy/mm/dd (e.g., 13/01/10) for dates. In order for the data warehouse to match up dates, a standard date format would have to be agreed upon and all data loaded into the data warehouse would have to be converted to use this standard format. This process is called extraction-transformation-load (ETL).
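The transformation step in ETL can be sketched for the date example above: values arriving in either source format are normalized to a single standard before loading. The ISO yyyy-mm-dd target format used here is an assumption; any agreed-upon standard would work:

```python
from datetime import datetime

def standardize_date(raw: str, source_format: str) -> str:
    """Transform a date string from a source system's format into ISO yyyy-mm-dd."""
    return datetime.strptime(raw, source_format).strftime("%Y-%m-%d")

# The same calendar date, arriving from two systems with different conventions:
print(standardize_date("01/10/2013", "%m/%d/%Y"))  # 2013-01-10 (from mm/dd/yyyy)
print(standardize_date("13/01/10", "%y/%m/%d"))    # 2013-01-10 (from yy/mm/dd)
```

Once both sources have been transformed this way, the warehouse can match up dates from either system.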
There are two primary schools of thought when designing a data warehouse: bottom-up and top-down. The bottom-up approach starts by creating small data warehouses, called data marts, to solve specific business problems. As these data marts are created, they can be combined into a larger data warehouse. The top-down approach suggests that we should start by creating an enterprise-wide data warehouse and then, as specific business needs are identified, create smaller data marts from the data warehouse.
Data warehouse process (top-down)
Benefits of Data Warehouses
Organizations find data warehouses quite beneficial for a number of reasons:
• The process of developing a data warehouse forces an organization to better understand the data that it is currently collecting and, equally important, what data is not being collected.
• A data warehouse provides a centralized view of all data being collected across the enterprise and provides a means for determining data that is inconsistent.
• Once all data is identified as consistent, an organization can generate one version of the truth. This is important when the company wants to report consistent statistics about itself, such as revenue or number of employees.
• By having a data warehouse, snapshots of data can be taken over time. This creates a historical record of data, which allows for an analysis of trends.
• A data warehouse provides tools to combine data, which can provide new information and analysis.
Data Mining
Data mining is the process of analyzing data to find previously unknown trends, patterns, and associations in order to make decisions. Generally, data mining is accomplished through automated means against extremely large data sets, such as a data warehouse. Some examples of data mining include:
• An analysis of sales from a large grocery chain might determine that milk is purchased more frequently the day after it rains in cities with a population of less than 50,000.
• A bank may find that loan applicants whose bank accounts show particular deposit and withdrawal patterns are not good credit risks.
• A baseball team may find that collegiate baseball players with specific statistics in hitting, pitching, and fielding make for more successful major league players.
In some cases, a data-mining project is begun with a hypothetical result in mind. For example, a grocery chain may already have some idea that buying patterns change after it rains and want to get a deeper understanding of exactly what is happening. In other cases, there are no presuppositions and a data-mining program is run against large data sets in order to find patterns and associations.
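A toy version of the “no presuppositions” approach is to count how often pairs of items occur together across many transactions, a greatly simplified form of association discovery. The basket data below is invented for illustration:

```python
from collections import Counter
from itertools import combinations

# Invented transaction data: each inner list is one shopping basket.
baskets = [
    ["milk", "bread", "eggs"],
    ["milk", "bread"],
    ["milk", "bread", "butter"],
    ["eggs", "butter"],
]

# Count how often each pair of items is bought together.
pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

print(pair_counts.most_common(1))  # [(('bread', 'milk'), 3)]
```

Run against millions of real transactions instead of four toy baskets, counts like these are the raw material for patterns such as the rainy-day milk example above.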
Privacy Concerns
The increasing power of data mining has caused concerns for many, especially in the area of privacy. In today’s digital world, it is becoming easier than ever to take data from disparate sources and combine them to do new forms of analysis. In fact, a whole industry has sprung up around this technology: data brokers. These firms combine publicly accessible data with information obtained from the government and other sources to create vast warehouses of data about people and companies that they can then sell. This subject will be covered in much more detail in chapter 12 – the chapter on the ethical concerns of information systems.
Business Intelligence and Business Analytics
With tools such as data warehousing and data mining at their disposal, businesses are learning how to use information to their advantage. The term business intelligence is used to describe the process that organizations use to take data they are collecting and analyze it in the hopes of obtaining a competitive advantage. Besides using data from their internal databases, firms often purchase information from data brokers to get a big-picture understanding of their industries. Business analytics is the term used to describe the use of internal company data to improve business processes and practices.
Knowledge Management
We end the chapter with a discussion on the concept of knowledge management (KM). All companies accumulate knowledge over the course of their existence. Some of this knowledge is written down or saved, but not in an organized fashion. Much of this knowledge is not written down; instead, it is stored inside the heads of its employees. Knowledge management is the process of formalizing the capture, indexing, and storing of the company’s knowledge in order to benefit from the experiences and insights that the company has captured during its existence.
Summary
In this chapter, we learned about the role that data and databases play in the context of information systems. Data is made up of small facts and information without context. If you give data context, then you have information. Knowledge is gained when information is consumed and used for decision making. A database is an organized collection of related information. Relational databases are the most widely used type of database, where data is structured into tables and all tables must be related to each other through unique identifiers. A database management system (DBMS) is a software application that is used to create and manage databases, and can take the form of a personal DBMS, used by one person, or an enterprise DBMS that can be used by multiple users. A data warehouse is a special form of database that takes data from other databases in an enterprise and organizes it for analysis. Data mining is the process of looking for patterns and relationships in large data sets. Many businesses use databases, data warehouses, and data-mining techniques in order to produce business intelligence and gain a competitive advantage.
Study Questions
1. What is the difference between data, information, and knowledge?
2. Explain in your own words how the data component relates to the hardware and software components of information systems.
3. What is the difference between quantitative data and qualitative data? In what situations could the number 42 be considered qualitative data?
4. What are the characteristics of a relational database?
5. When would using a personal DBMS make sense?
6. What is the difference between a spreadsheet and a database? List three differences between them.
7. Describe what the term normalization means.
8. Why is it important to define the data type of a field when designing a relational database?
9. Name a database you interact with frequently. What would some of the field names be?
10. What is metadata?
11. Name three advantages of using a data warehouse.
12. What is data mining?
Exercises
1. Review the design of the Student Clubs database earlier in this chapter. Reviewing the lists of data types given, what data types would you assign to each of the fields in each of the tables? What lengths would you assign to the text fields?
2. Download Apache OpenOffice.org and use the database tool to open the “Student Clubs.odb” file available here. Take some time to learn how to modify the database structure and then see if you can add the required items to support the tracking of faculty advisors, as described at the end of the Normalization section in the chapter. Here is a link to the Getting Started documentation.
3. Using Microsoft Access, download the database file of comprehensive baseball statistics from the website SeanLahman.com. (If you don’t have Microsoft Access, you can download an abridged version of the file here that is compatible with Apache Open Office). Review the structure of the tables included in the database. Come up with three different data-mining experiments you would like to try, and explain which fields in which tables would have to be analyzed.
4. Do some original research and find two examples of data mining. Summarize each example and then write about what the two examples have in common.
5. Conduct some independent research on the process of business intelligence. Using at least two scholarly or practitioner sources, write a two-page paper giving examples of how business intelligence is being used.
6. Conduct some independent research on the latest technologies being used for knowledge management. Using at least two scholarly or practitioner sources, write a two-page paper giving examples of software applications or new technologies being used in this field.
Learning Objectives
Upon successful completion of this chapter, you will be able to:
• understand the history and development of networking technologies;
• define the key terms associated with networking technologies;
• understand the importance of broadband technologies; and
• describe organizational networking.
Introduction
In the early days of computing, computers were seen as devices for making calculations, storing data, and automating business processes. However, as the devices evolved, it became apparent that many of the functions of telecommunications could be integrated into the computer. During the 1980s, many organizations began combining their once-separate telecommunications and information-systems departments into an information technology, or IT, department. This ability for computers to communicate with one another and, maybe more importantly, to facilitate communication between individuals and groups, has been an important factor in the growth of computing over the past several decades.
Computer networking really began in the 1960s with the birth of the Internet, as we’ll see below. However, while the Internet and web were evolving, corporate networking was also taking shape in the form of local area networks and client-server computing. In the 1990s, when the Internet came of age, Internet technologies began to pervade all areas of the organization. Now, with the Internet a global phenomenon, it would be unthinkable to have a computer that did not include communications capabilities. This chapter will review the different technologies that have been put in place to enable this communications revolution.
A Brief History of the Internet
In the Beginning: ARPANET
The story of the Internet, and networking in general, can be traced back to the late 1950s. The US was in the depths of the Cold War with the USSR, and each nation closely watched the other to determine which would gain a military or intelligence advantage. In 1957, the Soviets surprised the US with the launch of Sputnik, propelling us into the space age. In response to Sputnik, the US Government created the Advanced Research Projects Agency (ARPA), whose initial role was to ensure that the US was not surprised again. It was from ARPA, now called DARPA (Defense Advanced Research Projects Agency), that the Internet first sprang.
ARPA was the center of computing research in the 1960s, but there was just one problem: many of the computers could not talk to each other. In 1968, ARPA sent out a request for proposals for a communication technology that would allow different computers located around the country to be integrated together into one network. Twelve companies responded to the request, and a company named Bolt, Beranek, and Newman (BBN) won the contract. They began work right away and were able to complete the job just one year later: in September, 1969, the ARPANET was turned on. The first four nodes were at UCLA, Stanford, MIT, and the University of Utah.
The Internet and the World Wide Web
Over the next decade, the ARPANET grew and gained popularity. During this time, other networks also came into existence. Different organizations were connected to different networks. This led to a problem: the networks could not talk to each other. Each network used its own proprietary language, or protocol (see sidebar for the definition of protocol), to send information back and forth. This problem was solved by the invention of transmission control protocol/Internet protocol (TCP/IP). TCP/IP was designed to allow networks running on different protocols to have an intermediary protocol that would allow them to communicate. So as long as your network supported TCP/IP, you could communicate with all of the other networks running TCP/IP. TCP/IP quickly became the standard protocol and allowed networks to communicate with each other. It is from this breakthrough that we first got the term Internet, which simply means “an interconnected network of networks.”
Sidebar: An Internet Vocabulary Lesson
Networking communication is full of some very technical concepts based on some simple principles. Learn the terms below and you’ll be able to hold your own in a conversation about the Internet.
• Packet: The fundamental unit of data transmitted over the Internet. When a device intends to send a message to another device (for example, your PC sends a request to YouTube to open a video), it breaks the message down into smaller pieces, called packets. Each packet has the sender’s address, the destination address, a sequence number, and a piece of the overall message to be sent.
• Hub: A simple network device that connects other devices to the network and sends packets to all the devices connected to it.
• Bridge: A network device that connects two networks together and only allows packets through that are needed.
• Switch: A network device that connects multiple devices together and filters packets based on their destination within the connected devices.
• Router: A device that receives and analyzes packets and then routes them towards their destination. In some cases, a router will send a packet to another router; in other cases, it will send it directly to its destination.
• IP Address: Every device that communicates on the Internet, whether it be a personal computer, a tablet, a smartphone, or anything else, is assigned a unique identifying number called an IP (Internet Protocol) address. Historically, the IP-address standard used has been IPv4 (version 4), which has the format of four numbers between 0 and 255 separated by a period. For example, the domain Saylor.org has the IP address of 107.23.196.166. The IPv4 standard has a limit of 4,294,967,296 possible addresses. As the use of the Internet has proliferated, the number of IP addresses needed has grown to the point where the supply of IPv4 addresses will be exhausted. This has led to the new IPv6 standard, which is currently being phased in. The IPv6 standard is formatted as eight groups of four hexadecimal digits, such as 2001:0db8:85a3:0042:1000:8a2e:0370:7334. The IPv6 standard has a limit of 3.4×10³⁸ possible addresses. For more detail about the new IPv6 standard, see this Wikipedia article.
• Domain name: If you had to try to remember the IP address of every web server you wanted to access, the Internet would not be nearly as easy to use. A domain name is a human-friendly name for a device on the Internet. These names generally consist of a descriptive text followed by the top-level domain (TLD). For example, Wikipedia’s domain name is Wikipedia.org; Wikipedia describes the organization and .org is the top-level domain. In this case, the .org TLD is designed for nonprofit organizations. Other well-known TLDs include .com, .net, and .gov. For a complete list and description of domain names, see this Wikipedia article.
• DNS: DNS stands for “domain name system,” which acts as the directory on the Internet. When a request to access a device with a domain name is given, a DNS server is queried. It returns the IP address of the device requested, allowing for proper routing.
• Packet-switching: When a packet is sent from one device out over the Internet, it does not follow a straight path to its destination. Instead, it is passed from one router to another across the Internet until it reaches its destination. In fact, sometimes two packets from the same message will take different routes! Sometimes, packets will arrive at their destination out of order. When this happens, the receiving device restores them to their proper order. For more details on packet-switching, see this interactive web page.
• Protocol: In computer networking, a protocol is the set of rules that allow two (or more) devices to exchange information back and forth across the network.
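Several of the terms above (packet, IP address, packet-switching) can be tied together in a short sketch: a message is split into sequence-numbered packets that may arrive out of order and are reassembled at the destination. The addresses and packet size below are invented for illustration:

```python
import random

def to_packets(message: str, src: str, dst: str, size: int = 8):
    """Split a message into packets carrying addresses and a sequence number."""
    return [
        {"src": src, "dst": dst, "seq": i, "data": message[i * size:(i + 1) * size]}
        for i in range((len(message) + size - 1) // size)
    ]

def reassemble(packets):
    """The receiving device restores the pieces to their proper order."""
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = to_packets("Hello from the ARPANET!", "192.0.2.10", "198.51.100.7")
random.shuffle(packets)  # packets may take different routes and arrive out of order
print(reassemble(packets))  # Hello from the ARPANET!
```

Real packets carry considerably more header information (checksums, time-to-live, and so on), but the sequence-number-and-reassemble idea is the same.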
Worldwide Internet use over a 24-hour period. (Public Domain. Courtesy of the Internet Census 2012 project.)
As we moved into the 1980s, computers were added to the Internet at an increasing rate. These computers were primarily from government, academic, and research organizations. Much to the surprise of the engineers, the early popularity of the Internet was driven by the use of electronic mail (see sidebar below).
Using the Internet in these early days was not easy. In order to access information on another server, you had to know how to type in the commands necessary to access it, as well as know the name of that device. That all changed in 1990, when Tim Berners-Lee introduced his World Wide Web project, which provided an easy way to navigate the Internet through the use of linked text (hypertext). The World Wide Web gained even more steam with the release of the Mosaic browser in 1993, which allowed graphics and text to be combined together as a way to present information and navigate the Internet. The Mosaic browser took off in popularity and was soon superseded by Netscape Navigator, the first commercial web browser, in 1994. The Internet and the World Wide Web were now poised for growth. The chart below shows the growth in users from the early days until now.
Growth of internet usage, 1995–2012. Data taken from InternetWorldStats.com.
The Dot-Com Bubble
In the 1980s and early 1990s, the Internet was being managed by the National Science Foundation (NSF). The NSF had restricted commercial ventures on the Internet, which meant that no one could buy or sell anything online. In 1991, the NSF transferred its role to three other organizations, thus getting the US government out of direct control over the Internet and essentially opening up commerce online.
This new commercialization of the Internet led to what is now known as the dot-com bubble. A frenzy of investment in new dot-com companies took place in the late 1990s, running up the stock market to new highs on a daily basis. This investment bubble was driven by the fact that investors knew that online commerce would change everything. Unfortunately, many of these new companies had poor business models and ended up with little to show for all of the funds that were invested in them. In 2000 and 2001, the bubble burst and many of these new companies went out of business. Many companies also survived, including the still-thriving Amazon (started in 1994) and eBay (1995). After the dot-com bubble burst, a new reality became clear: in order to succeed online, e-business companies would need to develop real business models and show that they could survive financially using this new technology.
Web 2.0
In the first few years of the World Wide Web, creating and putting up a website required a specific set of knowledge: you had to know how to set up a server on the World Wide Web, how to get a domain name, how to write web pages in HTML, and how to troubleshoot various technical issues as they came up. Someone who did these jobs for a website became known as a webmaster.
As the web gained in popularity, it became more and more apparent that those who did not have the skills to be a webmaster still wanted to create online content and have their own piece of the web. This need was met with new technologies that provided a website framework for those who wanted to put content online. Blogger and Wikipedia are examples of these early Web 2.0 applications, which allowed anyone with something to say a place to go and say it, without the need for understanding HTML or web-server technology.
Starting in the early 2000s, Web 2.0 applications began a second bubble of optimism and investment. It seemed that everyone wanted their own blog or photo-sharing site. Here are some of the companies that came of age during this time: MySpace (2003), Photobucket (2003), Flickr (2004), Facebook (2004), WordPress (2005), Tumblr (2006), and Twitter (2006). The ultimate indication that Web 2.0 had taken hold was when Time magazine named “You” its “Person of the Year” in 2006.
Sidebar: E-mail Is the “Killer” App for the Internet
When the personal computer was created, it was a great little toy for technology hobbyists and armchair programmers. As soon as the spreadsheet was invented, however, businesses took notice, and the rest is history. The spreadsheet was the killer app for the personal computer: people bought PCs just so they could run spreadsheets.
The Internet was originally designed as a way for scientists and researchers to share information and computing power among themselves. However, as soon as electronic mail was invented, it began driving demand for the Internet. This wasn’t what the developers had in mind, but it turned out that people connecting to people was the killer app for the Internet.
We are seeing this again today with social networks, specifically Facebook. Many who weren’t convinced to have an online presence now feel left out without a Facebook account. The connections people make through Web 2.0 applications like Facebook, on their personal computers or smartphones, are driving growth yet again.
Sidebar: The Internet and the World Wide Web Are Not the Same Thing
Many times, the terms “Internet” and “World Wide Web,” or even just “the web,” are used interchangeably. But really, they are not the same thing at all! The Internet is an interconnected network of networks. Many services run across the Internet: electronic mail, voice and video, file transfers, and, yes, the World Wide Web.
The World Wide Web is simply one piece of the Internet. It is made up of web servers that have HTML pages that are being viewed on devices with web browsers. It is really that simple.
The Growth of Broadband
In the early days of the Internet, most access was done via a modem over an analog telephone line. A modem (short for “modulator-demodulator”) was connected to the incoming phone line and a computer in order to connect you to a network. Speeds were measured in bits per second (bps), with speeds growing from 1200 bps to 56,000 bps over the years. Connection to the Internet via these modems is called dial-up access. Dial-up was very inconvenient because it tied up the phone line. As the web became more and more interactive, dial-up also hindered usage, as users wanted to transfer more and more data. As a point of reference, downloading a typical 3.5 MB song would take roughly six and a half hours at 1200 bps and about 16 minutes at 28,800 bps.
A broadband connection is defined as one that has speeds of at least 256,000 bps, though most connections today are much faster, measured in millions of bits per second (megabits per second, or Mbps) or even billions (gigabits). For the home user, a broadband connection is usually accomplished via the cable television lines or phone lines (DSL). Both cable and DSL have similar prices and speeds, though each individual may find that one is better than the other for their specific area. Speeds for cable and DSL can vary during different times of the day or week, depending upon how much data traffic is being used. In more remote areas, where cable and phone companies do not provide access, home Internet connections can be made via satellite. The average home broadband speed is anywhere between 3 Mbps and 30 Mbps. At 10 Mbps, downloading a typical 3.5 MB song takes about three seconds. For businesses who require more bandwidth and reliability, telecommunications companies can provide other options, such as T1 and T3 lines.
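Download times at any line speed come from one formula: transfer time equals the file size in bits divided by the line speed in bits per second. A sketch of the calculation, treating 3.5 MB as 28 million bits:

```python
def download_seconds(size_megabytes: float, speed_bps: float) -> float:
    """Seconds to transfer a file: size in bits divided by line speed in bps."""
    bits = size_megabytes * 8 * 1_000_000  # treating 1 MB as 10^6 bytes
    return bits / speed_bps

song_mb = 3.5
print(round(download_seconds(song_mb, 1200) / 3600, 1))  # 6.5 (hours at 1200 bps)
print(round(download_seconds(song_mb, 28_800) / 60))     # 16 (minutes at 28.8 kbps)
print(round(download_seconds(song_mb, 10_000_000), 1))   # 2.8 (seconds at 10 Mbps)
```

The same arithmetic explains why file sizes are quoted in bytes but line speeds in bits: the factor of eight between them is easy to overlook.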
Growth of broadband use (Source: Pew Internet and American Life Project Surveys)
Broadband access is important because it impacts how the Internet is used. When a community has access to broadband, its members can interact more online, and the overall use of digital tools increases. Access to broadband is now considered a basic human right by the United Nations, as declared in their 2011 statement:
“Broadband technologies are fundamentally transforming the way we live,” the Broadband Commission for Digital Development, set up last year by the UN Educational Scientific and Cultural Organization (UNESCO) and the UN International Telecommunications Union (ITU), said in issuing “The Broadband Challenge” at a leadership summit in Geneva.
“It is vital that no one be excluded from the new global knowledge societies we are building. We believe that communication is not just a human need – it is a right.”[1]
Wireless Networking
Today we are used to being able to access the Internet wherever we go. Our smartphones can access the Internet; Starbucks provides wireless “hotspots” for our laptops or iPads. These wireless technologies have made Internet access more convenient and have made devices such as tablets and laptops much more functional. Let’s examine a few of these wireless technologies.
Wi-Fi
Wi-Fi is a technology that takes an Internet signal and converts it into radio waves. These radio waves can be picked up within a radius of approximately 65 feet by devices with a wireless adapter. Several Wi-Fi specifications have been developed over the years, starting with 802.11b (1999), followed by the 802.11g specification in 2003 and 802.11n in 2009. Each new specification improved the speed and range of Wi-Fi, allowing for more uses. One of the primary places where Wi-Fi is being used is in the home. Home users are purchasing Wi-Fi routers, connecting them to their broadband connections, and then connecting multiple devices via Wi-Fi.
Mobile Network
As the cellphone has evolved into the smartphone, the desire for Internet access on these devices has led to data networks being included as part of the mobile phone network. While Internet connections were technically available earlier, it was really with the release of the 3G networks in 2001 (2002 in the US) that smartphones and other cellular devices could access data from the Internet. This new capability drove the market for new and more powerful smartphones, such as the iPhone, introduced in 2007. In 2011, wireless carriers began offering 4G data speeds, giving the cellular networks the same speeds that customers were used to getting via their home connection.
Sidebar: Why Doesn’t My Cellphone Work When I Travel Abroad?
As mobile phone technologies have evolved, providers in different countries have chosen different communication standards for their mobile phone networks. In the US, two competing standards exist: GSM (used by AT&T and T-Mobile) and CDMA (used by the other major carriers). Each standard has its pros and cons, but the bottom line is that phones using one standard cannot easily switch to the other. In the US, this is not a big deal because mobile networks exist to support both standards. But when you travel to other countries, you will find that most of them use GSM networks, with the one big exception being Japan, which has standardized on CDMA. It is possible for a mobile phone using one type of network to switch to the other type by switching out the SIM card, which controls your access to the mobile network. However, this will not work in all cases. If you are traveling abroad, it is always best to consult with your mobile provider to determine the best way to access a mobile network.
Bluetooth
While Bluetooth is not generally used to connect a device to the Internet, it is an important wireless technology that has enabled many functionalities that are used every day. When created in 1994 by Ericsson, it was intended to replace wired connections between devices. Today, it is the standard method for connecting nearby devices wirelessly. Bluetooth has a range of approximately 300 feet and consumes very little power, making it an excellent choice for a variety of purposes. Some applications of Bluetooth include: connecting a printer to a personal computer, connecting a mobile phone and headset, connecting a wireless keyboard and mouse to a computer, and connecting a remote for a presentation made on a personal computer.
VoIP
A typical VoIP communication. Image courtesy of BroadVoice.
A growing class of data being transferred over the Internet is voice data. A protocol called voice over IP, or VoIP, enables sounds to be converted to a digital format for transmission over the Internet and then re-created at the other end. By using many existing technologies and software, voice communication over the Internet is now available to anyone with a browser (think Skype, Google Hangouts). Beyond this, many companies are now offering VoIP-based telephone service for business and home use.
Organizational Networking
LAN and WAN
While the Internet was evolving and creating a way for organizations to connect to each other and the world, another revolution was taking place inside organizations. The proliferation of personal computers inside organizations led to the need to share resources such as printers, scanners, and data. Organizations solved this problem through the creation of local area networks (LANs), which allowed computers to connect to each other and to peripherals. These same networks also allowed personal computers to hook up to legacy mainframe computers.
Scope of business networks
A LAN is (by definition) a local network, usually operating in the same building or on the same campus. When an organization needed to provide a network over a wider area (with locations in different cities or states, for example), it would build a wide area network (WAN).
Client-Server
The personal computer originally was used as a stand-alone computing device. A program was installed on the computer and then used to do word processing or number crunching. However, with the advent of networking and local area networks, computers could work together to solve problems. Higher-end computers were installed as servers, and users on the local network could run applications and share information among departments and organizations. This is called client-server computing.
Intranet
Just as organizations set up web sites to provide global access to information about their business, they also set up internal web pages to provide information about the organization to the employees. This internal set of web pages is called an intranet. Web pages on the intranet are not accessible to those outside the company; in fact, those pages would come up as “not found” if an employee tried to access them from outside the company’s network.
Extranet
Sometimes an organization wants to be able to collaborate with its customers or suppliers while at the same time maintaining the security of being inside its own network. In cases like this a company may want to create an extranet, which is a part of the company’s network that can be made available securely to those outside of the company. Extranets can be used to allow customers to log in and check the status of their orders, or for suppliers to check their customers’ inventory levels.
Sometimes, an organization will need to allow someone who is not located physically within its internal network to gain access. This access can be provided by a virtual private network (VPN). VPNs will be discussed further in chapter 6 (on information security).
Sidebar: Microsoft’s SharePoint Powers the Intranet
As organizations begin to see the power of collaboration between their employees, they often look for solutions that will allow them to leverage their intranet to enable more collaboration. Since most companies use Microsoft products for much of their computing, it is only natural that they have looked to Microsoft to provide a solution. This solution is Microsoft’s SharePoint.
SharePoint provides a communication and collaboration platform that integrates seamlessly with Microsoft’s Office suite of applications. Using SharePoint, employees can share a document and edit it together – no more e-mailing that Word document to everyone for review. Projects and documents can be managed collaboratively across the organization. Corporate documents are indexed and made available for search. No more asking around for that procedures document – now you just search for it in SharePoint. For organizations looking to add a social networking component to their intranet, Microsoft offers Yammer, which can be used by itself or integrated into SharePoint.
Cloud Computing
We covered cloud computing in chapter 3, but it should also be mentioned here. The universal availability of the Internet, combined with increases in processing power and data-storage capacity, has made cloud computing a viable option for many companies. Using cloud computing, companies or individuals can contract to store data on storage devices somewhere on the Internet. Applications can be “rented” as needed, giving a company the ability to quickly deploy new applications. You can read about cloud computing in more detail in chapter 3.
Sidebar: Metcalfe’s Law
Just as Moore’s Law describes how computing power is increasing over time, Metcalfe’s Law describes the power of networking. Specifically, Metcalfe’s Law states that the value of a telecommunications network is proportional to the square of the number of connected users of the system. Think about it this way: If none of your friends were on Facebook, would you spend much time there? If no one else at your school or place of work had e-mail, would it be very useful to you? Metcalfe’s Law tries to quantify this value.
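In formula form, Metcalfe's Law says the value V of a network with n users is proportional to n squared: V = k·n², where k is an arbitrary proportionality constant. A quick sketch:

```python
def network_value(n: int, k: float = 1.0) -> float:
    """Metcalfe's Law: a network's value grows with the square of its
    number of connected users (k is an arbitrary constant)."""
    return k * n ** 2

# Doubling the user base quadruples the value:
print(network_value(200) / network_value(100))  # 4.0
```

This quadratic growth is why a social network or e-mail system with only a handful of users feels nearly worthless, while one with millions of users is hard to leave.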
Summary
The networking revolution has completely changed how the computer is used. Today, no one would imagine using a computer that was not connected to one or more networks. The development of the Internet and World Wide Web, combined with wireless access, has made information available at our fingertips. The Web 2.0 revolution has made us all authors of web content. As networking technology has matured, the use of Internet technologies has become a standard for every type of organization. The use of intranets and extranets has allowed organizations to deploy functionality to employees and business partners alike, increasing efficiencies and improving communications. Cloud computing has truly made information available everywhere and has serious implications for the role of the IT department.
Study Questions
1. What were the first four locations hooked up to the Internet (ARPANET)?
2. What does the term packet mean?
3. Which came first, the Internet or the World Wide Web?
4. What was revolutionary about Web 2.0?
5. What was the so-called killer app for the Internet?
6. What makes a connection a broadband connection?
7. What does the term VoIP mean?
8. What is a LAN?
9. What is the difference between an intranet and an extranet?
10. What is Metcalfe’s Law?
Exercises
1. What is the IP address of your computer? How did you find out? What is the IP address of google.com? How did you find out? Did you get IPv4 or IPv6 addresses?
2. What is the difference between the Internet and the World Wide Web? Create at least three statements that identify the differences between the two.
3. Who are the broadband providers in your area? What are the prices and speeds offered?
4. Pretend you are planning a trip to three foreign countries in the next month. Consult your wireless carrier to determine if your mobile phone would work properly in those countries. What would the costs be? What alternatives do you have if it would not work?
1. "UN sets goal of bringing broadband to half developing world’s people by 2015.", UN News Center website, http://www.un.org/apps/news/story.as...1#.Ut7JOmTTk1J
Learning Objectives
Upon successful completion of this chapter, you will be able to:
• identify the information security triad;
• identify and understand the high-level concepts surrounding information security tools; and
• secure yourself digitally.
Introduction
As computers and other digital devices have become essential to business and commerce, they have also increasingly become a target for attacks. In order for a company or an individual to use a computing device with confidence, they must first be assured that the device is not compromised in any way and that all communications will be secure. In this chapter, we will review the fundamental concepts of information systems security and discuss some of the measures that can be taken to mitigate security threats. We will begin with an overview focusing on how organizations can stay secure. Several different measures that a company can take to improve security will be discussed. We will then follow up by reviewing security precautions that individuals can take in order to secure their personal computing environment.
The security triad
The Information Security Triad: Confidentiality, Integrity, Availability (CIA)
Confidentiality
When protecting information, we want to be able to restrict access to those who are allowed to see it; everyone else should be disallowed from learning anything about its contents. This is the essence of confidentiality. For example, federal law requires that universities restrict access to private student information. The university must be sure that only those who are authorized have access to view the grade records.
Integrity
Integrity is the assurance that the information being accessed has not been altered and truly represents what is intended. Just as a person with integrity means what he or she says and can be trusted to consistently represent the truth, information integrity means information truly represents its intended meaning. Information can lose its integrity through malicious intent, such as when someone who is not authorized makes a change to intentionally misrepresent something. An example of this would be when a hacker is hired to go into the university’s system and change a grade.
Integrity can also be lost unintentionally, such as when a computer power surge corrupts a file or someone authorized to make a change accidentally deletes a file or enters incorrect information.
Availability
Information availability is the third part of the CIA triad. Availability means that information can be accessed and modified by anyone authorized to do so in an appropriate timeframe. Depending on the type of information, appropriate timeframe can mean different things. For example, a stock trader needs information to be available immediately, while a sales person may be happy to get sales numbers for the day in a report the next morning. Companies such as Amazon.com will require their servers to be available twenty-four hours a day, seven days a week. Other companies may not suffer if their web servers are down for a few minutes once in a while.
Tools for Information Security
In order to ensure the confidentiality, integrity, and availability of information, organizations can choose from a variety of tools. Each of these tools can be utilized as part of an overall information-security policy, which will be discussed in the next section.
Authentication
The most common way to identify someone is through their physical appearance, but how do we identify someone sitting behind a computer screen or at the ATM? Tools for authentication are used to ensure that the person accessing the information is, indeed, who they present themselves to be.
Authentication can be accomplished by identifying someone through one or more of three factors: something they know, something they have, or something they are. For example, the most common form of authentication today is the user ID and password. In this case, the authentication is done by confirming something that the user knows (their ID and password). But this form of authentication is easy to compromise (see sidebar) and stronger forms of authentication are sometimes needed. Identifying someone only by something they have, such as a key or a card, can also be problematic. When that identifying token is lost or stolen, the identity can be easily stolen. The final factor, something you are, is much harder to compromise. This factor identifies a user through the use of a physical characteristic, such as an eye-scan or fingerprint. Identifying someone through their physical characteristics is called biometrics.
A more secure way to authenticate a user is to do multi-factor authentication. By combining two or more of the factors listed above, it becomes much more difficult for someone to misrepresent themselves. An example of this would be the use of an RSA SecurID token. The RSA device is something you have, and will generate a new access code every sixty seconds. To log in to an information resource using the RSA device, you combine something you know, a four-digit PIN, with the code generated by the device. The only way to properly authenticate is by both knowing the code and having the RSA device.
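The idea behind a token like the RSA SecurID can be sketched with a time-based one-time code: the server and the token share a secret, and both derive the same short-lived code from it. The sketch below uses an HMAC-based scheme in the spirit of the TOTP standard (RFC 6238); real SecurID tokens use a different, proprietary algorithm, and the secret and interval here are made up for illustration.

```python
import hmac
import hashlib
import struct
import time

def one_time_code(secret, interval=60, digits=6, now=None):
    """Derive a short-lived numeric code from a shared secret and the
    current time window -- the 'something you have' factor."""
    window = int((time.time() if now is None else now) // interval)
    digest = hmac.new(secret, struct.pack(">Q", window), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

# Server and token share the secret, so within the same sixty-second
# window they derive the same code:
secret = b"shared-secret"
assert one_time_code(secret, now=1_000_000) == one_time_code(secret, now=1_000_010)
```

To log in, the user then combines this generated code (something they have) with a PIN (something they know), giving two independent factors.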
Access Control
Once a user has been authenticated, the next step is to ensure that they can only access the information resources that are appropriate. This is done through the use of access control. Access control determines which users are authorized to read, modify, add, and/or delete information. Several different access control models exist. Here we will discuss two: the access control list (ACL) and role-based access control (RBAC).
For each information resource that an organization wishes to manage, a list of users who have the ability to take specific actions can be created. This is an access control list, or ACL. For each user, specific capabilities are assigned, such as read, write, delete, or add. Only users with those capabilities are allowed to perform those functions. If a user is not on the list, they have no ability to even know that the information resource exists.
ACLs are simple to understand and maintain. However, they have several drawbacks. The primary drawback is that each information resource is managed separately, so if a security administrator wanted to add or remove a user across a large set of information resources, it would be quite difficult. And as the number of users and resources increases, ACLs become harder to maintain. This has led to an improved method of access control, called role-based access control, or RBAC. With RBAC, instead of giving specific users access rights to an information resource, users are assigned to roles, and those roles are then assigned the access. This allows the administrators to manage users and roles separately, simplifying administration and, by extension, improving security.
Comparison of ACL and RBAC
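The difference between the two models can be sketched in a few lines of Python (the resource, user, and role names are invented for illustration):

```python
# ACL: permissions are attached to each resource, user by user.
acl = {
    "grade_records": {"alice": {"read", "write"}, "bob": {"read"}},
    "payroll":       {"alice": {"read"}},
}

def acl_allows(user, resource, action):
    return action in acl.get(resource, {}).get(user, set())

# RBAC: users map to roles, and roles map to permissions --
# changing one role updates every user who holds it.
user_roles = {"alice": {"registrar"}, "bob": {"student"}}
role_perms = {
    "registrar": {("grade_records", "read"), ("grade_records", "write")},
    "student":   {("grade_records", "read")},
}

def rbac_allows(user, resource, action):
    return any((resource, action) in role_perms[r]
               for r in user_roles.get(user, set()))

print(acl_allows("bob", "grade_records", "write"))    # False
print(rbac_allows("alice", "grade_records", "write")) # True
```

Note the administrative difference: granting write access to every registrar under RBAC is a one-line change to `role_perms`, while under the ACL model it means editing the list attached to every affected resource.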
Encryption
Many times, an organization needs to transmit information over the Internet or transfer it on external media such as a CD or flash drive. In these cases, even with proper authentication and access control, it is possible for an unauthorized person to get access to the data. Encryption is a process of encoding data upon its transmission or storage so that only authorized individuals can read it. This encoding is accomplished by a computer program, which encodes the plain text that needs to be transmitted; then the recipient receives the cipher text and decodes it (decryption). In order for this to work, the sender and receiver need to agree on the method of encoding so that both parties can communicate properly. Both parties share the encryption key, enabling them to encode and decode each other’s messages. This is called symmetric key encryption. This type of encryption is problematic because the key is available in two different places.
An alternative to symmetric key encryption is public key encryption. In public key encryption, two keys are used: a public key and a private key. To send an encrypted message, you obtain the public key, encode the message, and send it. The recipient then uses the private key to decode it. The public key can be given to anyone who wishes to send the recipient a message. Each user simply needs one private key and one public key in order to secure messages. The private key is necessary in order to decrypt something sent with the public key.
Public key encryption
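The mathematics of public key encryption can be illustrated with textbook-sized RSA numbers. Real keys use primes hundreds of digits long, but the mechanics are the same (the values below are a standard teaching example, not anything secure):

```python
p, q = 61, 53                 # two (tiny) primes, kept secret
n = p * q                     # public modulus
phi = (p - 1) * (q - 1)
e = 17                        # public exponent -- the public key is (e, n)
d = pow(e, -1, phi)           # private exponent -- the private key

def encrypt(message: int) -> int:
    return pow(message, e, n)   # anyone with the public key can do this

def decrypt(cipher: int) -> int:
    return pow(cipher, d, n)    # only the private-key holder can undo it

m = 65
print(decrypt(encrypt(m)) == m)  # True -- the round trip recovers the message
```

The security of the scheme rests on the fact that recovering d from (e, n) requires factoring n into p and q, which is easy for 3233 but computationally infeasible for the enormous moduli used in practice.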
Sidebar: Password Security
So why is using just a simple user ID/password not considered a secure method of authentication? It turns out that this single-factor authentication is extremely easy to compromise. Good password policies must be put in place in order to ensure that passwords cannot be compromised. Below are some of the more common policies that organizations should put in place.
• Require complex passwords. One reason passwords are compromised is that they can be easily guessed. A recent study found that the top three passwords people used in 2012 were password, 123456 and 12345678.[1] A password should not be simple, or a word that can be found in a dictionary. One of the first things a hacker will do is try to crack a password by testing every term in the dictionary! Instead, a good password policy is one that requires the use of a minimum of eight characters, and at least one upper-case letter, one special character, and one number.
• Change passwords regularly. It is essential that users change their passwords on a regular basis. Users should change their passwords every sixty to ninety days, ensuring that any passwords that might have been stolen or guessed will not be able to be used against the company.
• Train employees not to give away passwords. One of the primary methods that is used to steal passwords is to simply figure them out by asking the users or administrators. Pretexting occurs when an attacker calls a helpdesk or security administrator and pretends to be a particular authorized user having trouble logging in. Then, by providing some personal information about the authorized user, the attacker convinces the security person to reset the password and tell him what it is. Another way that employees may be tricked into giving away passwords is through e-mail phishing. Phishing occurs when a user receives an e-mail that looks as if it is from a trusted source, such as their bank, or their employer. In the e-mail, the user is asked to click a link and log in to a website that mimics the genuine website and enter their ID and password, which are then captured by the attacker.
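The complexity rules above can be enforced mechanically. A minimal sketch of a password-policy check (a real system would also reject dictionary words and known-breached passwords, which a pattern check alone cannot do):

```python
import re

def meets_policy(password: str) -> bool:
    """Check the policy described above: at least eight characters,
    with at least one upper-case letter, one digit, and one special
    character."""
    return (len(password) >= 8
            and re.search(r"[A-Z]", password) is not None
            and re.search(r"\d", password) is not None
            and re.search(r"[^A-Za-z0-9]", password) is not None)

print(meets_policy("password"))     # False -- no upper case, digit, or symbol
print(meets_policy("Tr0ub4dor&3"))  # True
```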
Backups
Another essential tool for information security is a comprehensive backup plan for the entire organization. Not only should the data on the corporate servers be backed up, but individual computers used throughout the organization should also be backed up. A good backup plan should consist of several components.
• A full understanding of the organizational information resources. What information does the organization actually have? Where is it stored? Some data may be stored on the organization’s servers, other data on users’ hard drives, some in the cloud, and some on third-party sites. An organization should make a full inventory of all of the information that needs to be backed up and determine the best way to back it up.
• Regular backups of all data. The frequency of backups should be based on how important the data is to the company, combined with the ability of the company to replace any data that is lost. Critical data should be backed up daily, while less critical data could be backed up weekly.
• Offsite storage of backup data sets. If all of the backup data is being stored in the same facility as the original copies of the data, then a single event, such as an earthquake, fire, or tornado, would take out both the original data and the backup! It is essential that part of the backup plan is to store the data in an offsite location.
• Test of data restoration. On a regular basis, the backups should be put to the test by having some of the data restored. This will ensure that the process is working and will give the organization confidence in the backup plan.
Besides these considerations, organizations should also examine their operations to determine what effect downtime would have on their business. If their information technology were to be unavailable for any sustained period of time, how would it impact the business?
Additional concepts related to backup include the following:
• Uninterruptible Power Supply (UPS). A UPS is a device that provides battery backup to critical components of the system, allowing them to stay online longer and/or allowing the IT staff to shut them down using proper procedures in order to prevent the data loss that might occur from a power failure.
• Alternate, or “hot” sites. Some organizations choose to have an alternate site where an exact replica of their critical data is always kept up to date. When the primary site goes down, the alternate site is immediately brought online so that little or no downtime is experienced.
As information has become a strategic asset, a whole industry has sprung up around the technologies necessary for implementing a proper backup strategy. A company can contract with a service provider to back up all of their data or they can purchase large amounts of online storage space and do it themselves. Technologies such as storage area networks and archival systems are now used by most large businesses.
Firewalls
Another method that an organization should use to increase security on its network is a firewall. A firewall can exist as hardware or software (or both). A hardware firewall is a device that is connected to the network and filters the packets based on a set of rules. A software firewall runs on the operating system and intercepts packets as they arrive at a computer. A firewall protects all company servers and computers by stopping packets from outside the organization’s network that do not meet a strict set of criteria. A firewall may also be configured to restrict the flow of packets leaving the organization. This may be done to eliminate the possibility of employees watching YouTube videos or using Facebook from a company computer.
Network configuration with firewalls, IDS, and a DMZ.
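A packet filter's rule set can be sketched as an ordered list where the first matching rule wins and anything unmatched is dropped, a "default deny" posture (the ports and rules here are hypothetical):

```python
# A minimal packet-filter sketch: rules are checked in order and the
# first match wins; a dst_port of None matches any port.
rules = [
    {"action": "allow", "dst_port": 443},   # inbound HTTPS to web servers
    {"action": "allow", "dst_port": 25},    # inbound mail
    {"action": "deny",  "dst_port": None},  # everything else: default deny
]

def filter_packet(dst_port: int) -> str:
    for rule in rules:
        if rule["dst_port"] is None or rule["dst_port"] == dst_port:
            return rule["action"]
    return "deny"  # no rule matched at all -- still drop the packet

print(filter_packet(443))  # allow
print(filter_packet(23))   # deny -- e.g. Telnet is blocked
```

Real firewall rules also match on source and destination addresses, protocol, and connection state, but the ordered first-match-wins evaluation shown here is the core idea.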
Some organizations may choose to implement multiple firewalls as part of their network security configuration, creating one or more sections of their network that are partially secured. This segment of the network is referred to as a DMZ, borrowing the term demilitarized zone from the military, and it is where an organization may place resources that need broader access but still need to be secured.
Intrusion Detection Systems
Another device that can be placed on the network for security purposes is an intrusion detection system, or IDS. An IDS does not add any additional security; instead, it provides the functionality to identify if the network is being attacked. An IDS can be configured to watch for specific types of activities and then alert security personnel if that activity occurs. An IDS also can log various types of traffic on the network for analysis later. An IDS is an essential part of any good security setup.
Sidebar: Virtual Private Networks
Using firewalls and other security technologies, organizations can effectively protect many of their information resources by making them invisible to the outside world. But what if an employee working from home requires access to some of these resources? What if a consultant is hired who needs to do work on the internal corporate network from a remote location? In these cases, a virtual private network (VPN) is called for.
A VPN allows a user who is outside of a corporate network to take a detour around the firewall and access the internal network from the outside. Through a combination of software and security measures, this lets an organization allow limited access to its networks while at the same time ensuring overall security.
Physical Security
An organization can implement the best authentication scheme in the world, develop the best access control, and install firewalls and intrusion prevention, but its security cannot be complete without implementation of physical security. Physical security is the protection of the actual hardware and networking components that store and transmit information resources. To implement physical security, an organization must identify all of the vulnerable resources and take measures to ensure that these resources cannot be physically tampered with or stolen. These measures include the following.
• Locked doors: It may seem obvious, but all the security in the world is useless if an intruder can simply walk in and physically remove a computing device. High-value information assets should be secured in a location with limited access.
• Physical intrusion detection: High-value information assets should be monitored through the use of security cameras and other means to detect unauthorized access to the physical locations where they exist.
• Secured equipment: Devices should be locked down to prevent them from being stolen. One employee’s hard drive could contain all of your customer information, so it is essential that it be secured.
• Environmental monitoring: An organization’s servers and other high-value equipment should always be kept in a room that is monitored for temperature, humidity, and airflow. The risk of a server failure rises when these factors go out of a specified range.
• Employee training: One of the most common ways thieves steal corporate information is to steal employee laptops while employees are traveling. Employees should be trained to secure their equipment whenever they are away from the office.
Security Policies
Besides the technical controls listed above, organizations also need to implement security policies as a form of administrative control. In fact, these policies should really be a starting point in developing an overall security plan. A good information-security policy lays out the guidelines for employee use of the information resources of the company and provides the company recourse in the case that an employee violates a policy.
According to the SANS Institute, a good policy is “a formal, brief, and high-level statement or plan that embraces an organization’s general beliefs, goals, objectives, and acceptable procedures for a specified subject area.” Policies require compliance; failure to comply with a policy will result in disciplinary action. A policy does not lay out the specific technical details, instead it focuses on the desired results. A security policy should be based on the guiding principles of confidentiality, integrity, and availability.[2]
A good example of a security policy that many will be familiar with is a web use policy. A web use policy lays out the responsibilities of company employees as they use company resources to access the Internet. A good example of a web use policy is included in Harvard University’s “Computer Rules and Responsibilities” policy, which can be found here.
A security policy should also address any governmental or industry regulations that apply to the organization. For example, if the organization is a university, it must be aware of the Family Educational Rights and Privacy Act (FERPA), which restricts who has access to student information. Health care organizations are obligated to follow several regulations, such as the Health Insurance Portability and Accountability Act (HIPAA).
A good resource for learning more about security policies is the SANS Institute’s Information Security Policy Page.
Sidebar: Mobile Security
As the use of mobile devices such as smartphones and tablets proliferates, organizations must be ready to address the unique security concerns that the use of these devices bring. One of the first questions an organization must consider is whether to allow mobile devices in the workplace at all. Many employees already have these devices, so the question becomes: Should we allow employees to bring their own devices and use them as part of their employment activities? Or should we provide the devices to our employees? Creating a BYOD (“Bring Your Own Device”) policy allows employees to integrate themselves more fully into their job and can bring higher employee satisfaction and productivity. In many cases, it may be virtually impossible to prevent employees from having their own smartphones or iPads in the workplace. If the organization provides the devices to its employees, it gains more control over use of the devices, but it also exposes itself to the possibility of an administrative (and costly) mess.
Mobile devices can pose many unique security challenges to an organization. Probably one of the biggest concerns is theft of intellectual property. For an employee with malicious intent, it would be a very simple process to connect a mobile device either to a computer via the USB port, or wirelessly to the corporate network, and download confidential data. It would also be easy to secretly take a high-quality picture using a built-in camera.
When an employee does have permission to access and save company data on his or her device, a different security threat emerges: that device now becomes a target for thieves. Theft of mobile devices (in this case, including laptops) is one of the primary methods that data thieves use.
So what can be done to secure mobile devices? It starts with a good policy regarding their use. According to a 2013 SANS study, organizations should consider developing a mobile device policy that addresses the following issues: use of the camera, use of voice recording, application purchases, encryption at rest, Wi-Fi autoconnect settings, Bluetooth settings, VPN use, password settings, lost or stolen device reporting, and backup.[3]
Besides policies, there are several different tools that an organization can use to mitigate some of these risks. For example, if a device is stolen or lost, geolocation software can help the organization find it. In some cases, it may even make sense to install remote data-removal software, which will remove data from a device if it becomes a security risk.
Usability
When looking to secure information resources, organizations must balance the need for security with users’ need to effectively access and use these resources. If a system’s security measures make it difficult to use, then users will find ways around the security, which may make the system more vulnerable than it would have been without the security measures! Take, for example, password policies. If the organization requires an extremely long password with several special characters, an employee may resort to writing it down and putting it in a drawer since it will be impossible to memorize.
Personal Information Security
Poster from Stop. Think. Connect. (Copyright: Stop. Think. Connect. http://stopthinkconnect.org/resources)
We will end this chapter with a discussion of what measures each of us, as individual users, can take to secure our computing technologies. There is no way to have 100% security, but there are several simple steps we, as individuals, can take to make ourselves more secure.
• Keep your software up to date. Whenever a software vendor determines that a security flaw has been found in their software, they will release an update to the software that you can download to fix the problem. Turn on automatic updating on your computer to automate this process.
• Install antivirus software and keep it up to date. There are many good antivirus software packages on the market today, including free ones.
• Be smart about your connections. You should be aware of your surroundings. When connecting to a Wi-Fi network in a public place, be aware that you could be at risk of being spied on by others sharing that network. It is advisable not to access your financial or personal data while attached to a Wi-Fi hotspot. You should also be aware that connecting USB flash drives to your device could also put you at risk. Do not attach an unfamiliar flash drive to your device unless you can scan it first with your security software.
• Back up your data. Just as organizations need to back up their data, individuals need to as well. And the same rules apply: do it regularly and keep a copy of it in another location. One simple solution for this is to set up an account with an online backup service, such as Mozy or Carbonite, to automate your backups.
• Secure your accounts with two-factor authentication. Most e-mail and social media providers now have a two-factor authentication option. The way this works is simple: when you log in to your account from an unfamiliar computer for the first time, it sends you a text message with a code that you must enter to confirm that you are really you. This means that no one else can log in to your accounts without knowing your password and having your mobile phone with them.
• Make your passwords long, strong, and unique. For your personal passwords, you should follow the same rules that are recommended for organizations. Your passwords should be long (eight or more characters) and contain at least two of the following: upper-case letters, numbers, and special characters. You also should use different passwords for different accounts, so that if someone steals your password for one account, they still are locked out of your other accounts.
• Be suspicious of strange links and attachments. When you receive an e-mail, tweet, or Facebook post, be suspicious of any links or attachments included there. Do not click on the link directly if you are at all suspicious. Instead, if you want to access the website, find it yourself and navigate to it directly.
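To make the two-factor step above more concrete: authenticator apps generate the same kind of one-time codes locally using the HOTP/TOTP algorithms (RFC 4226 and RFC 6238), rather than receiving them by text message. Below is a minimal sketch using only Python's standard library; the shared secret shown is the RFC 4226 test key, not a real credential.

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    msg = struct.pack(">Q", counter)              # counter as 8-byte big-endian
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30) -> str:
    """Time-based variant (RFC 6238): counter = current 30-second interval."""
    return hotp(secret, int(time.time()) // step)

# Server and phone derive the same code from the shared secret, so the
# server can verify you hold the second factor.
print(hotp(b"12345678901234567890", counter=0))   # RFC 4226 test vector: 755224
```

Whether the code arrives by text message or is computed on the device, the security property is the same: knowing the password alone is not enough to log in.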
You can find more about these steps and many other ways to be secure with your computing by going to Stop. Think. Connect. This website is part of a campaign that was launched in October of 2010 by the STOP. THINK. CONNECT. Messaging Convention in partnership with the U.S. government, including the White House.
Summary
As computing and networking resources have become more and more an integral part of business, they have also become a target of criminals. Organizations must be vigilant with the way they protect their resources. The same holds true for us personally: as digital devices become more and more intertwined with our lives, it becomes crucial for us to understand how to protect ourselves.
Study Questions
1. Briefly define each of the three members of the information security triad.
2. What does the term authentication mean?
3. What is multi-factor authentication?
4. What is role-based access control?
5. What is the purpose of encryption?
6. What are two good examples of a complex password?
7. What is pretexting?
8. What are the components of a good backup plan?
9. What is a firewall?
10. What does the term physical security mean?
Exercises
1. Describe one method of multi-factor authentication that you have experienced and discuss the pros and cons of using multi-factor authentication.
2. What are some of the latest advances in encryption technologies? Conduct some independent research on encryption using scholarly or practitioner resources, then write a two- to three-page paper that describes at least two new advances in encryption technology.
3. What is the password policy at your place of employment or study? Do you have to change passwords every so often? What are the minimum requirements for a password?
4. When was the last time you backed up your data? What method did you use? In one to two pages, describe a method for backing up your data. Ask your instructor if you can get extra credit for backing up your data.
5. Find the information security policy at your place of employment or study. Is it a good policy? Does it meet the standards outlined in the chapter?
6. How are you doing on keeping your own information secure? Review the steps listed in the chapter and comment on how well you are doing.
1. "Born to be breached" by Sean Gallagher on Nov 3 2012. Arstechnica. Retrieved from http://arstechnica.com/information-t...e-most-common/ on May 15, 2013.
2. SANS Institute. "A Short Primer for Developing Security Policies." Accessed from http://www.sans.org/security-resourc...icy_Primer.pdf on May 31, 2013.
3. Taken from SANS Institute's Mobile Device Checklist. You can review the full checklist at www.sans.org/score/checklists/mobile-device-checklist.xls.
• 7: Does IT Matter?
For over fifty years, computing technology has been a part of business. Organizations have spent trillions of dollars on information technologies. But has all this investment in IT made a difference? Have we seen increases in productivity? Are companies that invest in IT more competitive? In this chapter, we will look at the value IT can bring to an organization and try to answer these questions. We will begin by highlighting two important works from the past two decades.
• 8: Business Processes
The fourth component of information systems is process. But what is a process and how does it tie into information systems? And in what ways do processes have a role in business? This chapter will look to answer those questions and also describe how business processes can be used for strategic advantage.
• 9: The People in Information Systems
In this chapter, we will be discussing the last component of an information system: people. People are involved in information systems in just about every way you can think of: people imagine information systems, people develop information systems, people support information systems, and, perhaps most importantly, people use information systems.
• 10: Information Systems Development
When someone has an idea for a new function to be performed by a computer, how does that idea become reality? If a company wants to implement a new business process and needs new hardware or software to support it, how do they go about making it happen? In this chapter, we will discuss the different methods of taking those ideas and bringing them to reality, a process known as information systems development.
Unit 2: Information Systems for Strategic Advantage
Learning Objectives
Upon successful completion of this chapter, you will be able to:
• define the productivity paradox and explain the current thinking on this topic;
• evaluate Carr’s argument in “Does IT Matter?”;
• describe the components of competitive advantage; and
• describe information systems that can provide businesses with competitive advantage.
Introduction
For over fifty years, computing technology has been a part of business. Organizations have spent trillions of dollars on information technologies. But has all this investment in IT made a difference? Have we seen increases in productivity? Are companies that invest in IT more competitive? In this chapter, we will look at the value IT can bring to an organization and try to answer these questions. We will begin by highlighting two important works from the past two decades.
The Productivity Paradox
In 1991, Erik Brynjolfsson wrote an article, published in the Communications of the ACM, entitled "The Productivity Paradox of Information Technology: Review and Assessment." By reviewing studies about the impact of IT investment on productivity, Brynjolfsson concluded that the addition of information technology to business had not improved productivity at all – the "productivity paradox." He does not draw any firm conclusions from this finding, instead providing the following analysis from the article[1]:
Although it is too early to conclude that IT’s productivity contribution has been subpar, a paradox remains in our inability to unequivocally document any contribution after so much effort. The various explanations that have been proposed can be grouped into four categories:
1. Mismeasurement of outputs and inputs,
2. Lags due to learning and adjustment,
3. Redistribution and dissipation of profits,
4. Mismanagement of information and technology.
In 1998, Brynjolfsson and Lorin Hitt published a follow-up paper entitled "Beyond the Productivity Paradox."[2] In this paper, the authors utilized new data that had been collected and found that IT did, indeed, provide a positive result for businesses. Further, they found that sometimes the true advantages in using technology were not directly relatable to higher productivity, but to "softer" measures, such as the impact on organizational structure. They also found that the impact of information technology can vary widely between companies.
IT Doesn’t Matter
Just as a consensus was forming about the value of IT, the Internet stock market bubble burst. Just two years later, in 2003, Harvard professor Nicholas Carr wrote his article "IT Doesn't Matter" in the Harvard Business Review. In this article, Carr asserts that as information technology has become more ubiquitous, it has also become less of a differentiator. In other words: because information technology is so readily available and the software used so easily copied, businesses cannot hope to implement these tools to provide any sort of competitive advantage. Carr goes on to suggest that since IT is essentially a commodity, it should be managed like one: low cost, low risk. Using the analogy of electricity, Carr argues that a firm should never be the first to try a new technology, thereby letting others take the risks. IT management should see themselves as a utility within the company and work to keep costs down. For IT, providing the best service with minimal downtime is the goal.
As you can imagine, this article caused quite an uproar, especially from IT companies. Many articles were written in defense of IT; many others in support of Carr. Carr released a book based on the article in 2004, entitled Does IT Matter?, and was later interviewed about the book on CNET.
Probably the best thing to come out of the article and subsequent book was that it opened up discussion on the place of IT in a business strategy, and exactly what role IT could play in competitive advantage. It is that question that we want to address in the rest of this chapter.
Competitive Advantage
What does it mean when a company has a competitive advantage? What are the factors that play into it? While there are entire courses and many different opinions on this topic, let's go with one of the most accepted definitions, developed by Michael Porter in his book Competitive Advantage: Creating and Sustaining Superior Performance. A company is said to have a competitive advantage over its rivals when it is able to sustain profits that exceed the average for the industry. According to Porter, there are two primary methods for obtaining competitive advantage: cost advantage and differentiation advantage. So the question becomes: how can information technology be a factor in one or both of these methods? In the sections below we will explore this question using two of Porter's analysis tools: the value chain and the five forces model. We will also use Porter's analysis in his 2001 article "Strategy and the Internet," which examines the impact of the Internet on business strategy and competitive advantage, to shed further light on the role of information technology in competitive advantage.
The Value Chain
In his book, Porter describes exactly how a company can create value (and therefore profit). Value is built through the value chain: a series of activities undertaken by the company to produce a product or service. Each step in the value chain contributes to the overall value of a product or service. While the value chain may not be a perfect model for every type of company, it does provide a way to analyze just how a company is producing value. The value chain is made up of two sets of activities: primary activities and support activities. We will briefly examine these activities and discuss how information technology can play a role in creating value by contributing to cost advantage or differentiation advantage, or both.
Porter’s value chain
The primary activities are the functions that directly impact the creation of a product or service. The goal of the primary activities is to add more value than they cost. The primary activities are:
• Inbound logistics: These are the functions performed to bring in raw materials and other needed inputs. Information technology can be used here to make these processes more efficient, such as with supply-chain management systems, which allow the suppliers to manage their own inventory.
• Operations: Any part of a business that is involved in converting the raw materials into the final products or services is part of operations. From manufacturing to business process management (covered in chapter 8), information technology can be used to provide more efficient processes and increase innovation through flows of information.
• Outbound logistics: These are the functions required to get the product out to the customer. As with inbound logistics, IT can be used here to improve processes, such as allowing for real-time inventory checks. IT can also be a delivery mechanism itself.
• Sales/Marketing: The functions that will entice buyers to purchase the products are part of sales and marketing. Information technology is used in almost all aspects of this activity. From online advertising to online surveys, IT can be used to innovate product design and reach customers like never before. The company website can be a sales channel itself.
• Service: The functions a business performs after the product has been purchased to maintain and enhance the product’s value are part of the service activity. Service can be enhanced via technology as well, including support services through websites and knowledge bases.
The support activities are the functions in an organization that support, and cut across, all of the primary activities. The support activities are:
• Firm infrastructure: This includes organizational functions such as finance, accounting, and quality control, all of which depend on information technology; the use of ERP systems (to be covered in chapter 9) is a good example of the impact that IT can have on these functions.
• Human resource management: This activity consists of recruiting, hiring, and other services needed to attract and retain employees. Using the Internet, HR departments can increase their reach when looking for candidates. There is also the possibility of allowing employees to use technology for a more flexible work environment.
• Technology development: Here we have the technological advances and innovations that support the primary activities. These advances are then integrated across the firm or within one of the primary activities to add value. Information technology would fall specifically under this activity.
• Procurement: The activities involved in acquiring the raw materials used in the creation of products and services are called procurement. Business-to-business e-commerce can be used to improve the acquisition of materials.
This analysis of the value chain provides some insight into how information technology can lead to competitive advantage. Let’s now look at another tool that Porter developed – the “five forces” model.
Porter’s Five Forces
Porter developed the “five forces” model as a framework for industry analysis. This model can be used to help understand just how competitive an industry is and to analyze its strengths and weaknesses. The model consists of five elements, each of which plays a role in determining the average profitability of an industry.
Porter’s five forces
In 2001, Porter wrote an article entitled ”Strategy and the Internet,” in which he takes this model and looks at how the Internet impacts the profitability of an industry. Below is a quick summary of each of the five forces and the impact of the Internet.
• Threat of substitute products or services: How easily can a product or service be replaced with something else? The more types of products or services there are that can meet a particular need, the less profitability there will be in an industry. For example, the advent of the mobile phone has replaced the need for pagers. The Internet has made people more aware of substitute products, driving down industry profits in those industries being substituted.
• Bargaining power of suppliers: When a company has several suppliers to choose from, it can demand a lower price. When a sole supplier exists, then the company is at the mercy of the supplier. For example, if only one company makes the controller chip for a car engine, that company can control the price, at least to some extent. The Internet has given companies access to more suppliers, driving down prices. On the other hand, suppliers now also have the ability to sell directly to customers.
• Bargaining power of customers: A company that is the sole provider of a unique product has the ability to control pricing. But the Internet has given customers many more options to choose from.
• Barriers to entry: The easier it is to enter an industry, the tougher it will be to make a profit in that industry. The Internet has an overall effect of making it easier to enter industries. It is also very easy to copy technology, so new innovations will not last that long.
• Rivalry among existing competitors: The more competitors there are in an industry, the bigger a factor price becomes. The advent of the Internet has increased competition by widening the geographic market and lowering the costs of doing business. For example, a manufacturer in Southern California may now have to compete against a manufacturer in the South, where wages are lower.
Porter’s five forces are used to analyze an industry to determine the average profitability of a company within that industry. Adding in Porter’s analysis of the Internet, we can see that the Internet (and by extension, information technology in general) has the effect of lowering overall profitability.[3] While the Internet has certainly produced many companies that are big winners, the overall winners have been the consumers, who have been given an ever-increasing market of products and services and lower prices.
Using Information Systems for Competitive Advantage
Now that we have an understanding of competitive advantage and some of the ways that IT may be used to help organizations gain it, we will turn our attention to some specific examples. A strategic information system is an information system that is designed specifically to implement an organizational strategy meant to provide a competitive advantage. These sorts of systems began popping up in the 1980s, as noted in a paper by Charles Wiseman entitled “Creating Competitive Weapons From Information Systems.”[4]
Specifically, a strategic information system is one that attempts to do one or more of the following:
• deliver a product or a service at a lower cost;
• deliver a product or service that is differentiated;
• help an organization focus on a specific market segment;
• enable innovation.
Following are some examples of information systems that fall into this category.
Business Process Management Systems
In their book, IT Doesn’t Matter – Business Processes Do, Howard Smith and Peter Fingar argue that it is the integration of information systems with business processes that leads to competitive advantage. They then go on to state that Carr’s article is dangerous because it gave CEOs and IT managers the green light to start cutting their technology budgets, putting their companies in peril. They go on to state that true competitive advantage can be found with information systems that support business processes. In chapter 8 we will focus on the use of business processes for competitive advantage.
Electronic Data Interchange
One of the ways that information systems have participated in competitive advantage is through integrating the supply chain electronically. This is primarily done through a process called electronic data interchange, or EDI. EDI can be thought of as the computer-to-computer exchange of business documents in a standard electronic format between business partners. By integrating suppliers and distributors via EDI, a company can vastly reduce the resources required to manage the relevant information. Instead of manually ordering supplies, the company can simply place an order via the computer and the next time the order process runs, it is ordered.
EDI example
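As a concrete illustration of the document exchange described above, a purchase order in the widely used ANSI X12 EDI format is a string of delimited segments. The sketch below builds a heavily simplified 850 (purchase order) transaction; the segment names (ST, BEG, PO1, CTT, SE) are real, but a production interchange adds ISA/GS envelope segments, many more fields, and strict validation, so treat this only as an illustration of the structure.

```python
# Sketch: composing a simplified EDI-style purchase order, loosely based on
# the ANSI X12 850 transaction set. Elements within a segment are separated
# by '*', segments are terminated by '~'. Field layouts are abbreviated.

def build_po(po_number: str, lines: list[tuple[str, int, float]]) -> str:
    """Return a simplified X12-like purchase order string."""
    segments = [
        ["ST", "850", "0001"],                   # transaction set header
        ["BEG", "00", "SA", po_number],          # beginning segment for the PO
    ]
    for item_id, qty, unit_price in lines:
        # PO1: baseline item data (quantity, unit, price, vendor part number)
        segments.append(["PO1", "", str(qty), "EA", f"{unit_price:.2f}",
                         "", "VP", item_id])
    segments.append(["CTT", str(len(lines))])    # transaction totals
    # SE trailer reports the segment count from ST through SE inclusive
    segments.append(["SE", str(len(segments) + 1), "0001"])
    return "~".join("*".join(seg) for seg in segments) + "~"

doc = build_po("PO12345", [("WIDGET-1", 10, 4.75)])
```

Because both trading partners agree on this standard layout in advance, the receiving company's system can parse the order and feed it directly into fulfillment with no manual re-keying.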
Collaborative Systems
As organizations began to implement networking technologies, information systems emerged that allowed employees to begin collaborating in different ways. These systems allowed users to brainstorm ideas together without the necessity of physical, face-to-face meetings. Utilizing tools such as discussion boards, document sharing, and video, these systems made it possible for ideas to be shared in new ways and the thought processes behind these ideas to be documented.
Broadly speaking, any software that allows multiple users to interact on a document or topic could be considered collaborative. Electronic mail, a shared Word document, social networks, and discussion boards would fall into this broad definition. However, many software tools have been created that are designed specifically for collaborative purposes. These tools offer a broad spectrum of collaborative functions. Here is just a short list of some collaborative tools available for businesses today:
• Google Drive. Google Drive offers a suite of office applications (such as a word processor, spreadsheet, drawing, presentation) that can be shared between individuals. Multiple users can edit the documents at the same time and threaded comments are available.
• Microsoft SharePoint. SharePoint integrates with Microsoft Office and allows for collaboration using tools most office workers are familiar with. SharePoint was covered in more detail in chapter 5.
• Cisco WebEx. WebEx is a business communications platform that combines video and audio communications and allows participants to interact with each other’s computer desktops. WebEx also provides a shared whiteboard and the capability for text-based chat to be going on during the sessions, along with many other features. Mobile editions of WebEx allow for full participation using smartphones and tablets.
• Atlassian Confluence. Confluence provides an all-in-one project-management application that allows users to collaborate on documents and communicate progress. The mobile edition of Confluence allows the project members to stay connected throughout the project.
• IBM Lotus Notes/Domino. One of the first true “groupware” collaboration tools, Lotus Notes (and its web-based cousin, Domino) provides a full suite of collaboration software, including integrated e-mail.
Decision Support Systems
A decision support system (DSS) is an information system built to help an organization make a specific decision or set of decisions. DSSs can exist at different levels of decision-making within the organization, from the CEO to the first-level managers. These systems are designed to take inputs regarding a known (or partially-known) decision-making process and provide the information necessary to make a decision. DSSs generally assist a management-level person in the decision-making process, though some can be designed to automate decision-making.
An organization has a wide variety of decisions to make, ranging from highly structured decisions to unstructured decisions. A structured decision is usually one that is made quite often, and one in which the decision is based directly on the inputs. With structured decisions, once you know the necessary information you also know the decision that needs to be made. For example, inventory reorder levels can be structured decisions: once our inventory of widgets gets below a specific threshold, automatically reorder ten more. Structured decisions are good candidates for automation, but we don’t necessarily build decision-support systems for them.
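The inventory reorder example above can be expressed as a rule so simple that no human judgment is needed, which is exactly what makes structured decisions good automation candidates. A minimal sketch, with threshold and quantity taken from the example in the text:

```python
# Sketch of a structured decision: the inputs fully determine the outcome.
# Threshold and reorder quantity are the illustrative values from the text.

REORDER_THRESHOLD = 10   # the "specific threshold"
REORDER_QUANTITY = 10    # "automatically reorder ten more"

def reorder_decision(on_hand: int) -> int:
    """Return how many widgets to order given current inventory on hand."""
    return REORDER_QUANTITY if on_hand < REORDER_THRESHOLD else 0
```

Note that once the rule is written down, the "decision" is complete; there is nothing left for a decision-maker to weigh, which is why we automate such decisions rather than build decision-support systems for them.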
An unstructured decision involves a lot of unknowns. Many times, unstructured decisions are decisions being made for the first time. An information system can support these types of decisions by providing the decision-maker(s) with information-gathering tools and collaborative capabilities. An example of an unstructured decision might be dealing with a labor issue or setting policy for a new technology.
Decision support systems work best when the decision-maker(s) are making semi-structured decisions. A semi-structured decision is one in which most of the factors needed for making the decision are known but human experience and other outside factors may still play a role. A good example of a semi-structured decision would be diagnosing a medical condition (see sidebar).
As with collaborative systems, DSSs can come in many different formats. A nicely designed spreadsheet that allows for input of specific variables and then calculates required outputs could be considered a DSS. Another DSS might be one that assists in determining which products a company should develop. Input into the system could include market research on the product, competitor information, and product development costs. The system would then analyze these inputs based on the specific rules and concepts programmed into it. Finally, the system would report its results, with recommendations and/or key indicators to be used in making a decision. A DSS can be looked at as a tool for competitive advantage in that it can give an organization a mechanism to make wise decisions about products and innovations.
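The product-selection DSS described above can be sketched in a few lines, much like the spreadsheet version: weight the known inputs and rank the candidates. The weights, criteria, and candidate data below are hypothetical placeholders; a real DSS would encode the organization's own rules, and the final call would still rest with a manager.

```python
# Sketch of a spreadsheet-style product-selection DSS: score each candidate
# from market research, competitor pressure, and development cost.
# Negative weights mean "more of this is worse." All values are hypothetical.

WEIGHTS = {"market_score": 0.5, "competition": -0.2, "dev_cost": -0.3}

def score(product: dict) -> float:
    """Weighted sum of the decision inputs for one candidate product."""
    return sum(WEIGHTS[key] * product[key] for key in WEIGHTS)

def recommend(products: dict[str, dict]) -> str:
    """Return the name of the highest-scoring candidate."""
    return max(products, key=lambda name: score(products[name]))

candidates = {
    "Product A": {"market_score": 8, "competition": 6, "dev_cost": 5},
    "Product B": {"market_score": 6, "competition": 2, "dev_cost": 3},
}

best = recommend(candidates)   # "Product B" (score 1.7 vs. 1.3 for A)
```

The system's output is a recommendation and key indicators, not a final verdict; the human decision-maker supplies the experience and outside context that make the decision semi-structured.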
Sidebar: Isabel – A Health Care DSS
As discussed in the text, DSSs are best applied to semi-structured decisions, in which most of the needed inputs are known but human experience and environmental factors also play a role. A good example that is in use today is Isabel, a health care DSS. The creators of Isabel explain how it works:
Isabel uses the information routinely captured during your workup, whether free text or structured data, and instantaneously provides a diagnosis checklist for review. The checklist contains a list of possible diagnoses with critical “Don’t Miss Diagnoses” flagged. When integrated into your EMR system Isabel can provide “one click” seamless diagnosis support with no additional data entry. [5]
Investing in IT for Competitive Advantage
In 2008, Brynjolfsson and McAfee published a study in the Harvard Business Review on the role of IT in competitive advantage, entitled “Investing in the IT That Makes a Competitive Difference.” Their study confirmed that IT can play a role in competitive advantage, if deployed wisely. In their study, they draw three conclusions[6]:
• First, the data show that IT has sharpened differences among companies instead of reducing them. This reflects the fact that while companies have always varied widely in their ability to select, adopt, and exploit innovations, technology has accelerated and amplified these differences.
• Second, good management matters: Highly qualified vendors, consultants, and IT departments might be necessary for the successful implementation of enterprise technologies themselves, but the real value comes from the process innovations that can now be delivered on those platforms. Fostering the right innovations and propagating them widely are both executive responsibilities – ones that can’t be delegated.
• Finally, the competitive shakeup brought on by IT is not nearly complete, even in the IT-intensive US economy. We expect to see these altered competitive dynamics in other countries, as well, as their IT investments grow.
Information systems can be used for competitive advantage, but they must be used strategically. Organizations must understand how they want to differentiate themselves and then use all the elements of information systems (hardware, software, data, people, and process) to accomplish that differentiation.
Summary
Information systems are integrated into all components of business today, but can they bring competitive advantage? Over the years, there have been many answers to this question. Early research could not draw any connections between IT and profitability, but later research has shown that the impact can be positive. IT is not a panacea; just purchasing and installing the latest technology will not, by itself, make a company more successful. Instead, the combination of the right technologies and good management, together, will give a company the best chance of a positive result.
Study Questions
1. What is the productivity paradox?
2. Summarize Carr’s argument in “Does IT Matter.”
3. How is the 2008 study by Brynjolfsson and McAfee different from previous studies? How is it the same?
4. What does it mean for a business to have a competitive advantage?
5. What are the primary activities and support activities of the value chain?
6. What has been the overall impact of the Internet on industry profitability? Who has been the true winner?
7. How does EDI work?
8. Give an example of a semi-structured decision and explain what inputs would be necessary to provide assistance in making the decision.
9. What does a collaborative information system do?
10. How can IT play a role in competitive advantage, according to the 2008 article by Brynjolfsson and McAfee?
Exercises
1. Do some independent research on Nicholas Carr (the author of “IT Doesn’t Matter”) and explain his current position on the ability of IT to provide competitive advantage.
2. Review the WebEx website. What features of WebEx would contribute to good collaboration? What makes WebEx a better collaboration tool than something like Skype or Google Hangouts?
3. Think of a semi-structured decision that you make in your daily life and build your own DSS using a spreadsheet that would help you make that decision.
1. Brynjolfsson, Erik. "The Productivity Paradox of Information Technology: Review and Assessment." Communications of the ACM, December 1993.
2. Brynjolfsson, Erik, and Lorin Hitt. "Beyond the Productivity Paradox." Communications of the ACM 41, no. 8 (August 1998): 49–55.
3. Porter, Michael. "Strategy and the Internet." Harvard Business Review 79, no. 3 (March 2001). http://hbswk.hbs.edu/item/2165.html
4. Wiseman, C., and I. C. MacMillan. "Creating Competitive Weapons from Information Systems." Journal of Business Strategy 5, no. 2 (1984): 42.
5. Taken from http://www.isabelhealthcare.com/home/ourmission. Accessed July 15, 2013.
6. McAfee, Andrew, and Erik Brynjolfsson. "Investing in the IT That Makes a Competitive Difference." Harvard Business Review, July–August 2008.
Learning Objectives
Upon successful completion of this chapter, you will be able to:
• define the term business process;
• identify the different systems needed to support business processes in an organization;
• explain the value of an enterprise resource planning (ERP) system;
• explain how business process management and business process reengineering work; and
• understand how information technology combined with business processes can bring an organization competitive advantage.
Introduction
The fourth component of information systems is process. But what is a process and how does it tie into information systems? And in what ways do processes have a role in business? This chapter will look to answer those questions and also describe how business processes can be used for strategic advantage.
What Is a Business Process?
We have all heard the term process before, but what exactly does it mean? A process is a series of tasks that are completed in order to accomplish a goal. A business process, therefore, is a process that is focused on achieving a goal for a business. If you have worked in a business setting, you have participated in a business process. Anything from a simple process for making a sandwich at Subway to building a space shuttle utilizes one or more business processes.
Processes are something that businesses go through every day in order to accomplish their mission. The better their processes, the more effective the business. Some businesses see their processes as a strategy for achieving competitive advantage. A process that achieves its goal in a unique way can set a company apart. A process that eliminates costs can allow a company to lower its prices (or retain more profit).
Documenting a Process
Every day, each of us will conduct many processes without even thinking about them: getting ready for work, using an ATM, reading our e-mail, etc. But as processes grow more complex, they need to be documented. For businesses, it is essential to do this, because it allows them to ensure control over how activities are undertaken in their organization. It also allows for standardization: McDonald’s has the same process for building a Big Mac in all of its restaurants.
The simplest way to document a process is to simply create a list. The list shows each step in the process; each step can be checked off upon completion. For example, a simple process, such as how to create an account on eBay, might look like this:
1. Go to ebay.com.
2. Click on “register.”
3. Enter your contact information in the “Tell us about you” box.
4. Choose your user ID and password.
5. Agree to User Agreement and Privacy Policy by clicking on “Submit.”
For processes that are not so straightforward, documenting the process as a checklist may not be sufficient. For example, here is the process for determining if an article for a term needs to be added to Wikipedia:
1. Search Wikipedia to determine if the term already exists.
2. If the term is found, then an article is already written, so you must think of another term. Go to 1.
3. If the term is not found, then look to see if there is a related term.
4. If there is a related term, then create a redirect.
5. If there is not a related term, then create a new article.
This procedure is relatively simple – in fact, it has the same number of steps as the previous example – but because it has some decision points, it is more difficult to follow as a simple list. In these cases, it may make more sense to use a diagram to document the process:
Process diagram for determining if a new term should be added to Wikipedia. (Public Domain)
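Because it branches at decision points, this process can also be expressed directly in code. The sketch below is a minimal, hypothetical rendering of the Wikipedia checklist above; the sets of existing and related terms stand in for Wikipedia's actual search.

```python
# A sketch of the decision process above. The comments map each branch
# back to the numbered steps in the checklist.

def decide_action(term, existing_terms, related_terms):
    """Return the action to take for a proposed article term."""
    if term in existing_terms:
        return "choose another term"   # step 2: an article already exists
    if term in related_terms:
        return "create a redirect"     # step 4: a related term exists
    return "create a new article"      # step 5: no match at all

# Example usage
existing = {"Information system"}
related = {"Info system"}
print(decide_action("Information system", existing, related))  # choose another term
print(decide_action("Business process", existing, related))    # create a new article
```

Writing the branches as `if`/`return` statements makes each decision point explicit in a way a flat checklist cannot.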
Managing Business Process Documentation
As organizations begin to document their processes, it becomes an administrative task to keep track of them. As processes change and improve, it is important to know which processes are the most recent. It is also important to manage the process so that it can be easily updated! The requirement to manage process documentation has been one of the driving forces behind the creation of the document management system. A document management system stores and tracks documents and supports the following functions:
• Versions and timestamps. The document management system will keep multiple versions of documents. The most recent version of a document is easy to identify and will be served up by default.
• Approvals and workflows. When a process needs to be changed, the system will manage both access to the documents for editing and the routing of the document for approvals.
• Communication. When a process changes, those who implement the process need to be made aware of the changes. A document management system will notify the appropriate people when a change to a document is approved.
Of course, document management systems are not only used for managing business process documentation. Many other types of documents are managed in these systems, such as legal documents or design documents.
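The versioning behavior described above can be sketched in a few lines. This is a hypothetical, minimal illustration only — a real document management system adds approvals, access control, and notifications — but it shows how earlier versions are kept while the most recent one is served by default.

```python
# A minimal sketch of document versioning with timestamps.

from datetime import datetime

class DocumentStore:
    def __init__(self):
        self._versions = {}   # document name -> list of (timestamp, content)

    def save(self, name, content):
        """Store a new version; earlier versions are kept, not overwritten."""
        self._versions.setdefault(name, []).append((datetime.now(), content))

    def latest(self, name):
        """Serve the most recent version by default."""
        return self._versions[name][-1][1]

    def history(self, name):
        """Return the timestamps of every stored version."""
        return [ts for ts, _ in self._versions[name]]

store = DocumentStore()
store.save("returns-policy", "Returns accepted within 30 days.")
store.save("returns-policy", "Returns accepted within 14 days.")
print(store.latest("returns-policy"))  # Returns accepted within 14 days.
```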
ERP Systems
An enterprise resource planning (ERP) system is a software application with a centralized database that can be used to run an entire company. Let’s take a closer look at the definition of each of these components:
• A software application: The system is a software application, which means that it has been developed with specific logic and rules behind it. It has to be installed and configured to work specifically for an individual organization.
• With a centralized database: All data in an ERP system is stored in a single, central database. This centralization is key to the success of an ERP – data entered in one part of the company can be immediately available to other parts of the company.
• That can be used to run an entire company: An ERP can be used to manage an entire organization’s operations. If they so wish, companies can purchase modules for an ERP that represent different functions within the organization, such as finance, manufacturing, and sales. Some companies choose to purchase many modules; others choose a subset of the modules.
An ERP system not only centralizes an organization’s data, but the processes it enforces are the processes the organization adopts. When an ERP vendor designs a module, it has to implement the rules for the associated business processes. A selling point of an ERP system is that it has best practices built right into it. In other words, when an organization implements an ERP, it also gets improved best practices as part of the deal!
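The "single, centralized database" idea can be sketched concretely. In the hypothetical example below, two ERP modules — sales and finance — read and write the same table, so data entered by one is immediately visible to the other. The table and column names are invented for illustration.

```python
# A sketch of two ERP modules sharing one central database.

import sqlite3

db = sqlite3.connect(":memory:")  # stand-in for the central ERP database
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, amount REAL)")

def sales_record_order(customer, amount):
    """Sales module: record a new order in the shared orders table."""
    db.execute("INSERT INTO orders (customer, amount) VALUES (?, ?)", (customer, amount))
    db.commit()

def finance_total_revenue():
    """Finance module: query the same table the sales module writes to."""
    (total,) = db.execute("SELECT COALESCE(SUM(amount), 0) FROM orders").fetchone()
    return total

sales_record_order("Acme Corp", 1200.00)
sales_record_order("Globex", 800.50)
print(finance_total_revenue())  # 2000.5
```

Because both modules operate on one database, there is no batch transfer or reconciliation step between them — which is exactly the selling point of centralization.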
An ERP system
For many organizations, the implementation of an ERP system is an excellent opportunity to improve their business practices and upgrade their software at the same time. But for others, an ERP brings them a challenge: Is the process embedded in the ERP really better than the process they are currently utilizing? And if they implement this ERP, and it happens to be the same one that all of their competitors have, will they simply become more like them, making it much more difficult to differentiate themselves?
Registered trademark of SAP
This has been one of the criticisms of ERP systems: that they commoditize business processes, driving all businesses to use the same processes and thereby lose their uniqueness. The good news is that ERP systems also have the capability to be configured with custom processes. For organizations that want to continue using their own processes or even design new ones, ERP systems offer ways to support this through the use of customizations.
But there is a drawback to customizing an ERP system: organizations have to maintain the changes themselves. Whenever an update to the ERP system comes out, any organization that has created a custom process will be required to add that change to their ERP. This will require someone to maintain a listing of these changes and will also require retesting the system every time an upgrade is made. Organizations will have to wrestle with this decision: When should they go ahead and accept the best-practice processes built into the ERP system and when should they spend the resources to develop their own processes? It makes the most sense to only customize those processes that are critical to the competitive advantage of the company.
Some of the best-known ERP vendors are SAP, Microsoft, and Oracle.
Business Process Management
Organizations that are serious about improving their business processes will also create structures to manage those processes. Business process management (BPM) can be thought of as an intentional effort to plan, document, implement, and distribute an organization’s business processes with the support of information technology.
BPM is more than just automating some simple steps. While automation can make a business more efficient, it cannot be used to provide a competitive advantage. BPM, on the other hand, can be an integral part of creating that advantage.
Not all of an organization’s processes should be managed this way. An organization should look for processes that are essential to the functioning of the business and those that may be used to bring a competitive advantage. The best processes to look at are those that include employees from multiple departments, those that require decision-making that cannot be easily automated, and processes that change based on circumstances.
To make this clear, let’s take a look at an example.
Suppose a large clothing retailer is looking to gain a competitive advantage through superior customer service. As part of this, they create a task force to develop a state-of-the-art returns policy that allows customers to return any article of clothing, no questions asked. The organization also decides that, in order to protect the competitive advantage that this returns policy will bring, they will develop their own customization to their ERP system to implement this returns policy. As they prepare to roll out the system, they invest in training for all of their customer-service employees, showing them how to use the new system and specifically how to process returns. Once the updated returns process is implemented, the organization will be able to measure several key indicators about returns that will allow them to adjust the policy as needed. For example, if they find that many women are returning their high-end dresses after wearing them once, they could implement a change to the process that limits – to, say, fourteen days – the time after the original purchase that an item can be returned. As changes to the returns policy are made, the changes are rolled out via internal communications, and updates to the returns processing on the system are made. In our example, the system would no longer allow a dress to be returned after fourteen days without an approved reason.
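The fourteen-day rule in this example can be sketched as a small policy check. The function name and the approved-reason flag are hypothetical; in practice this logic would live inside the retailer's ERP customization.

```python
# A minimal sketch of the fourteen-day returns rule from the example.

from datetime import date, timedelta

RETURN_WINDOW_DAYS = 14

def return_allowed(purchase_date, return_date, approved_reason=False):
    """Allow a return within the window, or later only with an approved reason."""
    within_window = (return_date - purchase_date) <= timedelta(days=RETURN_WINDOW_DAYS)
    return within_window or approved_reason

bought = date(2024, 3, 1)
print(return_allowed(bought, date(2024, 3, 10)))                        # True
print(return_allowed(bought, date(2024, 3, 20)))                        # False
print(return_allowed(bought, date(2024, 3, 20), approved_reason=True))  # True
```

Note that because the window is a single named constant, a policy change (say, from fourteen days to thirty) is one edit that takes effect everywhere the check is used — the "instant enforcement across the chain" the text describes.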
If done properly, business process management will provide several key benefits to an organization, which can be used to contribute to competitive advantage. These benefits include:
• Empowering employees. When a business process is designed correctly and supported with information technology, employees will be able to implement it on their own authority. In our returns-policy example, an employee would be able to accept returns made before fourteen days or use the system to make determinations on what returns would be allowed after fourteen days.
• Built-in reporting. By building measurement into the programming, the organization can keep up to date on key metrics regarding their processes. In our example, these can be used to improve the returns process and also, ideally, to reduce returns.
• Enforcing best practices. As an organization implements processes supported by information systems, it can work to implement the best practices for that class of business process. In our example, the organization may want to require that all customers returning a product without a receipt show a legal ID. This requirement can be built into the system so that the return will not be processed unless a valid ID number is entered.
• Enforcing consistency. By creating a process and enforcing it with information technology, it is possible to create a consistency across the entire organization. In our example, all stores in the retail chain can enforce the same returns policy. And if the returns policy changes, the change can be instantly enforced across the entire chain.
Business Process Reengineering
As organizations look to manage their processes to gain a competitive advantage, they also need to understand that their existing ways of doing things may not be the most effective or efficient. A process developed in the 1950s is not going to be better just because it is now supported by technology.
In 1990, Michael Hammer published an article in the Harvard Business Review entitled “Reengineering Work: Don’t Automate, Obliterate.” This article put forward the thought that simply automating a bad process does not make it better. Instead, companies should “blow up” their existing processes and develop new processes that take advantage of the new technologies and concepts. He states in the introduction to the article:[1]
Many of our job designs, work flows, control mechanisms, and organizational structures came of age in a different competitive environment and before the advent of the computer. They are geared towards greater efficiency and control. Yet the watchwords of the new decade are innovation and speed, service, and quality.
It is time to stop paving the cow paths. Instead of embedding outdated processes in silicon and software, we should obliterate them and start over. We should “reengineer” our businesses: use the power of modern information technology to radically redesign our business processes in order to achieve dramatic improvements in their performance.
Business process reengineering is not just taking an existing process and automating it. BPR means fully understanding the goals of a process and then redesigning it from the ground up to achieve dramatic improvements in productivity and quality. But this is easier said than done. Most of us think in terms of how to make small, local improvements to a process; complete redesign requires thinking on a larger scale. Hammer provides some guidelines for how to go about doing business process reengineering:
• Organize around outcomes, not tasks. This simply means to design the process so that, if possible, one person performs all the steps. Instead of repeating one step in the process over and over, the person stays involved in the process from start to finish.
• Have those who use the outcomes of the process perform the process. Using information technology, many simple tasks are now automated, so we can empower the person who needs the outcome of the process to perform it. The example Hammer gives here is purchasing: instead of having every department in the company use a purchasing department to order supplies, have the supplies ordered directly by those who need the supplies using an information system.
• Subsume information-processing work into the real work that produces the information. When one part of the company creates information (like sales information, or payment information), it should be processed by that same department. There is no need for one part of the company to process information created in another part of the company.
• Treat geographically dispersed resources as though they were centralized. With the communications technologies in place today, it becomes easier than ever to not worry about physical location. A multinational organization does not need separate support departments (such as IT, purchasing, etc.) for each location anymore.
• Link parallel activities instead of integrating their results. Departments that work in parallel should be sharing data and communicating with each other during their activities instead of waiting until each group is done and then comparing notes.
• Put the decision points where the work is performed, and build controls into the process. The people who do the work should have decision-making authority and the process itself should have built-in controls using information technology.
• Capture information once, at the source. Requiring information to be entered more than once causes delays and errors. With information technology, an organization can capture it once and then make it available whenever needed.
These principles may seem like common sense today, but in 1990 they took the business world by storm. Hammer gives example after example of how organizations improved their business processes by many orders of magnitude without adding any new employees, simply by changing how they did things (see sidebar).
Unfortunately, business process reengineering got a bad name in many organizations. This was because it was used as an excuse for cost cutting that really had nothing to do with BPR. For example, many companies simply used it as an excuse for laying off part of their workforce. Today, however, many of the principles of BPR have been integrated into businesses and are considered part of good business-process management.
Sidebar: Reengineering the College Bookstore
The process of purchasing the correct textbooks in a timely manner for college classes has always been problematic. And now, with online bookstores such as Amazon competing directly with the college bookstore for students’ purchases, the college bookstore is under pressure to justify its existence.
But college bookstores have one big advantage over their competitors: they have access to students’ data. In other words, once a student has registered for classes, the bookstore knows exactly what books that student will need for the upcoming term. To leverage this advantage and take advantage of new technologies, the bookstore wants to implement a new process that will make purchasing books through the bookstore advantageous to students. Though they may not be able to compete on price, they can provide other advantages, such as reducing the time it takes to find the books and the ability to guarantee that the book is the correct one for the class. In order to do this, the bookstore will need to undertake a process redesign.
The goal of the process redesign is simple: capture a higher percentage of students as customers of the bookstore. After diagramming the existing process and meeting with student focus groups, the bookstore comes up with a new process. In the new process, the bookstore utilizes information technology to reduce the amount of work the students need to do in order to get their books. In this new process, the bookstore sends the students an e-mail with a list of all the books required for their upcoming classes. By clicking a link in this e-mail, the students can log into the bookstore, confirm their books, and purchase the books. The bookstore will then deliver the books to the students.
College bookstore process redesign
ISO Certification
Many organizations now claim that they are using best practices when it comes to business processes. In order to set themselves apart and prove to their customers (and potential customers) that they are indeed doing this, these organizations are seeking out an ISO 9000 certification. ISO stands for the International Organization for Standardization. This body defines quality standards that organizations can implement to show that they are, indeed, managing business processes in an effective way. The ISO 9000 certification is focused on quality management.
In order to receive ISO certification, an organization must be audited and found to meet specific criteria. In its most simple form, the auditors perform the following review:
• Tell me what you do (describe the business process).
• Show me where it says that (reference the process documentation).
• Prove that this is what happened (exhibit evidence in documented records).
Over the years, this certification has evolved, and many branches of the certification now exist. ISO certification is one way to separate an organization from others. More information about the ISO 9000 standard is available on the ISO website.
Summary
The advent of information technologies has had a huge impact on how organizations design, implement, and support business processes. From document management systems to ERP systems, information systems are tied into organizational processes. Using business process management, organizations can empower employees and leverage their processes for competitive advantage. Using business process reengineering, organizations can vastly improve their effectiveness and the quality of their products and services. Integrating information technology with business processes is one way that information systems can bring an organization lasting competitive advantage.
Study Questions
1. What does the term business process mean?
2. What are three examples of business processes from a job you have had or an organization you have observed?
3. What is the value in documenting a business process?
4. What is an ERP system? How does an ERP system enforce best practices for an organization?
5. What is one of the criticisms of ERP systems?
6. What is business process reengineering? How is it different from incrementally improving a process?
7. Why did BPR get a bad name?
8. List the guidelines for redesigning a business process.
9. What is business process management? What role does it play in allowing a company to differentiate itself?
10. What does ISO certification signify?
Exercises
1. Think of a business process that you have had to perform in the past. How would you document this process? Would a diagram make more sense than a checklist? Document the process both as a checklist and as a diagram.
2. Review the return policies at your favorite retailer, then answer this question: What information systems do you think would need to be in place to support their return policy?
3. If you were implementing an ERP system, in which cases would you be more inclined to modify the ERP to match your business processes? What are the drawbacks of doing this?
4. Which ERP is the best? Do some original research and compare three leading ERP systems to each other. Write a two- to three-page paper that compares their features.
1. Hammer, Michael. "Reengineering Work: Don't Automate, Obliterate." Harvard Business Review 68, no. 4 (1990): 104–112.
Learning Objectives
Upon successful completion of this chapter, you will be able to:
• describe each of the different roles that people play in the design, development, and use of information systems;
• understand the different career paths available to those who work with information systems;
• explain the importance of where the information-systems function is placed in an organization; and
• describe the different types of users of information systems.
Introduction
In the opening chapters of this text, we focused on the technology behind information systems: hardware, software, data, and networking. In the last chapter, we discussed business processes and the key role they can play in the success of a business. In this chapter, we will be discussing the last component of an information system: people.
People are involved in information systems in just about every way you can think of: people imagine information systems, people develop information systems, people support information systems, and, perhaps most importantly, people use information systems.
The Creators of Information Systems
The first group of people we are going to look at play a role in designing, developing, and building information systems. These people are generally very technical and have a background in programming and mathematics. Just about everyone who works in the creation of information systems has a minimum of a bachelor’s degree in computer science or information systems, though that is not necessarily a requirement. We will be looking at the process of creating information systems in more detail in chapter 10.
Systems Analyst
The role of the systems analyst is to straddle the divide between identifying business needs and imagining a new or redesigned computer-based system to fulfill those needs. This individual will work with a person, team, or department with business requirements and identify the specific details of a system that needs to be built. Generally, this will require the analyst to have a good understanding of the business itself, the business processes involved, and the ability to document them well. The analyst will identify the different stakeholders in the system and work to involve the appropriate individuals in the process.
Once the requirements are determined, the analyst will begin the process of translating these requirements into an information-systems design. A good analyst will understand what different technological solutions will work and provide several different alternatives to the requester, based on the company’s budgetary constraints, technology constraints, and culture. Once the solution is selected, the analyst will create a detailed document describing the new system. This new document will require that the analyst understand how to speak in the technical language of systems developers.
A systems analyst generally is not the one who does the actual development of the information system. The design document created by the systems analyst provides the detail needed to create the system and is handed off to a programmer (or team of programmers) to do the actual creation of the system. In some cases, however, a systems analyst may go ahead and create the system that he or she designed. This person is sometimes referred to as a programmer-analyst.
In other cases, the system may be assembled from off-the-shelf components by a person called a systems integrator. This is a specific type of systems analyst that understands how to get different software packages to work with each other.
To become a systems analyst, you should have a background both in the business and in systems design. Many analysts first worked as programmers and/or had experience in the business before becoming systems analysts.
Programmer
Programmers spend their time writing computer code in a programming language. In the case of systems development, programmers generally attempt to fulfill the design specifications given to them by a systems analyst. Many different styles of programming exist: a programmer may work alone for long stretches of time or may work in a team with other programmers. A programmer needs to be able to understand complex processes and also the intricacies of one or more programming languages. Generally, a programmer is very proficient in mathematics, as mathematical concepts underlie most programming code.
Computer Engineer
Computer engineers design the computing devices that we use every day. There are many types of computer engineers, who work on a variety of different types of devices and systems. Some of the more prominent engineering jobs are as follows:
• Hardware engineer. A hardware engineer designs hardware components, such as microprocessors. Many times, a hardware engineer is at the cutting edge of computing technology, creating something brand new. Other times, the hardware engineer’s job is to engineer an existing component to work faster or use less power. Many times, a hardware engineer’s job is to write code to create a program that will be implemented directly on a computer chip.
• Software engineer. Software engineers do not actually design devices; instead, they create new programming languages and operating systems, working at the lowest levels of the hardware to develop new kinds of software to run on the hardware.
• Systems engineer. A systems engineer takes the components designed by other engineers and makes them all work together. For example, to build a computer, the mother board, processor, memory, and hard disk all have to work together. A systems engineer has experience with many different types of hardware and software and knows how to integrate them to create new functionality.
• Network engineer. A network engineer’s job is to understand the networking requirements of an organization and then design a communications system to meet those needs, using the networking hardware and software available.
There are many different types of computer engineers, and often the job descriptions overlap. While many may call themselves engineers based on a company job title, there is also a professional designation of “professional engineer,” which has specific requirements behind it. In the US, each state has its own set of requirements for the use of this title, as do different countries around the world. Most often, it involves a professional licensing exam.
Information-Systems Operations and Administration
Another group of information-systems professionals are involved in the day-to-day operations and administration of IT. These people must keep the systems running and up-to-date so that the rest of the organization can make the most effective use of these resources.
Computer Operator
A computer operator is the person who keeps the large computers running. This person’s job is to oversee the mainframe computers and data centers in organizations. Some of their duties include keeping the operating systems up to date, ensuring available memory and disk storage, and overseeing the physical environment of the computer. Since mainframe computers increasingly have been replaced with servers, storage management systems, and other platforms, computer operators’ jobs have grown broader and include working with these specialized systems.
Database Administrator
A database administrator (DBA) is the person who manages the databases for an organization. This person creates and maintains databases that are used as part of applications or the data warehouse. The DBA also consults with systems analysts and programmers on projects that require access to or the creation of databases.
Help-Desk/Support Analyst
Most mid-size to large organizations have their own information-technology help desk. The help desk is the first line of support for computer users in the company. Computer users who are having problems or need information can contact the help desk for assistance. Many times, a help-desk worker is a junior-level employee who does not necessarily know how to answer all of the questions that come his or her way. In these cases, help-desk analysts work with senior-level support analysts or have a computer knowledge base at their disposal to help them investigate the problem at hand. The help desk is a great place to break into working in IT because it exposes you to all of the different technologies within the company. A successful help-desk analyst should have good people and communications skills, as well as at least junior-level IT skills.
Trainer
A computer trainer conducts classes to teach people specific computer skills. For example, if a new ERP system is being installed in an organization, one part of the implementation process is to teach all of the users how to use the new system. A trainer may work for a software company and be contracted to come in to conduct classes when needed; a trainer may work for a company that offers regular training sessions; or a trainer may be employed full time for an organization to handle all of their computer instruction needs. To be successful as a trainer, you need to be able to communicate technical concepts well and also have a lot of patience!
Managing Information Systems
The management of information-systems functions is critical to the success of information systems within the organization. Here are some of the jobs associated with the management of information systems.
CIO
The CIO, or chief information officer, is the head of the information-systems function. This person aligns the plans and operations of the information systems with the strategic goals of the organization. This includes tasks such as budgeting, strategic planning, and personnel decisions for the information-systems function. The CIO must also be the face of the IT department within the organization. This involves working with senior leaders in all parts of the organization to ensure good communication and planning.
Interestingly, the CIO position does not necessarily require a lot of technical expertise. While helpful, it is more important for this person to have good management skills and understand the business. Many organizations do not have someone with the title of CIO; instead, the head of the information-systems function is called vice president of information systems or director of information systems.
Functional Manager
As an information-systems organization becomes larger, many of the different functions are grouped together and led by a manager. These functional managers report to the CIO and manage the employees specific to their function. For example, in a large organization, there is a group of systems analysts who report to a manager of the systems-analysis function. For more insight into how this might look, see the discussion later in the chapter of how information systems are organized.
ERP Management
Organizations using an ERP require one or more individuals to manage these systems. These people make sure that the ERP system is completely up to date, work to implement any changes to the ERP that are needed, and consult with various user departments on needed reports or data extracts.
Project Managers
Information-systems projects are notorious for going over budget and being delivered late. In many cases, a failed IT project can spell doom for a company. A project manager is responsible for keeping projects on time and on budget. This person works with the stakeholders of the project to keep the team organized and communicates the status of the project to management. A project manager does not have authority over the project team; instead, the project manager coordinates schedules and resources in order to maximize the project outcomes. A project manager must be a good communicator and an extremely organized person. A project manager should also have good people skills. Many organizations require each of their project managers to become certified as a project management professional (PMP).
Information-Security Officer
An information-security officer is in charge of setting information-security policies for an organization, and then overseeing the implementation of those policies. This person may have one or more people reporting to them as part of the information-security team. As information has become a critical asset, this position has become highly valued. The information-security officer must ensure that the organization’s information remains secure from both internal and external threats.
Emerging Roles
As technology evolves, many new roles are becoming more common as other roles fade. For example, as we enter the age of “big data,” we are seeing the need for more data analysts and business-intelligence specialists. Many companies are now hiring social-media experts and mobile-technology specialists. The increased use of cloud computing and virtual-machine technologies also is breeding demand for expertise in those areas.
Career Paths in Information Systems
These job descriptions do not represent all possible jobs within an information-systems organization. Larger organizations will have more specialized roles; smaller organizations may combine some of these roles. Many of these roles may exist outside of a traditional information-systems organization, as we will discuss below.
Working with information systems can be a rewarding career choice. Whether you want to be involved in very technical jobs (programmer, database administrator), or you want to be involved in working with people (systems analyst, trainer), there are many different career paths available.
Many times, those in technical jobs who want career advancement find themselves in a dilemma: do they want to continue doing technical work, where sometimes their advancement options are limited, or do they want to become a manager of other employees and put themselves on a management career track? In many cases, those proficient in technical skills are not gifted with managerial skills. Some organizations, especially those that highly value their technically skilled employees, will create a technical track that exists in parallel to the management track so that they can retain employees who are contributing to the organization with their technical skills.
Sidebar: Are Certifications Worth Pursuing?
As technology is becoming more and more important to businesses, hiring employees with technical skills is becoming critical. But how can an organization ensure that the person they are hiring has the necessary skills? These days, many organizations are including technical certifications as a prerequisite for getting hired.
Certifications are designations given by a certifying body that someone has a specific level of knowledge in a specific technology. This certifying body is often the vendor of the product itself, though independent certifying organizations, such as CompTIA, also exist. Many of these organizations offer certification tracks, allowing a beginning certificate as a prerequisite to getting more advanced certificates. To get a certificate, you generally attend one or more training classes and then take one or more certification exams. Passing the exams with a certain score will qualify you for a certificate. In most cases, these classes and certificates are not free and, in fact, can run into the thousands of dollars. Some examples of the certifications in highest demand include Microsoft (software certifications), Cisco (networking), and SANS (security).
For many working in IT (or thinking about an IT career), determining whether to pursue one or more of these certifications is an important question. For many jobs, such as those involving networking or security, a certificate will be required by the employer as a way to determine which potential employees have a basic level of skill. For those who are already in an IT career, a more advanced certificate may lead to a promotion. There are other cases, however, when experience with a certain technology will negate the need for certification. For those wondering about the importance of certification, the best solution is to talk to potential employers and those already working in the field to determine the best choice.
Organizing the Information-Systems Function
In the early years of computing, the information-systems function (generally called data processing) was placed in the finance or accounting department of the organization. As computing became more important, a separate information-systems function was formed, but it still was generally placed under the CFO and considered to be an administrative function of the company. In the 1980s and 1990s, when companies began networking internally and then linking up to the Internet, the information-systems function was combined with the telecommunications functions and designated the information technology (IT) department. As the role of information technology continued to increase, its place in the organization also moved up the ladder. In many organizations today, the head of IT (the CIO) reports directly to the CEO.
Where in the Organization Should IS Be?
Before the advent of the personal computer, the information-systems function was centralized within organizations in order to maximize control over computing resources. When the PC began proliferating, many departments within organizations saw it as a chance to gain some computing resources for themselves. Some departments created an internal information-systems group, complete with systems analysts, programmers, and even database administrators. These departmental-IS groups were dedicated to the information needs of their own departments, providing quicker turnaround and higher levels of service than a centralized IT department. However, having several IS groups within an organization led to a lot of inefficiencies: there were now several people performing the same jobs in different departments. This decentralization also led to company data being stored in several places all over the company. In some organizations, a “matrix” reporting structure has developed, in which IT personnel are placed within a department and report to both the department management and the functional management within IS. The advantages of dedicated IS personnel for each department are weighed against the need for more control over the strategic information resources of the company.
For many companies, these questions are resolved by the implementation of the ERP system (see discussion of ERP in chapter 8). Because an ERP system consolidates most corporate data back into a single database, the implementation of an ERP system requires organizations to find “islands” of data so that they can integrate them back into the corporate system. The ERP allows organizations to regain control of their information and influences organizational decisions throughout the company.
Outsourcing
Many times, an organization needs a specific skill for a limited period of time. Instead of training an existing employee or hiring someone new, it may make more sense to outsource the job. Outsourcing can be used in many different situations within the information-systems function, such as the design and creation of a new website or the upgrade of an ERP system. Some organizations see outsourcing as a cost-cutting move, contracting out a whole group or department.
New Models of Organizations
The integration of information technology has influenced the structure of organizations. The increased ability to communicate and share information has led to a “flattening” of the organizational structure due to the removal of one or more layers of management.
Another organizational change enabled by information systems is the network-based organizational structure. In a networked-based organizational structure, groups of employees can work somewhat independently to accomplish a project. In a networked organization, people with the right skills are brought together for a project and then released to work on other projects when that project is over. These groups are somewhat informal and allow for all members of the group to maximize their effectiveness.
Information-Systems Users – Types of Users
Besides the people who work to create, administer, and manage information systems, there is one more extremely important group of people: the users of information systems. This group represents a very large percentage of the people involved. If the user is not able to successfully learn and use an information system, the system is doomed to failure.
Technology adoption user types (Public Domain)
One tool that can be used to understand how users will adopt a new technology comes from a 1962 study by Everett Rogers. In his book, Diffusion of Innovation,[1] Rogers studied how farmers adopted new technologies, and he noticed that the adoption rate started slowly and then dramatically increased once adoption hit a certain point. He identified five specific types of technology adopters:
• Innovators. Innovators are the first individuals to adopt a new technology. Innovators are willing to take risks, are the youngest in age, have the highest social class, have great financial liquidity, are very social, and have the closest contact with scientific sources and interaction with other innovators. Risk tolerance has them adopting technologies that may ultimately fail. Financial resources help absorb these failures (Rogers 1962 5th ed, p. 282).
• Early adopters. The early adopters are those who adopt innovation after a technology has been introduced and proven. These individuals have the highest degree of opinion leadership among the other adopter categories, which means that they can influence the opinions of the largest majority. They are typically younger in age, have higher social status, more financial liquidity, more advanced education, and are more socially aware than later adopters. These people are more discreet in their adoption choices than innovators, and realize that a judicious choice of adoption will help them maintain a central communication position (Rogers 1962 5th ed, p. 283).
• Early majority. Individuals in this category adopt an innovation after a varying degree of time. This time of adoption is significantly longer than the innovators and early adopters. This group tends to be slower in the adoption process, has above average social status, has contact with early adopters, and seldom holds positions of opinion leadership in a system (Rogers 1962 5th ed, p. 283).
• Late majority. The late majority will adopt an innovation after the average member of the society. These individuals approach an innovation with a high degree of skepticism, have below average social status, very little financial liquidity, are in contact with others in the late majority and the early majority, and show very little opinion leadership.
• Laggards. Individuals in this category are the last to adopt an innovation. Unlike those in the previous categories, individuals in this category show no opinion leadership. These individuals typically have an aversion to change-agents and tend to be advanced in age. Laggards typically tend to be focused on “traditions,” are likely to have the lowest social status and the lowest financial liquidity, be oldest of all other adopters, and be in contact with only family and close friends.
These five types of users can be translated into information-technology adopters as well, and provide additional insight into how to implement new information systems within an organization. For example, when rolling out a new system, IT may want to identify the innovators and early adopters within the organization and work with them first, then leverage their adoption to drive the rest of the implementation.
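Rogers also estimated the relative size of each adopter category in a population (roughly 2.5% innovators, 13.5% early adopters, 34% early majority, 34% late majority, and 16% laggards). The sketch below uses those classic figures to estimate how far a phased rollout has penetrated; the function name and structure are our own illustration, not part of Rogers' work:

```python
# Rogers' classic estimates of the share of a population in each
# adopter category, in adoption order.
ADOPTER_SHARES = {
    "innovators": 0.025,
    "early adopters": 0.135,
    "early majority": 0.34,
    "late majority": 0.34,
    "laggards": 0.16,
}

def rollout_wave(category):
    """Return the cumulative fraction of users expected to have adopted
    a system once the given category has been reached in a rollout."""
    total = 0.0
    for name, share in ADOPTER_SHARES.items():
        total += share
        if name == category:
            return round(total, 3)
    raise ValueError(f"unknown category: {category}")

# Winning over the innovators and early adopters covers only about
# 16% of users; reaching the early majority brings the total to half.
print(rollout_wave("early adopters"))   # 0.16
print(rollout_wave("early majority"))   # 0.5
```

This is one way to see why the text suggests targeting innovators and early adopters first: they are a small group, but their adoption is what pulls the much larger majorities along.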
Summary
In this chapter, we have reviewed the many different categories of individuals who make up the people component of information systems. The world of information technology is changing so fast that new roles are being created all the time, and roles that existed for decades are being phased out. That said, this chapter should have given you a good idea of the importance of the people component of information systems.
Study Questions
1. Describe the role of a systems analyst.
2. What are some of the different roles for a computer engineer?
3. What are the duties of a computer operator?
4. What does the CIO do?
5. Describe the job of a project manager.
6. Explain the point of having two different career paths in information systems.
7. What are the advantages and disadvantages of centralizing the IT function?
8. What impact has information technology had on the way companies are organized?
9. What are the five types of information-systems users?
10. Why would an organization outsource?
Exercises
1. Which IT job would you like to have? Do some original research and write a two-page paper describing the duties of the job you are interested in.
2. Spend a few minutes on Dice or Monster to find IT jobs in your area. What IT jobs are currently available? Write up a two-page paper describing three jobs, their starting salary (if listed), and the skills and education needed for the job.
3. How is the IT function organized in your school or place of employment? Create an organization chart showing how the IT organization fits into your overall organization. Comment on how centralized or decentralized the IT function is.
4. What type of IT user are you? Take a look at the five types of technology adopters and then write a one-page summary of where you think you fit in this model.
1. Rogers, E. M. (1962). Diffusion of innovations. New York: Free Press.
Learning Objectives
Upon successful completion of this chapter, you will be able to:
• explain the overall process of developing a new software application;
• explain the differences between software development methodologies;
• understand the different types of programming languages used to develop software;
• understand some of the issues surrounding the development of websites and mobile applications; and
• identify the four primary implementation policies.
Introduction
When someone has an idea for a new function to be performed by a computer, how does that idea become reality? If a company wants to implement a new business process and needs new hardware or software to support it, how do they go about making it happen? In this chapter, we will discuss the different methods of taking those ideas and bringing them to reality, a process known as information systems development.
Programming
As we learned in chapter 2, software is created via programming. Programming is the process of creating a set of logical instructions for a digital device to follow using a programming language. The process of programming is sometimes called “coding” because the syntax of a programming language is not in a form that everyone can understand – it is in “code.”
The process of developing good software is usually not as simple as sitting down and writing some code. True, sometimes a programmer can quickly write a short program to solve a need. But most of the time, the creation of software is a resource-intensive process that involves several different groups of people in an organization. In the following sections, we are going to review several different methodologies for software development.
Systems-Development Life Cycle
The first development methodology we are going to review is the systems-development life cycle (SDLC). This methodology was first developed in the 1960s to manage the large software projects associated with corporate systems running on mainframes. It is a very structured and risk-averse methodology designed to manage large projects that included multiple programmers and systems that would have a large impact on the organization.
SDLC waterfall
Various definitions of the SDLC methodology exist, but most contain the following phases.
1. Preliminary Analysis. In this phase, a review is done of the request. Is creating a solution possible? What alternatives exist? What is currently being done about it? Is this project a good fit for our organization? A key part of this step is a feasibility analysis, which includes an analysis of the technical feasibility (is it possible to create this?), the economic feasibility (can we afford to do this?), and the legal feasibility (are we allowed to do this?). This step is important in determining if the project should even get started.
2. System Analysis. In this phase, one or more system analysts work with different stakeholder groups to determine the specific requirements for the new system. No programming is done in this step. Instead, procedures are documented, key players are interviewed, and data requirements are developed in order to get an overall picture of exactly what the system is supposed to do. The result of this phase is a system-requirements document.
3. System Design. In this phase, a designer takes the system-requirements document created in the previous phase and develops the specific technical details required for the system. It is in this phase that the business requirements are translated into specific technical requirements. The design for the user interface, database, data inputs and outputs, and reporting are developed here. The result of this phase is a system-design document. This document will have everything a programmer will need to actually create the system.
4. Programming. The code finally gets written in the programming phase. Using the system-design document as a guide, a programmer (or team of programmers) develop the program. The result of this phase is an initial working program that meets the requirements laid out in the system-analysis phase and the design developed in the system-design phase.
5. Testing. In the testing phase, the software program developed in the previous phase is put through a series of structured tests. The first is a unit test, which tests individual parts of the code for errors or bugs. Next is a system test, where the different components of the system are tested to ensure that they work together properly. Finally, the user-acceptance test allows those that will be using the software to test the system to ensure that it meets their standards. Any bugs, errors, or problems found during testing are addressed and then tested again.
6. Implementation. Once the new system is developed and tested, it has to be implemented in the organization. This phase includes training the users, providing documentation, and conversion from any previous system to the new system. Implementation can take many forms, depending on the type of system, the number and type of users, and how urgent it is that the system become operational. These different forms of implementation are covered later in the chapter.
7. Maintenance. This final phase takes place once the implementation phase is complete. In this phase, the system has a structured support process in place: reported bugs are fixed and requests for new features are evaluated and implemented; system updates and backups are performed on a regular basis.
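To make the unit test from the testing phase concrete, here is a small sketch using Python's built-in unittest module. The function being tested and its test cases are invented for illustration; a real project would have many such tests, one small piece of code at a time:

```python
import unittest

def calculate_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestCalculateDiscount(unittest.TestCase):
    """A unit test exercises one small unit of code in isolation,
    checking both normal cases and error handling."""

    def test_typical_discount(self):
        self.assertEqual(calculate_discount(100.0, 25), 75.0)

    def test_no_discount(self):
        self.assertEqual(calculate_discount(59.99, 0), 59.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            calculate_discount(100.0, 150)
```

A test runner such as `python -m unittest` discovers and runs these test methods; any assertion that fails is reported as a bug to be fixed before moving on to system and user-acceptance testing.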
The SDLC methodology is sometimes referred to as the waterfall methodology to represent how each step is a separate part of the process; only when one step is completed can another step begin. After each step, an organization must decide whether to move to the next step or not. This methodology has been criticized for being quite rigid. For example, changes to the requirements are not allowed once the process has begun. No software is available until after the programming phase.
Again, SDLC was developed for large, structured projects. Projects using SDLC can sometimes take months or years to complete. Because of its inflexibility and the availability of new programming techniques and tools, many other software-development methodologies have been developed. Many of these retain some of the underlying concepts of SDLC but are not as rigid.
Rapid Application Development
Rapid application development (RAD) is a software-development (or systems-development) methodology that focuses on quickly building a working model of the software, getting feedback from users, and then using that feedback to update the working model. After several iterations of development, a final version is developed and implemented.
The RAD methodology (Public Domain)
The RAD methodology consists of four phases:
1. Requirements Planning. This phase is similar to the preliminary-analysis, system-analysis, and design phases of the SDLC. In this phase, the overall requirements for the system are defined, a team is identified, and feasibility is determined.
2. User Design. In this phase, representatives of the users work with the system analysts, designers, and programmers to interactively create the design of the system. One technique for working with all of these various stakeholders is the so-called JAD session. JAD is an acronym for joint application development. A JAD session gets all of the stakeholders together to have a structured discussion about the design of the system. Application developers also sit in on this meeting and observe, trying to understand the essence of the requirements.
3. Construction. In the construction phase, the application developers, working with the users, build the next version of the system. This is an interactive process, and changes can be made as developers are working on the program. This step is executed in parallel with the User Design step in an iterative fashion, until an acceptable version of the product is developed.
4. Cutover. In this step, which is similar to the implementation step of the SDLC, the system goes live. All steps required to move from the previous state to the use of the new system are completed here.
As you can see, the RAD methodology is much more compressed than SDLC. Many of the SDLC steps are combined and the focus is on user participation and iteration. This methodology is much better suited for smaller projects than SDLC and has the added advantage of giving users the ability to provide feedback throughout the process. SDLC requires more documentation and attention to detail and is well suited to large, resource-intensive projects. RAD makes more sense for smaller projects that are less resource-intensive and need to be developed quickly.
Agile Methodologies
Agile methodologies are a group of methodologies that utilize incremental changes with a focus on quality and attention to detail. Each increment is released in a specified period of time (called a time box), creating a regular release schedule with very specific objectives. While considered a separate methodology from RAD, they share some of the same principles: iterative development, user interaction, ability to change. The agile methodologies are based on the “Agile Manifesto,” first released in 2001.
The characteristics of agile methods include:
• small cross-functional teams that include development-team members and users;
• daily status meetings to discuss the current state of the project;
• short time-frame increments (from days to one or two weeks) for each change to be completed; and
• at the end of each iteration, a working project is completed to demonstrate to the stakeholders.
The goal of the agile methodologies is to provide the flexibility of an iterative approach while ensuring a quality product.
Lean Methodology
The lean methodology
One last methodology we will discuss is a relatively new concept taken from the business bestseller The Lean Startup, by Eric Ries. In this methodology, the focus is on taking an initial idea and developing a minimum viable product (MVP). The MVP is a working software application with just enough functionality to demonstrate the idea behind the project. Once the MVP is developed, it is given to potential users for review. Feedback on the MVP is generated in two forms: (1) direct observation and discussion with the users, and (2) usage statistics gathered from the software itself. Using these two forms of feedback, the team determines whether they should continue in the same direction or rethink the core idea behind the project, change the functions, and create a new MVP. This change in strategy is called a pivot. Several iterations of the MVP are developed, with new functions added each time based on the feedback, until a final product is completed.
The biggest difference between the lean methodology and the other methodologies is that the full set of requirements for the system are not known when the project is launched. As each iteration of the project is released, the statistics and feedback gathered are used to determine the requirements. The lean methodology works best in an entrepreneurial environment where a company is interested in determining if their idea for a software application is worth developing.
The Quality Triangle
When developing software, or any sort of product or service, there exists a tension between the developers and the different stakeholder groups, such as management, users, and investors. This tension relates to how quickly the software can be developed (time), how much money will be spent (cost), and how well it will be built (quality). The quality triangle is a simple concept. It states that for any product or service being developed, you can only address two of the following: time, cost, and quality.
The quality triangle
So what does it mean that you can only address two of the three? It means that you cannot complete a low-cost, high-quality project in a small amount of time. However, if you are willing or able to spend a lot of money, then a project can be completed quickly with high-quality results (through hiring more good programmers). If a project’s completion date is not a priority, then it can be completed at a lower cost with higher-quality results. Of course, these are just generalizations, and different projects may not fit this model perfectly. But overall, this model helps us understand the tradeoffs that we must make when we are developing new products and services.
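The "pick two" rule is simple enough to state in a few lines of code. This is a toy sketch of our own making, just to make the tradeoff concrete:

```python
def sacrificed_dimension(prioritized):
    """Given the two dimensions a project chooses to optimize
    (out of time, cost, and quality), return the one that suffers."""
    dimensions = {"time", "cost", "quality"}
    chosen = set(prioritized)
    if len(chosen) != 2 or not chosen <= dimensions:
        raise ValueError("choose exactly two of: time, cost, quality")
    return (dimensions - chosen).pop()

# A project that must ship quickly and cheaply gives up quality.
print(sacrificed_dimension(["time", "cost"]))  # quality
```

As the text notes, real projects rarely fit the model this cleanly, but naming the sacrificed dimension up front is a useful discipline when negotiating with stakeholders.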
Programming Languages
As I noted earlier, software developers create software using one of several programming languages. A programming language is an artificial language that provides a way for a programmer to create structured code to communicate logic in a format that can be executed by the computer hardware. Over the past few decades, many different types of programming languages have evolved to meet many different needs. One way to characterize programming languages is by their “generation.”
Generations of Programming Languages
Early languages were specific to the type of hardware that had to be programmed; each type of computer hardware had a different low-level programming language (in fact, even today there are differences at the lower level, though they are now obscured by higher-level programming languages). In these early languages, very specific instructions had to be entered line by line – a tedious process.
First-generation languages are called machine code. In machine code, programming is done by directly setting actual ones and zeroes (the bits) in the program using binary code. Here is an example program that adds 1234 and 4321 using machine language:
```10111001 00000000
11010010 10100001
00000100 00000000
10001001 00000000
00001110 10001011
00000000 00011110
00000000 00011110
00000000 00000010
10111001 00000000
11100001 00000011
00010000 11000011
10001001 10100011
00001110 00000100
00000010 00000000```
Assembly language is the second-generation language. Assembly language gives English-like phrases to the machine-code instructions, making it easier to program. An assembly-language program must be run through an assembler, which converts it into machine code. Here is an example program that adds 1234 and 4321 using assembly language:
```MOV CX,1234
MOV DS:[0],CX
MOV CX,4321
MOV AX,DS:[0]
MOV BX,DS:[2]
ADD AX,BX
MOV DS:[4],AX```
Third-generation languages are not specific to the type of hardware on which they run and are much more like spoken languages. Most third-generation languages must be compiled, a process that converts them into machine code. Well-known third-generation languages include BASIC, C, Pascal, and Java. Here is an example using BASIC:
```A=1234
B=4321
C=A+B
END```
Fourth-generation languages are a class of programming tools that enable fast application development using intuitive interfaces and environments. Many times, a fourth-generation language has a very specific purpose, such as database interaction or report-writing. These tools can be used by those with very little formal training in programming and allow for the quick development of applications and/or functionality. Examples of fourth-generation languages include: Clipper, FOCUS, FoxPro, SQL, and SPSS.
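To make the contrast concrete, here is a small sketch using SQL, one of the fourth-generation languages named above, run through Python's built-in sqlite3 module. The table and data are invented for illustration; the point is that one declarative statement replaces the explicit loop a procedural program would need.

```python
# SQL, a fourth-generation language, states WHAT data is wanted and
# leaves HOW to retrieve it to the database engine.
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary REAL)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [("Ada", "IT", 90000), ("Grace", "IT", 95000), ("Joan", "HR", 70000)],
)

# A single declarative query: average salary per department.
rows = conn.execute(
    "SELECT dept, AVG(salary) FROM employees GROUP BY dept ORDER BY dept"
).fetchall()
print(rows)  # [('HR', 70000.0), ('IT', 92500.0)]
```

Notice that the query never says how to scan rows or accumulate totals; the database engine decides that, which is exactly what makes such tools usable by people with little formal programming training.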
Why would anyone want to program in a lower-level language when they require so much more work? The answer is similar to why some prefer to drive stick-shift automobiles instead of automatic transmission: control and efficiency. Lower-level languages, such as assembly language, are much more efficient and execute much more quickly. You have finer control over the hardware as well. Sometimes, a combination of higher- and lower-level languages are mixed together to get the best of both worlds: the programmer will create the overall structure and interface using a higher-level language but will use lower-level languages for the parts of the program that are used many times or require more precision.
Figure: The programming language spectrum
Compiled vs. Interpreted
Besides classifying a program language based on its generation, it can also be classified by whether it is compiled or interpreted. As we have learned, a computer language is written in a human-readable form. In a compiled language, the program code is translated into a machine-readable form called an executable that can be run on the hardware. Some well-known compiled languages include C, C++, and COBOL.
An interpreted language is one that requires a runtime program to be installed in order to execute. This runtime program then interprets the program code line by line and runs it. Interpreted languages are generally easier to work with but also are slower and require more system resources. Examples of popular interpreted languages include BASIC, PHP, PERL, and Python. The web languages of HTML and JavaScript would also be considered interpreted because they require a browser in order to run.
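To see what "line by line" means, here is a toy interpreter, sketched in Python, that executes the small BASIC program shown earlier. It is drastically simplified — real interpreters parse far richer syntax — but it follows the same pattern: read a statement, execute it, move on.

```python
# A toy interpreter for the tiny BASIC-style program shown earlier.
# The runtime reads the source one line at a time and executes each
# assignment immediately, keeping variables in an environment table.

source = """A=1234
B=4321
C=A+B
END"""

env = {}  # variable name -> current value
for line in source.splitlines():
    line = line.strip()
    if line == "END":
        break
    name, expr = line.split("=", 1)
    # Evaluate the right-hand side using the values assigned so far.
    env[name] = eval(expr, {}, env)

print(env["C"])  # 5555
```

A compiler, by contrast, would translate the whole program into machine code before anything ran — which is why compiled programs generally start and execute faster.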
The Java programming language is an interesting exception to this classification, as it is actually a hybrid of the two. A program written in Java is partially compiled to create a program that can be understood by the Java Virtual Machine (JVM). Each type of operating system has its own JVM which must be installed, which is what allows Java programs to run on many different types of operating systems.
Procedural vs. Object-Oriented
A procedural programming language is designed to allow a programmer to define a specific starting point for the program, from which execution proceeds sequentially. All early programming languages worked this way. As user interfaces became more interactive and graphical, it made sense for programming languages to evolve to allow the user to define the flow of the program. The object-oriented programming language is set up so that the programmer defines “objects” that can take certain actions based on input from the user. In other words, a procedural program focuses on the sequence of activities to be performed; an object-oriented program focuses on the different items being manipulated.
For example, in a human-resources system, an “EMPLOYEE” object would be needed. If the program needed to retrieve or set data regarding an employee, it would first create an employee object in the program and then set or retrieve the values needed. Every object has properties, which are descriptive fields associated with the object. In the example below, an employee object has the properties “Name”, “Employee number”, “Birthdate” and “Date of hire”. An object also has “methods”, which can take actions related to the object. In the example, there are two methods. The first is “ComputePay()”, which will return the current amount owed the employee. The second is “ListEmployees()”, which will retrieve a list of employees who report to this employee.
Object: EMPLOYEE
Name
Employee number
Birthdate
Date of hire
ComputePay()
ListEmployees()
Figure: An example of an object
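The EMPLOYEE object in the figure can be sketched as a Python class. The figure only names the properties and methods; the hourly-rate property, the pay rule, and the reporting list below are invented placeholders added so the example runs.

```python
# A sketch of the EMPLOYEE object from the figure as a Python class.
from datetime import date

class Employee:
    def __init__(self, name, employee_number, birthdate, date_of_hire,
                 hourly_rate=20.0):
        # Properties: descriptive fields associated with the object.
        self.name = name
        self.employee_number = employee_number
        self.birthdate = birthdate
        self.date_of_hire = date_of_hire
        self.hourly_rate = hourly_rate   # assumed property, not in the figure
        self.direct_reports = []         # assumed property, not in the figure

    # Methods: actions the object can take.
    def compute_pay(self, hours_worked):
        """Return the amount owed to the employee (placeholder pay rule)."""
        return self.hourly_rate * hours_worked

    def list_employees(self):
        """Return the names of employees who report to this employee."""
        return [e.name for e in self.direct_reports]

boss = Employee("Ada", 1, date(1970, 1, 1), date(2000, 6, 1), hourly_rate=50.0)
boss.direct_reports.append(Employee("Grace", 2, date(1980, 2, 2), date(2010, 3, 1)))
print(boss.compute_pay(40))   # 2000.0
print(boss.list_employees())  # ['Grace']
```

The program never spells out *how* to walk through payroll steps in sequence; instead it creates employee objects and asks them to act — the object-oriented focus on the items being manipulated.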
What is COBOL?
If you have been around business programming very long, you may have heard about the COBOL programming language. COBOL is a procedural, compiled language that at one time was the primary programming language for business applications. Invented in 1959 for use on large mainframe computers, COBOL is an abbreviation of common business-oriented language. With the advent of more efficient programming languages, COBOL is now rarely seen outside of old, legacy applications.
Programming Tools
To write a program, a programmer needs little more than a text editor and a good idea. However, to be productive, he or she must be able to check the syntax of the code, and, in some cases, compile the code. To be more efficient at programming, additional tools, such as an integrated development environment (IDE) or computer-aided software-engineering (CASE) tools, can be used.
Integrated Development Environment
For most programming languages, an IDE can be used. An IDE provides a variety of tools for the programmer, and usually includes:
• an editor for writing the program that will color-code or highlight keywords from the programming language;
• a help system that gives detailed documentation regarding the programming language;
• a compiler/interpreter, which will allow the programmer to run the program;
• a debugging tool, which will provide the programmer details about the execution of the program in order to resolve problems in the code; and
• a check-in/check-out mechanism, which allows for a team of programmers to work together on a project and not write over each other’s code changes.
Probably the most popular IDE software package right now is Microsoft’s Visual Studio. Visual Studio is the IDE for all of Microsoft’s programming languages, including Visual Basic, Visual C++, and Visual C#.
CASE Tools
While an IDE provides several tools to assist the programmer in writing the program, the code still must be written. Computer-aided software-engineering (CASE) tools allow a designer to develop software with little or no programming. Instead, the CASE tool writes the code for the designer. CASE tools come in many varieties, but their goal is to generate quality code based on input created by the designer.
Building a Website
In the early days of the World Wide Web, the creation of a website required knowing how to use hypertext markup language (HTML). Today, most websites are built with a variety of tools, but the final product that is transmitted to a browser is still HTML. HTML, at its simplest, is a text language that allows you to define the different components of a web page. These definitions are handled through the use of HTML tags, which consist of text between brackets. For example, an HTML tag can tell the browser to show a word in italics, to link to another web page, or to insert an image. In the example below, some text is being defined as a heading while other text is being emphasized.
Figure: Simple HTML
Figure: Simple HTML output
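The original figure is not reproduced here, but markup of the kind it illustrated — a heading tag and an emphasis tag — looks like this (the wording of the sample text is invented):

```html
<!-- A heading defined with <h1>; emphasized text defined with <em>. -->
<html>
  <body>
    <h1>Welcome to My Page</h1>
    <p>This text is <em>emphasized</em> by the browser.</p>
  </body>
</html>
```

A browser reading this file renders the first line large and bold and the emphasized word in italics; the tags themselves are never displayed.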
While HTML is used to define the components of a web page, cascading style sheets (CSS) are used to define the styles of the components on a page. The use of CSS allows the style of a website to be set and stay consistent throughout. For example, if the designer wanted all first-level headings (h1) to be blue and centered, he or she could set the “h1″ style to match. The following example shows how this might look.
Figure: HTML with CSS
Figure: HTML with CSS output
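Approximating the missing figure, here is a minimal page in which a CSS rule makes every first-level heading blue and centered, as described in the paragraph above (the heading text is invented):

```html
<html>
  <head>
    <style>
      /* One rule restyles every h1 on the page. */
      h1 { color: blue; text-align: center; }
    </style>
  </head>
  <body>
    <h1>All h1 headings are blue and centered</h1>
  </body>
</html>
```

Because the style lives in one rule rather than on each heading, changing the site's look later means editing the CSS once instead of editing every page.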
The combination of HTML and CSS can be used to create a wide variety of formats and designs and has been widely adopted by the web-design community. The standards for HTML are set by a governing body called the World Wide Web Consortium. The current version of HTML is HTML 5, which includes new standards for video, audio, and drawing.
When developers create a website, they do not write it out manually in a text editor. Instead, they use web design tools that generate the HTML and CSS for them. Tools such as Adobe Dreamweaver allow the designer to create a web page that includes images and interactive elements without writing a single line of code. However, professional web designers still need to learn HTML and CSS in order to have full control over the web pages they are developing.
Build vs. Buy
When an organization decides that a new software program needs to be developed, they must determine if it makes more sense to build it themselves or to purchase it from an outside company. This is the “build vs. buy” decision.
There are many advantages to purchasing software from an outside company. First, it is generally less expensive to purchase a software package than to build it. Second, when a software package is purchased, it is available much more quickly than if the package is built in-house. Software applications can take months or years to build; a purchased package can be up and running within a month. A purchased package has already been tested and many of the bugs have already been worked out. It is the role of a systems integrator to make various purchased systems and the existing systems at the organization work together.
There are also disadvantages to purchasing software. First, the same software you are using can be used by your competitors. If a company is trying to differentiate itself based on a business process that is in that purchased software, it will have a hard time doing so if its competitors use the same software. Another disadvantage to purchasing software is the process of customization. If you purchase a software package from a vendor and then customize it, you will have to manage those customizations every time the vendor provides an upgrade. This can become an administrative headache, to say the least!
Even if an organization determines to buy software, it still makes sense to go through many of the same analyses that they would do if they were going to build it themselves. This is an important decision that could have a long-term strategic impact on the organization.
Web Services
As we saw in chapter 3, the move to cloud computing has allowed software to be looked at as a service. One option companies have these days is to license functions provided by other companies instead of writing the code themselves. These are called web services, and they can greatly simplify the addition of functionality to a website.
For example, suppose a company wishes to provide a map showing the location of someone who has called their support line. By utilizing Google Maps API web services, they can build a Google Map right into their application. Or a shoe company could make it easier for its retailers to sell shoes online by providing a shoe-size web service that the retailers could embed right into their website.
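In practice, calling a web service usually means sending an HTTP request to the vendor's documented URL. The endpoint and parameter names below are hypothetical — a real integration would follow the provider's API documentation — but the sketch shows the shape of the request a retailer's site would build for the shoe-size service described above.

```python
# Sketch of consuming a web service: build the request URL that a
# retailer's site would call. The endpoint and parameters are
# hypothetical, invented for illustration.
from urllib.parse import urlencode

def shoe_size_url(base, brand, foot_length_cm):
    """Return the service URL with the query parameters encoded."""
    query = urlencode({"brand": brand, "length_cm": foot_length_cm})
    return f"{base}?{query}"

url = shoe_size_url("https://api.example.com/shoe-size", "Acme", 26.5)
print(url)  # https://api.example.com/shoe-size?brand=Acme&length_cm=26.5
```

The company writes only this thin glue code; the sizing logic itself runs on the vendor's servers — which is exactly how web services blur the build-vs-buy line.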
Web services can blur the lines between “build vs. buy.” Companies can choose to build a software application themselves but then purchase functionality from vendors to supplement their system.
End-User Computing
In many organizations, application development is not limited to the programmers and analysts in the information-technology department. Especially in larger organizations, other departments develop their own department-specific applications. The people who build these are not necessarily trained in programming or application development, but they tend to be adept with computers. A person, for example, who is skilled in a particular software package, such as a spreadsheet or database package, may be called upon to build smaller applications for use by his or her own department. This phenomenon is referred to as end-user development, or end-user computing.
End-user computing can have many advantages for an organization. First, it brings the development of applications closer to those who will use them. Because IT departments are sometimes quite backlogged, it also provides a means to have software created more quickly. Many organizations encourage end-user computing to reduce the strain on the IT department.
End-user computing does have its disadvantages as well. If departments within an organization are developing their own applications, the organization may end up with several applications that perform similar functions, which is inefficient, since it is a duplication of effort. Sometimes, these different versions of the same application end up providing different results, bringing confusion when departments interact. These applications are often developed by someone with little or no formal training in programming. In these cases, the software developed can have problems that then have to be resolved by the IT department.
End-user computing can be beneficial to an organization, but it should be managed. The IT department should set guidelines and provide tools for the departments who want to create their own solutions. Communication between departments will go a long way towards successful use of end-user computing.
Building a Mobile App
In many ways, building an application for a mobile device is exactly the same as building an application for a traditional computer. Understanding the requirements for the application, designing the interface, working with users – all of these steps still need to be carried out.
So what’s different about building an application for a mobile device? In some ways, mobile applications are more limited. An application running on a mobile device must be designed to be functional on a smaller screen. Mobile applications should be designed to use fingers as the primary pointing device. Mobile devices generally have less available memory, storage space, and processing power.
Mobile applications also have many advantages over applications built for traditional computers. Mobile applications have access to the functionality of the mobile device, which usually includes features such as geolocation data, messaging, the camera, and even a gyroscope.
One of the most important questions regarding development for mobile devices is this: Do we want to develop an app at all? A mobile app is an expensive proposition, and it will only run on one type of mobile device at a time. For example, if you create an iPhone app, users with Android phones are out of luck. Each app takes several thousand dollars to create, so this may not be the best use of your funds.
Many organizations are moving away from developing a specific app for a mobile device and are instead making their websites more functional on mobile devices. Using a web-design framework called responsive design, a website can be made highly functional no matter what type of device is browsing it. With a responsive website, images resize themselves based on the size of the device’s screen, and text flows and sizes itself properly for optimal viewing. You can find out more about responsive design here.
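A responsive site typically relies on CSS media queries, which apply different style rules depending on the width of the screen. The class names and the 600-pixel breakpoint below are illustrative assumptions, not part of any particular framework:

```css
/* Default (desktop) layout. */
.sidebar { width: 25%; float: right; }
body     { font-size: 16px; }

/* On screens narrower than an assumed 600px breakpoint,
   hide the sidebar and enlarge the text for readability. */
@media (max-width: 600px) {
  .sidebar { display: none; }
  body     { font-size: 18px; }
}
```

The same HTML is served to every device; only the applicable style rules change, which is what lets one site stay functional on both a desktop monitor and a phone.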
Implementation Methodologies
Once a new system is developed (or purchased), the organization must determine the best method for implementing it. Convincing a group of people to learn and use a new system can be a very difficult process. Using new software, and the business processes it gives rise to, can have far-reaching effects within the organization.
There are several different methodologies an organization can adopt to implement a new system. Four of the most popular are listed below.
• Direct cutover. In the direct-cutover implementation methodology, the organization selects a particular date that the old system is not going to be used anymore. On that date, the users begin using the new system and the old system is unavailable. The advantages to using this methodology are that it is very fast and the least expensive. However, this method is the riskiest as well. If the new system has an operational problem or if the users are not properly prepared, it could prove disastrous for the organization.
• Pilot implementation. In this methodology, a subset of the organization (called a pilot group) starts using the new system before the rest of the organization. This has a smaller impact on the company and allows the support team to focus on a smaller group of individuals.
• Parallel operation. With parallel operation, the old and new systems are used simultaneously for a limited period of time. This method is the least risky because the old system is still being used while the new system is essentially being tested. However, this is by far the most expensive methodology since work is duplicated and support is needed for both systems in full.
• Phased implementation. In phased implementation, different functions of the new application are used as functions from the old system are turned off. This approach allows an organization to slowly move from one system to another.
Which of these implementation methodologies to use depends on the complexity and importance of the old and new systems.
Change Management
As new systems are brought online and old systems are phased out, it becomes important to manage the way change is implemented in the organization. Change should never be introduced in a vacuum. The organization should be sure to communicate proposed changes before they happen and plan to minimize the impact of the change that will occur after implementation. Change management is a critical component of IT oversight.
Maintenance
Once a new system has been introduced, it enters the maintenance phase. In this phase, the system is in production and is being used by the organization. While the system is no longer actively being developed, changes need to be made when bugs are found or new features are requested. During the maintenance phase, IT management must ensure that the system continues to stay aligned with business priorities and continues to run well.
Summary
Software development is about so much more than programming. Developing new software applications requires several steps, from the formal SDLC process to more informal processes such as agile programming or lean methodologies. Programming languages have evolved from very low-level machine-specific languages to higher-level languages that allow a programmer to write software for a wide variety of machines. Most programmers work with software development tools that provide them with integrated components to make the software development process more efficient. For some organizations, building their own software applications does not make the most sense; instead, they choose to purchase software built by a third party to save development costs and speed implementation. In end-user computing, software development happens outside the information technology department. When implementing new software applications, there are several different types of implementation methodologies that must be considered.
Study Questions
1. What are the steps in the SDLC methodology?
2. What is RAD software development?
3. What makes the lean methodology unique?
4. What are three differences between second-generation and third-generation languages?
5. Why would an organization consider building its own software application if it is cheaper to buy one?
6. What is responsive design?
7. What is the relationship between HTML and CSS in website design?
8. What is the difference between the pilot implementation methodology and the parallel implementation methodology?
9. What is change management?
10. What are the four different implementation methodologies?
Exercises
1. Which software-development methodology would be best if an organization needed to develop a software tool for a small group of users in the marketing department? Why? Which implementation methodology should they use? Why?
2. Doing your own research, find three programming languages and categorize them in these areas: generation, compiled vs. interpreted, procedural vs. object-oriented.
3. Some argue that HTML is not a programming language. Doing your own research, find three arguments for why it is not a programming language and three arguments for why it is.
4. Read more about responsive design using the link given in the text. Provide the links to three websites that use responsive design and explain how they demonstrate responsive-design behavior. | textbooks/workforce/Information_Technology/Information_Systems/Information_Systems_for_Business_and_Beyond_(Bourgeois)_(2014_Edition)/Unit_2%3A_Information_Systems_for_Strategic_Advantage/10%3A_Information_Systems_Development.txt |
Unit 3: Information Systems Beyond the Organization
Learning Objectives
Upon successful completion of this chapter, you will be able to:
• describe what the term information systems ethics means;
• explain what a code of ethics is and describe the advantages and disadvantages;
• define the term intellectual property and explain the protections provided by copyright, patent, and trademark; and
• describe the challenges that information technology brings to individual privacy.
Introduction
Information systems have had an impact far beyond the world of business. New technologies create new situations that we have never dealt with before. How do we handle the new capabilities that these devices empower us with? What new laws are going to be needed to protect us from ourselves? This chapter will kick off with a discussion of the impact of information systems on how we behave (ethics). This will be followed with the new legal structures being put in place, with a focus on intellectual property and privacy.
Information Systems Ethics
The term ethics is defined as “a set of moral principles” or “the principles of conduct governing an individual or a group.”[1] Since the dawn of civilization, the study of ethics and their impact has fascinated mankind. But what do ethics have to do with information systems?
The introduction of new technology can have a profound effect on human behavior. New technologies give us capabilities that we did not have before, which in turn create environments and situations that have not been specifically addressed in ethical terms. Those who master new technologies gain new power; those who cannot or do not master them may lose power. In 1913, Henry Ford implemented the first moving assembly line to create his Model T cars. While this was a great step forward technologically (and economically), the assembly line reduced the value of human beings in the production process. The development of the atomic bomb concentrated unimaginable power in the hands of one government, which then had to wrestle with the decision to use it. Today’s digital technologies have created new categories of ethical dilemmas.
For example, the ability to anonymously make perfect copies of digital music has tempted many music fans to download copyrighted music for their own use without making payment to the music’s owner. Many of those who would never have walked into a music store and stolen a CD find themselves with dozens of illegally downloaded albums.
Digital technologies have given us the ability to aggregate information from multiple sources to create profiles of people. What would have taken weeks of work in the past can now be done in seconds, allowing private organizations and governments to know more about individuals than at any time in history. This information has value, but also chips away at the privacy of consumers and citizens.
Code of Ethics
One method for navigating new ethical waters is a code of ethics. A code of ethics is a document that outlines a set of acceptable behaviors for a professional or social group; generally, it is agreed to by all members of the group. The document details different actions that are considered appropriate and inappropriate.
A good example of a code of ethics is the Code of Ethics and Professional Conduct of the Association for Computing Machinery,[2] an organization of computing professionals that includes academics, researchers, and practitioners. Here is a quote from the preamble:
Commitment to ethical professional conduct is expected of every member (voting members, associate members, and student members) of the Association for Computing Machinery (ACM).
This Code, consisting of 24 imperatives formulated as statements of personal responsibility, identifies the elements of such a commitment. It contains many, but not all, issues professionals are likely to face. Section 1 outlines fundamental ethical considerations, while Section 2 addresses additional, more specific considerations of professional conduct. Statements in Section 3 pertain more specifically to individuals who have a leadership role, whether in the workplace or in a volunteer capacity such as with organizations like ACM. Principles involving compliance with this Code are given in Section 4.
In the ACM’s code, you will find many straightforward ethical instructions, such as the admonition to be honest and trustworthy. But because this is also an organization of professionals that focuses on computing, there are more specific admonitions that relate directly to information technology:
• No one should enter or use another’s computer system, software, or data files without permission. One must always have appropriate approval before using system resources, including communication ports, file space, other system peripherals, and computer time.
• Designing or implementing systems that deliberately or inadvertently demean individuals or groups is ethically unacceptable.
• Organizational leaders are responsible for ensuring that computer systems enhance, not degrade, the quality of working life. When implementing a computer system, organizations must consider the personal and professional development, physical safety, and human dignity of all workers. Appropriate human-computer ergonomic standards should be considered in system design and in the workplace.
One of the major advantages of creating a code of ethics is that it clarifies the acceptable standards of behavior for a professional group. The varied backgrounds and experiences of the members of a group lead to a variety of ideas regarding what is acceptable behavior. While to many the guidelines may seem obvious, having these items detailed provides clarity and consistency. Explicitly stating standards communicates the common guidelines to everyone in a clear manner.
Having a code of ethics can also have some drawbacks. First of all, a code of ethics does not have legal authority; in other words, breaking a code of ethics is not a crime in itself. So what happens if someone violates one of the guidelines? Many codes of ethics include a section that describes how such situations will be handled. In many cases, repeated violations of the code result in expulsion from the group.
In the case of ACM: “Adherence of professionals to a code of ethics is largely a voluntary matter. However, if a member does not follow this code by engaging in gross misconduct, membership in ACM may be terminated.” Expulsion from ACM may not have much of an impact on many individuals, since membership in ACM is usually not a requirement for employment. However, expulsion from other organizations, such as a state bar organization or medical board, could carry a huge impact.
Another possible disadvantage of a code of ethics is that there is always a chance that important issues will arise that are not specifically addressed in the code. Technology is quickly changing, and a code of ethics might not be updated often enough to keep up with all of the changes. A good code of ethics, however, is written in a broad enough fashion that it can address the ethical issues of potential changes to technology while the organization behind the code makes revisions.
Finally, a code of ethics could also be a disadvantage in that it may not entirely reflect the ethics or morals of every member of the group. Organizations with a diverse membership may have internal conflicts as to what is acceptable behavior. For example, there may be a difference of opinion on the consumption of alcoholic beverages at company events. In such cases, the organization must make a choice about the importance of addressing a specific behavior in the code.
Sidebar: Acceptable Use Policies
Many organizations that provide technology services to a group of constituents or the public require agreement to an acceptable use policy (AUP) before those services can be accessed. Similar to a code of ethics, this policy outlines what is allowed and what is not allowed while someone is using the organization’s services. An everyday example of this is the terms of service that must be agreed to before using the public Wi-Fi at Starbucks, McDonald’s, or even a university. Here is an example of an acceptable use policy from Virginia Tech.
Just as with a code of ethics, these acceptable use policies specify what is allowed and what is not allowed. Again, while some of the items listed are obvious to most, others are not so obvious:
• “Borrowing” someone else’s login ID and password is prohibited.
• Using the provided access for commercial purposes, such as hosting your own business website, is not allowed.
• Sending out unsolicited email to a large group of people is prohibited.
Also as with codes of ethics, violations of these policies have various consequences. In most cases, such as with Wi-Fi, violating the acceptable use policy will mean that you will lose your access to the resource. While losing access to Wi-Fi at Starbucks may not have a lasting impact, a university student getting banned from the university’s Wi-Fi (or possibly all network resources) could have a large impact.
Intellectual Property
One of the domains that have been deeply impacted by digital technologies is the domain of intellectual property. Digital technologies have driven a rise in new intellectual property claims and made it much more difficult to defend intellectual property.
Intellectual property is defined as “property (as an idea, invention, or process) that derives from the work of the mind or intellect.”[3] This could include creations such as song lyrics, a computer program, a new type of toaster, or even a sculpture.
Practically speaking, it is very difficult to protect an idea. Instead, intellectual property laws are written to protect the tangible results of an idea. In other words, just coming up with a song in your head is not protected, but if you write it down it can be protected.
Protection of intellectual property is important because it gives people an incentive to be creative. Innovators with great ideas will be more likely to pursue those ideas if they have a clear understanding of how they will benefit. In the US Constitution, Article 8, Section 8, the authors saw fit to recognize the importance of protecting creative works:
Congress shall have the power . . . To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.
An important point to note here is the “limited time” qualification. While protecting intellectual property is important because of the incentives it provides, it is also necessary to limit the amount of benefit that can be received and allow the results of ideas to become part of the public domain.
Outside of the US, intellectual property protections vary. You can find out more about a specific country’s intellectual property laws by visiting the World Intellectual Property Organization.
In the following sections we will review three of the best-known intellectual property protections: copyright, patent, and trademark.
Copyright
Copyright is the protection given to songs, computer programs, books, and other creative works; any work that has an “author” can be copyrighted. Under the terms of copyright, the author of a work controls what can be done with the work, including:
• Who can make copies of the work.
• Who can make derivative works from the original work.
• Who can perform the work publicly.
• Who can display the work publicly.
• Who can distribute the work.
Many times, a work is not owned by an individual but is instead owned by a publisher with whom the original author has an agreement. In return for the rights to the work, the publisher will market and distribute the work and then pay the original author a portion of the proceeds.
Copyright protection lasts for the life of the original author plus seventy years. In the case of a copyrighted work owned by a publisher or another third party, the protection lasts for ninety-five years from the original creation date. For works created before 1978, the protections vary slightly. You can see the full details on copyright protections by reviewing the Copyright Basics document available at the US Copyright Office’s website.
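The durations above reduce to simple arithmetic. Here is a minimal sketch (the function name and simplified rules are our own; works created before 1978 follow different rules and are not handled):

```python
def copyright_expiration(author_death_year=None, creation_year=None):
    """Estimate the year a US copyright enters the public domain.

    Simplified sketch: life of the author plus 70 years, or
    95 years from the creation date for a work owned by a publisher
    or other third party.
    """
    if author_death_year is not None:
        return author_death_year + 70
    return creation_year + 95

# A work whose author died in 1950 would enter the public domain in 2020.
print(copyright_expiration(author_death_year=1950))  # 2020
# A publisher-owned work created in 1930 would be protected until 2025.
print(copyright_expiration(creation_year=1930))      # 2025
```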
Obtaining Copyright Protection
In the United States, a copyright is obtained by the simple act of creating the original work. In other words, when an author writes down that song, makes that film, or designs that program, he or she automatically has the copyright. However, for a work that will be used commercially, it is advisable to register for a copyright with the US Copyright Office. A registered copyright is needed in order to bring legal action against someone who has used a work without permission.
First Sale Doctrine
If an artist creates a painting and sells it to a collector who then, for whatever reason, proceeds to destroy it, does the original artist have any recourse? What if the collector, instead of destroying it, begins making copies of it and sells them? Is this allowed? The first sale doctrine is a part of copyright law that addresses this, as shown below[4]:
The first sale doctrine, codified at 17 U.S.C. § 109, provides that an individual who knowingly purchases a copy of a copyrighted work from the copyright holder receives the right to sell, display or otherwise dispose of that particular copy, notwithstanding the interests of the copyright owner.
So, in our examples, the copyright owner has no recourse if the collector destroys her artwork. But the collector does not have the right to make copies of the artwork.
Fair Use
Another important provision within copyright law is that of fair use. Fair use is a limitation on copyright law that allows for the use of protected works without prior authorization in specific cases. For example, if a teacher wanted to discuss a current event in her class, she could pass out copies of a copyrighted news story to her students without first getting permission. Fair use is also what allows a student to quote a small portion of a copyrighted work in a research paper.
Unfortunately, the specific guidelines for what is considered fair use and what constitutes copyright violation are not well defined. Fair use is a well-known and respected concept and will only be challenged when copyright holders feel that the integrity or market value of their work is being threatened. The following four factors are considered when determining if something constitutes fair use: [5]
1. The purpose and character of the use, including whether such use is of commercial nature or is for nonprofit educational purposes;
2. The nature of the copyrighted work;
3. The amount and substantiality of the portion used in relation to the copyrighted work as a whole;
4. The effect of the use upon the potential market for, or value of, the copyrighted work.
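Courts weigh these four factors holistically, so no formula decides fair use. Purely as an illustration of the checklist (field names are hypothetical, and this is not legal advice), the factors could be tallied like this:

```python
# Purely illustrative: courts balance these factors; they are not a score.
def fair_use_tally(factors):
    """Count how many of the four factors lean toward fair use."""
    favorable = sum(1 for leans_fair in factors.values() if leans_fair)
    return f"{favorable} of {len(factors)} factors favor fair use"

classroom_copy = {
    "purpose_nonprofit_educational": True,  # factor 1: purpose and character
    "nature_factual_work": True,            # factor 2: nature of the work
    "small_portion_used": True,             # factor 3: amount used
    "no_market_harm": True,                 # factor 4: effect on the market
}
print(fair_use_tally(classroom_copy))  # 4 of 4 factors favor fair use
```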
If you are ever considering using a copyrighted work as part of something you are creating, you may be able to do so under fair use. However, it is always best to check with the copyright owner to be sure you are staying within your rights and not infringing upon theirs.
Sidebar: The History of Copyright Law
As noted above, current copyright law grants copyright protection for seventy years after the author’s death, or ninety-five years from the date of creation for a work created for hire. But it was not always this way.
The first US copyright law, which only protected books, maps, and charts, provided protection for only 14 years with a renewable term of 14 years. Over time, copyright law was revised to grant protections to other forms of creative expression, such as photography and motion pictures. Congress also saw fit to extend the length of the protections, as shown in the chart below. Today, copyright has become big business, with many companies relying on revenue from copyright-protected works.
Many now think that the protections last too long. The Sonny Bono Copyright Term Extension Act has been nicknamed the “Mickey Mouse Protection Act,” as it was enacted just in time to protect the copyright on the Walt Disney Company’s Mickey Mouse character. Because of this term extension, many works from the 1920s and 1930s that would have been available now in the public domain are not available.
Evolution of copyright term length. (CC-BY-SA: Tom Bell)
The Digital Millennium Copyright Act
As digital technologies have changed what it means to create, copy, and distribute media, a policy vacuum has been created. In 1998, the US Congress passed the Digital Millennium Copyright Act (DMCA), which extended copyright law to take into consideration digital technologies. Two of the best-known provisions from the DMCA are the anti-circumvention provision and the “safe harbor” provision.
• The anti-circumvention provision makes it illegal to create technology to circumvent technology that has been put in place to protect a copyrighted work. This provision includes not just the creation of the technology but also the publishing of information that describes how to do it. While this provision does allow for some exceptions, it has become quite controversial and has led to a movement to have it modified.
• The “safe harbor” provision limits the liability of online service providers when someone using their services commits copyright infringement. This is the provision that allows YouTube, for example, not to be held liable when someone posts a clip from a copyrighted movie. The provision does require the online service provider to take action when they are notified of the violation (a “takedown” notice). For an example of how takedown works, here’s how YouTube handles these requests: YouTube Copyright Infringement Notification.
Many think that the DMCA goes too far and ends up limiting our freedom of speech. The Electronic Frontier Foundation (EFF) is at the forefront of this battle. For example, in discussing the anti-circumvention provision, the EFF states:
Yet the DMCA has become a serious threat that jeopardizes fair use, impedes competition and innovation, chills free expression and scientific research, and interferes with computer intrusion laws. If you circumvent DRM [digital rights management] locks for non-infringing fair uses or create the tools to do so you might be on the receiving end of a lawsuit.
Sidebar: Creative Commons
In chapter 2, we learned about open-source software. Open-source software has few or no copyright restrictions; the creators of the software publish their code and make their software available for others to use and distribute for free. This is great for software, but what about other forms of copyrighted works? If an artist or writer wants to make their works available, how can they go about doing so while still protecting the integrity of their work? Creative Commons is the solution to this problem.
Creative Commons is a nonprofit organization that provides legal tools for artists and authors. The tools offered make it simple to license artistic or literary work for others to use or distribute in a manner consistent with the author’s intentions. Creative Commons licenses are indicated with the CC symbol. It is important to note that Creative Commons and public domain are not the same. When something is in the public domain, it has absolutely no restrictions on its use or distribution. Works whose copyrights have expired, for example, are in the public domain.
By using a Creative Commons license, authors can control the use of their work while still making it widely accessible. By attaching a Creative Commons license to their work, a legally binding license is created. Here are some examples of these licenses:
• CC-BY: This is the least restrictive license. It lets others distribute and build upon the work, even commercially, as long as they give the author credit for the original work.
• CC-BY-SA: This license restricts the distribution of the work via the “share-alike” clause. This means that others can freely distribute and build upon the work, but they must give credit to the original author and they must share using the same Creative Commons license.
• CC-BY-NC: This license is the same as CC-BY but adds the restriction that no one can make money with this work. NC stands for “non-commercial.”
• CC-BY-NC-ND: This license is the same as CC-BY-NC but also adds the ND restriction, which means that no derivative works may be made from the original.
These are a few of the more common licenses that can be created using the tools that Creative Commons makes available. For a full listing of the licenses and to learn much more about Creative Commons, visit their web site.
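The license variants above differ along a few yes/no dimensions, which can be captured in a small lookup table. This is a sketch with our own field names, not an official Creative Commons data format; note that all four licenses additionally require attribution:

```python
# Illustrative mapping of the CC license codes discussed above to the
# permissions they grant (field names are our own).
CC_LICENSES = {
    "CC-BY":       {"commercial_use": True,  "derivatives": True,  "share_alike": False},
    "CC-BY-SA":    {"commercial_use": True,  "derivatives": True,  "share_alike": True},
    "CC-BY-NC":    {"commercial_use": False, "derivatives": True,  "share_alike": False},
    "CC-BY-NC-ND": {"commercial_use": False, "derivatives": False, "share_alike": False},
}

def allows(license_code, use):
    """Return True if the given license permits the given use."""
    return CC_LICENSES[license_code][use]

print(allows("CC-BY-NC", "commercial_use"))  # False: NC forbids commercial use
print(allows("CC-BY-SA", "derivatives"))     # True: derivatives allowed if shared alike
```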
Patent
Another important form of intellectual property protection is the patent. A patent creates protection for someone who invents a new product or process. The definition of invention is quite broad and covers many different fields. Here are some examples of items receiving patents:
• circuit designs in semiconductors;
• prescription drug formulas;
• firearms;
• locks;
• plumbing;
• engines;
• coating processes; and
• business processes.
Once a patent is granted, it provides the inventor with protection from others infringing on his or her patent. A patent holder has the right to “exclude others from making, using, offering for sale, or selling the invention throughout the United States or importing the invention into the United States for a limited time in exchange for public disclosure of the invention when the patent is granted.”[6]
As with copyright, patent protection lasts for a limited period of time before the invention or process enters the public domain. In the US, a patent lasts twenty years. This is why generic drugs are available to replace brand-name drugs after twenty years.
Obtaining Patent Protection
Unlike copyright, a patent is not automatically granted when someone has an interesting idea and writes it down. In most countries, a patent application must be submitted to a government patent office. A patent will only be granted if the invention or process being submitted meets certain conditions:
• It must be original. The invention being submitted must not have been submitted before.
• It must be non-obvious. You cannot patent something that anyone could think of. For example, you could not put a pencil on a chair and try to get a patent for a pencil-holding chair.
• It must be useful. The invention being submitted must serve some purpose or have some use that would be desired.
The job of the patent office is to review patent applications to ensure that the item being submitted meets these requirements. This is not an easy job: in 2012, the US Patent Office received 576,763 patent applications and granted 276,788 patents. The current backlog for a patent approval is 18.1 months. Over the past fifty years, the number of patent applications has risen from just 100,000 a year to almost 600,000; digital technologies are driving much of this innovation.
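A quick check of the 2012 figures above shows that fewer than half of the applications resulted in a granted patent:

```python
applications = 576_763  # US patent applications received in 2012
granted = 276_788       # patents granted in 2012

grant_rate = granted / applications
print(f"Grant rate: {grant_rate:.1%}")  # Grant rate: 48.0%
```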
Increase in patent applications, 1963–2012 (Source: US Patent and Trademark Office)
Sidebar: What Is a Patent Troll?
The advent of digital technologies has led to a large increase in patent filings and therefore a large number of patents being granted. Once a patent is granted, it is up to the owner of the patent to enforce it; if someone is found to be using the invention without permission, the patent holder has the right to sue to force that person to stop and to collect damages.
The rise in patents has led to a new form of profiteering called patent trolling. A patent troll is a person or organization who gains the rights to a patent but does not actually make the invention that the patent protects. Instead, the patent troll searches for those who are illegally using the invention in some way and sues them. In many cases, the infringement being alleged is questionable at best. For example, companies have been sued for using Wi-Fi or for scanning documents, technologies that have been on the market for many years.
Recently, the US government has begun taking action against patent trolls. Several pieces of legislation are working their way through Congress that will, if enacted, limit the ability of patent trolls to threaten innovation. You can learn a lot more about patent trolls by listening to a detailed investigation conducted by the radio program This American Life.
Trademark
A trademark is a word, phrase, logo, shape or sound that identifies a source of goods or services. For example, the Nike “Swoosh,” the Facebook “f”, and Apple’s apple (with a bite taken out of it) are all trademarked. The concept behind trademarks is to protect the consumer. Imagine going to the local shopping center to purchase a specific item from a specific store and finding that there are several stores all with the same name!
Two types of trademarks exist – a common-law trademark and a registered trademark. As with copyright, an organization will automatically receive a trademark if a word, phrase, or logo is being used in the normal course of business (subject to some restrictions, discussed below). A common-law trademark is designated by placing “TM” next to the trademark. A registered trademark is one that has been examined, approved, and registered with the trademark office, such as the Patent and Trademark Office in the US. A registered trademark has the circle-R (®) placed next to the trademark.
While almost any word, phrase, logo, shape, or sound can be trademarked, there are a few limitations. A trademark will not hold up legally if it meets one or more of the following conditions:
• The trademark is likely to cause confusion with a mark in a registration or prior application.
• The trademark is merely descriptive for the goods/services. For example, trying to register the trademark “blue” for a blue product you are selling will not pass muster.
• The trademark is a geographic term.
• The trademark is a surname. You will not be allowed to trademark “Smith’s Bookstore.”
• The trademark is ornamental as applied to the goods. For example, a repeating flower pattern that is a design on a plate cannot be trademarked.
As long as an organization uses its trademark and defends it against infringement, the protection afforded by it does not expire. Because of this, many organizations defend their trademark against other companies whose branding even only slightly copies their trademark. For example, Chick-fil-A has trademarked the phrase “Eat Mor Chikin” and has vigorously defended it against a small business using the slogan “Eat More Kale.” Coca-Cola has trademarked the contour shape of its bottle and will bring legal action against any company using a bottle design similar to theirs. Examples of trademarks that have been diluted and have now lost their protection in the US include “aspirin” (originally trademarked by Bayer), “escalator” (originally trademarked by Otis), and “yo-yo” (originally trademarked by Duncan).
Information Systems and Intellectual Property
The rise of information systems has forced us to rethink how we deal with intellectual property. From the increase in patent applications swamping the government’s patent office to the new laws that must be put in place to enforce copyright protection, digital technologies have impacted our behavior.
Privacy
The term privacy has many definitions, but for our purposes, privacy will mean the ability to control information about oneself. Our ability to maintain our privacy has eroded substantially in the past decades, due to information systems.
Personally Identifiable Information
Information about a person that can be used to uniquely establish that person’s identity is called personally identifiable information, or PII. This is a broad category that includes information such as:
• name;
• social security number;
• date of birth;
• place of birth;
• mother‘s maiden name;
• biometric records (fingerprint, face, etc.);
• medical records;
• educational records;
• financial information; and
• employment information.
Organizations that collect PII are responsible for protecting it. The Department of Commerce recommends that “organizations minimize the use, collection, and retention of PII to what is strictly necessary to accomplish their business purpose and mission.” They go on to state that “the likelihood of harm caused by a breach involving PII is greatly reduced if an organization minimizes the amount of PII it uses, collects, and stores.”[7] Organizations that do not protect PII can face penalties, lawsuits, and loss of business. In the US, most states now have laws in place requiring organizations that have had security breaches related to PII to notify potential victims, as does the European Union.
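The minimization principle recommended by the Department of Commerce can be sketched as a filter that retains only the fields needed for a given business purpose. All field names and data here are hypothetical:

```python
# Hypothetical sketch of PII minimization: keep only the fields strictly
# necessary for the business purpose and drop the rest before storage.
NEEDED_FOR_SHIPPING = {"name", "address"}

def minimize(record, needed_fields):
    """Return a copy of the record containing only the needed fields."""
    return {k: v for k, v in record.items() if k in needed_fields}

customer = {
    "name": "Jane Doe",
    "address": "123 Main St",
    "date_of_birth": "1990-05-01",  # PII not needed to ship an order
    "ssn": "000-00-0000",           # PII that should never be stored here
}

print(minimize(customer, NEEDED_FOR_SHIPPING))
# {'name': 'Jane Doe', 'address': '123 Main St'}
```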
Just because companies are required to protect your information does not mean they are restricted from sharing it. In the US, companies can share your information without your explicit consent (see sidebar below), though not all do so. Companies that collect PII are urged by the FTC to create a privacy policy and post it on their website. The state of California requires a privacy policy for any website that does business with a resident of the state (see http://www.privacy.ca.gov/lawenforcement/laws.htm).
While the privacy laws in the US seek to balance consumer protection with promoting commerce, in the European Union privacy is considered a fundamental right that outweighs the interests of commerce. This has led to much stricter privacy protection in the EU, but also makes commerce more difficult between the US and the EU.
Non-Obvious Relationship Awareness
Digital technologies have given us many new capabilities that simplify and expedite the collection of personal information. Every time we come into contact with digital technologies, information about us is being made available. From our location to our web-surfing habits, our criminal record to our credit report, we are constantly being monitored. This information can then be aggregated to create profiles of each and every one of us. While much of the information collected was available in the past, collecting it and combining it took time and effort. Today, detailed information about us is available for purchase from different companies. Even information not categorized as PII can be aggregated in such a way that an individual can be identified.
This process of collecting large quantities of a variety of information and then combining it to create profiles of individuals is known as non-obvious relationship awareness, or NORA. First commercialized by big casinos looking to find cheaters, NORA is used by both government agencies and private organizations, and it is big business.
Non-obvious relationship awareness (NORA)
In some settings, NORA can bring many benefits, such as in law enforcement. By being able to identify potential criminals more quickly, crimes can be solved more quickly or even prevented before they happen. But these advantages come at a price: our privacy.
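As a toy sketch of how NORA-style aggregation works, consider joining two fabricated datasets on fields that are not PII on their own; the combination still singles out one person:

```python
# Toy sketch of NORA-style aggregation: each dataset alone seems harmless,
# but joining them can single out one individual. All data is fabricated.
loyalty_cards = [
    {"zip": "90210", "birth_year": 1985, "favorite_store": "BookWorld"},
    {"zip": "90210", "birth_year": 1985, "favorite_store": "GadgetHut"},
]
voter_rolls = [
    {"name": "A. Smith", "zip": "90210", "birth_year": 1985},
    {"name": "B. Jones", "zip": "10001", "birth_year": 1985},
]

# Join the two datasets on the quasi-identifiers they share.
matches = [
    (v["name"], c["favorite_store"])
    for v in voter_rolls
    for c in loyalty_cards
    if v["zip"] == c["zip"] and v["birth_year"] == c["birth_year"]
]
print(matches)  # only A. Smith matches, so shopping habits are now tied to a name
```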
Restrictions on Record Collecting
In the US, the government has strict guidelines on how much information can be collected about its citizens. Certain classes of information have been restricted by laws over time, and the advent of digital tools has made these restrictions more important than ever.
Children’s Online Privacy Protection Act
Websites that are collecting information from children under the age of thirteen are required to comply with the Children’s Online Privacy Protection Act (COPPA), which is enforced by the Federal Trade Commission (FTC). To comply with COPPA, organizations must make a good-faith effort to determine the age of those accessing their websites and, if users are under thirteen years old, must obtain parental consent before collecting any information.
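A good-faith age check like the one COPPA requires might be sketched as follows. This is a simplification: real compliance also involves obtaining verifiable parental consent, not just an age gate.

```python
from datetime import date

COPPA_AGE = 13  # COPPA applies to children under thirteen

def needs_parental_consent(birth_date, today=None):
    """Sketch of a good-faith age check before collecting information."""
    today = today or date.today()
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    return age < COPPA_AGE

print(needs_parental_consent(date(2015, 6, 1), today=date(2025, 1, 1)))  # True  (age 9)
print(needs_parental_consent(date(2000, 6, 1), today=date(2025, 1, 1)))  # False (age 24)
```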
Family Educational Rights and Privacy Act
The Family Educational Rights and Privacy Act (FERPA) is a US law that protects the privacy of student education records. In brief, this law specifies that parents have a right to their child’s educational information until the child reaches either the age of eighteen or begins attending school beyond the high school level. At that point, control of the information is given to the child. While this law is not specifically about the digital collection of information on the Internet, the educational institutions that are collecting student information are at a higher risk for disclosing it improperly because of digital technologies.
Health Insurance Portability and Accountability Act
The Health Insurance Portability and Accountability Act of 1996 (HIPAA) is the law that specifically singles out records related to health care as a special class of personally identifiable information. This law gives patients specific rights to control their medical records, requires health care providers and others who maintain this information to get specific permission in order to share it, and imposes penalties on the institutions that breach this trust. Since much of this information is now shared via electronic medical records, the protection of those systems becomes paramount.
Sidebar: Do Not Track
When it comes to getting permission to share personal information, the US and the EU have different approaches. In the US, the “opt-out” model is prevalent; in this model, the default agreement is that you have agreed to share your information with the organization and must explicitly tell them that you do not want your information shared. There are no laws prohibiting the sharing of your data (beyond some specific categories of data, such as medical records). In the European Union, the “opt-in” model is required to be the default. In this case, you must give your explicit permission before an organization can share your information.
To combat this sharing of information, the Do Not Track initiative was created. As its creators explain[8]:
Do Not Track is a technology and policy proposal that enables users to opt out of tracking by websites they do not visit, including analytics services, advertising networks, and social platforms. At present few of these third parties offer a reliable tracking opt out, and tools for blocking them are neither user-friendly nor comprehensive. Much like the popular Do Not Call registry, Do Not Track provides users with a single, simple, persistent choice to opt out of third-party web tracking.
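Browsers that support Do Not Track send the preference as an HTTP request header named `DNT`, with the value `"1"` meaning the user opts out. A server-side check might look like this sketch (framework-agnostic, with headers shown as a plain dictionary):

```python
# Sketch of honoring the Do Not Track signal, which browsers send as the
# "DNT" HTTP request header ("1" means the user opts out of tracking).
def tracking_allowed(request_headers):
    """Return False when the client has asked not to be tracked."""
    return request_headers.get("DNT") != "1"

print(tracking_allowed({"DNT": "1"}))  # False: user opted out of tracking
print(tracking_allowed({}))            # True: no preference expressed
```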
Summary
The rapid changes in information technology in the past few decades have brought a broad array of new capabilities and powers to governments, organizations, and individuals alike. These new capabilities have required thoughtful analysis and the creation of new norms, regulations, and laws. In this chapter, we have seen how the areas of intellectual property and privacy have been affected by these new capabilities and how the regulatory environment has been changed to address them.
Study Questions
1. What does the term information systems ethics mean?
2. What is a code of ethics? What is one advantage and one disadvantage of a code of ethics?
3. What does the term intellectual property mean? Give an example.
4. What protections are provided by a copyright? How do you obtain one?
5. What is fair use?
6. What protections are provided by a patent? How do you obtain one?
7. What does a trademark protect? How do you obtain one?
8. What does the term personally identifiable information mean?
9. What protections are provided by HIPAA, COPPA, and FERPA?
10. How would you explain the concept of NORA?
Exercises
1. Provide one example of how information technology has created an ethical dilemma that would not have existed before the advent of information technology.
2. Find an example of a code of ethics or acceptable use policy related to information technology and highlight five points that you think are important.
3. Do some original research on the effort to combat patent trolls. Write a two-page paper that discusses this legislation.
4. Give an example of how NORA could be used to identify an individual.
5. How are intellectual property protections different across the world? Pick two countries and do some original research, then compare the patent and copyright protections offered in those countries to those in the US. Write a two- to three-page paper describing the differences.
1. http://www.merriam-webster.com/dictionary/ethics
2. ACM Code of Ethics and Professional Conduct Adopted by ACM Council 10/16/92.
3. http://www.merriam-webster.com/dicti...ual%20property
4. http://www.justice.gov/usao/eousa/fo...9/crm01854.htm
5. http://www.copyright.gov/fls/fl102.html
6. From the US Patent and Trademark Office, "What Is A Patent?" http://www.uspto.gov/patents/
7. Guide to Protecting the Confidentiality of Personally Identifiable Information (PII). National Institute of Standards and Technology, US Department of Commerce Special Publication 800-122. http://csrc.nist.gov/publications/ni.../sp800-122.pdf
8. http://donottrack.us/
Learning Objectives
Upon successful completion of this chapter, you will be able to:
• Describe future trends in information systems.
Introduction
Information systems have evolved at a rapid pace ever since their introduction in the 1950s. Today, devices that we can hold in one hand are more powerful than the computers used to land a man on the moon. The Internet has made the entire world accessible to us, allowing us to communicate and collaborate with each other like never before. In this chapter, we will examine current trends and look ahead to what is coming next.
Global
The first trend to note is the continuing expansion of globalization. The use of the Internet is growing all over the world, and with it the use of digital devices. The growth is coming from some unexpected places; countries such as Indonesia and Iran are leading the way in Internet growth.
Global Internet growth, 2008–2012. (Source: Internet World Stats)
Social
Social media growth is another trend that continues. Facebook now has over one billion users! In 2013, 80% of Facebook users were outside of the US and Canada.[1] Countries where Facebook is growing rapidly include Indonesia, Mexico, and the Philippines. [2]
Besides Facebook, other social media sites are also seeing tremendous growth. Over 70% of YouTube’s users are outside the US, with the UK, India, Germany, Canada, France, South Korea, and Russia leading the way.[3] Pinterest gets over 50% of its users from outside the US, with over 9% from India. [4] Twitter now has over 230 million active users. [5] Social media sites not based in the US are also growing. China’s QQ instant-messaging service is the eighth most-visited site in the world.[6]
Personal
Ever since the advent of Web 2.0 and e-commerce, users of information systems have expected to be able to modify their experiences to meet their personal tastes. From custom backgrounds on computer desktops to unique ringtones on mobile phones, makers of digital devices provide the ability to personalize how we use them. More recently, companies such as Netflix have begun assisting their users with personalizations by making suggestions. In the future, we will begin seeing devices perfectly matched to our personal preferences, based upon information collected about us in the past.
Mobile
Perhaps the most impactful trend in digital technologies in the last decade has been the advent of mobile technologies. Beginning with the simple cellphone in the 1990s and evolving into the smartphones and tablets of today, the growth of mobile has been overwhelming. Here are some key indicators of this trend:
• Mobile device sales. In 2011, smartphones began outselling personal computers.[7]
• Smartphone subscriptions. The number of smartphone subscribers grew 31% in 2013, with China leading the way at 354 million smartphone users.
• Internet access via mobile. In May of 2013, mobile accounted for 15% of all Internet traffic. In China, 75% of Internet users used their smartphone to access it. Facebook reported that 68% of its active users used their mobile platform to access the social network.
• The rise of tablets. While Apple defined the smartphone with the iPhone, the iPad sold more than three times as many units in its first twelve months as the iPhone did in its first year. Tablet shipments now outpace notebook PCs and desktop PCs. The research firm IDC predicts that 87% of all connected devices will be either smartphones or tablets by 2017.[8]
Wearable
The average smartphone user looks at his or her smartphone 150 times a day for functions such as messaging (23 times), phone calls (22), listening to music (13), and social media (9).[9] Many of these functions would be much better served if the technology was worn on, or even physically integrated into, our bodies. This technology is known as a “wearable.”
Google Glass. (CC-BY: Flickr, user Tedeytan)
Wearables have been around for a long time, with technologies such as hearing aids and, later, Bluetooth earpieces. But now, we are seeing an explosion of new wearable technologies. Perhaps the best known of these is Google Glass, an augmented reality device that you wear over your eyes like a pair of eyeglasses. Visible only to you, Google Glass will project images into your field of vision based on your context and voice commands. You can find out much more about Google Glass at http://www.google.com/glass/start/.
Another class of wearables are those related to health care. The UP by Jawbone consists of a wristband and an app that tracks how you sleep, move, and eat, then helps you use that information to feel your best. [10] It can be used to track your sleep patterns, moods, eating patterns, and other aspects of daily life, and then report back to you via an app on your smartphone or tablet. As our population ages and technology continues to evolve, there will be a large increase in wearables like this.
Collaborative
As more of us use smartphones and wearables, it will be simpler than ever to share data with each other for mutual benefit. Some of this sharing can be done passively, such as reporting our location in order to update traffic statistics. Other data can be reported actively, such as adding our rating of a restaurant to a review site.
The smartphone app Waze is a community-based tool that keeps track of the route you are traveling and how fast you are making your way to your destination. In return for providing your data, you can benefit from the data being sent from all of the other users of the app. Waze will route you around traffic and accidents based upon real-time reports from other users.
Yelp! allows consumers to post ratings and reviews of local businesses into a database, and then it provides that data back to consumers via its website or mobile phone app. By compiling ratings of restaurants, shopping centers, and services, and then allowing consumers to search through its directory, Yelp! has become a huge source of business for many companies. Unlike data collected passively, however, Yelp! relies on its users to take the time to provide honest ratings and reviews.
Printable
One of the most amazing innovations to be developed recently is the 3-D printer. A 3-D printer allows you to print virtually any 3-D object based on a model of that object designed on a computer. 3-D printers work by creating layer upon layer of the model using malleable materials, such as different types of glass, metals, or even wax.
3-D printing is quite useful for prototyping the designs of products to determine their feasibility and marketability. 3-D printing has also been used to create working prosthetic legs, handguns, and even an ear that can hear beyond the range of normal hearing. The US Air Force now uses 3-D printed parts on the F-18 fighter jet.[11]
3-D printing is one of many technologies embraced by the “maker” movement. Chris Anderson, editor of Wired magazine, puts it this way[12]:
In a nutshell, the term “Maker” refers to a new category of builders who are using open-source methods and the latest technology to bring manufacturing out of its traditional factory context, and into the realm of the personal desktop computer. Until recently, the ability to manufacture was reserved for those who owned factories. What’s happened over the last five years is that we’ve brought the Web’s democratizing power to manufacturing. Today, you can manufacture with the push of a button.
Findable
The “Internet of Things” refers to the idea of physical objects being connected to the Internet. Advances in wireless technologies and sensors will allow physical objects to send and receive data about themselves. Many of the technologies to enable this are already available – it is just a matter of integrating them together.
In a 2010 report by McKinsey & Company on the Internet of Things[13], six broad applications are identified:
• Tracking behavior. When products are embedded with sensors, companies can track the movements of these products and even monitor interactions with them. Business models can be fine-tuned to take advantage of this behavioral data. Some insurance companies, for example, are offering to install location sensors in customers’ cars. That allows these companies to base the price of policies on how a car is driven as well as where it travels.
• Enhanced situational awareness. Data from large numbers of sensors deployed, for example, in infrastructure (such as roads and buildings), or to report on environmental conditions (including soil moisture, ocean currents, or weather), can give decision makers a heightened awareness of real-time events, particularly when the sensors are used with advanced display or visualization technologies. Security personnel, for instance, can use sensor networks that combine video, audio, and vibration detectors to spot unauthorized individuals who enter restricted areas.
• Sensor-driven decision analysis. The Internet of Things also can support longer-range, more complex human planning and decision making. The technology requirements – tremendous storage and computing resources linked with advanced software systems that generate a variety of graphical displays for analyzing data – rise accordingly.
• Process optimization. Some industries, such as chemical production, are installing legions of sensors to bring much greater granularity to monitoring. These sensors feed data to computers, which in turn analyze the data and then send signals to actuators that adjust processes – for example, by modifying ingredient mixtures, temperatures, or pressures.
• Optimized resource consumption. Networked sensors and automated feedback mechanisms can change usage patterns for scarce resources, such as energy and water. This can be accomplished by dynamically changing the price of these goods to increase or reduce demand.
• Complex autonomous systems. The most demanding use of the Internet of Things involves the rapid, real-time sensing of unpredictable conditions and instantaneous responses guided by automated systems. This kind of machine decision-making mimics human reactions, though at vastly enhanced performance levels. The automobile industry, for instance, is stepping up the development of systems that can detect imminent collisions and take evasive action.
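Several of the applications above — process optimization and complex autonomous systems in particular — share one pattern: sensors feed data to a computer, the computer analyzes it, and a signal goes back to an actuator. The sketch below illustrates that sense–analyze–actuate loop with a simulated temperature sensor and heater; the device names, target value, and drift rates are invented for the example, not any vendor's API.

```python
# Illustrative sense -> analyze -> actuate loop, as described in the
# Internet of Things applications above. The sensor and actuator here are
# simulated stand-ins for real hardware.

TARGET_TEMP = 70.0  # desired process temperature (an assumed set point)

def read_sensor(current_temp, heater_on):
    """Simulate a temperature sensor: drift up while heating, down otherwise."""
    return current_temp + (1.5 if heater_on else -1.0)

def decide(temp):
    """Analysis step: should the actuator (heater) run?"""
    return temp < TARGET_TEMP

def run_loop(start_temp, steps):
    temp, heater_on, log = start_temp, False, []
    for _ in range(steps):
        temp = read_sensor(temp, heater_on)      # 1. sense
        heater_on = decide(temp)                 # 2. analyze
        log.append((round(temp, 1), heater_on))  # 3. actuate (recorded)
    return log

if __name__ == "__main__":
    for temp, heating in run_loop(65.0, 5):
        print(f"temp={temp}  heater={'on' if heating else 'off'}")
```

In a real deployment the analysis step might run on a remote server over a network, which is what makes these "Internet of Things" systems rather than ordinary embedded controllers.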
Autonomous
A final trend that is emerging is an extension of the Internet of Things: autonomous robots and vehicles. By combining software, sensors, and location technologies, devices that can operate themselves to perform specific functions are being developed. These take the form of creations such as medical nanotechnology robots (nanobots), self-driving cars, or unmanned aerial vehicles (UAVs).
A nanobot is a robot whose components are on the scale of about a nanometer, which is one-billionth of a meter. While still an emerging field, it is showing promise for applications in the medical field. For example, a set of nanobots could be introduced into the human body to combat cancer or a specific disease.
In March of 2012, Google introduced the world to their driverless car by releasing a video on YouTube showing a blind man driving the car around the San Francisco area. The car combines several technologies, including a laser radar system, worth about \$150,000. While the car is not available commercially yet, three US states (Nevada, Florida, and California) have already passed legislation making driverless cars legal.
A UAV, often referred to as a “drone,” is a small airplane or helicopter that can fly without a pilot. Instead of a pilot, they are either run autonomously by computers in the vehicle or operated by a person using a remote control. While most drones today are used for military or civil applications, there is a growing market for personal drones. For around \$300, a consumer can purchase a drone for personal use.
Summary
As the world of information technology moves forward, we will be constantly challenged by new capabilities and innovations that will both amaze and disgust us. As we learned in chapter 12, many times the new capabilities and powers that come with these new technologies will test us and require a new way of thinking about the world. Businesses and individuals alike need to be aware of these coming changes and prepare for them.
Study Questions
1. Which countries are the biggest users of the Internet? Social media? Mobile?
2. Which country had the largest Internet growth (in %) between 2008 and 2012?
3. How will most people connect to the Internet in the future?
4. What are two different applications of wearable technologies?
5. What are two different applications of collaborative technologies?
6. What capabilities do printable technologies have?
7. How will advances in wireless technologies and sensors make objects “findable”?
8. What is enhanced situational awareness?
9. What is a nanobot?
10. What is a UAV?
Exercises
1. If you were going to start a new technology business, which of the emerging trends do you think would be the biggest opportunity? Do some original research to estimate the market size.
2. What privacy concerns could be raised by collaborative technologies such as Waze?
3. Do some research about the first handgun printed using a 3-D printer and report on some of the concerns raised.
4. Write up an example of how the Internet of Things might provide a business with a competitive advantage.
5. How do you think wearable technologies could improve overall healthcare?
6. What potential problems do you see with a rise in the number of driverless cars? Do some independent research and write a two-page paper that describes where driverless cars are legal and what problems may occur.
7. Seek out the latest presentation by Mary Meeker on “Internet Trends” (if you cannot find it, the video from 2013 is available at http://allthingsd.com/20130529/mary-meekers-2013-internet-trends-deck-the-full-video/). Write a one-page paper describing what the top three trends are, in your opinion. | textbooks/workforce/Information_Technology/Information_Systems/Information_Systems_for_Business_and_Beyond_(Bourgeois)_(2014_Edition)/Unit_3%3A_Information_Systems_Beyond_the_Organization/13%3A_Future_Trends_in_Information_Systems.txt |
• 1: What Is an Information System?
The first day of class I ask my students to tell me what they think an information system is. I generally get answers such as “computers,” “databases,” or “Excel.” These are good answers, but definitely incomplete ones. The study of information systems goes far beyond understanding some technologies. Let’s begin our study by defining information systems.
• 2: Hardware
An information system is made up of five components: hardware, software, data, people, and process. The physical parts of computing devices – those that you can actually touch – are referred to as hardware. In this chapter, we will take a look at this component of information systems, learn a little bit about how it works, and discuss some of the current trends surrounding it.
• 3: Software
The second component of an information system is software. Simply put: Software is the set of instructions that tell the hardware what to do. Software is created through the process of programming. Without software, the hardware would not be functional.
• 4: Data and Databases
Imagine if you turned on a computer, started the word processor, but could not save a document. Imagine if you opened a music player but there was no music to play. Imagine opening a web browser but there were no web pages. Without data, hardware and software are not very useful! Data is the third component of an information system.
• 5: Networking and Communication
This ability for computers to communicate with one another and, maybe more importantly, to facilitate communication between individuals and groups, has been an important factor in the growth of computing over the past several decades. In the 1990s, when the Internet came of age, Internet technologies began to pervade all areas of the organization. Now, with the Internet a global phenomenon, it would be unthinkable to have a computer that did not include communications capabilities.
• 6: Information Systems Security
In this chapter, we will review the fundamental concepts of information systems security and discuss some of the measures that can be taken to mitigate security threats. We will begin with an overview focusing on how organizations can stay secure. Several different measures that a company can take to improve security will be discussed. We will then follow up by reviewing security precautions that individuals can take in order to secure their personal computing environment.
01: What Is an Information System?
Learning Objectives
Upon successful completion of this chapter, you will be able to:
• define what an information system is by identifying its major components;
• describe the basic history of information systems; and
• describe the basic argument behind the article “Does IT Matter?” by Nicholas Carr.
Introduction
Welcome to the world of information systems, a world that seems to change almost daily. Over the past few decades information systems have progressed to being virtually everywhere, even to the point where you may not realize their presence in many of your daily activities. Stop and consider how you interface with various components of information systems every day through different electronic devices. Smartphones, laptops, and personal computers connect us constantly to a variety of systems including messaging, banking, online retailing, and academic resources, just to name a few examples. Information systems are at the center of virtually every organization, providing users with almost unlimited resources.
Have you ever considered why businesses invest in technology? Some purchase computer hardware and software because everyone else has computers. Some even invest in the same hardware and software as their business friends even though different technology might be more appropriate for them. Finally, some businesses do sufficient research before deciding what best fits their needs. As you read through this book be sure to evaluate the contents of each chapter based on how you might someday apply what you have learned to strengthen the position of the business you work for, or maybe even your own business. Wise decisions can result in stability and growth for your future enterprise.
Information systems surround you almost every day. Wi-Fi networks on your university campus, database search services in the learning resource center, and printers in computer labs are good examples. Every time you go shopping you are interacting with an information system that manages inventory and sales. Even driving to school or work results in an interaction with the transportation information system, impacting traffic lights, cameras, etc. Vending machines connect and communicate using the Internet of Things (IoT). Your car’s computer system does more than just control the engine – acceleration, shifting, and braking data is always recorded. And, of course, everyone’s smartphone is constantly connecting to available networks via Wi-Fi, recording your location and other data.
Can you think of some words to describe an information system? Words such as “computers,” “networks,” or “databases” might pop into your mind. The study of information systems encompasses a broad array of devices, software, and data systems. Defining an information system provides you with a solid start to this course and the content you are about to encounter.
Defining Information Systems
Many programs in business require students to take a course in information systems. Various authors have attempted to define the term in different ways. Read the following definitions, then see if you can detect some variances.
• “An information system (IS) can be defined technically as a set of interrelated components that collect, process, store, and distribute information to support decision making and control in an organization.” [1]
• “Information systems are combinations of hardware, software, and telecommunications networks that people build and use to collect, create, and distribute useful data, typically in organizational settings.”[2]
• “Information systems are interrelated components working together to collect, process, store, and disseminate information to support decision making, coordination, control, analysis, and visualization in an organization.”[3]
As you can see, these definitions focus on two different ways of describing information systems: the components that make up an information system and the role those components play in an organization. Each of these needs to be examined.
The Components of Information Systems
Information systems can be viewed as having five major components: hardware, software, data, people, and processes. The first three are technology. These are probably what you thought of when defining information systems. The last two components, people and processes, separate the idea of information systems from more technical fields, such as computer science. In order to fully understand information systems, you will need to understand how all of these components work together to bring value to an organization.
Technology
Technology can be thought of as the application of scientific knowledge for practical purposes. From the invention of the wheel to the harnessing of electricity for artificial lighting, technology has become ubiquitous in daily life, to the degree that it is assumed to always be available for use regardless of location. As discussed before, the first three components of information systems – hardware, software, and data – all fall under the category of technology. Each of these will be addressed in an individual chapter. At this point a simple introduction should help you in your understanding.
Hardware
Hardware is the tangible, physical portion of an information system – the part you can touch. Computers, keyboards, disk drives, and flash drives are all examples of information systems hardware. How these hardware components function and work together will be covered in Chapter 2.
Software
Software comprises the set of instructions that tell the hardware what to do. Software is not tangible – it cannot be touched. Programmers create software by typing a series of instructions telling the hardware what to do. Two main categories of software are: Operating Systems and Application software. Operating Systems software provides the interface between the hardware and the Application software. Examples of operating systems for a personal computer include Microsoft Windows and Ubuntu Linux. The mobile phone operating system market is dominated by Google Android and Apple iOS. Application software allows the user to perform tasks such as creating documents, recording data in a spreadsheet, or messaging a friend. Software will be explored more thoroughly in Chapter 3.
Data
The third technology component is data. You can think of data as a collection of facts. For example, your address (street, city, state, postal code), your phone number, and your social networking account are all pieces of data. Like software, data is also intangible, unable to be seen in its native state. Pieces of unrelated data are not very useful. But aggregated, indexed, and organized together into a database, data can become a powerful tool for businesses. Organizations collect all kinds of data and use it to make decisions, which can then be analyzed for their effectiveness. The analysis of data is then used to improve the organization’s performance. Chapter 4 will focus on data and databases and how they are used in organizations.
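The point that unrelated facts become useful only once organized and aggregated can be shown with Python's built-in sqlite3 module. This is a minimal sketch: the sales table and the figures in it are invented for illustration.

```python
import sqlite3

# Individual facts (single sale amounts) are not very useful on their own.
# Organized into a database table and aggregated, they answer a business
# question: which region sells more? (Table and figures are made up.)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("West", 120.0), ("East", 75.5), ("West", 64.5), ("East", 99.0)],
)

# Aggregation turns raw data into decision-supporting information.
for region, total in conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
):
    print(region, total)
conn.close()
```

Running this prints one total per region — the kind of summary a manager would act on, which is exactly the data-to-information transformation described in this chapter.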
Networking Communication
Besides the technology components (hardware, software, and data) which have long been considered the core technology of information systems, it has been suggested that one other component should be added: communication. An information system can exist without the ability to communicate – the first personal computers were stand-alone machines that did not access the Internet. However, in today’s hyper-connected world, it is an extremely rare computer that does not connect to another device or to a network. Technically, the networking communication component is made up of hardware and software, but it is such a core feature of today’s information systems that it has become its own category. Networking will be covered in Chapter 5.
People
When thinking about information systems, it is easy to focus on the technology components and forget to look beyond these tools to fully understand their integration into an organization. A focus on the people involved in information systems is the next step. From the front-line user support staff, to systems analysts, to developers, all the way up to the chief information officer (CIO), the people involved with information systems are an essential element. The people component will be covered in Chapter 9.
Process
The last component of information systems is process. A process is a series of steps undertaken to achieve a desired outcome or goal. Information systems are becoming more integrated with organizational processes, bringing greater productivity and better control to those processes. But simply automating activities using technology is not enough – businesses looking to utilize information systems must do more. The ultimate goal is to improve processes both internally and externally, enhancing interfaces with suppliers and customers. Technology buzzwords such as “business process re-engineering,” “business process management,” and “enterprise resource planning” all have to do with the continued improvement of these business procedures and the integration of technology with them. Businesses hoping to gain a competitive advantage over their competitors are highly focused on this component of information systems. The process element in information systems will be discussed in Chapter 8.
The Role of Information Systems
You should now understand that information systems have a number of vital components, some tangible, others intangible, and still others of a personnel nature. These components collect, store, organize, and distribute data throughout the organization. You may have even realized that one of the roles of information systems is to take data and turn it into information, and then transform that information into organizational knowledge. As technology has developed, this role has evolved into the backbone of the organization, making information systems integral to virtually every business. The integration of information systems into organizations has progressed over the decades.
The Mainframe Era
From the late 1950s through the 1960s, computers were seen as a way to more efficiently do calculations. These first business computers were room-sized monsters, with several machines linked together. The primary work was to organize and store large volumes of information that were tedious to manage by hand. Only large businesses, universities, and government agencies could afford them, and they took a crew of specialized personnel and dedicated facilities to provide information to organizations.
Time-sharing allowed dozens or even hundreds of users to simultaneously access mainframe computers from locations in the same building or miles away. Typical functions included scientific calculations and accounting, all under the broader umbrella of “data processing.”
In the late 1960s, Material Requirements Planning (MRP) systems were introduced. This software, running on a mainframe computer, gave companies the ability to manage the manufacturing process, making it more efficient. From tracking inventory to creating bills of materials to scheduling production, the MRP systems gave more businesses a reason to integrate computing into their processes. IBM became the dominant mainframe company. Continued improvement in software and the availability of cheaper hardware eventually brought mainframe computers (and their little sibling, the minicomputer) into most large businesses.
Today you probably think of Silicon Valley in northern California as the center of computing and technology. But in the days of the mainframe’s dominance corporations in the cities of Minneapolis and St. Paul produced most computers. The advent of the personal computer resulted in the “center of technology” eventually moving to Silicon Valley.
The PC Revolution
In 1975, the first microcomputer was announced on the cover of Popular Electronics: the Altair 8800. Its immediate popularity sparked the imagination of entrepreneurs everywhere, and there were soon dozens of companies manufacturing these “personal computers.” Though at first just a niche product for computer hobbyists, improvements in usability and the availability of practical software led to growing sales. The most prominent of these early personal computer makers was a little company known as Apple Computer, headed by Steve Jobs and Steve Wozniak, with the hugely successful “Apple II.” Not wanting to be left out of the revolution, in 1981 IBM teamed with Microsoft, then just a startup company, for their operating system software and hurriedly released their own version of the personal computer simply called the “PC.” Small businesses finally had affordable computing that could provide them with needed information systems. Popularity of the IBM PC gave legitimacy to the microcomputer, and the personal computer was named Time magazine’s “Machine of the Year” for 1982.
Because of the IBM PC’s open architecture, it was easy for other companies to copy, or “clone” it. During the 1980s, many new computer companies sprang up, offering less expensive versions of the PC. This drove prices down and spurred innovation. Microsoft developed the Windows operating system, with version 3.1 in 1992 becoming the first commercially successful release. Typical uses for the PC during this period included word processing, spreadsheets, and databases. These early PCs were standalone machines, not connected to a network.
Client-Server
In the mid-1980s, businesses began to see the need to connect their computers as a way to collaborate and share resources. Known as “client-server,” this networking architecture allowed users to log in to the Local Area Network (LAN) from their PC (the “client”) by connecting to a central computer called a “server.” The server would look up permissions for each user to determine who had access to various resources such as printers and files. Software companies began developing applications that allowed multiple users to access the same data at the same time. This evolved into software applications for communicating, with the first popular use of electronic mail appearing at this time.
This networking and data sharing all stayed mainly within the confines of each business. Sharing of electronic data between companies was a very specialized function. Computers were now seen as tools to collaborate internally within an organization. These networks of computers were becoming so powerful that they were replacing many of the functions previously performed by the larger mainframe computers at a fraction of the cost. It was during this era that the first Enterprise Resource Planning (ERP) systems were developed and run on the client-server architecture. An ERP system is an application with a centralized database that can be used to run a company’s entire business. With separate modules for accounting, finance, inventory, human resources, and many more, ERP systems, with Germany’s SAP leading the way, represented the state of the art in information systems integration. ERP systems will be discussed in Chapter 9.
The Internet, World Wide Web and E-Commerce
The first long distance transmission between two computers occurred on October 29, 1969 when developers under the direction of Dr. Leonard Kleinrock sent the word “login” from the campus of UCLA to Stanford Research Institute in Menlo Park, California, a distance of over 350 miles. The United States Department of Defense created and funded ARPANET (the Advanced Research Projects Agency Network), an experimental network which eventually became known as the Internet. ARPANET began with just four nodes or sites, a very humble start for today’s Internet. Initially, the Internet was confined to use by universities, government agencies, and researchers. Users were required to type commands (today we refer to this as “command line”) in order to communicate and transfer files. The first e-mail messages on the Internet were sent in the early 1970s as a few very large companies expanded from local networks to the Internet. The computer was now evolving from a purely computational device into the world of digital communications.
In 1989, Tim Berners-Lee developed a simpler way for researchers to share information over the Internet, a concept he called the World Wide Web.[4] This invention became the catalyst for the growth of the Internet as a way for businesses to share information about themselves. As web browsers and Internet connections became the norm, companies rushed to grab domain names and create websites.
In 1991 the National Science Foundation, which governed how the Internet was used, lifted restrictions on its commercial use. Corporations soon realized the huge potential of a digital marketplace on the Internet, and in 1994 both eBay and Amazon were founded. A mad rush of investment in Internet-based businesses led to the dot-com boom through the late 1990s, and then the dot-com bust in 2000. The bust occurred as investors, tired of seeing hundreds of companies reporting losses, abandoned their investments. An important outcome for businesses was that thousands of miles of Internet connections, in the form of fiber optic cable, were laid around the world during that time. The world became truly “wired” heading into the new millennium, ushering in the era of globalization, which will be discussed in Chapter 11. This TED Talk video focuses on connecting Africa to the Internet through undersea fiber optic cable.
The digital world also became a more dangerous place as virtually all companies connected to the Internet. Computer viruses and worms, once slowly propagated through the sharing of computer disks, could now grow with tremendous speed via the Internet. Software and operating systems written for a standalone world found it very difficult to defend against these sorts of threats. A whole new industry of computer and Internet security arose. Information security will be discussed in Chapter 6.
Web 2.0
As the world recovered from the dot-com bust, the use of technology in business continued to evolve at a frantic pace. Websites became interactive. Instead of just visiting a site to find out about a business and then purchase its products, customers wanted to be able to customize their experience and interact online with the business. This new type of interactive website, where you did not have to know how to create a web page or do any programming in order to put information online, became known as Web 2.0. This new stage of the Web was exemplified by blogging, social networking, and interactive comments being available on many websites. The new Web 2.0 world, in which online interaction became expected, had a major impact on many businesses and even whole industries. Many bookstores found themselves relegated to a niche status. Video rental chains and travel agencies simply began going out of business as they were replaced by online technologies. The newspaper industry saw a huge drop in circulation with some cities such as New Orleans no longer able to support a daily newspaper.
Disintermediation is the process of technology replacing a middleman in a transaction. Web 2.0 allowed users to get information and news online, reducing dependence on physical books and newspapers.
As the world became more connected, new questions arose. Should access to the Internet be considered a right? Is it legal to copy a song that had been downloaded from the Internet? Can information entered into a website be kept private? What information is acceptable to collect from children? Technology moved so fast that policymakers did not have enough time to enact appropriate laws. Ethical issues surrounding information systems will be covered in Chapter 12.
The Post-PC World, Sort of
Ray Ozzie, a technology visionary at Microsoft, stated in 2012 that computing was moving into a phase he called the post-PC world.[5] Now six years later that prediction has not stood up very well to reality. As you will read in Chapter 13, PC sales have dropped slightly in recent years while there has been a precipitous decline in tablet sales. Smartphone sales have accelerated, due largely to their mobility and ease of operation. Just as the mainframe before it, the PC will continue to play a key role in business, but its role will be somewhat diminished as people emphasize mobility as a central feature of technology. Cloud computing provides users with mobile access to data and applications, making the PC more of a part of the communications channel rather than a repository of programs and information. Innovation in the development of technology and communications will continue to move businesses forward.
Eras of Business Computing
| Era | Hardware | Operating System | Applications |
|---|---|---|---|
| Mainframe (1970s) | Terminals connected to mainframe computer | Time-sharing (TSO) on Multiple Virtual Storage (MVS) | Custom-written MRP software |
| PC (mid-1980s) | IBM PC or compatible. Sometimes connected to mainframe computer via network interface card. | MS-DOS | WordPerfect, Lotus 1-2-3 |
| Client-Server (late 80s to early 90s) | IBM PC “clone” on a Novell Network. | Windows for Workgroups | Microsoft Word, Microsoft Excel |
| World Wide Web (mid-90s to early 2000s) | IBM PC “clone” connected to company intranet. | Windows XP | Microsoft Office, Internet Explorer |
| Web 2.0 (mid-2000s – present) | Laptop connected to company Wi-Fi. | Windows 10 | Microsoft Office |
| Post-PC (today and beyond) | Smartphones | Android, iOS | Mobile-friendly websites, mobile apps |
Can Information Systems Bring Competitive Advantage?
It has always been the assumption that the implementation of information systems will bring a business competitive advantage. If installing one computer to manage inventory can make a company more efficient, then it can be expected that installing several computers can improve business processes and efficiency.
In 2003, Nicholas Carr wrote an article in the Harvard Business Review entitled “I.T. Doesn’t Matter,” which questioned this assumption. Carr was concerned that information technology had become just a commodity. Instead of viewing technology as an investment that will make a company stand out, Carr said technology would become as common as electricity – something to be managed to reduce costs, ensure that it is always running, and be as risk-free as possible.
The article was both hailed and scorned. Can I.T. bring a competitive advantage to an organization? It sure did for Walmart (see sidebar). Technology and competitive advantage will be discussed in Chapter 7.
Sidebar: Walmart Uses Information Systems to Become the World’s Leading Retailer
Walmart is the world’s largest retailer, earning \$14.3 billion on sales of more than \$500 billion for the fiscal year that ended on January 31, 2018. Walmart currently serves over 260 million customers every week worldwide through its 11,700 stores in 28 countries.[6] In 2018 Fortune magazine ranked Walmart the number one company for annual revenue for the sixth straight year, as the company again exceeded \$500 billion in annual sales. The next closest company, Exxon, had less than half of Walmart’s total revenue.[7] Walmart’s rise to prominence is due in large part to making information systems a high priority, especially its Supply Chain Management (SCM) system known as Retail Link.
This system, unique when initially implemented in the mid-1980s, allowed Walmart’s suppliers to directly access the inventory levels and sales information of their products at any of Walmart’s more than eleven thousand stores. Using Retail Link, suppliers can analyze how well their products are selling at one or more Walmart stores with a range of reporting options. Further, Walmart requires the suppliers to use Retail Link to manage their own inventory levels. If a supplier feels that their products are selling out too quickly, they can use Retail Link to petition Walmart to raise the inventory levels for their products. This has essentially allowed Walmart to “hire” thousands of product managers, all of whom have a vested interest in the products they are managing. This revolutionary approach to managing inventory has allowed Walmart to continue to drive prices down and respond to market forces quickly.
Today Walmart continues to innovate with information technology. Using its tremendous market presence, any technology that Walmart requires its suppliers to implement immediately becomes a business standard. For example, in 1983 Walmart became the first large retailer to require suppliers to use Uniform Product Code (UPC) labels on all products. Clearly, Walmart has learned how to use I.T. to gain a competitive advantage.
Summary
In this chapter you have been introduced to the concept of information systems. Several definitions focused on the main components: technology, people, and process. You saw how the business use of information systems has evolved over the years, from the use of large mainframe computers for number crunching, through the introduction of the PC and networks, all the way to the era of mobile computing. During each of these phases, new innovations in software and technology allowed businesses to integrate technology more deeply into their organizations.
Virtually every company uses information systems, which leads to the question: do information systems bring a competitive advantage? In the final analysis, the goal of this book is to help you understand the importance of information systems in making an organization more competitive. Your challenge is to understand the key components of an information system and how it can be used to bring a competitive advantage to every organization you will serve in your career.
Study Questions
1. What are the five major components that make up an information system?
2. List three examples of information system hardware.
3. Microsoft Windows is an example of which component of information systems?
4. What is application software?
5. What roles do people play in information systems?
6. What is the definition of a process?
7. What was invented first, the personal computer or the Internet?
8. In what year were restrictions on commercial use of the Internet first lifted?
9. What is Carr’s main argument about information technology?
Exercises
1. Suppose that you had to explain to a friend the concept of an information system. How would you define it? Write a one-paragraph description in your own words that you feel would best describe an information system to your friends or family.
2. Of the five primary components of an information system (hardware, software, data, people, process), which do you think is the most important to the success of a business organization? Write a one-paragraph answer to this question that includes an example from your personal experience to support your answer.
3. Everyone interacts with various information systems every day: at the grocery store, at work, at school, even in our cars. Make a list of the different information systems you interact with daily. Can you identify the technologies, people, and processes involved in making these systems work?
4. Do you agree that we are in a post-PC stage in the evolution of information systems? Do some original research and cite it as you make your prediction about what business computing will look like in the next generation.
5. The Walmart sidebar introduced you to how information systems were used to make Walmart the world’s leading retailer. Walmart has continued to innovate and is still looked to as a leader in the use of technology. Do some original research and write a one-page report detailing a new technology that Walmart has recently implemented or is pioneering.
Labs
1. Examine your PC. Using a four column table format identify and record the following information: 1st column: Program name, 2nd column: software manufacturer, 3rd column: software version, 4th column: software type (editor/word processor, spreadsheet, database, etc.).
2. Examine your mobile phone. Create another four column table similar to the one in Lab #1. This time identify the apps, then record the requested information.
3. In this chapter you read about the evolution of computing from mainframe computers to PCs and on to smartphones. Create a four column table and record the following information about your own electronic devices: 1st column – Type: PC or smartphone, 2nd column – Operating system including version, 3rd column – Storage capacity, 4th column – Storage available.
1. Laudon, K.C. and Laudon, J. P. (2014) Management Information Systems, thirteenth edition. Upper Saddle River, New Jersey: Pearson.
2. Valacich, J. and Schneider, C. (2010). Information Systems Today – Managing in the Digital World, fourth edition. Upper Saddle River, New Jersey: Prentice-Hall.
3. Laudon, K.C. and Laudon, J. P. (2012). Management Information Systems, twelfth edition. Upper Saddle River, New Jersey: Prentice-Hall.
4. CERN. (n.d.) The Birth of the Web. Retrieved from http://public.web.cern.ch/public/en/about/web-en.html
5. Marquis, J. (2012, July 16) What is the Post-PC World? Online Universities.com. Retrieved from https://www.onlineuniversities.com/b...post-pc-world/
6. Walmart. (n.d.) 2017 Annual Report. Retrieved from http://s2.q4cdn.com/056532643/files/...017_AR-(1).pdf
7. McCoy, K. (2018, May 21). Big Winners in Fortune 500 List. USA Today. Retrieved from https://www.usatoday.com/story/money/2018/05/21/big-winners-fortune-500-list-walmart-exxon-mobil-amazon/628003002/ | textbooks/workforce/Information_Technology/Information_Systems/Information_Systems_for_Business_and_Beyond_(Bourgeois)_(2019_Edition)/01%3A_What_is_an_information_system/100%3A_What_Is_an_Information_System.txt
Learning Objectives
Upon successful completion of this chapter, you will be able to:
• describe information systems hardware;
• identify the primary components of a computer and the functions they perform; and
• explain the effect of the commoditization of the personal computer.
Introduction
As you learned in the first chapter, an information system is made up of five components: hardware, software, data, people, and process. The physical parts of computing devices – those that you can actually touch – are referred to as hardware. In this chapter, you will take a look at this component of information systems, learn a little bit about how it works, and discuss some of the current trends surrounding it.
As stated above, computer hardware encompasses digital devices that you can physically touch. This includes devices such as the following:
• desktop computers
• laptop computers
• mobile phones
• tablet computers
• e-readers
• storage devices, such as flash drives
• input devices, such as keyboards, mice, and scanners
• output devices such as printers and speakers.
Besides these more traditional computer hardware devices, many items that were once not considered digital devices are now becoming computerized themselves. Digital technologies are being integrated into many everyday objects so the days of a device being labeled categorically as computer hardware may be ending. Examples of these types of digital devices include automobiles, refrigerators, and even beverage dispensers. In this chapter, you will also explore digital devices, beginning with defining what is meant by the term itself.
Digital Devices
A digital device processes electronic signals into discrete values, of which there can be two or more. In comparison analog signals are continuous and can be represented by a smooth wave pattern. You might think of digital (discrete) as being the opposite of analog.
Many electronic devices process signals into two discrete values, typically known as binary. These values are represented as either a one (“on”) or a zero (“off”). It is commonly accepted to refer to the on state as representing the presence of an electronic signal. It then follows that the off state is represented by the absence of an electronic signal. Note: Technically, the voltages in a system are evaluated with high voltages converted into a one or on state and low voltages converted into a zero or off state.
Each one or zero is referred to as a bit (a blending of the two words “binary” and “digit”). A group of eight bits is known as a byte. The first personal computers could process 8 bits of data at once. The number of bits that can be processed by a computer’s processor at one time is known as word size. Today’s PCs can process 64 bits of data at a time which is where the term 64-bit processor comes from. You are most likely using a computer with a 64-bit processor.
Sidebar: Understanding Binary
The numbering system you first learned was Base 10, also known as decimal. In Base 10 each column in the number represents a power of 10, with the exponent increasing in each column as you move to the left, as shown in the table:
| Thousands | Hundreds | Tens | Units |
|---|---|---|---|
| 10³ | 10² | 10¹ | 10⁰ |
The rightmost column represents units, or the values zero through nine. The next column to the left represents tens (the teens, twenties, thirties, etc.), followed by the hundreds column (one hundred, two hundred, etc.), then the thousands column (one thousand, two thousand), etc. Expanding the table above, you can write the number 3456 as follows:
| Thousands | Hundreds | Tens | Units |
|---|---|---|---|
| 10³ | 10² | 10¹ | 10⁰ |
| 3 | 4 | 5 | 6 |
| 3000 | 400 | 50 | 6 |
Computers use the Base 2 numbering system. Similar to Base 10, each column has a Base of 2 and has an increasing exponent value moving to the left as shown in the table below:
| Two cubed | Two squared | Two | Units |
|---|---|---|---|
| 2³ | 2² | 2¹ | 2⁰ |
The rightmost column represents 2⁰, or units (1). The next column to the left represents 2¹, or twos (2). The third column represents 2² (4) and the fourth column represents 2³ (8). Expanding the table above, you can see how the decimal number 15 is converted to 1111 in binary as follows:
| Two cubed | Two squared | Two | Units |
|---|---|---|---|
| 2³ | 2² | 2¹ | 2⁰ |
| 1 | 1 | 1 | 1 |
| 8 | 4 | 2 | 1 |

8 + 4 + 2 + 1 = 15
Understanding binary is important because it helps us understand how computers store and transmit data. A “bit” is the lowest level of data storage, stored as either a one or a zero. If a computer wants to communicate the number 15, it would need to send 1111 in binary (as shown above). This is four bits of data since four digits are needed. A “byte” is 8 bits. If a computer wanted to transmit the number 15 in a byte, it would send 00001111. The highest number that can be sent in a byte is 255, which is 11111111, which is equal to 2⁷ + 2⁶ + 2⁵ + 2⁴ + 2³ + 2² + 2¹ + 2⁰.
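You can verify these conversions yourself. The short sketch below uses Python’s built-in `format()` and `int()` functions to mirror the hand calculation above; the variable names are just for illustration.

```python
# Convert the decimal number 15 to binary, mirroring the hand calculation above.
n = 15
binary = format(n, "b")    # "1111" -> four bits
padded = format(n, "08b")  # "00001111" -> one full byte
print(binary, padded)

# Reverse direction: interpret a binary string as a decimal value.
value = int("00001111", 2)
print(value)               # 15

# The largest value a single byte can hold:
print(int("11111111", 2))  # 255
```

Running this prints the same values worked out in the tables: 1111 is four bits, 00001111 is the same number padded to a byte, and a byte of all ones equals 255.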
As the capacities of digital devices grew, new terms were developed to identify the capacities of processors, memory, and disk storage space. Prefixes were applied to the word byte to represent different orders of magnitude. Since these are digital specifications, the prefixes were originally meant to represent multiples of 1024 (which is 2¹⁰), but have more recently been rounded for the sake of simplicity to mean multiples of 1000, as shown in the table below:
| Prefix | Represents | Example |
|---|---|---|
| kilo | one thousand | kilobyte = one thousand bytes |
| mega | one million | megabyte = one million bytes |
| giga | one billion | gigabyte = one billion bytes |
| tera | one trillion | terabyte = one trillion bytes |
| peta | one quadrillion | petabyte = one quadrillion bytes |
| exa | one quintillion | exabyte = one quintillion bytes |
| zetta | one sextillion | zettabyte = one sextillion bytes |
| yotta | one septillion | yottabyte = one septillion bytes |
Tour of a PC
All personal computers consist of the same basic components: a Central Processing Unit (CPU), memory, circuit board, storage, and input/output devices. Almost every digital device uses the same set of components, so examining the personal computer will give you insight into the structure of a variety of digital devices. Here’s a “tour” of a personal computer.
Processing Data: The CPU
The core of a computer is the Central Processing Unit, or CPU. It can be thought of as the “brains” of the device. The CPU carries out the commands sent to it by the software and returns results to be acted upon.
The earliest CPUs were large circuit boards with limited functionality. Today, a CPU can perform a large variety of functions. There are two primary manufacturers of CPUs for personal computers: Intel and Advanced Micro Devices (AMD).
The speed (“clock time”) of a CPU is measured in hertz. A hertz is defined as one cycle per second. A kilohertz (abbreviated kHz) is one thousand cycles per second, a megahertz (MHz) is one million cycles per second, and a gigahertz (GHz) is one billion cycles per second. The CPU’s processing power is increasing at an amazing rate (see the sidebar about Moore’s Law).
Besides a faster clock time, today’s CPU chips contain multiple processors. These chips, known as dual-core (two processors) or quad-core (four processors), increase the processing power of a computer by providing the capability of multiple CPUs all sharing the processing load. Intel’s Core i7 processors contain 6 cores and their Core i9 processors contain 16 cores. This video shows how a CPU works.
Sidebar: Moore’s Law and Huang’s Law
As you know computers get faster every year. Many times we are not sure if we want to buy today’s model because next week it won’t be the most advanced any more. Gordon Moore, one of the founders of Intel, recognized this phenomenon in 1965, noting that microprocessor transistor counts had been doubling every year.[1] His insight eventually evolved into Moore’s Law:
The number of transistors on a chip doubles every two years.
Moore’s Law has been generalized into the concept that computing power will double every two years for the same price point. Another way of looking at this is to think that the price for the same computing power will be cut in half every two years. Moore’s Law has held true for over forty years (see figure below).
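Exponential doubling is easy to underestimate, so it is worth computing. The sketch below projects transistor counts under a doubling every two years; the starting year and count are illustrative placeholders, not historical data for any particular chip.

```python
# Project growth under Moore's Law: a doubling every two years.
# Starting year and transistor count are hypothetical, for illustration only.
start_year, start_count = 2000, 42_000_000

for year in range(2000, 2021, 4):
    doublings = (year - start_year) / 2
    count = start_count * 2 ** doublings
    print(year, f"{count:,.0f}")
```

After just twenty years (ten doublings), the count has grown by a factor of 1024 — which is also why halting this trend, as the text notes, is such a significant shift for the industry.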
The limits of Moore’s Law are now being reached and circuits cannot be reduced further. However, Huang’s Law regarding Graphics Processing Units (GPUs) may extend well into the future. Nvidia’s CEO Jensen Huang spoke at the GPU Technology Conference in March 2018, announcing that the speed of GPUs is increasing faster than Moore’s Law predicts. Nvidia’s GPUs are 25 times faster than those of five years earlier. He attributed the advancement to advances in architecture, memory technology, algorithms, and interconnects.[2]
Motherboard
The motherboard is the main circuit board on the computer. The CPU, memory, and storage components, among other things, all connect into the motherboard. Motherboards come in different shapes and sizes, depending upon how compact or expandable the computer is designed to be. Most modern motherboards have many integrated components, such as network interface card, video, and sound processing, which previously required separate components.
The motherboard provides much of the bus of the computer (the term bus refers to the electrical connections between different computer components). The bus is an important factor in determining the computer’s speed – the combination of how fast the bus can transfer data and the number of data bits that can be moved at one time determine the speed. The traces shown in the image are on the underside of the motherboard and provide connections between motherboard components.
Random-Access Memory
When a computer boots, it begins to load information from storage into its working memory. This working memory, called Random-Access Memory (RAM), can transfer data much faster than the hard disk. Any program that you are running on the computer is loaded into RAM for processing. In order for a computer to work effectively, some minimal amount of RAM must be installed. In most cases, adding more RAM will allow the computer to run faster. Another characteristic of RAM is that it is “volatile.” This means that it can store data as long as it is receiving power. When the computer is turned off, any data stored in RAM is lost.
RAM is generally installed in a personal computer through the use of a Double Data Rate (DDR) memory module. The type of DDR accepted into a computer is dependent upon the motherboard. There have been basically four generations of DDR: DDR1, DDR2, DDR3, and DDR4. Each generation runs faster than the previous with DDR4 capable of speeds twice as fast as DDR3 while consuming less voltage.
Hard Disk
While the RAM is used as working memory, the computer also needs a place to store data for the longer term. Most of today’s personal computers use a hard disk for long-term data storage. A hard disk is considered non-volatile storage because when the computer is turned off the data remains in storage on the disk, ready for when the computer is turned on. Drives with a capacity less than 1 Terabyte usually have just one platter. Notice the single platter in the image. The read/write arm must be positioned over the appropriate track before accessing or writing data.
Solid State Drives
Solid State Drives (SSD) are becoming more popular in personal computers. The SSD performs the same function as a hard disk, namely long-term storage. Instead of spinning disks, the SSD uses flash memory that incorporates EEPROM (Electrically Erasable Programmable Read Only Memory) chips, which is much faster.
Solid-state drives are currently a bit more expensive than hard disks. However, the use of flash memory instead of disks makes them much lighter and faster than hard disks. SSDs are primarily utilized in portable computers, making them lighter, more durable, and more efficient. Some computers combine the two storage technologies, using the SSD for the most accessed data (such as the operating system) while using the hard disk for data that is accessed less frequently. SSDs are considered more reliable since there are no moving parts.
Removable Media
Removable storage has changed greatly over the four decades of PCs. Floppy disks were replaced by CD-ROM drives, which in turn were replaced by USB (Universal Serial Bus) drives. USB drives are now standard on all PCs with capacities approaching 512 gigabytes. Speeds have also increased from 480 megabits per second in USB 2.0 to 10 gigabits per second in USB 3.1. USB devices also use EEPROM technology.[3]
Network Connection
Personal computers were stand-alone units when first developed, which meant that data was brought into the computer or removed from it via removable media. Beginning in the mid-1980s, however, organizations began to see the value in connecting computers together via a digital network. Because of this, personal computers needed the ability to connect to these networks. Initially, this was done by adding an expansion card to the computer that enabled the network connection. These cards were known as Network Interface Cards (NIC). By the mid-1990s an Ethernet network port was built into the motherboard on most personal computers. As wireless technologies began to dominate in the early 2000s, many personal computers also began including wireless networking capabilities. Digital communication technologies will be discussed further in Chapter 5.
Input and Output
In order for a personal computer to be useful, it must have channels for receiving input from the user and channels for delivering output to the user. These input and output devices connect to the computer via various connection ports, which generally are part of the motherboard and are accessible outside the computer case. In early personal computers, specific ports were designed for each type of output device. The configuration of these ports has evolved over the years, becoming more and more standardized over time. Today, almost all devices plug into a computer through the use of a USB port. This port type, first introduced in 1996, has increased in its capabilities, both in its data transfer rate and power supplied.
Bluetooth
Besides USB, some input and output devices connect to the computer via a wireless-technology standard called Bluetooth, which was invented in 1994. Bluetooth exchanges data using radio waves over short distances, typically up to 10 meters and in some cases up to 100 meters. Two devices communicating with Bluetooth must both have a Bluetooth communication chip installed. Common Bluetooth uses include pairing a phone with a car, as well as wireless computer keyboards, speakers, headsets, and home security devices, to name just a few.
Input Devices
All personal computers need components that allow the user to input data. Early computers simply used a keyboard for entering data or selecting an item from a menu to run a program. With the advent of operating systems offering a graphical user interface, the mouse became a standard component of a computer. These two components are still the primary input devices to a personal computer, though variations of each have been introduced with varying levels of success over the years. For example, many new devices now use a touch screen as the primary way of data entry.
Other input devices include scanners which allow users to input documents into a computer either as images or as text. Microphones can be used to record audio or give voice commands. Webcams and other types of video cameras can be used to record video or participate in a video chat session.
Output Devices
Output devices are essential as well. The most obvious output device is a display or monitor, visually representing the state of the computer. In some cases, a personal computer can support multiple displays or be connected to larger-format displays such as a projector or large-screen television. Other output devices include speakers for audio output and printers for hardcopy output.
Sidebar: Which Hardware Components Contribute to the Speed of Your Computer
The speed of a computer is determined by many elements, some related to hardware and some related to software. In hardware, speed is improved by giving the electrons shorter distances to travel in completing a circuit. Since the first CPU was created in the early 1970s, engineers have constantly worked to figure out how to shrink these circuits and put more and more circuits onto the same chip – these are known as integrated circuits. And this work has paid off – the speed of computing devices has been continuously improving.
Multi-core processors, or CPUs, have contributed to faster speeds. Intel engineers have also improved CPU speeds by using QuickPath Interconnect, a technique which minimizes the processor’s need to communicate directly with RAM or the hard drive. Instead, the CPU contains a cache of frequently used data for a particular program. An algorithm evaluates a program’s data usage and determines which data should be temporarily stored in the cache.
The hardware components that contribute to the speed of a personal computer are the CPU, the motherboard, RAM, and the hard disk. In most cases, these items can be replaced with newer, faster components. The table below shows how each of these contributes to the speed of a computer. Besides upgrading hardware, there are many changes that can be made to the software of a computer to make it faster.
| Component | Speed measured by | Units | Description |
|---|---|---|---|
| CPU | Clock speed | GHz (billions of cycles per second) | Hertz indicates the time it takes to complete a cycle. |
| Motherboard | Bus speed | MHz | The speed at which data can move across the bus. |
| RAM | Data transfer rate | MB/s (megabytes per second) | The rate at which data is transferred from memory to the system. |
| Hard Disk | Access time | ms (milliseconds) | The time it takes for the drive to locate the data to be accessed. |
| Hard Disk | Data transfer rate | MBit/s | The rate at which data is transferred from disk to the system. |
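The table above can be turned into a rough back-of-the-envelope calculation of how storage transfer rates affect everyday tasks, such as loading a large file. The rates below are illustrative assumptions for typical devices, not measured benchmarks.

```python
# Rough estimate of how long loading a 4 GB file takes from different
# storage devices. All transfer rates are illustrative assumptions.
file_size_mb = 4_000  # a 4 GB file, using the decimal convention

rates_mb_per_s = {
    "Hard disk": 150,  # typical spinning disk (assumed)
    "SATA SSD": 550,   # typical solid state drive (assumed)
    "USB 2.0": 60,     # 480 Mbit/s theoretical max is about 60 MB/s
}

for device, rate in rates_mb_per_s.items():
    seconds = file_size_mb / rate
    print(f"{device}: about {seconds:.1f} seconds")
```

Even these rough numbers show why replacing a hard disk with an SSD is one of the most noticeable upgrades a user can make: the same file loads several times faster.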
Other Computing Devices
A personal computer is designed to be a general-purpose device, able to solve many different types of problems. As the technologies of the personal computer have become more commonplace, many of its components have been integrated into other devices that previously were purely mechanical, and the definition of what constitutes a computer has changed. Portability has been an important feature for most users. Here is an overview of some trends in personal computing.
Portable Computers
Portable computing today includes laptops, notebooks and netbooks, many weighing less than 4 pounds and providing longer battery life. The MacBook Air is a good example of this: it weighs less than three pounds and is only 0.68 inches thick!
Netbooks (short for Network Books) are extremely light because they do not have a hard drive, depending instead on the Internet “cloud” for data and application storage. Netbooks depend on a Wi-Fi connection and can run Web browsers as well as a word processor.
Smartphones
While cell phones were introduced in the 1970s, smartphones have only been around for the past 20 years. As cell phones evolved they gained a broader array of features and programs. Today’s smartphones provide the user with telephone, email, location, and calendar services, to name a few. They function as a highly mobile computer, able to connect to the Internet through either cell technology or Wi-Fi. Smartphones have revolutionized computing, bringing the one feature PCs and laptops could not deliver, namely mobility. Consider the following data regarding mobile computing [4]:
1. There were 3.7 billion global mobile Internet users as of January 2018.
2. Mobile devices influenced sales to the tune of over \$1.4 trillion in 2016.
3. Mobile commerce revenue in the U.S. is projected to be \$459.38 billion in 2018, and it is estimated to be \$693.36 billion by 2019.
4. By the end of 2018, over \$1 trillion — or 75 percent — of ecommerce sales in China will be done via mobile devices.
5. The average order value for online orders placed on Smartphones in the first quarter of 2018 is \$84.55 while the average order value for orders placed on Tablets is \$94.91.
6. Of the 2.79 billion active social media users in the world, 2.55 billion actively use their mobile devices for social media-related activities.
7. 90 percent of the time spent on mobile devices is spent in apps.
8. Mobile traffic is responsible for 52.2 percent of Internet traffic in 2018 — compared to 50.3 percent from 2017.
9. While the total percentage of mobile traffic is more than desktop, engagement is higher on desktop. 55.9 percent of time spent on sites is by desktop users and 40.1 percent of time spent on sites is by mobile users.
10. By 2020, mobile commerce will account for 45 percent of all e-commerce activities — compared to 20.6 percent in 2016.
The Apple iPhone was introduced in January 2007 and went on the market in June of that same year. Its ease of use and intuitive interface made it an immediate success and solidified the future of smartphones. The first Android phone was released in 2008 with functionality similar to the iPhone.
Tablet Computers
A tablet computer uses a touch screen as its primary input and is small enough and light enough to be easily transported. They generally have no keyboard and are self-contained inside a rectangular case. Apple set the standard for tablet computing with the introduction of the iPad in 2010 using iOS, the operating system of the iPhone. After the success of the iPad, computer manufacturers began to develop new tablets that utilized operating systems that were designed for mobile devices, such as Android.
Global market share for tablets has changed since the early days of Apple’s dominance. Today the iPad has about 25% of the global market while Amazon Fire has 15% and Samsung Galaxy has 14%. [5] However, the popularity of tablets has declined sharply in recent years.
Integrated Computing and Internet of Things (IoT)
Along with advances in computers themselves, computing technology is being integrated into many everyday products. From automobiles to refrigerators to airplanes, computing technology is enhancing what these devices can do and is adding capabilities into our every day lives thanks in part to IoT.
Internet of Things and the Cloud
The Internet of Things (IoT) is a network of billions of devices, each with their own unique network address, around the world with embedded electronics allowing them to connect to the Internet for the purpose of collecting and sharing data, all without the involvement of human beings.[6]
Objects ranging from a simple light bulb to a fitness band such as FitBit to a driverless truck are all part of IoT thanks to the processors inside them. A smartphone app can control and/or communicate with each of these devices as well as others such as electric garage door openers (for those who can’t recall if the door has been closed), kitchen appliances (“Buy milk after work today.”), thermostats such as Nest, home security, audio speakers, and the feeding of pets.
Here are three of the latest ways that computing technologies are being integrated into everyday products through IoT:
• How IoT Works
• The Smart House
• The Self-Driving Car
The Commoditization of the Personal Computer
Over the past forty years, as the personal computer has gone from technical marvel to part of everyday life, it has also become a commodity. There is very little differentiation between computer models and manufacturers, and the primary factor that controls their sale is their price. Hundreds of manufacturers around the world create parts for personal computers, which are then purchased and assembled. As commodities, there are essentially no differences between computers made by these different companies. Profit margins for personal computers are minimal, leading hardware developers to find the lowest-cost manufacturing methods.
There is one brand of computer for which this is not the case – Apple. Because Apple does not make computers that run on the same open standards as other manufacturers, they can design and manufacture a unique product that no one can easily copy. By creating what many consider to be a superior product, Apple can charge more for their computers than other manufacturers. Just as with the iPad and iPhone, Apple has chosen a strategy of differentiation, an attempt to avoid commoditization.
Summary
Information systems hardware consists of the components of digital technology that you can touch. This chapter covered the components that make up a personal computer, with the understanding that the configuration of a personal computer is very similar to that of any type of digital computing device. A personal computer is made up of many components, most importantly the CPU, motherboard, RAM, hard disk, removable media, and input/output devices. Variations on the personal computer, such as the smartphone, were also examined. Finally, commoditization of the personal computer was addressed.
Study Questions
1. Write your own description of what the term information systems hardware means.
2. What has led to the shift toward mobility in computing?
3. What is the impact of Moore’s Law on the various hardware components described in this chapter?
4. Write a one page summary of one of the items linked to in the “Integrated Computing” section.
5. Explain why the personal computer is now considered a commodity.
6. The CPU can also be thought of as the _____________ of the computer.
7. List the units of measure for data storage in increasing order from smallest to largest, kilobyte to yottabyte.
8. What is the bus of a computer?
9. Name two differences between RAM and a hard disk.
10. What are the advantages of solid-state drives over hard disks?
Exercises
1. If you could build your own personal computer, what components would you purchase? Put together a list of the components you would use to create it, including a computer case, motherboard, CPU, hard disk, RAM, and DVD drive. How can you be sure they are all compatible with each other? How much would it cost? How does this compare to a similar computer purchased from a vendor such as Dell or HP?
2. Re-read the section on IoT, then find at least two scholarly articles about IoT. Prepare a minimum of three slides that address issues related to IoT. Be sure to give attribution to your sources.
3. What is the current status of solid-state drives vs. hard disks? Research online and compare prices, capacities, speed, and durability. Again, be sure to give attribution to your sources.
Labs
1. Review the sidebar on the binary number system. Represent the following decimal numbers in binary: 16, 100. Represent the following binary numbers in decimal: 1011, 100100. Write the decimal number 254 in an 8-bit byte.
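After working the conversions by hand, you can check your answers with Python's built-in number conversions. The snippet below demonstrates the tools on a different number (5) so the lab answers are not given away:

```python
# bin() converts decimal to binary; int(s, 2) converts a binary string back
# to decimal; format(n, "08b") pads the binary form to an 8-bit byte
print(bin(5))            # '0b101'
print(int("101", 2))     # 5
print(format(5, "08b"))  # '00000101'
```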
2. Re-read the section on IoT, then look around your building (dorm, apartment, or house) and make a list of possible instances of IoTs. Be sure to list their location and likely function.
1. Moore, G. E. (1965). Cramming more components onto integrated circuits. Electronics Magazine, 4.
2. Huang, J. (2018, April 2). Move Over Moore’s Law: Make Room for Huang’s Law. IEEE Spectrum. Retrieved from https://spectrum.ieee.org/view-from-...for-huangs-law
3. Wikipedia. (n.d.) Universal Serial Bus. Retrieved from en.Wikipedia.org/wiki/USB.
4. Stevens, J. (2017). Mobile Internet Statistics and Facts 2017. Hosting Facts, August 17, 2017. Retrieved from https://hostingfacts.com/internet-facts-stats-2016/
5. Statista. (2018). Global market share held by tablet vendors 4th quarter 2017. Retrieved from https://www.statista.com/statistics/...ablet-vendors/
6. Ranger, S. (2018, January 19). What is the IoT? ZDNet. Retrieved from http://www.zdnet.com/article/what-is...iot-right-now/.
Learning Objectives
Upon successful completion of this chapter, you will be able to:
• define the term software;
• identify and describe the two primary categories of software;
• describe the role ERP software plays in an organization;
• describe cloud computing and its advantages and disadvantages for use in an organization; and
• define the term open-source and identify its primary characteristics.
Introduction
The second component of an information system is software, the set of instructions that tells the hardware what to do. Software is created by developers through the process of programming (covered in more detail in Chapter 10). Without software, the hardware would not be functional.
Types of Software
Software can be broadly divided into two categories: operating systems and application software. Operating systems manage the hardware and create the interface between the hardware and the user. Application software performs specific tasks such as word processing, accounting, database management, video games, or browsing the web.
Operating Systems
An operating system is first loaded into the computer by the boot program; it then manages all of the programs in the computer, including both programs native to the operating system, such as file and memory management, and application software. Operating systems provide you with these key functions:
1. managing the hardware resources of the computer;
2. providing the user-interface components;
3. providing a platform for software developers to write applications.
All computing devices require an operating system. The most popular operating systems for personal computers are: Microsoft Windows, Apple’s Mac OS, and various versions of Linux. Smartphones and tablets run operating systems as well, such as iOS (Apple), Android (Google), Windows Mobile (Microsoft), and Blackberry.
Microsoft provided the first operating system for the IBM-PC, released in 1981. Their initial venture into a Graphical User Interface (GUI) operating system, known as Windows, occurred in 1985. Today’s Windows 10 supports the 64-bit Intel CPU. Recall that “64-bit” indicates the size of data that can be moved within the computer.
Apple introduced the Macintosh computer in 1984 with the first commercially successful GUI. Apple’s operating system for the Macintosh is known as “Mac OS” and also uses an Intel CPU supporting 64-bit processing. Mac OS versions have been named after mountains such as El Capitan, Sierra, and High Sierra. Multitasking, virtual memory, and voice input have become standard features of both operating systems.
The Linux operating system is open source, meaning individual developers are allowed to make modifications to its programming code. Linux is a version of the Unix operating system, which runs on large and expensive minicomputers. Linus Torvalds, then a student in Finland, wanted to find a way to make Unix run on less expensive personal computers. Linux has many variations and now powers a large percentage of web servers in the world.
Sidebar: Why Is Microsoft Software So Dominant in the Business World?
If you’ve worked in business, you may have noticed that almost all computers in business run a version of Microsoft Windows. However, in classrooms from elementary to college, there is almost a balance between Macs and PCs. Why has this not extended into the business world?
As discussed in Chapter 1, many businesses used IBM mainframe computers back in the 1960s and 1970s. When businesses migrated to the microcomputer (personal computer) market, they elected to stay with IBM and chose the PC. Companies took the safe route, invested in the Microsoft operating system and in Microsoft software/applications.
Microsoft soon found itself with the dominant personal computer operating system for businesses. As the networked PC began to replace the mainframe computer, Microsoft developed a network operating system along with a complete suite of programs focused on business users. Today Microsoft Office in its various forms controls 85% of the market. [1]
Application Software
The second major category of software is application software. Application software is used directly to accomplish a specific goal, such as word processing, performing calculations in a spreadsheet, or surfing the Internet using your favorite browser.
The “Killer” App
When a new type of digital device is invented, there are generally a small group of technology enthusiasts who will purchase it just for the joy of figuring out how it works. A “killer” application is one that becomes so essential that large numbers of people will buy a device just to run that application. For the personal computer, the killer application was the spreadsheet.
The first spreadsheet was created by an MBA student at Harvard University who tired of making repeated calculations to determine the optimal result on a problem and decided to create a tool that allowed the user to easily change values and recalculate formulas. The result was the spreadsheet. Today’s dominant spreadsheet is Microsoft Excel which still retains the basic functionality of the first spreadsheet.
Productivity Software
Along with the spreadsheet, several other software applications have become standard tools for the workplace. Known as productivity software, these programs allow office employees to complete their daily work efficiently. Many times these applications come packaged together, such as in Microsoft’s Office suite. Here is a list of some of these applications and their basic functions:
• Word processing: Users can create and edit documents using this class of software. Functions include the ability to type and edit text, format fonts and paragraphs, and add, move, and delete text throughout the document. Tables and images can be inserted. Documents can be saved in a variety of electronic file formats, with Microsoft Word’s DOCX being the most popular. Documents can also be converted to other formats such as Adobe’s PDF (Portable Document Format) or a .TXT file.
• Spreadsheet: This class of software provides a way to do numeric calculations and analysis, displaying the results in charts and graphs. The working area is divided into rows and columns, where users can enter numbers, text, or formulas. It is the formulas that make a spreadsheet powerful, allowing the user to develop complex calculations that can change based on the numbers entered. The most popular spreadsheet package is Microsoft Excel, which saves its files in the XLSX format.
• Presentation: Users can create slideshow presentations using this class of software. The slides can be projected, printed, or distributed to interested parties. Text, images, audio, and video can all be added to the slides. Microsoft’s PowerPoint is the most popular presentation software right now, saving its files in the PPTX format.
• Some office suites include other types of software. For example, Microsoft Office includes Outlook, its e-mail package, and OneNote, an information-gathering collaboration tool. The professional version of Office also includes Microsoft Access, a database package. (Databases are covered more in Chapter 4.)
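The recalculation behavior that makes spreadsheets powerful can be mimicked in a few lines of Python. This is a toy model with invented cell names, not how a real spreadsheet engine works:

```python
# plain cells hold values; formula cells recompute from other cells
cells = {"A1": 10, "A2": 20}
formulas = {"A3": lambda c: c["A1"] + c["A2"]}

def value(name):
    """Return a cell's current value, evaluating its formula if it has one."""
    return formulas[name](cells) if name in formulas else cells[name]

print(value("A3"))  # 30
cells["A1"] = 15    # change an input value...
print(value("A3"))  # 35 -- the formula result updates automatically
```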
Microsoft popularized the idea of the office-software productivity bundle with their release of the Microsoft Office Suite. This package continues to dominate the market and most businesses expect employees to know how to use this software. However, many competitors to Microsoft Office do exist and are compatible with the file formats used by Microsoft (see table below). Microsoft also offers a cloud-based version of their office suite named Microsoft Office 365. Similar to Google Drive, this suite allows users to edit and share documents online utilizing cloud-computing technology.
Utility Software and Programming Software
Utility software includes programs that allow you to fix or modify your computer in some way. Examples include anti-malware software and programs that totally remove software you no longer want installed. These types of software packages were created to fill shortcomings in operating systems. Many times a subsequent release of an operating system will include these utility functions as part of the operating system itself.
Programming software’s purpose is to produce software. Most of these programs provide developers with an environment in which they can write the code, test it, and convert/compile it into the format that can then be run on a computer. This software is typically identified as the Integrated Development Environment (IDE) and is provided free from the corporation that developed the programming language that will be used to write the code.
Sidebar: “PowerPointed” to Death
As presentation software has gained acceptance as the primary method to formally present information to a group or class, the art of giving an engaging presentation is becoming rare. Many presenters now just read the bullet points in the presentation and immediately bore those in attendance, who can already read it for themselves. The real problem is not with PowerPoint as much as it is with the person creating and presenting. Author and chief evangelist Guy Kawasaki has developed the 10/20/30 rule for PowerPoint users. Just remember: 10 slides, 20 minutes, 30-point font.[2] If you are determined to improve your PowerPoint skills, read Presentation Zen by Garr Reynolds.
New digital presentation technologies are being developed that go beyond Powerpoint. For example, Prezi uses a single canvas for the presentation, allowing presenters to place text, images, and other media on the canvas, and then navigate between these objects as they present. Tools such as Tableau allow users to analyze data in depth and create engaging interactive visualizations.
Sidebar: I Own This Software, Right? Well…
When you purchase software and install it on your computer, are you the owner of that software? Technically, you are not! When you install software, you are actually just being given a license to use it. When you first install a package, you are asked to agree to the terms of service or the license agreement. In that agreement, you will find that your rights to use the software are limited. For example, in the terms of the Microsoft Office software license, you will find the following statement: “This software is licensed, not sold. This agreement only gives you some rights to use the features included in the software edition you licensed.”
For the most part, these restrictions are what you would expect. You cannot make illegal copies of the software and you may not use it to do anything illegal. However, there are other, more unexpected terms in these software agreements. For example, many software agreements ask you to agree to a limit on liability. Again, from Microsoft: “Limitation on and exclusion of damages. You can recover from Microsoft and its suppliers only direct damages up to the amount you paid for the software. You cannot recover any other damages, including consequential, lost profits, special, indirect or incidental damages.” This means if a problem with the software causes harm to your business, you cannot hold Microsoft or the supplier responsible for damages.
Applications for the Enterprise
As the personal computer proliferated inside organizations, control over the information generated by the organization began splintering. For instance, the customer service department creates a customer database to keep track of calls and problem reports, and the sales department also creates a database to keep track of customer information. Which one should be used as the master list of customers? Or perhaps someone in sales might create a spreadsheet to calculate sales revenue, while someone in finance creates a different revenue document that meets the needs of their department, but calculates revenue differently. The two spreadsheets will report different revenue totals. Which one is correct? And who is managing all of this information?
Enterprise Resource Planning
In the 1990s the need to bring an organization’s information back under centralized control became more apparent. The Enterprise Resource Planning (ERP) system (sometimes just called enterprise software) was developed to bring together an entire organization within one program. ERP software utilizes a central database that is implemented throughout the entire organization. Here are some key points about ERP.
• A software application. ERP is an application that is used by many of an organization’s employees.
• Utilizes a central database. All users of the ERP edit and save their information from the same data source. For example, this means there is only one customer table in the database, there is only one sales (revenue) table in the database, etc.
• Implemented organization-wide. ERP systems include functionality that covers all of the essential components of a business. An organization can purchase modules for its ERP system that match specific needs such as order entry, manufacturing, or planning.
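The “one central database” idea can be sketched with a small relational database. SQLite here is only a stand-in for an ERP’s central data store, and the table, column, and customer names are invented for illustration:

```python
import sqlite3

# a single shared customer table that every module reads and writes
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")

# the sales module adds a customer...
conn.execute("INSERT INTO customer (id, name) VALUES (1, 'Acme Corp')")

# ...and the customer-service module sees the same record,
# because there is only one data source
row = conn.execute("SELECT name FROM customer WHERE id = 1").fetchone()
print(row[0])  # Acme Corp
```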
ERP systems were originally marketed to large corporations. However, as more and more large companies began installing them, ERP vendors began targeting mid-sized and even smaller businesses. Some of the more well-known ERP systems include those from SAP, Oracle, and Microsoft.
In order to effectively implement an ERP system in an organization, the organization must be ready to make a full commitment. All aspects of the organization are affected as old systems are replaced by the ERP system. In general, implementing an ERP system can take two to three years and cost several million dollars.
So why implement an ERP system? If done properly, an ERP system can bring an organization a good return on their investment. By consolidating information systems across the enterprise and using the software to enforce best practices, most organizations see an overall improvement after implementing an ERP. Business processes as a form of competitive advantage will be covered in Chapter 9.
Customer Relationship Management
A Customer Relationship Management (CRM) system manages an organization’s customers. In today’s environment, it is important to develop relationships with your customers, and the use of a well-designed CRM can allow a business to personalize its relationship with each of its customers. Some ERP software systems include CRM modules. An example of a well-known CRM package is Salesforce.
Supply Chain Management
Many organizations must deal with the complex task of managing their supply chains. At its simplest, a supply chain is the linkage between an organization’s suppliers, its manufacturing facilities, and the distributors of its products. Each link in the chain has a multiplying effect on the complexity of the process. For example, if there are two suppliers, one manufacturing facility, and two distributors, then the number of links to manage = 4 ( 2 x 1 x 2 ). However, if two more suppliers are added, plus another manufacturing facility, and two more distributors, then the number of links to manage = 32 ( 4 x 2 x 4 ). Also, notice in the above illustration that all arrows have two heads, indicating that information flows in both directions. Suppliers are part of a business’s supply chain. They provide information such as price, size, quantity, etc. to the business. In turn, the business provides information such as quantity on hand at every store to the supplier. The key to successful supply chain management is the information system.
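The multiplying effect described above is just the product of the counts at each stage. A quick sketch of that arithmetic (the function name is ours, not from the text):

```python
def supply_chain_links(suppliers, factories, distributors):
    """Number of end-to-end links to manage in a simple three-stage
    supply chain: suppliers -> manufacturing facilities -> distributors."""
    return suppliers * factories * distributors

print(supply_chain_links(2, 1, 2))  # 4, as in the chapter's first example
print(supply_chain_links(4, 2, 4))  # 32, after adding two suppliers, a facility, and two distributors
```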
A Supply Chain Management (SCM) system handles the interconnection between these links as well as the inventory of the products in their various stages of development. As discussed previously, much of Walmart’s success has come from its ability to identify and control the supply chain for its products. Walmart invested heavily in its information system so it could communicate with its suppliers and manage the thousands of products it sells.
Walmart realized in the 1980s that the key to their success was information systems. Specifically, they needed to manage their complex supply chain with its thousands of suppliers, thousands of retail outlets, and millions of customers. Their success came from being able to integrate information systems to every entity (suppliers, warehouses, retail stores) through the sharing of sales and inventory data. Take a moment to study the diagram above…look for the double-headed arrow. Notice that data flows down the supply chain from suppliers to retail stores. But it also flows up the supply chain, back to the suppliers so they can be up to date regarding production and shipping.
Mobile Applications
Just as with the personal computer, mobile devices such as smartphones and electronic tablets also have operating systems and application software. These mobile devices are in many ways just smaller versions of personal computers. A mobile app is a software application designed to run specifically on a mobile device.
As shown in Chapter 2, smartphones are becoming a dominant form of computing, with more smartphones being sold than personal computers. A greater discussion of PC and smartphone sales appears in Chapter 13, along with statistics regarding the decline in tablet sales. Businesses have adjusted to this trend by increasing their investment in the development of apps for mobile devices. The number of mobile apps in the Apple App Store has increased from zero in 2008 to over 2 million in 2017.[3]
Building a mobile app will be covered in Chapter 10.
Cloud Computing
Historically, for software to run on a computer an individual copy of the software had to be installed on the computer. The concept of “cloud” computing changes this.
The “cloud” refers to applications, services, and data storage located on the Internet. Cloud service providers rely on giant server farms and massive storage devices that are connected via the Internet. Cloud computing allows users to access software and data storage services on the Internet.
You probably already use cloud computing in some form. For example, if you access your e-mail via your web browser, you are using a form of cloud computing; the same is true if you use Google Drive’s applications. While these are free versions of cloud computing, there is big business in providing applications and data storage over the web. Cloud computing is not limited to web applications; it can also be used for services such as audio or video streaming.
Advantages of Cloud Computing
• No software to install or upgrades to maintain.
• Available from any computer that has access to the Internet.
• Can scale to a large number of users easily.
• New applications can be up and running very quickly.
• Services can be leased for a limited time on an as-needed basis.
• Your information is not lost if your hard disk crashes or your laptop is lost or stolen.
• You are not limited by the available memory or disk space on your computer.
Disadvantages of Cloud Computing
• Your information is stored on someone else’s computer.
• You must have Internet access to use it.
• You are relying on a third-party to provide these services.
Cloud computing can fundamentally change how organizations manage technology. For example, why is an IT department needed to purchase, configure, and manage personal computers and software when all that is really needed is an Internet connection?
Using a Private Cloud
Many organizations are understandably nervous about giving up control of their data and some of their applications by using cloud computing. But they also see the value in reducing the need for installing software and adding disk storage to local computers. A solution to this problem lies in the concept of a private cloud. While there are various models of a private cloud, the basic idea is for the cloud service provider to section off web server space for a specific organization. The organization has full control over that server space while still gaining some of the benefits of cloud computing.
Virtualization
Virtualization is the process of using software to simulate a computer or some other device. For example, using virtualization a single physical computer can perform the functions of several virtual computers, usually referred to as Virtual Machines (VMs). Organizations implement virtual machines in an effort to reduce the number of physical servers needed to provide the necessary services to users. This reduction in the number of physical servers also reduces the demand for electricity to run and cool the physical servers. For more detail on how virtualization works, see this informational page from VMWare.
Software Creation
Modern software applications are written using a programming language such as Java, Visual C, C++, Python, etc. A programming language consists of a set of commands and syntax that can be organized logically to execute specific functions. Using this language a programmer writes a program (known as source code) that can then be compiled into machine-readable form, the ones and zeroes necessary to be executed by the CPU. Languages such as HTML and Javascript are used to develop web pages.
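As a minimal illustration of what “source code” looks like in a high-level language, here is a short Python program; the function and sample data are invented for the example:

```python
# source code: human-readable instructions that an interpreter or
# compiler translates into a form the CPU can execute
def average(numbers):
    """Return the arithmetic mean of a list of numbers."""
    return sum(numbers) / len(numbers)

if __name__ == "__main__":
    scores = [88, 92, 79]  # sample data for the example
    print(average(scores))
```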
Open-Source Software
When the personal computer was first released, computer enthusiasts banded together to build applications and solve problems. These computer enthusiasts were motivated to share any programs they built and solutions to problems they found. This collaboration enabled them to more quickly innovate and fix problems.
As software began to become a business, however, this idea of sharing everything fell out of favor with many developers. When a program takes hundreds of hours to develop, it is understandable that the programmers do not want to just give it away. This led to a new business model of restrictive software licensing which required payment for software, a model that is still dominant today. This model is sometimes referred to as closed source, as the source code is not made available to others.
There are many, however, who feel that software should not be restricted. Just as with those early hobbyists in the 1970s, they feel that innovation and progress can be made much more rapidly if they share what has been learned. In the 1990s, with Internet access connecting more people together, the open-source movement gained steam.
Open-source software makes the source code available for anyone to copy and use. For most people having access to the source code of a program does little good since it is challenging to modify existing programming code. However, open-source software is also available in a compiled format that can be downloaded and installed. The open-source movement has led to the development of some of the most used software in the world such as the Firefox browser, the Linux operating system, and the Apache web server.
Many businesses are wary of open-source software precisely because the code is available for anyone to see. They feel that this increases the risk of an attack. Others counter that this openness actually decreases the risk because the code is exposed to thousands of programmers who can incorporate code changes to quickly patch vulnerabilities.
There are thousands of open-source applications available for download. For example, you can get the productivity suite from Open Office. One good place to search for open-source software is sourceforge.net, where thousands of programs are available for free download.
Summary
Software gives the instructions that tell the hardware what to do. There are two basic categories of software: operating systems and applications. Operating systems interface with the computer hardware and make system resources available. Application software allows users to accomplish specific tasks such as word processing, presentations, or databases. This group is also referred to as productivity software. An ERP system stores all data in a centralized database that is made accessible to all programs and departments across the organization. Cloud computing provides access to software and databases from the Internet via a web browser. Developers use various programming languages to develop software.
Study Questions
1. Develop your own definition of software being certain to explain the key terms.
2. What are the primary functions of an operating system?
3. Which of the following are operating systems and which are applications: Microsoft Excel, Google Chrome, iTunes, Windows, Android, Angry Birds.
4. What is your favorite software application? What tasks does it help you accomplish?
5. How would you categorize the software that runs on mobile devices? Break down these apps into at least three basic categories and give an example of each.
6. What does an ERP system do?
7. What is open-source software? How does it differ from closed-source software? Give an example of each.
8. What does a software license grant to the purchaser of the software?
Exercises
1. Find a case study online about the implementation of an ERP system. Was it successful? How long did it take? Does the case study tell you how much money the organization spent?
2. If you were running a small business with limited funds for information technology, would you consider using cloud computing? Find some web-based resources that support your decision.
3. Go to sourceforge.net and review their most downloaded software applications. Report on the variety of applications you find. Then pick one that interests you and report back on what it does, the kind of technical support offered, and the user reviews.
4. Review this article on the security risks of open-source software. Write a short analysis giving your opinion on the different risks discussed.
5. List three examples of programming languages. What features of each language make it useful to developers?
Lab
1. Download Apache Open Office and create a document. Note: If your computer does not have Java Runtime Environment (JRE) 32-bit (x86) installed, you will need to download it first from this site. Open Office runs only in 32-bit (x86) mode. Here is a link to the Getting Started documentation for Open Office. How does it compare to Microsoft Office? Does the fact that you got it for free make it feel less valuable?
1. Statista. (2017). Microsoft – Statistics & Facts. Retrieved from https://www.statista.com/topics/823/microsoft/
2. Kawasaki, G. (n.d.). The 10/20/30 Rules for PowerPoint. Retrieved from https://guykawasaki.com/the_102030_rule/.
3. Statista. (2018). Number of apps in Apple App Store July 2008 to January 2017. Retrieved from https://www.statista.com/statistics/...ple-app-store/.
Learning Objectives
Upon successful completion of this chapter, you will be able to:
• Describe the differences between data, information, and knowledge;
• Describe why database technology must be used for data resource management;
• Define the term database and identify the steps to creating one;
• Describe the role of a database management system;
• Describe the characteristics of a data warehouse; and
• Define data mining and describe its role in an organization.
Introduction
You have already been introduced to the first two components of information systems: hardware and software. However, those two components by themselves do not make a computer useful. Imagine if you turned on a computer, started the word processor, but could not save a document. Imagine if you opened a music player but there was no music to play. Imagine opening a web browser but there were no web pages. Without data, hardware and software are not very useful! Data is the third component of an information system.
Data, Information, and Knowledge
There have been many definitions and theories about data, information, and knowledge. The three terms are often used interchangeably, although they are distinct in nature. We define and illustrate the three terms from the perspective of information systems.
Data are the raw facts, and may be devoid of context or intent. For example, a sales order of computers is a piece of data. Data can be quantitative or qualitative. Quantitative data is numeric, the result of a measurement, count, or some other mathematical calculation. Qualitative data is descriptive. “Ruby Red,” the color of a 2013 Ford Focus, is an example of qualitative data. A number can be qualitative too: if I tell you my favorite number is 5, that is qualitative data because it is descriptive, not the result of a measurement or mathematical calculation.
Information is processed data that possesses context, relevance, and purpose. For example, monthly sales calculated from the collected daily sales data for the past year are information. Information typically involves the manipulation of raw data to obtain an indication of magnitude, trends, or patterns in the data for a purpose.
Knowledge in a certain area is human beliefs or perceptions about relationships among facts or concepts relevant to that area. For example, the conceived relationship between the quality of goods and the sales is knowledge. Knowledge can be viewed as information that facilitates action.
Once we have put our data into context, aggregated and analyzed it, we can use it to make decisions for our organization. We can say that this consumption of information produces knowledge. This knowledge can be used to make decisions, set policies, and even spark innovation.
Explicit knowledge typically refers to knowledge that can be expressed into words or numbers. In contrast, tacit knowledge includes insights and intuitions, and is difficult to transfer to another person by means of simple communications.
Note that when information or explicit knowledge is captured and stored in a computer, it becomes data again if the context or intent is lost.
The final step up the information ladder is the step from knowledge (knowing a lot about a topic) to wisdom. We can say that someone has wisdom when they can combine their knowledge and experience to produce a deeper understanding of a topic. It often takes many years to develop wisdom on a particular topic, and requires patience.
Big Data
Almost all software programs require data to do anything useful. For example, if you are editing a document in a word processor such as Microsoft Word, the document you are working on is the data. The word-processing software can manipulate the data: create a new document, duplicate a document, or modify a document. Some other examples of data are: an MP3 music file, a video file, a spreadsheet, a web page, a social media post, and an e-book.
Recently, big data has been capturing the attention of all types of organizations. The term refers to data sets so massively large that conventional data processing technologies do not have sufficient power to analyze them. For example, Walmart must process millions of customer transactions every hour across the world. Storing and analyzing that much data is beyond the power of traditional data management tools. Understanding and developing the best tools and techniques to manage and analyze these large data sets is a problem that governments and businesses alike are trying to solve.
Databases
The goal of many information systems is to transform data into information in order to generate knowledge that can be used for decision making. In order to do this, the system must be able to take data, allow the user to put the data into context, and provide tools for aggregation and analysis. A database is designed for just such a purpose.
Why Databases?
Data is a valuable resource in the organization. However, many people do not know much about database technology and instead use non-database tools, such as Excel spreadsheets or Word documents, to store and manipulate business data, or use poorly designed databases for business processes. As a result, the data become redundant, inconsistent, inaccurate, and corrupted. For a small data set, the use of non-database tools such as a spreadsheet may not cause serious problems. However, for a large organization, corrupted data could lead to serious errors and destructive consequences. The common defects in data resource management are explained as follows.
(1) No control of redundant data
People often keep redundant data for convenience. Redundant data could make the data set inconsistent. We use an illustrative example to explain why redundant data are harmful. Suppose the registrar’s office has two separate files that store student data: one is the registered student roster which records all students who have registered and paid the tuition, and the other is student grade roster which records all students who have received grades.
As you can see from the two spreadsheets, this data management system has problems. The fact that “Student 4567 is Mary Brown, and her major is Finance” is stored more than once. Such occurrences are called data redundancy. Redundant data often make data access convenient, but can be harmful. For example, if Mary Brown changes her name or her major, then every copy of her name and major stored in the system must be changed. For small data systems, such a problem looks trivial. However, when the data system is huge, making changes to all redundant data is difficult if not impossible. As a result of data redundancy, the entire data set can be corrupted.
(2) Violation of data integrity
Data integrity means consistency among the stored data. We use the above illustrative example to explain the concept of data integrity and how data integrity can be violated if the data system is flawed. You can find that Alex Wilson received a grade in MKT211; however, you can’t find Alex Wilson in the student roster. That is, the two rosters are not consistent. Suppose we have a data integrity control to enforce the rules, say, “no student can receive a grade unless she/he has registered and paid tuition”, then such a violation of data integrity can never happen.
(3) Relying on human memory to store and to search needed data
The third common mistake in data resource management is the overuse of human memory for data search. A human can remember what data are stored and where the data are stored, but can also make mistakes. If a piece of data is stored in an unremembered place, it has effectively been lost. As a result of relying on human memory to store and search for needed data, the entire data set eventually becomes disorganized.
To avoid the above common flaws in data resource management, database technology must be applied. A database is an organized collection of related data. It is an organized collection, because in a database, all data is described and associated with other data. For the purposes of this text, we will only consider computerized databases.
Though not good for replacing databases, spreadsheets can be ideal tools for analyzing the data stored in a database. A spreadsheet package can be connected to a specific table or query in a database and used to create charts or perform analysis on that data.
Data Models and Relational Databases
Databases can be organized in many different ways by using different models. The data model of a database is the logical structure of data items and their relationships. There have been several data models. Since the 1980s, the relational data model has been popularized. Currently, relational database systems are commonly used in business organizations with few exceptions. A relational data model is easy to understand and use.
In a relational database, data is organized into tables (or relations). Each table has a set of fields which define the structure of the data stored in the table. A record is one instance of a set of fields in a table. To visualize this, think of the records as the rows (or tuples) of the table and the fields as the columns of the table.
In the example below, we have a table of student data, with each row representing a student record, and each column representing one field of the student record. A special field, or a combination of fields, that determines the unique record is called the primary key (or key). A key is usually the unique identification number of the records.
Designing a Database
Suppose a university wants to create a School Database to track data. After interviewing several people, the design team learns that the goal of implementing the system is to give better insight into students’ performance and academic resources. From this, the team decides that the system must keep track of the students, their grades, courses, and classrooms. Using this information, the design team determines that the following tables need to be created:
• STUDENT: student name, major, and e-mail.
• COURSE: course title, enrollment capacity.
• GRADE: this table will correlate STUDENT with COURSE, allowing us to have any given student to enroll multiple courses and to receive a grade for each course.
• CLASSROOM: classroom location, classroom type, and classroom capacity
Now that the design team has determined which tables to create, they need to define the specific data items that each table will hold. This requires identifying the fields that will be in each table. For example, course title would be one of the fields in the COURSE table. Finally, since this will be a relational database, every table should have a field in common with at least one other table (in other words, they should have relationships with each other).
A primary key must be selected for each table in a relational database. This key is a unique identifier for each record in the table. For example, in the STUDENT table, it might be possible to use the student name as a way to identify a student. However, it is more than likely that some students share the same name. A student’s e-mail address might be a good choice for a primary key, since e-mail addresses are unique. However, a primary key cannot change, so this would mean that if students changed their e-mail address we would have to remove them from the database and then re-insert them – not an attractive proposition. Our solution is to use student ID as the primary key of the STUDENT table. We will also do this for the COURSE table and the CLASSROOM table. This solution is quite common and is the reason you have so many IDs! The primary key of a table can be just one field, but it can also be a combination of two or more fields. For example, the combination of StudentID and CourseID in the GRADE table can be the primary key of the GRADE table, which means that a grade is received by a particular student for a specific course.
The next step in designing the database is to identify and create the relationships between the tables so that you can pull the data together in meaningful ways. A relationship between two tables is implemented by using a foreign key. A foreign key is a field in one table that connects to the primary key data in another table. For example, ClassroomID in the COURSE table is the foreign key that connects to the primary key ClassroomID in the CLASSROOM table. With this design, not only do we have a way to organize all of the data we need and have successfully related all the tables together to meet the requirements, but we have also prevented invalid data from being entered into the database. You can see the final database design in the figure below:
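As an illustrative sketch of this design (using SQLite via Python’s sqlite3 module; the exact field names are assumptions, since the figure is not reproduced here), the four tables with their primary and foreign keys might be declared like this:

```python
import sqlite3

# In-memory database for illustration; the field names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces foreign keys only when asked

conn.executescript("""
CREATE TABLE CLASSROOM (
    ClassroomID  INTEGER PRIMARY KEY,
    Location     TEXT,
    Type         TEXT,
    Capacity     INTEGER
);
CREATE TABLE STUDENT (
    StudentID    INTEGER PRIMARY KEY,
    StudentName  TEXT,
    Major        TEXT,
    Email        TEXT
);
CREATE TABLE COURSE (
    CourseID     INTEGER PRIMARY KEY,
    CourseTitle  TEXT,
    EnrollmentCapacity INTEGER,
    ClassroomID  INTEGER REFERENCES CLASSROOM(ClassroomID)  -- foreign key
);
CREATE TABLE GRADE (
    StudentID    INTEGER REFERENCES STUDENT(StudentID),     -- foreign key
    CourseID     INTEGER REFERENCES COURSE(CourseID),       -- foreign key
    Grade        TEXT,
    PRIMARY KEY (StudentID, CourseID)  -- composite primary key
);
""")
```

With foreign keys enforced, the Alex Wilson problem from earlier cannot occur: attempting to insert a GRADE row for a StudentID that does not exist in the STUDENT table raises an integrity error.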
Normalization
When designing a database, one important concept to understand is normalization. In simple terms, to normalize a database means to design it in a way that: 1) reduces data redundancy; and 2) ensures data integrity.
In the School Database design, the design team worked to achieve these objectives. For example, to track grades, a simple (and wrong) solution might have been to create a Student field in the COURSE table and then just list the names of all of the students there. However, this design would mean that if a student takes two or more courses, his or her data would have to be entered two or more times. This means the data are redundant. Instead, the designers solved this problem by introducing the GRADE table.
In this design, when a student registers into the school system before taking a course, we first must add the student to the STUDENT table, where their ID, name, major, and e-mail address are entered. Now we will add a new entry to denote that the student takes a specific course. This is accomplished by adding a record with the StudentID and the CourseID in the GRADE table. If this student takes a second course, we do not have to duplicate the entry of the student’s name, major, and e-mail; instead, we only need to make another entry in the GRADE table of the second course’s ID and the student’s ID.
The design of the School database also makes it simple to change the design without major modifications to the existing structure. For example, if the design team were asked to add functionality to the system to track instructors who teach the courses, we could easily accomplish this by adding a PROFESSOR table (similar to the STUDENT table) and then adding a new field to the COURSE table to hold the professors’ ID.
Data Types
When defining the fields in a database table, we must give each field a data type. For example, the field StudentName is text string, while EnrollmentCapacity is number. Most modern databases allow for several different data types to be stored. Some of the more common data types are listed here:
• Text: for storing non-numeric data that is brief, generally under 256 characters. The database designer can identify the maximum length of the text.
• Number: for storing numbers. There are usually a few different number types that can be selected, depending on how large the largest number will be.
• Boolean: a data type with only two possible values, such as 0 or 1, “true” or “false”, “yes” or “no”.
• Date/Time: a special form of the number data type that can be interpreted as a date or a time.
• Currency: a special form of the number data type that formats all values with a currency indicator and two decimal places.
• Paragraph Text: this data type allows for text longer than 256 characters.
• Object: this data type allows for the storage of data that cannot be entered via keyboard, such as an image or a music file.
There are two important reasons that we must properly define the data type of a field. First, a data type tells the database what functions can be performed with the data. For example, if we wish to perform mathematical functions with one of the fields, we must be sure to tell the database that the field is a number data type. For example, we can subtract the course capacity from the classroom capacity to find out the number of extra seats available.
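The extra-seats example can be sketched in a few lines of Python (the capacity values are invented for illustration):

```python
# Stored as the Number data type, capacities support arithmetic directly.
classroom_capacity = 60   # hypothetical CLASSROOM capacity
course_capacity = 45      # hypothetical COURSE enrollment capacity
extra_seats = classroom_capacity - course_capacity  # 15 extra seats

# The same values kept as Text cannot be subtracted: the database (and the
# language) only knows how to compare or concatenate strings, not do math.
try:
    "60" - "45"
except TypeError:
    pass  # subtraction is undefined for text data
```

This is exactly why the EnrollmentCapacity and Capacity fields must be declared as a number type rather than text.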
The second important reason to define data type is so that the proper amount of storage space is allocated for our data. For example, if the StudentName field is defined as a Text(50) data type, this means 50 characters are allocated for each name we want to store. If a student’s name is longer than 50 characters, the database will truncate it.
Database Management Systems
To the computer, a database looks like one or more files. In order for the data in the database to be stored, read, changed, added, or removed, a software program must access it. Many software applications have this ability: iTunes can read its database to give you a listing of its songs (and play the songs); your mobile-phone software can interact with your list of contacts. But what about applications to create or manage a database? What software can you use to create a database, change a database’s structure, or simply do analysis? That is the purpose of a category of software applications called database management systems (DBMS).
DBMS packages generally provide an interface to view and change the design of the database, create queries, and develop reports. Most of these packages are designed to work with a specific type of database, but generally are compatible with a wide range of databases.
A database that can only be used by a single user at a time is not going to meet the needs of most organizations. As computers have become networked and are now joined worldwide via the Internet, a class of database has emerged that can be accessed by two, ten, or even a million people. These databases are sometimes installed on a single computer to be accessed by a group of people at a single location. Other times, they are installed over several servers worldwide, meant to be accessed by millions. Enterprise relational DBMS products are built and supported by companies such as Oracle (Oracle Database), Microsoft (SQL Server), and IBM (Db2). The open-source MySQL is also an enterprise database.
Microsoft Access and Open Office Base are examples of personal database-management systems. These systems are primarily used to develop and analyze single-user databases. These databases are not meant to be shared across a network or the Internet, but are instead installed on a particular device and work with a single user at a time. Apache OpenOffice.org Base (see screen shot) can be used to create, modify, and analyze databases in open-database (ODB) format. Microsoft’s Access DBMS is used to work with databases in its own Microsoft Access Database format. Both Access and Base have the ability to read and write to other database formats as well.
Structured Query Language
Once you have a database designed and loaded with data, how will you do something useful with it? The primary way to work with a relational database is to use Structured Query Language, SQL (pronounced “sequel,” or simply stated as S-Q-L). Almost all applications that work with databases (such as the database management systems discussed above) make use of SQL as a way to analyze and manipulate relational data. As its name implies, SQL is a language that can be used to work with a relational database. From a simple request for data to a complex update operation, SQL is a mainstay of programmers and database administrators. To give you a taste of what SQL might look like, here are a couple of examples using our School database:
The following query will retrieve the major of student John Smith from the STUDENT table:
```SELECT StudentMajor
FROM STUDENT
WHERE StudentName = 'John Smith';```
The following query will list the total number of students in the STUDENT table:
```SELECT COUNT(*)
FROM STUDENT;```
SQL can be embedded in many computer languages that are used to develop platform-independent web-based applications. An in-depth description of how SQL works is beyond the scope of this introductory text, but these examples should give you an idea of the power of using SQL to manipulate relational databases. Many DBMS, such as Microsoft Access, allow you to use QBE (Query-by-Example), a graphical query tool, to retrieve data through visualized commands. QBE generates SQL for you, and is easy to use. In comparison with SQL, QBE has limited functionality and is unable to work outside the DBMS environment.
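As a sketch of such embedding (using Python’s built-in sqlite3 module; the table contents, including John Smith’s major, are made up for illustration), the chapter’s two example queries can be run like this:

```python
import sqlite3

# Build a tiny STUDENT table in memory; the rows are hypothetical sample data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE STUDENT (StudentID INTEGER PRIMARY KEY, "
             "StudentName TEXT, StudentMajor TEXT)")
conn.executemany("INSERT INTO STUDENT VALUES (?, ?, ?)",
                 [(1, "John Smith", "Accounting"),
                  (2, "Mary Brown", "Finance")])

# The chapter's first query: retrieve John Smith's major.
major = conn.execute("SELECT StudentMajor FROM STUDENT "
                     "WHERE StudentName = 'John Smith'").fetchone()[0]

# The chapter's second query: count the students in the table.
total = conn.execute("SELECT COUNT(*) FROM STUDENT").fetchone()[0]
```

The SQL strings are exactly the statements shown above; the host language only supplies the connection and collects the results.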
Other Types of Databases
The relational database model is the most used database model today. However, many other database models exist that provide different strengths than the relational model. The hierarchical database model, popular in the 1960s and 1970s, connected data together in a hierarchy, allowing for a parent/child relationship between data. The document-centric model allowed for a more unstructured data storage by placing data into “documents” that could then be manipulated.
Perhaps the most interesting new development is the concept of NoSQL (from the phrase “not only SQL”). NoSQL arose from the need to solve the problem of large-scale databases spread over several servers or even across the world. For a relational database to work properly, it is important that only one person be able to manipulate a piece of data at a time, a concept known as record-locking. But with today’s large-scale databases (think Google and Amazon), this is just not possible. A NoSQL database can work with data in a looser way, allowing for a more unstructured environment, communicating changes to the data over time to all the servers that are part of the database.
As stated earlier, the relational database model does not scale well. The term scale here refers to a database getting larger and larger, being distributed on a larger number of computers connected via a network. Some companies are looking to provide large-scale database solutions by moving away from the relational model to other, more flexible models. For example, Google now offers the App Engine Datastore, which is based on NoSQL. Developers can use the App Engine Datastore to develop applications that access data from anywhere in the world. Amazon.com offers several database services for enterprise use, including Amazon RDS, which is a relational database service, and Amazon DynamoDB, a NoSQL enterprise solution.
Sidebar: What Is Metadata?
The term metadata can be understood as “data about data.” Examples of database metadata are:
• number of records
• data type of field
• size of field
• description of field
• default value of field
• rules of use.
When a database is being designed, a “data dictionary” is created to hold the metadata, defining the fields and structure of the database.
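Most DBMS expose this metadata programmatically. As a small sketch using SQLite (via Python’s sqlite3 module; the table and field names are illustrative), the PRAGMA table_info command returns one data-dictionary-style row of metadata per field:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE STUDENT (StudentID INTEGER PRIMARY KEY, "
             "StudentName TEXT, Major TEXT)")

# Each row describes one field: its position, name, data type,
# NOT NULL flag, default value, and primary-key flag.
for cid, name, dtype, notnull, default, pk in conn.execute(
        "PRAGMA table_info(STUDENT)"):
    print(name, dtype, "PK" if pk else "")
```

Note how the output covers several of the metadata examples listed above: field names, data types, default values, and which field is the key.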
Finding Value in Data: Business Intelligence
With the rise of Big Data and a myriad of new tools and techniques at their disposal, businesses are learning how to use information to their advantage. The term business intelligence is used to describe the process that organizations use to take data they are collecting and analyze it in the hopes of obtaining a competitive advantage. Besides using their own data, stored in data warehouses (see below), firms often purchase information from data brokers to get a big-picture understanding of their industries and the economy. The results of these analyses can drive organizational strategies and provide competitive advantage.
Data Visualization
Data visualization is the graphical representation of information and data. These graphical representations (such as charts, graphs, and maps) can quickly summarize data in a way that is more intuitive and can lead to new insights and understandings. Just as a picture of a landscape can convey much more than a paragraph of text attempting to describe it, graphical representation of data can quickly make meaning of large amounts of data. Many times, visualizing data is the first step towards a deeper analysis and understanding of the data collected by an organization. Examples of data visualization software include Tableau and Google Data Studio.
Data Warehouses
As organizations have begun to utilize databases as the centerpiece of their operations, the need to fully understand and leverage the data they are collecting has become more and more apparent. However, directly analyzing the data that is needed for day-to-day operations is not a good idea; we do not want to tax the operations of the company more than we need to. Further, organizations also want to analyze data in a historical sense: How does the data we have today compare with the same set of data this time last month, or last year? From these needs arose the concept of the data warehouse.
The concept of the data warehouse is simple: extract data from one or more of the organization’s databases and load it into the data warehouse (which is itself another database) for storage and analysis. However, the execution of this concept is not that simple. A data warehouse should be designed so that it meets the following criteria:
• It uses non-operational data. This means that the data warehouse is using a copy of data from the active databases that the company uses in its day-to-day operations, so the data warehouse must pull data from the existing databases on a regular, scheduled basis.
• The data is time-variant. This means that whenever data is loaded into the data warehouse, it receives a time stamp, which allows for comparisons between different time periods.
• The data is standardized. Because the data in a data warehouse usually comes from several different sources, it is possible that the data does not use the same definitions or units. For example, each database uses its own format for dates (e.g., mm/dd/yy, or dd/mm/yy, or yy/mm/dd, etc.). In order for the data warehouse to match up dates, a standard date format would have to be agreed upon and all data loaded into the data warehouse would have to be converted to use this standard format. This process is called extraction-transformation-load (ETL).
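The “transform” step for dates can be sketched as follows (a minimal Python sketch; the source formats and the ISO 8601 target format are assumptions, not taken from the chapter):

```python
from datetime import datetime

# Hypothetical date formats used by the various source databases.
SOURCE_FORMATS = ["%m/%d/%y", "%d/%m/%Y", "%Y-%m-%d"]

def standardize_date(raw):
    """Try each known source format; emit the agreed warehouse format.

    Note: a real ETL job must also resolve ambiguity (e.g. 04/05/23 could be
    April 5 or May 4) by tracking which source each value came from.
    """
    for fmt in SOURCE_FORMATS:
        try:
            return datetime.strptime(raw, fmt).strftime("%Y-%m-%d")
        except ValueError:
            continue
    raise ValueError(f"Unrecognized date format: {raw!r}")
```

Once every loaded value passes through such a transform, dates from all sources can be matched and compared inside the warehouse.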
There are two primary schools of thought when designing a data warehouse: bottom-up and top-down. The bottom-up approach starts by creating small data warehouses, called data marts, to solve specific business problems. As these data marts are created, they can be combined into a larger data warehouse. The top-down approach suggests that we should start by creating an enterprise-wide data warehouse and then, as specific business needs are identified, create smaller data marts from the data warehouse.
Benefits of Data Warehouses
Organizations find data warehouses quite beneficial for a number of reasons:
• The process of developing a data warehouse forces an organization to better understand the data that it is currently collecting and, equally important, what data is not being collected.
• A data warehouse provides a centralized view of all data being collected across the enterprise and provides a means for determining data that is inconsistent.
• Once all data is identified as consistent, an organization can generate “one version of the truth”. This is important when the company wants to report consistent statistics about itself, such as revenue or number of employees.
• By having a data warehouse, snapshots of data can be taken over time. This creates a historical record of data, which allows for an analysis of trends.
• A data warehouse provides tools to combine data, which can provide new information and analysis.
Data Mining and Machine Learning
Data mining is the process of analyzing data to find previously unknown and interesting trends, patterns, and associations in order to make decisions. Generally, data mining is accomplished through automated means against extremely large data sets, such as a data warehouse. Some examples of data mining include:
• An analysis of sales from a large grocery chain might determine that milk is purchased more frequently the day after it rains in cities with a population of less than 50,000.
• A bank may find that loan applicants whose bank accounts show particular deposit and withdrawal patterns are not good credit risks.
• A baseball team may find that collegiate baseball players with specific statistics in hitting, pitching, and fielding make for more successful major league players.
One data mining method that an organization can use to do these analyses is called machine learning. Machine learning is used to analyze data and build models without being explicitly programmed to do so. Two primary branches of machine learning exist: supervised learning and unsupervised learning.
Supervised learning occurs when an organization has data about past activity that has occurred and wants to replicate it. For example, if they want to create a new marketing campaign for a particular product line, they may look at data from past marketing campaigns to see which of their consumers responded most favorably. Once the analysis is done, a machine learning model is created that can be used to identify these new customers. It is called “supervised” learning because we are directing (supervising) the analysis towards a result (in our example: consumers who respond favorably). Supervised learning techniques include analyses such as decision trees, neural networks, classifiers, and logistic regression.
Unsupervised learning occurs when an organization has data and wants to understand the relationship(s) between different data points. For example, if a retailer wants to understand purchasing patterns of its customers, an unsupervised learning model can be developed to find out which products are most often purchased together or how to group their customers by purchase history. It is called “unsupervised” learning because no specific outcome is expected. Unsupervised learning techniques include clustering and association rules.
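A minimal sketch of the “products purchased together” idea, using only the Python standard library (the baskets are invented; real association-rule mining would also compute measures such as support and confidence):

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase baskets; there are no outcome labels,
# which is what makes this analysis unsupervised.
baskets = [
    {"milk", "bread", "eggs"},
    {"milk", "bread"},
    {"bread", "butter"},
    {"milk", "bread", "butter"},
]

# Count how often each pair of products appears in the same basket.
pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

most_common_pair, count = pair_counts.most_common(1)[0]
```

Here no one told the program which pairs to look for; the grouping emerges from the data itself, which is the essence of association-rule techniques.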
Privacy Concerns
The increasing power of data mining has caused concerns for many, especially in the area of privacy. In today’s digital world, it is becoming easier than ever to take data from disparate sources and combine them to do new forms of analysis. In fact, a whole industry has sprung up around this technology: data brokers. These firms combine publicly accessible data with information obtained from the government and other sources to create vast warehouses of data about people and companies that they can then sell. This subject will be covered in much more detail in chapter 12 – the chapter on the ethical concerns of information systems.
Sidebar: What is data science? What is data analytics?
The term “data science” is a popular term meant to describe the analysis of large data sets to find new knowledge. For the past several years, it has been considered one of the best career fields to get into due to its explosive growth and high salaries. While a data scientist does many different things, their focus is generally on analyzing large data sets using various programming methods and software tools to create new knowledge for their organization. Data scientists are skilled in machine learning and data visualization techniques. The field of data science is constantly changing, and data scientists are on the cutting edge of work in areas such as artificial intelligence and neural networks.
Knowledge Management
We end the chapter with a discussion on the concept of knowledge management (KM). All companies accumulate knowledge over the course of their existence. Some of this knowledge is written down or saved, but not in an organized fashion. Much of this knowledge is not written down; instead, it is stored inside the heads of its employees. Knowledge management is the process of formalizing the creation, capture, indexing, storage, and sharing of the company’s knowledge in order to benefit from the experiences and insights that the company has captured during its existence.
Summary
In this chapter, we learned about the role that data and databases play in the context of information systems. Data is made up of facts of the world. If you process data in a particular context, then you have information. Knowledge is gained when information is consumed and used for decision making. A database is an organized collection of related data. Relational databases are the most widely used type of database, where data is structured into tables and all tables must be related to each other through unique identifiers. A database management system (DBMS) is a software application that is used to create and manage databases, and can take the form of a personal DBMS, used by one person, or an enterprise DBMS that can be used by multiple users. A data warehouse is a special form of database that takes data from other databases in an enterprise and organizes it for analysis. Data mining is the process of looking for patterns and relationships in large data sets. Many businesses use databases, data warehouses, and data-mining techniques in order to produce business intelligence and gain a competitive advantage.
Study Questions
1. What is the difference between data, information, and knowledge?
2. Explain in your own words how the data component relates to the hardware and software components of information systems.
3. What is the difference between quantitative data and qualitative data? In what situations could the number 42 be considered qualitative data?
4. What are the characteristics of a relational database?
5. When would using a personal DBMS make sense?
6. What is the difference between a spreadsheet and a database? List three differences between them.
7. Describe what the term normalization means.
8. Why is it important to define the data type of a field when designing a relational database?
9. Name a database you interact with frequently. What would some of the field names be?
10. What is metadata?
11. Name three advantages of using a data warehouse.
12. What is data mining?
13. In your own words, explain the difference between supervised learning and unsupervised learning. Give an example of each (not from the book).
Exercises
1. Review the design of the School database earlier in this chapter. Reviewing the lists of data types given, what data types would you assign to each of the fields in each of the tables? What lengths would you assign to the text fields?
2. Download Apache OpenOffice.org and use the database tool to open the “Student Clubs.odb” file available here. Take some time to learn how to modify the database structure and then see if you can add the required items to support the tracking of faculty advisors, as described at the end of the Normalization section in the chapter. Here is a link to the Getting Started documentation.
3. Using Microsoft Access, download the database file of comprehensive baseball statistics from the website SeanLahman.com. (If you don’t have Microsoft Access, you can download an abridged version of the file here that is compatible with Apache Open Office). Review the structure of the tables included in the database. Come up with three different data-mining experiments you would like to try, and explain which fields in which tables would have to be analyzed.
4. Do some original research and find two examples of data mining. Summarize each example and then write about what the two examples have in common.
5. Conduct some independent research on the process of business intelligence. Using at least two scholarly or practitioner sources, write a two-page paper giving examples of how business intelligence is being used.
6. Conduct some independent research on the latest technologies being used for knowledge management. Using at least two scholarly or practitioner sources, write a two-page paper giving examples of software applications or new technologies being used in this field. | textbooks/workforce/Information_Technology/Information_Systems/Information_Systems_for_Business_and_Beyond_(Bourgeois)_(2019_Edition)/01%3A_What_is_an_information_system/104%3A_Data_and_Databases.txt |
Learning Objectives
Upon successful completion of this chapter, you will be able to:
• understand the history and development of networking technologies;
• define the key terms associated with networking technologies;
• understand the importance of broadband technologies; and
• describe organizational networking.
Introduction
In the early days of computing, computers were seen as devices for making calculations, storing data, and automating business processes. However, as the devices evolved, it became apparent that many of the functions of telecommunications could be integrated into the computer. During the 1980s, many organizations began combining their once-separate telecommunications and information systems departments into an Information Technology (IT) department. This ability for computers to communicate with one another and to facilitate communication between individuals and groups has had a major impact on the growth of computing over the past several decades.
Computer networking began in the 1960s with the birth of the Internet. However, while the Internet and web were evolving, corporate networking was also taking shape in the form of local area networks and client-server computing. The Internet went commercial in 1994 as technologies began to pervade all areas of the organization. Today it would be unthinkable to have a computer that did not include communications capabilities. This chapter reviews the different technologies that have been put in place to enable this communications revolution.
A Brief History of the Internet
In the Beginning: ARPANET
The story of the Internet, and networking in general, can be traced back to the late 1950s. The United States was in the depths of the Cold War with the USSR as each nation closely watched the other to determine which would gain a military or intelligence advantage. In 1957, the Soviets surprised the U.S. with the launch of Sputnik, propelling us into the space age. In response to Sputnik, the U.S. Government created the Advanced Research Projects Agency (ARPA), whose initial role was to ensure that the U.S. was not surprised again. It was from ARPA, now called DARPA (Defense Advanced Research Projects Agency), that the Internet first sprang.
ARPA was the center of computing research in the 1960s, but there was just one problem. Many of the computers could not communicate with each other. In 1968 ARPA sent out a request for proposals for a communication technology that would allow different computers located around the country to be integrated together into one network. Twelve companies responded to the request, and a company named Bolt, Beranek, and Newman (BBN) won the contract. They immediately began work and were able to complete the job just one year later.
ARPA Net 1969
Professor Len Kleinrock of UCLA along with a group of graduate students were the first to successfully send a transmission over the ARPANET. The event occurred on October 29, 1969 when they attempted to send the word “login” from their computer at UCLA to the Stanford Research Institute. You can read their actual notes. The first four nodes were at UCLA, the Stanford Research Institute, the University of California Santa Barbara, and the University of Utah.
The Internet and the World Wide Web
Over the next decade, the ARPANET grew and gained popularity. During this time, other networks also came into existence. Different organizations were connected to different networks. This led to a problem. The networks could not communicate with each other. Each network used its own proprietary language, or protocol (see sidebar for the definition of protocol) to send information back and forth. This problem was solved by the invention of Transmission Control Protocol/Internet Protocol (TCP/IP). TCP/IP was designed to allow networks running on different protocols to have an intermediary protocol that would allow them to communicate. So as long as your network supported TCP/IP, you could communicate with all of the other networks running TCP/IP. TCP/IP quickly became the standard protocol and allowed networks to communicate with each other. It is from this breakthrough that we first got the term Internet, which simply means “an interconnected network of networks.”
Sidebar: An Internet Vocabulary Lesson
Network communication is full of some very technical concepts based on simple principles. Learn the following terms and you’ll be able to hold your own in a conversation about the Internet.
• Packet The fundamental unit of data transmitted over the Internet. When a host (PC, workstation, server, printer, etc.) intends to send a message to another host (for example, your PC sends a request to YouTube to open a video), it breaks the message down into smaller pieces, called packets. Each packet has the sender’s address, the destination address, a sequence number, and a piece of the overall message to be sent. Different packets in a single message can take a variety of routes to the destination and they can arrive at different times. For this reason the sequence number is used to reassemble the packets in the proper order at the destination.
• Switch A network device that connects multiple hosts together and forwards packets based on their destination within the local network, which is commonly known as a Local Area Network (LAN).
• Router A device that receives and analyzes packets and then routes them towards their destination. In some cases a router will send a packet to another router. In other cases it will send it directly to its destination. Routers are used to connect one network to another network.
• IP Address Every device on the Internet (a personal computer, a tablet, a smartphone, etc.) is assigned a unique identifying number called an IP (Internet Protocol) address. Originally, the IPv4 (version 4) standard was used. It has a format of four numbers with values ranging from 0 to 255, separated by periods. For example, the domain Dell.com has the IPv4 address 107.23.196.166. The IPv4 standard has a limit of 4,294,967,296 possible addresses. As the use of the Internet has grown, the number of IP addresses needed has increased to the point where the supply of IPv4 addresses will be exhausted. This has led to the new IPv6 standard. The IPv6 standard is formatted as eight groups of four hexadecimal digits, such as 2001:0db8:85a3:0042:1000:8a2e:0370:7334. The IPv6 standard has a limit of 3.4×10^38 possible addresses. For example, the domain LinkedIn.com has an IPv6 address of [2620:109:c002::6cae:a0a]. You probably noticed that the address has only five groups of numbers. That’s because IPv6 allows the use of a double colon (::) to indicate a run of groups that are all zeroes and do not need to be displayed. For more detail about the IPv6 standard, see this Wikipedia article.
• Domain name If you had to try to remember the IP address of every web site you wanted to access, the Internet would not be nearly as easy to use. A domain name is a human-friendly name, convenient for remembering a website. These names generally consist of a descriptive word followed by a dot (period) and the Top-Level Domain (TLD). For example, Wikipedia’s domain name is Wikipedia.org. Wikipedia describes the organization and .org is the TLD. Other well-known TLDs include .com, .net, and .gov. For a list and description of top level domain names, see this Wikipedia article.
• DNS DNS stands for “domain name server or system.” DNS acts as the directory of websites on the Internet. When a request to access a host with a domain name is given, a DNS server is queried. It returns the IP address of the host requested, allowing for proper routing.
• Packet-switching When a message’s packets are sent on the Internet, routers try to find the optimal route for each packet. This can result in packets being sent on different routes to their destination. After the packets arrive they are re-assembled into the original message for the recipient. For more details on packet-switching, see this interactive web page.
• Protocol A protocol is the set of rules that govern how communications take place on a network. For example, the File Transfer Protocol (FTP) defines the communication rules for transferring files from one host to another. TCP/IP, discussed earlier, is known as a protocol suite since it contains numerous protocols.
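Several of the terms above (packets, sequence numbers, IP addresses, and DNS) can be illustrated with a short, self-contained sketch. This is a toy model only: the host addresses are drawn from documentation ranges, and the dict-based "DNS table" is an illustrative stand-in for a real DNS server rather than a live network lookup. The one real library used is Python's standard `ipaddress` module.

```python
import ipaddress

# --- Packets: split a message into numbered pieces, then reassemble ---
def packetize(src, dst, message, size=4):
    """Break a message into packets carrying sender, destination, and a sequence number."""
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [{"src": src, "dst": dst, "seq": n, "data": c} for n, c in enumerate(chunks)]

def reassemble(packets):
    """Packets may arrive out of order; sequence numbers restore the original message."""
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

pkts = packetize("192.0.2.1", "198.51.100.7", "open the video")
pkts.reverse()                       # simulate out-of-order arrival
print(reassemble(pkts))              # → open the video

# --- IP addresses: IPv4 vs IPv6, and what "::" hides ---
print(ipaddress.ip_address("107.23.196.166").version)    # → 4
print(2 ** 32)                                           # IPv4 address space: 4294967296
print(ipaddress.ip_address("2620:109:c002::6cae:a0a").exploded)
# → 2620:0109:c002:0000:0000:0000:6cae:0a0a  (the :: restored to zero groups)

# --- DNS: the directory mapping human-friendly names to IP addresses ---
dns_table = {"dell.com": "107.23.196.166"}   # stand-in for a real DNS server's records

def resolve(domain):
    """Return the IP address for a domain name, as a DNS server would."""
    return dns_table[domain.lower()]

print(resolve("Dell.com"))                   # → 107.23.196.166
```

Note how `resolve` lower-cases its input: like real DNS, the lookup is case-insensitive, which is why Dell.com and dell.com reach the same host.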
Internet Users Worldwide, December 2017.
(Public Domain. Courtesy of the Miniwatts Marketing Group)
The 1980s witnessed a significant growth in Internet usage. Internet access came primarily from government, academic, and research organizations. Much to the surprise of the engineers, the early popularity of the Internet was driven by the use of electronic mail (see the next sidebar).
Initially, Internet use meant having to type commands, even including IP addresses, in order to access a web server. That all changed in 1990 when Tim Berners-Lee introduced his World Wide Web project which provided an easy way to navigate the Internet through the use of hypertext. The World Wide Web gained even more steam in 1993 with the release of the Mosaic browser which allowed graphics and text to be combined as a way to present information and navigate the Internet.
The Dot-Com Bubble
In the 1980s and early 1990s, the Internet was being managed by the National Science Foundation (NSF). The NSF had restricted commercial ventures on the Internet, which meant that no one could buy or sell anything online. In 1991, the NSF transferred its role to three other organizations, thus getting the US government out of direct control over the Internet and essentially opening up commerce online.
This new commercialization of the Internet led to what is now known as the dot-com bubble. A frenzy of investment in new dot-com companies took place in the late 1990s with new tech companies issuing Initial Public Offerings (IPO) and heating up the stock market. This investment bubble was driven by the fact that investors knew that online commerce would change everything. Unfortunately, many of these new companies had poor business models and anemic financial statements showing little or no profit. In 2000 and 2001, the bubble burst and many of these new companies went out of business. Some companies survived, including Amazon (started in 1994) and eBay (1995). After the dot-com bubble burst, a new reality became clear. In order to succeed online, e-business companies would need to develop business models appropriate for the online environment.
Web 2.0
In the first few years of the World Wide Web, creating and hosting a website required a specific set of knowledge. A person had to know how to set up a web server, get a domain name, create web pages in HTML, and troubleshoot various technical issues.
Starting in the early 2000s, major changes came about in how the Internet was being used. These changes have come to be known as Web 2.0. Here are some key characteristics of Web 2.0.
• Universal access to Apps
• Value is found in content, not display software
• Data can be easily shared
• Distribution is bottom up, not top down
• Employees and customers can access and use tools on their own
• Informal networking is encouraged since more contributors results in better content
• Social tools encourage people to share information
[1]
Social networking, the last item in the list, has led to major changes in society. Prior to Web 2.0, major news outlets investigated and reported the important news stories of the day. But in today’s world individuals are able to easily share their own views on various events. Apps such as Facebook, Twitter, YouTube, and personal blogs allow people to express their own viewpoints.
Sidebar: E-mail Is the “Killer” App for the Internet
As discussed in chapter 3, a “killer app” is a use of a device that becomes so essential that large numbers of people will buy the device just to run that application. The killer app for the personal computer was the spreadsheet, enabling users to enter data, write formulas, and easily make “what if” decisions. With the introduction of the Internet came another killer app – E-mail.
The Internet was originally designed as a way for the Department of Defense to manage projects. However, the invention of electronic mail drove demand for the Internet. While this wasn’t what developers had in mind, it turned out that people connecting with people was the killer app for the Internet. As we look back today, we can see this being repeated again and again with new technologies that enable people to connect with each other.
Sidebar: The Internet and the World Wide Web Are Not the Same Thing
Many times the terms “Internet” and “World Wide Web,” or even just “the web,” are used interchangeably. But really, they are not the same thing.
The Internet is an interconnected network of networks. Services such as email, voice and video, file transfer, and the World Wide Web all run across the Internet. The World Wide Web is simply one part of the Internet. It is made up of web servers that have HTML pages that are being viewed on devices with web browsers.
The Growth of High Speed Internet
In the early days of the Internet, most access was accomplished via a modem over an analog telephone line. A modem was connected to the incoming phone line and then connected to a computer. Speeds were measured in bits-per-second (bps), with speeds growing from 1200 bps to 56,000 bps over the years. Connection to the Internet via modems is called dial-up access. As the web became more interactive, dial-up hindered usage when users wanted to transfer more and more data. As a point of reference, downloading a typical 3.5 MB song would take 24 minutes at 1200 bps and 2 minutes at 28,800 bps.
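Back-of-the-envelope transfer times like these follow from one formula: time equals size in bits divided by link speed in bits per second. The sketch below assumes the song is 3.5 megabytes (28 million bits); note that quoted figures for examples like this vary depending on whether "MB" is read as megabytes or megabits, since the two differ by a factor of eight.

```python
def download_seconds(size_bits, speed_bps):
    """Transfer time = size in bits / link speed in bits per second."""
    return size_bits / speed_bps

song_bits = 3.5 * 1_000_000 * 8                # 3.5 megabytes expressed in bits

print(round(download_seconds(song_bits, 1_200) / 60))     # → 389  (minutes at 1200 bps)
print(round(download_seconds(song_bits, 28_800) / 60))    # → 16   (minutes at 28,800 bps)
print(round(download_seconds(song_bits, 12_000_000), 1))  # → 2.3  (seconds at 12 Mbps broadband)
```

The last line shows why broadband changed how the Internet is used: a transfer that occupied a dial-up line for many minutes completes in a couple of seconds at typical broadband speeds.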
High speed Internet speeds, by definition, are a minimum of 256,000 bps, though most connections today are much faster, measured in millions of bits per second (megabits or Mbps) or even billions (gigabits). For the home user, a high speed connection is usually accomplished via the cable television lines or phone lines using a Digital Subscriber Line (DSL). Both cable and DSL have similar prices and speeds, though price and speed can vary in local communities. According to the website Recode, the average home broadband speed ranges from 12 Mbps to 125 Mbps.[2] Telecommunications companies provide T1 and T3 lines for greater bandwidth and reliability.
High speed access, also known as broadband, is important because it impacts how the Internet is used. Communities with high speed Internet have found that residences and businesses increase their usage of digital resources. Access to high speed Internet is now considered a basic human right by the United Nations, as declared in their 2011 statement:
“Broadband technologies are fundamentally transforming the way we live,” the Broadband Commission for Digital Development, set up in 2010 by the UN Educational, Scientific and Cultural Organization (UNESCO) and the UN International Telecommunications Union (ITU), said in issuing “The Broadband Challenge” at a leadership summit in Geneva.
“It is vital that no one be excluded from the new global knowledge societies we are building. We believe that communication is not just a human need – it is a right.”[3]
Wireless Networking
Thanks to wireless technology, access to the Internet is virtually everywhere, especially through a smartphone.
Wi-Fi
Wi-Fi takes an Internet signal and converts it into radio waves. These radio waves can be picked up within a radius of approximately 65 feet by devices with a wireless adapter. Several Wi-Fi specifications have been developed over the years, starting with 802.11b in 1999, followed by the 802.11g specification in 2003 and 802.11n in 2009. Each new specification improved the speed and range of Wi-Fi, allowing for more uses. One of the primary places where Wi-Fi is being used is in the home. Home users access Wi-Fi via in-home routers provided by the telecommunications firm that services the residence.
Mobile Network
As the cellphone has evolved into the smartphone, the desire for Internet access on these devices has led to data networks being included as part of the mobile phone network. While Internet connections were technically available earlier, it was really with the release of the 3G networks in 2001 (2002 in the US) that smartphones and other cellular devices could access data from the Internet. This new capability drove the market for new and more powerful smartphones, such as the iPhone, introduced in 2007. In 2011, wireless carriers began offering 4G data speeds, giving the cellular networks the same speeds that customers were accustomed to getting via their home connection.
Beginning in 2019, some parts of the world began seeing the implementation of 5G communication networks. Speeds associated with 5G will be greater than 1 Gbps, providing connection speeds to handle just about any type of application. Some have speculated that the 5G implementation will lead households to eliminate the purchase of wired Internet connections for their homes, using 5G wireless connections instead.
3G, 4G, and 5G Comparison
• 3G (deployed 2004-2005): bandwidth 2 Mbps; service: integrated high-quality audio, video, and data
• 4G (deployed 2006-2010): bandwidth 200 Mbps; service: dynamic information access, variable devices
• 5G (deployed by 2020): bandwidth greater than 1 Gbps; service: dynamic information access, variable devices with all capabilities
(James Dean, Raconteur, December 7, 2014)
[4]
Sidebar: Why Doesn’t My Cellphone Work When I Travel Abroad?
As mobile phone technologies have evolved, providers in different countries have chosen different communication standards for their mobile phone networks. There are two competing standards in the US: GSM (used by AT&T and T-Mobile) and CDMA (used by the other major carriers). Each standard has its pros and cons, but the bottom line is that phones using one standard cannot easily switch to the other. This is not a big deal in the US because mobile networks exist to support both standards. But when traveling to other countries, you will find that most of them use GSM networks. The one exception is Japan which has standardized on CDMA. It is possible for a mobile phone using one type of network to switch to the other type of network by changing out the SIM card, which controls your access to the mobile network. However, this will not work in all cases. If you are traveling abroad, it is always best to consult with your mobile provider to determine the best way to access a mobile network.
Bluetooth
While Bluetooth is not generally used to connect a device to the Internet, it is an important wireless technology that has enabled many functionalities that are used every day. When created in 1994 by Ericsson, it was intended to replace wired connections between devices. Today, it is the standard method for wirelessly connecting nearby devices. Bluetooth has a range of approximately 300 feet and consumes very little power, making it an excellent choice for a variety of purposes. Some applications of Bluetooth include: connecting a printer to a personal computer, connecting a mobile phone and headset, connecting a wireless keyboard and mouse to a computer, or connecting your mobile phone to your car, resulting in hands free operation of your phone.
VoIP
Voice over IP (VoIP) allows analog signals to be converted to digital signals, then transmitted on a network. By using existing technologies and software, voice communication over the Internet is now available to anyone with a browser (think Skype, WebEx, Google Hangouts). Beyond this, many companies are now offering VoIP-based telephone service for business and home use.
Organizational Networking
LAN and WAN
Scope of business networks
While the Internet was evolving and creating a way for organizations to connect to each other and the world, another revolution was taking place inside organizations. The proliferation of personal computers led to the need to share resources such as printers, scanners, and data. Organizations solved this problem through the creation of local area networks (LANs), which allowed computers to connect to each other and to peripherals.
A LAN is a local network, usually operating in the same building or on the same campus. A Wide Area Network (WAN) provides connectivity over a wider area such as an organization’s locations in different cities or states.
Client-Server
Client-server computing allows stand-alone devices such as personal computers, printers, and file servers to work together. The personal computer originally was used as a stand-alone computing device. A program was installed on the computer and then used to do word processing or calculations. With the advent of networking and local area networks, computers could work together to solve problems. Higher-end computers were installed as servers, and users on the local network could run applications and share information among departments and organizations.
Intranet
An intranet, as the name implies, provides web-based resources for the users within an organization. These web pages are not accessible to those outside the company. The pages typically contain information useful to employees such as policies and procedures. In an academic setting the intranet provides an interface to learning resources for students.
Extranet
Sometimes an organization wants to be able to collaborate with its customers or suppliers while at the same time maintaining the security of being inside its own network. In cases like this a company may want to create an extranet, which is a part of a company’s network that can be made available securely to those outside of the company. Extranets can be used to allow customers to log in and place orders, or for suppliers to check their customers’ inventory levels.
Sometimes an organization will need to allow someone who is not located physically within its internal network to gain secure access to the intranet. This access can be provided by a virtual private network (VPN). VPNs will be discussed further in Chapter 6, which focuses on information security.
Sidebar: Microsoft’s SharePoint Powers the Intranet
As organizations begin to see the power of collaboration between their employees, they often look for solutions that will allow them to leverage their intranet to enable more collaboration. Since most companies use Microsoft products for much of their computing, some are using Microsoft’s SharePoint to support employee collaboration.
SharePoint provides a communication and collaboration platform that integrates seamlessly with Microsoft’s Office suite of applications. Using SharePoint, employees can share a document and edit it together, avoiding the need to email the document for others to review. Projects and documents can be managed collaboratively across the organization. Corporate documents are indexed and made available for search.
Cloud Computing
Cloud computing was covered in Chapter 3. The universal availability of the Internet combined with increases in processing power and data-storage capacity have made cloud computing a viable option for many companies. Using cloud computing, companies or individuals can contract to store data on storage devices somewhere on the Internet. Applications can be “rented” as needed, giving a company the ability to quickly deploy new applications. The I.T. department benefits from not having to maintain software that is provided on the cloud.
Sidebar: Metcalfe’s Law
Just as Moore’s Law describes how computing power is increasing over time, Metcalfe’s Law describes the power of networking. Metcalfe’s Law states that the value of a telecommunications network is proportional to the square of the number of connected users of the system, or N². If a network has 10 nodes, the inherent value is 100, or 10².
Metcalfe’s Law is attributed to Robert Metcalfe, the co-inventor of Ethernet. It attempts to address the added value provided by each node on the network. Think about it this way: If none of your friends were on Instagram, would you spend much time there? If no one else at your school or place of work had e-mail, would it be very useful to you? Metcalfe’s Law tries to quantify this value.
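The quadratic growth the sidebar describes is easy to tabulate. This minimal sketch uses the N² form given above; note that some formulations of the law instead count the N(N−1)/2 possible pairwise connections, which grows at the same quadratic rate.

```python
def metcalfe_value(n):
    """Metcalfe's Law: a network's value is proportional to the square of its users."""
    return n ** 2

for users in (10, 100, 1_000):
    print(users, "users -> value", metcalfe_value(users))
# Each tenfold growth in users yields a hundredfold growth in value:
# 10 -> 100, 100 -> 10000, 1000 -> 1000000
```

This is why early adoption is so hard for social networks and messaging apps: with few users the value is tiny, but each new user makes the network more valuable for everyone already on it.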
Summary
The networking revolution has completely changed how personal computers are used. Today, no one would imagine using a computer that was not connected to one or more networks. The development of the Internet and World Wide Web, combined with wireless access, has made information available at our fingertips. The Web 2.0 revolution has made everyone potential authors of web content. As networking technology has matured, the use of Internet technologies has become a standard for every type of organization. The use of intranets and extranets has allowed organizations to deploy functionality to employees and business partners alike, increasing efficiencies and improving communications. Cloud computing has truly made information available everywhere.
Study Questions
1. What were the first four locations hooked up to the Internet (ARPANET)?
2. What does the term packet mean?
3. Which came first, the Internet or the World Wide Web?
4. What was revolutionary about Web 2.0?
5. What was the so-called killer app for the Internet?
6. What does the term VoIP mean?
7. What is a LAN?
8. What is the difference between an intranet and an extranet?
9. What is Metcalfe’s Law?
Exercises
1. What is the difference between the Internet and the World Wide Web? Create at least three statements that identify the differences between the two.
2. Who are the broadband providers in your area? What are the prices and speeds offered?
3. Pretend you are planning a trip to three foreign countries in the next month. Consult your wireless carrier to determine if your mobile phone would work properly in those countries. What would the costs be? What alternatives do you have if it would not work?
Labs
1. Check the speed of your Internet connection by going to the following web site: speedtest.net
What is your download and upload speed?
2. What is the IP address of your computer? How did you find it? Hint for Windows: Go to the start icon and click Run. Then open the Command Line Interface by typing: cmd
Then type: ipconfig
What is your IPv4 address? What is your IPv6 address?
3. When you enter an address in your web browser, a Domain Name Server (DNS) is used to look up the IP address of the site you are seeking. To locate the DNS server your computer is using, type: nslookup
Write down the name and address of your DNS server.
Use the nslookup command to find the address for a favorite web site. For example, to find the IP address of espn type: nslookup espn
Write down your website’s name and address. Note: it is on the line following the name of the web site you entered.
4. You can use the tracert (trace route) command to display the path from your computer to the web site’s IP address you used in the previous lab. For example: tracert 199.181.132.250
Be patient as tracert contacts each router in the path to your website’s server. A “Request timed out” message indicates the tracing is taking too long, probably due to a lack of bandwidth. You can stop the trace by pressing Ctrl + C.
5. The ping command allows you to check connectivity between the local host (your computer) and another host. If you are unable to connect to another host, the ping command can be used to incrementally test your connectivity. The IP address 127.0.0.1 is known as your home address (local host).
Begin your test by going to your command line interface (command prompt) and pinging your local host: ping 127.0.0.1
You should get a series of “Reply from 127.0.0.1” messages.
Next, ping the IP address you used in lab #3.
Sometimes a failed ping is not the result of a lack of connectivity. Network administrators of some IP addresses/hosts do not want their site pinged, so they block all ICMP packets. That’s the protocol used for pinging.
6. The whois.domaintools.com site provides you with information about a web site. For example, to find information about google.com, open your web browser and go to: whois.domaintools.com
Then in the Lookup window, type: google.com
Find information about a favorite site of yours. Record the following: administrator name, phone number, when the site was created, and the site’s name servers (the names begin with “ns”).
7. Network statistics can be displayed using the netstat command. In the command line window (see lab #2 for instructions on how to get to the command line), type: netstat -e
How many bytes were sent and how many were received?
Execute the command again and record your results. You should see an increase in both received and sent bytes.
To see a complete list of options/switches for the netstat command, type: netstat ?
1. Wolcott, M. (2017). What is Web 2.0? MoneyWatch. Retrieved from https://www.cbsnews.com/news/what-is-web-20/
2. Molla, R. (2017). These are the fastest and slowest Internet speeds”. Recode. Retrieved from https://www.recode.net/2017/6/9/1576...nternet-speeds
3. International Telecommunications Union. (2018, January 23). UN Broadband Commission sets goal broadband targets to bring online the world’s 3.8 billion not connected to the Internet. Retrieved from https://www.itu.int/en/mediacentre/P...2018-PR01.aspx
4. Dean, J. (2014). 4G vs 5G Mobile Technology. Raconteur. Retrieved from https://www.raconteur.net/technology...ile-technology
Learning Objectives
Upon successful completion of this chapter, you will be able to:
• identify the information security triad;
• identify and understand the high-level concepts surrounding information security tools; and
• secure yourself digitally.
Introduction
As computers and other digital devices have become essential to business and commerce, they have also increasingly become a target for attacks. In order for a company or an individual to use a computing device with confidence, they must first be assured that the device is not compromised in any way and that all communications will be secure. This chapter reviews the fundamental concepts of information systems security and discusses some of the measures that can be taken to mitigate security threats. The chapter begins with an overview focusing on how organizations can stay secure. Several different measures that a company can take to improve security will be discussed. Finally, you will review a list of security precautions that individuals can take in order to secure their personal computing environment.
The Information Security Triad: Confidentiality, Integrity, Availability (CIA)
Confidentiality
Protecting information means you want to be able to restrict access to those who are allowed to see it. This is sometimes referred to as NTK, or Need to Know. Everyone else should be disallowed from learning anything about its contents. This is the essence of confidentiality. For example, federal law requires that universities restrict access to private student information. Access to grade records should be limited to those who have authorized access.
Integrity
Integrity is the assurance that the information being accessed has not been altered and truly represents what is intended. Just as a person with integrity means what he or she says and can be trusted to consistently represent the truth, information integrity means information truly represents its intended meaning. Information can lose its integrity through malicious intent, such as when someone who is not authorized makes a change to intentionally misrepresent something. An example of this would be when a hacker is hired to go into the university’s system and change a student’s grade.
Integrity can also be lost unintentionally, such as when a computer power surge corrupts a file or someone authorized to make a change accidentally deletes a file or enters incorrect information.
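One common way to detect a loss of integrity, whether malicious or accidental, is to compare a cryptographic hash (checksum) of the data against a previously recorded value. The sketch below uses Python's standard hashlib module; the grade record is a made-up example, not a format from the chapter.

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the SHA-256 checksum of the given bytes as a hex string."""
    return hashlib.sha256(data).hexdigest()

# Record a checksum while the record is known to be good...
original = b"student_id=1001,grade=B+"   # hypothetical grade record
recorded = sha256_digest(original)

# ...and later verify the record has not been altered.
tampered = b"student_id=1001,grade=A+"
print(sha256_digest(original) == recorded)   # matches: integrity intact
print(sha256_digest(tampered) == recorded)   # differs: integrity lost
```

Even a one-character change produces a completely different checksum, which is why hashes are a practical integrity check for files and records.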
Availability
Information availability is the third part of the CIA triad. Availability means information can be accessed and modified by anyone authorized to do so in an appropriate timeframe. Depending on the type of information, appropriate timeframe can mean different things. For example, a stock trader needs information to be available immediately, while a sales person may be happy to get sales numbers for the day in a report the next morning. Online retailers require their servers to be available twenty-four hours a day, seven days a week. Other companies may not suffer if their web servers are down for a few minutes once in a while.
Tools for Information Security
In order to ensure the confidentiality, integrity, and availability of information, organizations can choose from a variety of tools. Each of these tools can be utilized as part of an overall information-security policy.
Authentication
The most common way to identify someone is through their physical appearance, but how do we identify someone sitting behind a computer screen or at the ATM? Tools for authentication are used to ensure that the person accessing the information is, indeed, who they present themselves to be.
Authentication can be accomplished by identifying someone through one or more of three factors:
1. Something they know,
2. Something they have, or
3. Something they are.
For example, the most common form of authentication today is the user ID and password. In this case, the authentication is done by confirming something that the user knows (their ID and password). But this form of authentication is easy to compromise (see sidebar) and stronger forms of authentication are sometimes needed. Identifying someone only by something they have, such as a key or a card, can also be problematic. When that identifying token is lost or stolen, the identity can be easily stolen. The final factor, something you are, is much harder to compromise. This factor identifies a user through the use of a physical characteristic, such as a retinal scan, fingerprint, or facial geometry. Identifying someone through their physical characteristics is called biometrics.
A more secure way to authenticate a user is through multi-factor authentication. By combining two or more of the factors listed above, it becomes much more difficult for someone to misrepresent themselves. An example of this would be the use of an RSA SecurID token. The RSA device is something you have, and it generates a new access code every sixty seconds. To log in to an information resource using the RSA device, you combine something you know, such as a four-digit PIN, with the code generated by the device. The only way to properly authenticate is by both knowing the code and having the RSA device.
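The rotating codes on such tokens can be illustrated with a time-based one-time password (TOTP, standardized in RFC 6238), which modern authenticator apps use for the same "something you have" factor. This is a sketch of the general idea, not RSA's proprietary SecurID algorithm, and the shared secret below is the RFC's published test value, not a real credential.

```python
import hmac
import hashlib
import struct

def totp(secret: bytes, for_time: int, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238, HMAC-SHA1 variant)."""
    counter = struct.pack(">Q", for_time // step)     # 30-second time window
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Token and server share the secret; each independently derives the same
# short-lived code, so knowing the password alone is not enough to log in.
shared_secret = b"12345678901234567890"               # RFC 6238 test secret
print(totp(shared_secret, for_time=59))               # "287082"
```

Because the code changes every thirty seconds, a stolen code is useless moments later, which is the property that makes the second factor valuable.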
Access Control
Once a user has been authenticated, the next step is to ensure that they can only access the information resources that are appropriate. This is done through the use of access control. Access control determines which users are authorized to read, modify, add, and/or delete information. Several different access control models exist. Two of the more common are: the Access Control List (ACL) and Role-Based Access Control (RBAC).
An information security employee can produce an ACL which identifies a list of users who have the capability to take specific actions with an information resource such as data files. Specific permissions are assigned to each user such as read, write, delete, or add. Only users with those permissions are allowed to perform those functions.
ACLs are simple to understand and maintain, but there are several drawbacks. The primary drawback is that each information resource is managed separately, so if a security administrator wanted to add or remove a user to a large set of information resources, it would be quite difficult. And as the number of users and resources increase, ACLs become harder to maintain. This has led to an improved method of access control, called role-based access control, or RBAC. With RBAC, instead of giving specific users access rights to an information resource, users are assigned to roles and then those roles are assigned the access. This allows the administrators to manage users and roles separately, simplifying administration and, by extension, improving security.
The following image shows an ACL with permissions granted to individual users. RBAC allows permissions to be assigned to roles, as shown in the middle grid, and then in the third grid each user is assigned a role. Although not modeled in the image, each user can have multiple roles such as Reader and Editor.
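The contrast between the two models can be sketched in a few lines of code. The users, roles, and permissions below are hypothetical: an ACL attaches permissions directly to each user, while RBAC routes them through a role, so changing what an Editor may do requires touching only one place.

```python
# ACL: permissions are attached directly to each user.
acl = {
    "alice": {"read", "write"},
    "bob": {"read"},
}

# RBAC: permissions are attached to roles; users are assigned roles.
role_permissions = {
    "reader": {"read"},
    "editor": {"read", "write", "delete"},
}
user_roles = {
    "alice": {"editor"},
    "bob": {"reader"},
}

def rbac_can(user: str, action: str) -> bool:
    """A user may act if any of their assigned roles grants the permission."""
    return any(action in role_permissions[r] for r in user_roles.get(user, set()))

print(rbac_can("alice", "delete"))  # True: granted via the editor role
print(rbac_can("bob", "write"))     # False: the reader role lacks write
```

To revoke delete rights from every editor in the RBAC version, an administrator edits one role entry; in the ACL version, every affected user entry must be found and changed.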
Sidebar: Password Security
So why is using just a simple user ID and password not considered a secure method of authentication? It turns out that this single-factor authentication is extremely easy to compromise. Good password policies must be put in place in order to ensure that passwords cannot be compromised. Below are some of the more common policies that organizations should use.
• Require complex passwords. One reason passwords are compromised is that they can be easily guessed. A recent study found that the top three passwords people used were password, 123456 and 12345678.[1] A password should not be simple, or a word that can be found in a dictionary. Hackers first attempt to crack a password by testing every term in the dictionary. Instead, a good password policy should require the use of a minimum of eight characters, at least one upper-case letter, one special character, and one digit.
• Change passwords regularly. It is essential that users change their passwords on a regular basis. Also, passwords may not be reused. Users should change their passwords every sixty to ninety days, ensuring that any passwords that might have been stolen or guessed will not be able to be used against the company.
• Train employees not to give away passwords. One of the primary methods used to steal passwords is to simply figure them out by asking the users for their password. Pretexting occurs when an attacker calls a helpdesk or security administrator and pretends to be a particular authorized user having trouble logging in. Then, by providing some personal information about the authorized user, the attacker convinces the security person to reset the password and tell him what it is. Another way that employees may be tricked into giving away passwords is through e-mail phishing. Phishing occurs when a user receives an e-mail that looks as if it is from a trusted source, such as their bank or employer. In the e-mail the user is asked to click a link and log in to a website that mimics the genuine website, then enter their ID and password. The userID and password are then captured by the attacker.
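The complexity rule above (a minimum of eight characters, with at least one upper-case letter, one special character, and one digit) can be checked mechanically. The sketch below is one possible reading of that policy; the sample passwords are made up.

```python
import string

def meets_policy(password: str) -> bool:
    """Check the chapter's example rule: at least 8 characters, with at
    least one upper-case letter, one digit, and one special character."""
    return (len(password) >= 8
            and any(c.isupper() for c in password)
            and any(c.isdigit() for c in password)
            and any(c in string.punctuation for c in password))

print(meets_policy("password"))     # False: one of the most common passwords
print(meets_policy("Tr4il!Mixer"))  # True: satisfies all four requirements
```

A real system would also reject dictionary words and previously breached passwords, which a length-and-character check alone cannot catch.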
Encryption
Many times an organization needs to transmit information over the Internet or transfer it on external media such as a flash drive. In these cases, even with proper authentication and access control, it is possible for an unauthorized person to gain access to the data. Encryption is a process of encoding data upon its transmission or storage so that only authorized individuals can read it. This encoding is accomplished by software which encodes the plain text that needs to be transmitted (encryption). The recipient then receives the cipher text and decodes it (decryption). In order for this to work, the sender and receiver need to agree on the method of encoding so that the message can be correctly decoded. When both parties share the same encryption key, enabling them to encode and decode each other’s messages, this is known as symmetric key encryption.
An alternative to symmetric key encryption is public key encryption. In public key encryption, two keys are used: a public key and a private key. To send an encrypted message, you obtain the public key, encode the message, and send it. The recipient then uses their private key to decode it. The public key can be given to anyone who wishes to send the recipient a message. Each user simply needs one private key and one public key in order to secure messages. The private key is necessary in order to decrypt a message sent with the public key.
Notice in the image how the sender on the left creates a plaintext message which is then encrypted with a public key. The ciphered text is transmitted through the communication channel and the recipient uses their private key to decrypt the message and then read the plain text.
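A toy illustration of the symmetric key idea (deliberately simplified and not safe for real use) is an XOR cipher: both parties hold the same key, and the very same operation both encodes and decodes. Real systems use vetted ciphers such as AES; the key and message below are made up.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte with the shared key; applying it twice restores the data."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

shared_key = b"sekrit"                              # both parties hold this key
plaintext = b"Meet at noon"
ciphertext = xor_cipher(plaintext, shared_key)      # sender encrypts
recovered = xor_cipher(ciphertext, shared_key)      # receiver decrypts, same key
print(recovered == plaintext)                       # True: round trip succeeds
print(ciphertext != plaintext)                      # True: unreadable in transit
```

The sketch also shows symmetric encryption's core weakness: the key itself must somehow be shared securely, which is exactly the problem public key encryption was designed to solve.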
Sidebar: Blockchain and Bitcoin
Blockchain
Introduced in 2008 as part of a proposal for Bitcoin, Blockchain is a peer-to-peer network which provides an open, distributed record of transactions between two parties. A “peer-to-peer” network is one where there is no server between the two nodes trying to communicate. Essentially, this means that each node acts as a server and a client.
Supporters see blockchain as a tool to simplify all types of transactions: payments, contracts, etc. Motivation comes from the desire to remove the middleman (lawyer, banker, broker) from transactions, making them more efficient and readily available across the Internet. Blockchain is already being used to track products through supply chains.
Blockchain is considered a foundational technology, potentially creating new foundations in economics and social systems. There are numerous concerns about Blockchain and its adoption. Consider the following:
• Speed of adoption. Initially there is a great deal of enthusiasm by a small group. However, adoption on a larger scale can take many years, even decades, before a new method of doing business gains worldwide acceptance.
• Governance. The banking sector, both in individual countries (U. S. Federal Reserve System) and the world at large (the International Monetary Fund), controls financial transactions. One purpose of these organizations is an attempt to avoid banking and financial systems collapse. Blockchain will result in the governance of financial transactions shifting away from these government-controlled institutions.
• Smart contracts. The smart contract will re-shape how businesses interact. It is possible for blockchain to automatically send payment to a vendor the instant the product is delivered to the customer. Such “self-executing” contracts are already taking place in banking and venture capital funding. [9]
Many are forecasting some universal form of payment or value transfer for business transactions. Blockchain and Bitcoin are being used to transform banking in various locations around the world. The following Bitcoin section includes a look at a new banking venture in Tanzania, East Africa.
Bitcoin
Bitcoin is a worldwide payment system using cryptocurrency. It functions without a central bank, operating as a peer-to-peer network with transactions happening directly between vendors and buyers. Records for transactions are recorded in the blockchain. Bitcoin technology was released in 2009. The University of Cambridge estimated there were between 2.9 and 5.8 million unique users of bitcoin in 2017.[10] This web site provides more information about bitcoin.
A major bitcoin project is underway in Tanzania. Business transactions in this East African country are fraught with many challenges such as counterfeit currency and a 28% transaction fee on individuals who do not have a bank account. Seventy percent of the country’s population fall into this category. Benjamin Fernandes, a Tanzanian and 2017 graduate of Stanford Graduate School of Business, is co-founder of NALA, a Tanzanian firm working to bring cryptocurrency to a country where 96% of the population have access to mobile devices. NALA’s goal is to provide low cost transactions to all of the country’s citizens through cryptocurrency.[11] You can read more of this cryptocurrency venture here.
Backups
Another essential tool for information security is a comprehensive backup plan for the entire organization. Not only should the data on the corporate servers be backed up, but individual computers used throughout the organization should also be backed up. A good backup plan should consist of several components.
• Full understanding of the organization’s information resources. What information does the organization actually have? Where is it stored? Some data may be stored on the organization’s servers, other data on users’ hard drives, some in the cloud, and some on third-party sites. An organization should make a full inventory of all of the information that needs to be backed up and determine the best way to back it up.
• Regular backups of all data. The frequency of backups should be based on how important the data is to the company, combined with the ability of the company to replace any data that is lost. Critical data should be backed up daily, while less critical data could be backed up weekly. Most large organizations today use data redundancy so their records are always backed up.
• Offsite storage of backup data sets. If all backup data is being stored in the same facility as the original copies of the data, then a single event such as an earthquake, fire, or tornado would destroy both the original data and the backup. It is essential the backup plan includes storing the data in an offsite location.
• Test of data restoration. Backups should be tested on a regular basis by having test data deleted then restored from backup. This will ensure that the process is working and will give the organization confidence in the backup plan.
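The "regular backups" step above can be sketched with only Python's standard library. The source and destination paths in this example are hypothetical, and a real plan would also ship the archive to an offsite location and periodically test restoring from it.

```python
import datetime
import pathlib
import tarfile

def back_up(source_dir: str, backup_dir: str) -> pathlib.Path:
    """Archive source_dir into a date-stamped .tar.gz under backup_dir."""
    src = pathlib.Path(source_dir)
    dest = pathlib.Path(backup_dir)
    dest.mkdir(parents=True, exist_ok=True)
    stamp = datetime.date.today().isoformat()          # e.g. 2019-05-01
    archive = dest / f"{src.name}-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(src, arcname=src.name)                 # keep a clean top folder
    return archive
```

Scheduling a call like back_up("/home/user/documents", "/mnt/offsite/backups") to run nightly would cover daily backups of critical data; the restoration test then consists of extracting an archive and comparing its contents to the originals.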
Besides these considerations, organizations should also examine their operations to determine what effect downtime would have on their business. If their information technology were to be unavailable for any sustained period of time, how would it impact the business?
Additional concepts related to backup include the following:
• Uninterruptible Power Supply (UPS). A UPS provides battery backup to critical components of the system, allowing them to stay online longer and/or allowing the IT staff to shut them down using proper procedures in order to prevent data loss that might occur from a power failure.
• Alternate, or “hot” sites. Some organizations choose to have an alternate site where an exact replica of their critical data is always kept up to date. When the primary site goes down, the alternate site is immediately brought online so that little or no downtime is experienced.
As information has become a strategic asset, a whole industry has sprung up around the technologies necessary for implementing a proper backup strategy. A company can contract with a service provider to back up all of their data or they can purchase large amounts of online storage space and do it themselves. Technologies such as Storage Area Networks (SAN) and archival systems are now used by most large businesses for data backup.
Firewalls
Firewalls are another method that an organization can use for increasing security on its network. A firewall can exist as hardware or software, or both. A hardware firewall is a device that is connected to the network and filters the packets based on a set of rules. One example of these rules would be preventing packets entering the local network that come from unauthorized users. A software firewall runs on the operating system and intercepts packets as they arrive to a computer.
A firewall protects all company servers and computers by stopping packets from outside the organization’s network that do not meet a strict set of criteria. A firewall may also be configured to restrict the flow of packets leaving the organization. This may be done to eliminate the possibility of employees watching YouTube videos or using Facebook from a company computer.
A demilitarized zone (DMZ) implements multiple firewalls as part of a network security configuration, creating one or more sections of the network that are partially secured. The DMZ typically contains resources that need broader access but still need to be secured.
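Packet filtering of the kind described above can be sketched as a first-match scan over an ordered rule list. The addresses and ports below are made-up documentation values, and real firewalls match on many more fields (destination address, protocol, connection state, and so on).

```python
from ipaddress import ip_address, ip_network

# Ordered rules: the first rule that matches decides the packet's fate.
# (action, source network, destination port or None for "any port")
RULES = [
    ("deny",  ip_network("203.0.113.0/24"), None),  # block a hostile range
    ("allow", ip_network("0.0.0.0/0"),      443),   # allow HTTPS from anywhere
    ("deny",  ip_network("0.0.0.0/0"),      None),  # default: drop everything else
]

def filter_packet(src_ip: str, dst_port: int) -> str:
    """Return 'allow' or 'deny' for a packet, using first-match semantics."""
    for action, network, port in RULES:
        if ip_address(src_ip) in network and port in (None, dst_port):
            return action
    return "deny"   # fail closed if no rule matches

print(filter_packet("198.51.100.7", 443))  # allow: HTTPS is permitted
print(filter_packet("203.0.113.9", 443))   # deny: blocked source range
print(filter_packet("198.51.100.7", 23))   # deny: telnet is not allowed
```

Because rules are evaluated in order, the specific "deny" for the hostile range must come before the general "allow" for port 443, a detail that trips up real firewall configurations as well.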
Intrusion Detection Systems
Intrusion Detection Systems (IDS) can be placed on the network for security purposes. An IDS does not add any additional security. Instead, it provides the capability to identify if the network is being attacked. An IDS can be configured to watch for specific types of activities and then alert security personnel if that activity occurs. An IDS also can log various types of traffic on the network for analysis later. It is an essential part of any good security system.
Sidebar: Virtual Private Networks
Using firewalls and other security technologies, organizations can effectively protect many of their information resources by making them invisible to the outside world. But what if an employee working from home requires access to some of these resources? What if a consultant is hired who needs to do work on the internal corporate network from a remote location? In these cases, a Virtual Private Network (VPN) is needed.
A VPN allows a user who is outside of a corporate network to take a detour around the firewall and access the internal network from the outside. Through a combination of software and security measures, a VPN provides off-site access to the organization’s network while ensuring overall security.
The Internet cloud is essentially an insecure channel through which people communicate to various web sites/servers. Implementing a VPN results in a secure pathway, usually referred to as a tunnel, through the insecure cloud, virtually guaranteeing secure access to the organization’s resources. The diagram represents security by way of the functionality of a VPN as it “tunnels” through the insecure Internet Cloud. Notice that the remote user is given access to the organization’s intranet, as if the user was physically located within the intranet.
Physical Security
An organization can implement the best authentication scheme in the world, develop superior access control, and install firewalls and intrusion detection, but its security cannot be complete without implementation of physical security. Physical security is the protection of the actual hardware and networking components that store and transmit information resources. To implement physical security, an organization must identify all of the vulnerable resources and take measures to ensure that these resources cannot be physically tampered with or stolen. These measures include the following.
• Locked doors. It may seem obvious, but all the security in the world is useless if an intruder can simply walk in and physically remove a computing device. High value information assets should be secured in a location with limited access.
• Physical intrusion detection. High value information assets should be monitored through the use of security cameras and other means to detect unauthorized access to the physical locations where they exist.
• Secured equipment. Devices should be locked down to prevent them from being stolen. One employee’s hard drive could contain all of your customer information, so it is essential that it be secured.
• Environmental monitoring. An organization’s servers and other high value equipment should always be kept in a room that is monitored for temperature, humidity, and airflow. The risk of a server failure rises when these factors exceed acceptable ranges.
• Employee training. One of the most common ways thieves steal corporate information is the theft of employee laptops while employees are traveling. Employees should be trained to secure their equipment whenever they are away from the office.
Security Policies
Besides the technical controls listed above, organizations also need to implement security policies as a form of administrative control. In fact, these policies should really be a starting point in developing an overall security plan. A good information security policy lays out the guidelines for employee use of the information resources of the company and provides the company recourse in the event that an employee violates a policy.
According to the SANS Institute, a good policy is “a formal, brief, and high-level statement or plan that embraces an organization’s general beliefs, goals, objectives, and acceptable procedures for a specified subject area.” Policies require compliance. Failure to comply with a policy will result in disciplinary action. A policy does not list the specific technical details, instead it focuses on the desired results. A security policy should be based on the guiding principles of confidentiality, integrity, and availability.[2]
Web use is a familiar example of a security policy. A web use policy lays out the responsibilities of company employees as they use company resources to access the Internet. A good example of a web use policy is included in Harvard University’s “Computer Rules and Responsibilities” policy, which can be found here.
A security policy should also address any governmental or industry regulations that apply to the organization. For example, if the organization is a university, it must be aware of the Family Educational Rights and Privacy Act (FERPA), which restricts access to student information. Health care organizations are obligated to follow several regulations, such as the Health Insurance Portability and Accountability Act (HIPAA).
A good resource for learning more about security policies is the SANS Institute’s Information Security Policy Page.
Sidebar: Mobile Security
As the use of mobile devices such as laptops and smartphones proliferates, organizations must be ready to address the unique security concerns that the use of these devices bring. One of the first questions an organization must consider is whether to allow mobile devices in the workplace at all. Many employees already have these devices, so the question becomes: Should we allow employees to bring their own devices and use them as part of their employment activities? Or should we provide the devices to our employees? Creating a BYOD (“Bring Your Own Device”) policy allows employees to integrate themselves more fully into their job and can bring higher employee satisfaction and productivity. In many cases, it may be virtually impossible to prevent employees from having their own smartphones or laptops in the workplace. If the organization provides the devices to its employees, it gains more control over use of the devices, but it also increases the burden of having to administrate distribution and use.
Mobile devices can pose many unique security challenges to an organization. Probably one of the biggest concerns is theft of intellectual property. For an employee with malicious intent, it would be a very simple process to connect a mobile device either to a computer via the USB port, or wirelessly to the corporate network, and download confidential data. It would also be easy to secretly take a high-quality picture using a built-in camera.
When an employee does have permission to access and save company data on his or her device, a different security threat emerges. Namely, that device now becomes a target for thieves. Theft of mobile devices (in this case, including laptops) is one of the primary methods that data thieves use.
So what can be done to secure mobile devices? Begin with a good policy regarding their use. According to a 2013 SANS study, organizations should consider developing a mobile device policy that addresses the following issues: use of the camera, use of voice recording, application purchases, encryption at rest, Wi-Fi autoconnect settings, Bluetooth settings, VPN use, password settings, lost or stolen device reporting, and backup. [3]
Besides policies, there are several different tools that an organization can use to mitigate some of these risks. For example, if a device is stolen or lost, geolocation software can help the organization find it. In some cases, it may even make sense to install remote data removal software, which will remove data from a device if it becomes a security risk.
Usability
When looking to secure information resources, organizations must balance the need for security with users’ needs to effectively access and use these resources. If a system’s security measures make it difficult to use, then users will find ways around the security, which may make the system more vulnerable than it would have been without the security measures. Consider password policies. If the organization requires an extremely long password with several special characters, an employee may resort to writing it down and putting it in a drawer since it will be impossible to memorize.
Personal Information Security
As a final topic for this chapter, consider what measures each of us, as individual users, can take to secure our computing technologies. There is no way to have 100% security, but there are several simple steps each individual can take to be more secure.
• Keep your software up to date. Whenever a software vendor determines that a security flaw has been found in their software, an update will be released so you can download the patch to fix the problem. You should turn on automatic updating on your computer to automate this process.
• Install antivirus software and keep it up to date. There are many good antivirus software packages on the market today, including some that are free.
• Be smart about your connections. You should be aware of your surroundings. When connecting to a Wi-Fi network in a public place, be aware that you could be at risk of being spied on by others sharing that network. It is advisable not to access your financial or personal data while attached to a Wi-Fi hotspot. You should also be aware that connecting USB flash drives to your device could also put you at risk. Do not attach an unfamiliar flash drive to your device unless you can scan it first with your security software.
• Back up your data. Just as organizations need to back up their data, individuals need to do so as well. The same rules apply. Namely, do it regularly and keep a copy of it in another location. One simple solution for this is to set up an account with an online backup service to automate your backups.
• Secure your accounts with two-factor authentication. Most e-mail and social media providers now have a two-factor authentication option. When you log in to your account from an unfamiliar computer for the first time, it sends you a text message with a code that you must enter to confirm that you are really you. This means that no one else can log in to your accounts without knowing your password and having your mobile phone with them.
• Make your passwords long, strong, and unique. Your personal passwords should follow the same rules that are recommended for organizations. Your passwords should be long (at least 12 random characters) and contain at least two of the following: uppercase and lowercase letters, digits, and special characters. Passwords should not include words that could be tied to your personal information, such as the name of your pet. You also should use different passwords for different accounts, so that if someone steals your password for one account, they still are locked out of your other accounts.
• Be suspicious of strange links and attachments. When you receive an e-mail, tweet, or Facebook post, be suspicious of any links or attachments included there. Do not click on the link directly if you are at all suspicious. Instead, if you want to access the website, find it yourself with your browser and navigate to it directly. The I Love You virus was distributed via email in May 2000 and contained an attachment which when opened copied itself into numerous folders on the user’s computer and modified the operating system settings. An estimated 50,000 computers were affected, all of which could have been avoided if users had followed the warning to not open the attachment.
You can find more about these steps and many other ways to be secure with your computing by going to Stop. Think. Connect. This website is part of a campaign by the STOP. THINK. CONNECT. Messaging Convention in partnership with the U.S. government, including the White House.
Summary
As computing and networking resources have become more an integral part of business, they have also become a target of criminals. Organizations must be vigilant with the way they protect their resources. The same holds true for individuals. As digital devices become more intertwined in everyone’s life, it becomes crucial for each person to understand how to protect themselves.
Study Questions
1. Briefly define each of the three members of the information security triad.
2. What does the term authentication mean?
3. What is multi-factor authentication?
4. What is role-based access control?
5. What is the purpose of encryption?
6. What are two good examples of a complex password?
7. What is pretexting?
8. What are the components of a good backup plan?
9. What is a firewall?
10. What does the term physical security mean?
Exercises
1. Describe one method of multi-factor authentication that you have experienced and discuss the pros and cons of using multi-factor authentication.
2. What are some of the latest advances in encryption technologies? Conduct some independent research on encryption using scholarly or practitioner resources, then write a two- to three-page paper that describes at least two new advances in encryption technology.
3. Find favorable and unfavorable articles about both blockchain and bitcoin. Report your findings, then state your own opinion about these technologies.
4. What is the password policy at your place of employment or study? Do you have to change passwords every so often? What are the minimum requirements for a password?
5. When was the last time you backed up your data? What method did you use? In one to two pages, describe a method for backing up your data. Ask your instructor if you can get extra credit for backing up your data.
6. Find the information security policy at your place of employment or study. Is it a good policy? Does it meet the standards outlined in the chapter?
7. How diligent are you in keeping your own information secure? Review the steps listed in the chapter and comment on your security status.
Labs
1. The Caesar Cipher. One of the oldest methods of encryption was used by Julius Caesar and involved simply shifting text a specified number of positions in the alphabet. The number of shifted positions is known as the key. So a key = 3 would encrypt ZOO to CRR. Decrypt the following message which has a key = 3: FRPSXWHU
2. The Vigenere Cipher. This cipher was used as recently as the Civil War by the Confederate forces. The key is slightly more complex than the Caesar Cipher. Vigenere used the number of letters after ‘A’ for his key. For example, if the key = COD, the first letter in the cypher is shifted 2 characters (because “C” is 2 letters after the letter ‘A’), the second letter is shifted 14 letters (O being 14 letters after ‘A’), and the third letter is shifted 3 letters (D being 3 letters after ‘A’). Then the pattern is repeated for subsequent letters. Decrypt the following message which has a key = COD: YSPGSWCHGCKQ
3. Frequency and Pattern Analysis. If you’ve ever watched Wheel of Fortune you know that contestants look for patterns and frequencies in trying to solve a puzzle. Your job in this lab is to analyze letter frequency and letter patterns to determine the plaintext message, which in this case is a single word. The key is a simple substitution where the same letter in plaintext always results in the same letter in the cyphertext. The most frequently used letters in the English language are: E, A, O, I, T, S, N. Pattern analysis includes knowing words that have double letters, such as “school.” Other patterns include “ing” at the end of a word, and “qu” and “th” as pairs of letters. Cyphertext = CAGGJ. What is the key and the plaintext?
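The shift logic behind Labs 1 and 2, and the letter counting behind Lab 3, can be automated. Below is a minimal Python sketch (the function names are illustrative, not part of any standard library); try the decryptions by hand first, then use the code to check your answers. The worked example shown is the one from Lab 1, where a key of 3 encrypts ZOO to CRR.

```python
from collections import Counter

def caesar(text, key, decrypt=False):
    """Shift each letter of an uppercase message by `key` positions."""
    shift = -key if decrypt else key
    return "".join(chr((ord(c) - 65 + shift) % 26 + 65) for c in text)

def vigenere(text, keyword, decrypt=False):
    """Shift each letter by the position of the matching keyword letter after 'A'."""
    out = []
    for i, c in enumerate(text):
        k = ord(keyword[i % len(keyword)]) - 65  # e.g., 'C' -> 2, 'O' -> 14, 'D' -> 3
        shift = -k if decrypt else k
        out.append(chr((ord(c) - 65 + shift) % 26 + 65))
    return "".join(out)

def letter_frequencies(text):
    """Count how often each letter appears -- the starting point for Lab 3."""
    return Counter(text)

# The worked example from Lab 1: key = 3 encrypts ZOO to CRR, and back again.
print(caesar("ZOO", 3))                # CRR
print(caesar("CRR", 3, decrypt=True))  # ZOO
```

The `% 26` keeps the shift inside the alphabet, which is what wraps Z around to C in the Caesar example.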
02: Information Systems for Strategic Advantage
Learning Objectives
Upon successful completion of this chapter, you will be able to:
• define the productivity paradox and explain the current thinking on this topic;
• evaluate Carr’s argument in “Does IT Matter?”;
• describe the components of competitive advantage; and
• describe information systems that can provide businesses with competitive advantage.
Introduction
For over fifty years, computing technology has been a part of business. Organizations have spent trillions of dollars on information technologies. But has all this investment in IT made a difference? Have there been increases in productivity? Are companies that invest in IT more competitive? This chapter looks at the value IT can bring to an organization and attempts to answer these questions. Two important works in the past two decades have attempted to address this issue.
The Productivity Paradox
In 1991, Erik Brynjolfsson wrote an article, published in the Communications of the ACM, entitled “The Productivity Paradox of Information Technology: Review and Assessment.” After reviewing studies about the impact of IT investment on productivity, Brynjolfsson concluded that the addition of information technology to business had not improved productivity at all. He called this the “productivity paradox.” While he did not draw any specific conclusions from his work, [1] he did provide the following analysis.
Although it is too early to conclude that IT’s productivity contribution has been subpar, a paradox remains in our inability to unequivocally document any contribution after so much effort. The various explanations that have been proposed can be grouped into four categories:
1. Mismeasurement of outputs and inputs
2. Lags due to learning and adjustment
3. Redistribution and dissipation of profits
4. Mismanagement of information and technology
In 1998, Brynjolfsson and Lorin Hitt published a follow-up paper entitled “Beyond the Productivity Paradox.” [2] In this paper, the authors utilized new data that had been collected and found that IT did, indeed, provide a positive result for businesses. Further, they found that sometimes the true advantages in using technology were not directly relatable to higher productivity, but to “softer” measures, such as the impact on organizational structure. They also found that the impact of information technology can vary widely between companies.
IT Doesn’t Matter
Just as a consensus was forming about the value of IT, the Internet stock market bubble burst. Two years later in 2003, Harvard professor Nicholas Carr wrote his article “IT Doesn’t Matter” in the Harvard Business Review. In this article Carr asserted that as information technology had become ubiquitous, it had also become less of a differentiator, much like a commodity. Products that have the same features and are virtually indistinguishable are considered to be commodities. Price and availability typically become the only discriminators when selecting a source for a commodity. In Carr’s view all information technology was the same, delivering the same value regardless of price or supplier. Carr suggested that since IT is essentially a commodity, it should be managed like one: just select the option with the lowest cost that is most easily accessible. He went on to say IT management should see themselves as a utility within the company and work to keep costs down. For Carr, IT’s goal is to provide the best service with minimal downtime. Carr saw no competitive advantage to be gained through information technology.
As you can imagine, this article caused quite an uproar, especially from IT companies. Many articles were written in defense of IT while others supported Carr. In 2004 Carr released a book based on the article entitled Does IT Matter? A year later he was interviewed by CNET on the topic “IT still doesn’t matter.”
Probably the best thing to come out of the article and subsequent book were discussions on the place of IT in a business strategy, and exactly what role IT could play in competitive advantage. That is the question to be addressed in this chapter.
Competitive Advantage
What does it mean when a company has a competitive advantage? What are the factors that play into it? Michael Porter, in his book Competitive Advantage: Creating and Sustaining Superior Performance, writes that a company is said to have a competitive advantage over its rivals when it is able to sustain profits that exceed the average for the industry. According to Porter, there are two primary methods for obtaining competitive advantage: cost advantage and differentiation advantage. [3] So the question for IT becomes: How can information technology be a factor in one or both of these methods?
The following sections address this question by using two of Porter’s analysis tools: the value chain and the five forces model. Porter’s analysis in his 2001 article “Strategy and the Internet,” which examines the impact of the Internet on business strategy and competitive advantage, will be used to shed further light on the role of information technology in gaining competitive advantage.[4]
The Value Chain
In his book Competitive Advantage: Creating and Sustaining Superior Performance, Porter describes exactly how a company can create value and therefore profit. Value is built through the value chain: a series of activities undertaken by the company to produce a product or service. Each step in the value chain contributes to the overall value of a product or service. While the value chain may not be a perfect model for every type of company, it does provide a way to analyze just how a company is producing value. The value chain is made up of two sets of activities: primary activities and support activities. An explanation of these activities and a discussion of how information technology can play a role in creating value by contributing to cost advantage or differentiation advantage appears next.
Primary activities are the functions that directly impact the creation of a product or service. The goal of a primary activity is to add value that is greater than the cost of that activity. The primary activities are:
• Inbound logistics. These are the processes that bring in raw materials and other needed inputs. Information technology can be used to make these processes more efficient, such as with supply-chain management systems which allow the suppliers to manage their own inventory.
• Operations. Any part of a business that converts the raw materials into a final product or service is a part of operations. From manufacturing to business process management (covered in Chapter 8), information technology can be used to provide more efficient processes and increase innovation through flows of information.
• Outbound logistics. These are the functions required to get the product out to the customer. As with inbound logistics, IT can be used here to improve processes, such as allowing for real-time inventory checks. IT can also be a delivery mechanism itself.
• Sales/Marketing. The functions that will entice buyers to purchase the products are part of sales and marketing. Information technology is used in almost all aspects of this activity. From online advertising to online surveys, IT can be used to innovate product design and reach customers as never before. The company website can be a sales channel itself.
• Service. Service activity involves the functions a business performs after the product has been purchased to maintain and enhance the product’s value. Service can be enhanced via technology as well, including support services through websites and knowledge bases.
The support activities are the functions in an organization that support all of the primary activities. Support activities can be considered indirect costs to the organization. The support activities are:
• Firm infrastructure. An organization’s infrastructure includes finance, accounting, ERP systems (covered in Chapter 9) and quality control. All of these depend on information technology and represent functions where IT can have a positive impact.
• Human Resource Management. Human Resource Management (HRM) consists of recruiting, hiring, and other services needed to attract and retain employees. Using the Internet, HR departments can increase their reach when looking for candidates. IT also allows employees to use technology for a more flexible work environment.
• Technology development. Technology development provides innovation that supports primary activities. These advances are integrated across the firm to add value in a variety of departments. Information technology is the primary generator of value in this support activity.
• Procurement. Procurement focuses on the acquisition of raw materials used in the creation of products. Business-to-business e-commerce can be used to improve the acquisition of materials.
This analysis of the value chain provides some insight into how information technology can lead to competitive advantage. Another important concept from Porter is the “Five Forces Model.”
Porter’s Five Forces
Porter developed the Five Forces model as a framework for industry analysis. This model can be used to help understand the degree of competition in an industry and analyze its strengths and weaknesses. The model consists of five elements, each of which plays a role in determining the average profitability of an industry. In 2001 Porter wrote an article entitled ”Strategy and the Internet,” in which he takes this model and looks at how the Internet impacts the profitability of an industry. Below is a quick summary of each of the Five Forces and the impact of the Internet.
• Threat of substitute products or services. The first force challenges the user to consider the likelihood of another product or service replacing the product or service you offer. The more types of products or services there are that can meet a particular need, the less profitability there will be in an industry. In the communications industry, the smartphone has largely replaced the pager. In some construction projects, metal studs have replaced wooden studs for framing. The Internet has made people more aware of substitute products, driving down industry profits in those industries in which substitution occurs. Please notice that substitution refers to a product being replaced by a similar product for the purpose of accomplishing the same task. It does not mean dissimilar products or services such as flying to a destination rather than traveling by rail.
• Bargaining power of suppliers. A supplier’s bargaining power is strong when there are few suppliers from which your company can obtain a needed product or service. Conversely, when there are many suppliers their bargaining power is lower, since your company would have many sources from which to source a product. When your company has several suppliers to choose from, you can negotiate a lower price. When a sole supplier exists, then your company is at the mercy of the supplier. For example, if only one company makes the controller chip for a car engine, that company can control the price, at least to some extent. The Internet has given companies access to more suppliers, driving down prices.
• Bargaining power of customers. A customer’s bargaining power is strong when your company along with your competitors is attempting to provide the same product to this customer. In this instance the customer has many sources from which to source a product so they can approach your company and seek a price reduction. If there are few suppliers in your industry, then the customer’s bargaining power is considered low.
• Barriers to entry. The easier it is to enter an industry, the more challenging it will be to make a profit in that industry. Imagine you are considering starting a lawn mowing business. The entry barrier is very low since all you need is a lawn mower. No special skills or licenses are required. However, this means your neighbor next door may decide to start mowing lawns also, resulting in increased competition. In contrast, a highly technical industry such as the manufacturing of medical devices has numerous barriers to entry. You would need to find numerous suppliers for various components, hire a variety of highly skilled engineers, and work closely with the Food and Drug Administration to secure approval for the sale of your products. In this example the barriers to entry are very high, so you should expect few competitors.
• Rivalry among existing competitors: Rivalry among existing competitors helps you evaluate your entry into the market. When rivalry is fierce, each competitor is attempting to gain additional market share from the others. This can result in aggressive pricing, increasing customer support, or other factors which might lure a customer away from a competitor. Markets in which rivalry is low may be easier to enter and become profitable sooner because all of the competitors are accepting of each other’s presence.
Porter’s five forces are used to analyze an industry to determine the average profitability of a company within that industry. Adding in Porter’s analysis of the Internet to his Five Forces results in the realization that technology has lowered overall profitability. [5]
Using Information Systems for Competitive Advantage
Having learned about Porter’s Five Forces and their impact on a firm’s ability to generate a competitive advantage, it is time to look at some examples of competitive advantage. A strategic information system is designed specifically to implement an organizational strategy meant to provide a competitive advantage. These types of information systems began popping up in the 1980s, as noted in a paper by Charles Wiseman entitled “Creating Competitive Weapons From Information Systems.”[6]
A strategic information system attempts to do one or more of the following:
• Deliver a product or a service at a lower cost;
• Deliver a product or service that is differentiated;
• Help an organization focus on a specific market segment;
• Enable innovation.
Here are some examples of information systems that fall into this category.
Business Process Management Systems
In their book, IT Doesn’t Matter – Business Processes Do, Howard Smith and Peter Fingar argue that it is the integration of information systems with business processes that leads to competitive advantage. The authors state that Carr’s article is dangerous because it gave CEOs and IT managers approval to start cutting their technology budgets, putting their companies in peril. True competitive advantage can be found with information systems that support business processes. Chapter 8 focuses on the use of business processes for competitive advantage.
Electronic Data Interchange
Electronic Data Interchange (EDI) provides a competitive advantage through integrating the supply chain electronically. EDI can be thought of as the computer-to-computer exchange of business documents in a standard electronic format between business partners. By integrating suppliers and distributors via EDI, a company can vastly reduce the resources required to manage the relevant information. Instead of manually ordering supplies, the company can simply place an order via the computer and the products are ordered.
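As a toy illustration of the computer-to-computer exchange just described (real EDI uses rigid standardized formats such as ANSI X12 or EDIFACT, not the ad hoc structure invented here), two trading partners who agree on a machine-readable document format can exchange a purchase order with no manual re-keying:

```python
import json

# A hypothetical purchase order; real EDI standards define fixed segment layouts.
purchase_order = {
    "document_type": "purchase_order",
    "buyer": "ACME Manufacturing",
    "supplier": "Widget Supply Co.",
    "lines": [{"sku": "WID-100", "quantity": 500, "unit_price": 1.25}],
}

# The sender serializes the document and transmits it; the receiver's system
# parses it directly into its own order-entry process.
message = json.dumps(purchase_order)
received = json.loads(message)
print(received["lines"][0]["quantity"])  # 500
```

The value is in the agreement on format: because both systems interpret the same fields the same way, the order flows between companies without human intervention.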
Collaborative Systems
As organizations began to implement networking technologies, information systems emerged that allowed employees to begin collaborating in different ways. These systems allowed users to brainstorm ideas together without the necessity of physical, face-to-face meetings. Tools such as video conferencing with Skype or WebEx, collaboration and document sharing with Microsoft SharePoint, and project management with SAP’s Project System make collaboration possible in a variety of endeavors.
Broadly speaking, any software that allows multiple users to interact on a document or topic could be considered collaborative. Electronic mail, a shared Word document, and social networks fall into this broad definition. However, many software tools have been created that are designed specifically for collaborative purposes. These tools offer a broad spectrum of collaborative functions. Here is just a short list of some collaborative tools available for businesses today:
• Google Drive. Google Drive offers a suite of office applications (such as a word processor, spreadsheet, drawing, presentation) that can be shared between individuals. Multiple users can edit the documents at the same time and the threaded comments option is available.
• Microsoft SharePoint. SharePoint integrates with Microsoft Office and allows for collaboration using tools most office workers are familiar with. SharePoint was covered in greater detail in chapter 5.
• Cisco WebEx. WebEx combines video and audio communications and allows participants to interact with each other’s computer desktops. WebEx also provides a shared whiteboard and the capability for text-based chat to be going on during the sessions, along with many other features. Mobile editions of WebEx allow for full participation using smartphones and tablets.
• GitHub. Programmers/developers use GitHub for web-based team development of computer software.
Decision Support Systems
A decision support system (DSS) helps an organization make a specific decision or set of decisions. DSSs can exist at different levels of decision-making within the organization, from the CEO to first level managers. These systems are designed to take inputs regarding a known (or partially-known) decision making process and provide the information necessary to make a decision. DSSs generally assist a management level person in the decision-making process, though some can be designed to automate decision-making.
An organization has a wide variety of decisions to make, ranging from highly structured decisions to unstructured decisions. A structured decision is usually one that is made quite often, and one in which the decision is based directly on the inputs. With structured decisions, once you know the necessary information you also know the decision that needs to be made. For example, inventory reorder levels can be structured decisions. Once your inventory of widgets gets below a specific threshold, automatically reorder ten more. Structured decisions are good candidates for automation, but decision-support systems are generally not built for them.
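The widget reorder rule above is structured enough to automate directly. A minimal sketch (the threshold value is assumed for illustration; the quantity is the chapter’s “reorder ten more”):

```python
REORDER_THRESHOLD = 20   # assumed threshold for illustration
REORDER_QUANTITY = 10    # "automatically reorder ten more"

def reorder_decision(current_inventory):
    """A structured decision: the known inputs fully determine the outcome."""
    if current_inventory < REORDER_THRESHOLD:
        return REORDER_QUANTITY
    return 0

print(reorder_decision(15))  # 10 -- below threshold, so reorder
print(reorder_decision(30))  # 0  -- enough stock on hand
```

Because nothing outside the inputs influences the result, rules like this are automated rather than supported by a DSS.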
An unstructured decision involves a lot of unknowns. Many times unstructured decisions are made for the first time. An information system can support these types of decisions by providing the decision makers with information gathering tools and collaborative capabilities. An example of an unstructured decision might be dealing with a labor issue or setting policy for the implementation of a new technology.
Decision support systems work best when the decision makers are having to make semi-structured decisions. A semi-structured decision is one in which most of the factors needed for making the decision are known but human experience and other outside factors may still impact the decision. A good example of a semi-structured decision would be diagnosing a medical condition (see sidebar).
As with collaborative systems, DSSs can come in many different formats. A nicely designed spreadsheet that allows for input of specific variables and then calculates required outputs could be considered a DSS. Another DSS might be one that assists in determining which products a company should develop. Input into the system could include market research on the product, competitor information, and product development costs. The system would then analyze these inputs based on the specific rules and concepts programmed into it. The system would report its results with recommendations and/or key indicators to be used in making a decision. A DSS can be looked at as a tool for competitive advantage because it can give an organization a mechanism to make wise decisions about products and innovations.
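A spreadsheet-style DSS of the kind described here can be sketched as a simple weighted-scoring model. The criteria, weights, and scores below are invented for illustration; a real system would draw them from market research and cost data:

```python
# Hypothetical weighting of the inputs for a product-development decision.
weights = {"market_demand": 0.5, "competition": 0.2, "dev_cost": 0.3}

# Hypothetical scores (higher is better) for two candidate products.
candidates = {
    "Product A": {"market_demand": 8, "competition": 6, "dev_cost": 4},
    "Product B": {"market_demand": 6, "competition": 9, "dev_cost": 7},
}

def score(inputs):
    """Combine the inputs using the rules programmed into the DSS."""
    return sum(weights[k] * v for k, v in inputs.items())

# Report a key indicator per option to support -- not replace -- the decision.
for name, inputs in sorted(candidates.items(), key=lambda kv: -score(kv[1])):
    print(f"{name}: {score(inputs):.1f}")
```

The output is a ranked set of indicators; the human decision maker still weighs experience and outside factors before committing, which is what makes the decision semi-structured.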
Sidebar: Isabel – A Health Care DSS
As discussed in the text, DSSs are best applied to semi-structured decisions, in which most of the needed inputs are known but human experience and environmental factors also play a role. A good example for today is Isabel, a health care DSS. The creators of Isabel explain how it works:
Isabel uses the information routinely captured during your workup, whether free text or structured data, and instantaneously provides a diagnosis checklist for review. The checklist contains a list of possible diagnoses with critical “Don’t Miss Diagnoses” flagged. When integrated into your Electronic Medical Records (EMR) system, Isabel can provide “one click” seamless diagnosis support with no additional data entry. [7]
Investing in IT for Competitive Advantage
In 2008, Brynjolfsson and McAfee published a study in the Harvard Business Review on the role of IT in competitive advantage, entitled “Investing in the IT That Makes a Competitive Difference.” Their study confirmed that IT can play a role in competitive advantage if deployed wisely. In their study, they drew three conclusions[8]:
• First, the data show that IT has sharpened differences among companies instead of reducing them. This reflects the fact that while companies have always varied widely in their ability to select, adopt, and exploit innovations, technology has accelerated and amplified these differences.
• Second, good management matters. Highly qualified vendors, consultants, and IT departments might be necessary for the successful implementation of enterprise technologies themselves, but the real value comes from the process innovations that can now be delivered on those platforms. Fostering the right innovations and propagating them widely are both executive responsibilities – ones that can’t be delegated.
• Finally, the competitive shakeup brought on by IT is not nearly complete, even in the IT-intensive US economy. You can expect to see these altered competitive dynamics in other countries, as well, as their IT investments grow.
Information systems can be used for competitive advantage, but they must be used strategically. Organizations must understand how they want to differentiate themselves and then use all the elements of information systems (hardware, software, data, people, and process) to accomplish that differentiation.
Summary
Information systems are integrated into all components of business today, but can they bring competitive advantage? Over the years, there have been many answers to this question. Early research could not draw any connections between IT and profitability, but later studies have shown that the impact can be positive. IT is not a panacea. Just purchasing and installing the latest technology will not by itself make a company more successful. Instead, the combination of the right technologies and good management will give a company the best chance for a positive result.
Study Questions
1. What is the productivity paradox?
2. Summarize Carr’s argument in “Does IT Matter.”
3. How is the 2008 study by Brynjolfsson and McAfee different from previous studies? How is it the same?
4. What does it mean for a business to have a competitive advantage?
5. What are the primary activities and support activities of the value chain?
6. What has been the overall impact of the Internet on industry profitability? Who has been the true winner?
7. How does EDI work?
8. Give an example of a semi-structured decision and explain what inputs would be necessary to provide assistance in making the decision.
9. What does a collaborative information system do?
10. How can IT play a role in competitive advantage, according to the 2008 article by Brynjolfsson and McAfee?
Exercises
1. Analyze Carr’s position with regard to PC vs. Mac, Open Office vs. Microsoft Office, and Microsoft Powerpoint vs. Tableau.
2. Do some independent research on Nicholas Carr (the author of “IT Doesn’t Matter”) and explain his current position on the ability of IT to provide competitive advantage.
3. Review the WebEx website. What features of WebEx would contribute to good collaboration? How does WebEx compare with other collaboration tools such as Skype or Google Hangouts?
Lab
1. Think of a semi-structured decision that you make in your daily life and build your own DSS using a spreadsheet that would help you make that decision.
1. Brynjolfsson, E. (1994). The Productivity Paradox of Information Technology: Review and Assessment. Center for Coordination Science MIT Sloan School of Management: Cambridge, Massachusetts.
2. Brynjolfsson, E. and Hitt, L. (1998). Beyond the Productivity Paradox. Communications of the ACM, 41, 49–55.
3. Porter, M. (1985). Competitive Advantage: Creating and Sustaining Superior Performance. New York: The Free Press.
4. Porter, M. (2001, March). Strategy and the Internet. Harvard Business Review, 79 ,3. Retrieved from http://hbswk.hbs.edu/item/2165.html
5. Porter, M. (2001, March). Strategy and the Internet. Harvard Business Review, 79, 3. Retrieved from http://hbswk.hbs.edu/item/2165.html
6. Wiseman, C. and MacMillan, I. C. (1984). Creating Competitive Weapons From Information Systems. Journal Of Business Strategy, 5(2)., 42.
7. Isabel. (n.d.). Broaden Your Differential Diagnosis. Retrieved from http://www.isabelhealthcare.com/home/ourmission.
8. McAfee, A. and Brynjolfsson, E. (2008, July-August). Investing in the IT That Makes a Competitive Difference. Harvard Business Review.
Learning Objectives
Upon successful completion of this chapter, you will be able to:
• define the term business process;
• understand the tools of documentation of business processes;
• identify the different systems needed to support business processes in an organization;
• explain the value of an enterprise resource planning (ERP) system;
• explain how business process management and business process reengineering work; and
• understand how information technology combined with business processes can bring an organization competitive advantage.
Introduction
The fourth component of information systems is process. But what is a process and how does it tie into information systems? And in what ways do processes have a role in business? This chapter looks to answer those questions and also describe how business processes can be used for strategic advantage.
What Is a Business Process?
We have all heard the term process before, but what exactly does it mean? A process is a series of tasks that are completed in order to accomplish a goal. A business process, therefore, is a process that is focused on achieving a goal for a business. Processes are something that businesses go through every day in order to accomplish their mission. The better their processes, the more effective the business. Some businesses see their processes as a strategy for achieving competitive advantage. A process that achieves its goal in a unique way can set a company apart. A process that eliminates costs can allow a company to lower its prices (or retain more profit). If you have worked in a business setting, you have participated in a business process. Anything from a simple process for making a sandwich at Subway to building a space shuttle utilizes one or more business processes. In the context of information systems, a business process is a set of business activities performed by human actors and/or the information system to accomplish a specific outcome.
Documenting a Process
Every day each of us will perform many processes without even thinking about them, such as getting ready for work, using an ATM, texting a friend, etc. As processes grow more complex, documenting them becomes necessary. It is essential for businesses to do this because it allows them to ensure control over how activities are undertaken in their organization. It also allows for standardization. For example, McDonald’s has the same process for building a Big Mac in all of its restaurants.
The simplest way to document a process is to just create a list. The list shows each step in the process. Each step can be checked off upon completion. A simple process such as how to create an account on Gmail might look like this:
1. Go to gmail.com.
2. Click “Create account.”
3. Enter your contact information in the “Create your Google Account” form.
4. Choose your username and password.
5. Agree to User Agreement and Privacy Policy by clicking on “Submit.”
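A simple linear checklist like this maps naturally onto an ordered data structure. Here is a minimal sketch in Python; the step names are paraphrased from the list above, and `complete_step` is purely illustrative, not part of any real system:

```python
# The Gmail sign-up steps as an ordered checklist. The step names are
# paraphrased from the list above; `complete_step` is purely illustrative.
steps = [
    "Go to gmail.com",
    "Click 'Create account'",
    "Enter contact information",
    "Choose username and password",
    "Agree to the User Agreement and Privacy Policy",
]
completed = [False] * len(steps)

def complete_step(i):
    """Check off step i, enforcing that all earlier steps are done first."""
    if not all(completed[:i]):
        raise ValueError(f"step {i + 1} attempted before earlier steps")
    completed[i] = True

# Work through the process in order, checking off each step.
for i in range(len(steps)):
    complete_step(i)
```

Because each step checks that the earlier steps are finished, the documented checklist itself enforces the order of the process.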
For processes that are not so straightforward, documenting all of the steps as a checklist may not be sufficient. For example, here is the process for determining if an article for a term needs to be added to Wikipedia:
1. Search Wikipedia to determine if the term already exists.
2. If the term is found, then an article is already written, so you must think of another term. Go to step 1.
3. If the term is not found, then look to see if there is a related term.
4. If there is a related term, then create a redirect.
5. If there is not a related term, then create a new article.
This procedure is relatively simple. In fact it has the same number of steps as the previous example, but because it has some decision points, it is more difficult to track as a simple list. In these cases it may make more sense to use a diagram to document the process.
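To see why decision points change things, notice that this same procedure reads more naturally as branching logic than as a flat list. The sketch below is illustrative only; `article_exists` and `related_term` are hypothetical stand-ins for real searches of the site:

```python
def wikipedia_article_action(term, article_exists, related_term):
    """Return the next action for a candidate Wikipedia term.

    article_exists(term) -> True if an article already covers the term
    related_term(term)   -> a related existing term, or None
    Both callables are stand-ins for real searches of the site.
    """
    if article_exists(term):             # step 2: an article is already written
        return "choose another term"
    if related_term(term) is not None:   # step 4: point the term at it
        return "create a redirect"
    return "create a new article"        # step 5
```

A diagram of this process would show the same two decision points as branches, which is exactly what a simple checklist cannot capture.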
Business Process Modeling Notation
A business process diagramming tool is a formalized visual language that gives systems analysts the ability to describe business processes unambiguously, to visualize them for systematic understanding, and to communicate them for business process management. Natural languages (e.g., English) are often inadequate for describing complex business processes, which is why diagrams have long been used for business process modeling in the information systems field. Many types of business process diagramming tools exist, each with its own style and syntax to serve a particular purpose. The most commonly used are Business Process Modeling Notation (BPMN), the Data Flow Diagram (DFD), and the Unified Modeling Language (UML).
BPMN extends the traditional flowchart method by adding diagramming elements for describing business processes. The objective of BPMN is to support business process documentation by providing intuitive notations for business rules. Flowchart-style diagrams in BPMN can provide detailed specifications of business processes from start to end. However, BPMN lacks support for system decomposition, which limits its usefulness for large information systems.
DFD has served as a foundation for many other business process documentation tools. The central concept of DFD is a top-down approach to understanding a system. This approach is consistent with the systems view, which understands a system holistically by examining its components and their interactions. More importantly, when a business process is described using DFD, the data stores used in the process and the data flows generated by the process are also defined. We will provide an example of a DFD in the Sidebar section of this chapter to illustrate the integration of data and business tasks in documenting a business process.
The Unified Modeling Language (UML) is a general-purpose modeling tool in the field of software engineering for constructing all types of computerized systems. UML includes many diagram types covering different modeling subjects in a variety of graphical styles. These diverse diagrams can provide detailed specifications for constructing information systems from many software engineering perspectives, but they can be too complicated for documenting business processes from the perspective of business process management.
Managing Business Process Documentation
As organizations begin to document their processes, it becomes an administrative responsibility to keep track of them. As processes change and improve, it is important to know which processes are the most recent. It is also important to manage the process so that it can be easily updated. The requirement to manage process documentation has been one of the driving forces behind the creation of the document management system. A document management system stores and tracks documents and supports the following functions.
• Versions and timestamps. The document management system will keep multiple versions of documents. The most recent version of a document is easy to identify and will be considered the default.
• Approvals and workflows. When a process needs to be changed, the system will manage both access to the documents for editing and the routing of the document for approval.
• Communication. When a process changes, those who implement the process need to be made aware of the changes. The document management system will notify the appropriate people when a change to a document has been approved.
Of course, document management systems are not only used for managing business process documentation. Many other types of documents are managed in these systems, such as legal documents or design documents.
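The three functions above can be made concrete with a toy sketch. The class below is a hypothetical illustration only, not the API of any real document management product:

```python
from datetime import datetime

class DocumentManager:
    """Toy sketch of the three document-management functions described above:
    versioned storage with timestamps, an approval gate, and change
    notification. All names here are illustrative, not a real product's API."""

    def __init__(self):
        self.versions = []     # each entry: {"time", "text", "approved"}
        self.subscribers = []  # callables notified when a change is approved

    def submit(self, text):
        """Store a new draft; it does not become the default until approved."""
        self.versions.append(
            {"time": datetime.now(), "text": text, "approved": False})

    def approve_latest(self):
        """Approve the newest draft and notify those who implement the process."""
        self.versions[-1]["approved"] = True
        for notify in self.subscribers:
            notify(self.versions[-1]["text"])

    def current(self):
        """The most recent approved version is considered the default."""
        approved = [v for v in self.versions if v["approved"]]
        return approved[-1]["text"] if approved else None
```

Note how versioning and approval work together: a newly submitted draft never becomes the default until it is approved, at which point everyone who implements the process is notified.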
ERP Systems
An Enterprise Resource Planning (ERP) system is software with a centralized database that can be used to run an entire company. Here are some of the main components of an ERP system.
• Computer program. The system is a computer program, which means that it has been developed with specific logic and rules behind it. It is customized and installed to work specifically for an individual organization.
• Centralized database. All data in an ERP system is stored in a single, central database. Centralization is key to the success of an ERP. Data entered in one part of the company can be immediately available to other parts of the company.
• Used to run an entire company. An ERP can be used to manage an entire organization’s operations. Companies can purchase modules for an ERP that represent different functions within the organization, such as finance, manufacturing, and sales. Some companies choose to purchase many modules, while others choose only a subset.
An ERP system not only centralizes an organization’s data, but the processes it enforces are the processes the organization has adopted. When an ERP vendor designs a module, it has to implement the rules for the associated business processes. Best practices can be built into the ERP – a major selling point for ERP. In other words, when an organization implements an ERP, it also gets improved best practices as part of the deal.
For many organizations the implementation of an ERP system is an excellent opportunity to improve their business practices and upgrade their software at the same time. But for others an ERP brings a challenge. Is the process embedded in the ERP really better than the process they are currently utilizing? And if they implement this ERP and it happens to be the same one that all of their competitors have, will they simply become more like them, making it much more difficult to differentiate themselves? A large organization may have one version of the ERP, then acquire a subsidiary which has a more recent version. Imagine the challenge of requiring the subsidiary to change back to the earlier version.
One of the criticisms of ERP systems has been that they commoditize business processes, driving all businesses to use the same processes and thereby lose their uniqueness. The good news is that ERP systems also have the capability to be configured with custom processes. For organizations that want to continue using their own processes or even design new ones, ERP systems offer customization so the ERP is unique to the organization.
There is a drawback to customizing an ERP system. Namely, organizations have to maintain the changes themselves. Whenever an update to the ERP system comes out, any organization that has created a custom process will be required to add that change to their new ERP version. This requires someone to maintain a listing of these changes as well as re-testing the system every time an upgrade is made. Organizations will have to wrestle with this decision. When should they go ahead and accept the best-practice processes built into the ERP system and when should they spend the resources to develop their own processes?
Some of the best-known ERP vendors are SAP, Microsoft, and Oracle.
Business Process Management
Organizations that are serious about improving their business processes will also create structures to manage those processes. Business process management (BPM) can be thought of as an intentional effort to plan, document, implement, and distribute an organization’s business processes with the support of information technology.
BPM is more than just automating some simple steps. While automation can make a business more efficient, it cannot be used to provide a competitive advantage. BPM, on the other hand, can be an integral part of creating that advantage.
Not all of an organization’s processes should be managed this way. An organization should look for processes that are essential to the functioning of the business and those that may be used to bring a competitive advantage. The best processes to look at are those that include employees from multiple departments, those that require decision-making that cannot be easily automated, and processes that change based on circumstances. Here is an example.
Suppose a large clothing retailer is looking to gain a competitive advantage through superior customer service. A task force is created to develop a state-of-the-art returns policy that allows customers to return any article of clothing, no questions asked. The organization also decides that, in order to protect the competitive advantage that this returns policy will bring, they will develop their own customization to their ERP system to implement this returns policy. In preparation for the rollout of the system, all customer service employees are trained, showing how to use the new system and specifically how to process returns. Once the updated returns process is implemented, the organization will be able to measure several key indicators about returns that will allow them to adjust the policy as needed. For example, if it is determined that many women are returning their high-end dresses after wearing them once, they could implement a change to the process that limits the return period to 14 days from the original purchase date. As changes to the returns policy are made, the changes are rolled out via internal communications and updates to the returns processing on the system are made.
If done properly, business process management will provide several key benefits to an organization, which can be used to contribute to competitive advantage. These benefits include:
• Empowering employees. When a business process is designed correctly and supported with information technology, employees will be able to implement it on their own authority. In the returns policy example, an employee would be able to accept returns made within fourteen days or use the system to make determinations on what returns would be allowed after fourteen days.
• Built-in reporting. By building measurement into the programming, the organization can stay current on key metrics regarding their processes. In this example, these can be used to improve the returns process and also, ideally, to reduce returns.
• Enforcing best practices. As an organization implements processes supported by information systems, it can work to implement the best practices for that class of business process. In this example, the organization may want to require that all customers returning a product without a receipt show a legal ID. This requirement can be built into the system so that the return will not be processed unless a valid ID number is entered.
• Enforcing consistency. By creating a process and enforcing it with information technology, it is possible to create consistency across the entire organization. In this example, all stores in the retail chain can enforce the same returns policy. If the returns policy changes, the change can be instantly enforced across the entire chain.
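The returns example shows how policy rules can be built directly into a system as code. The sketch below is hypothetical and encodes only the two rules mentioned above, the fourteen-day window and the ID requirement for no-receipt returns; no real retailer's system is implied:

```python
from datetime import date, timedelta

RETURN_WINDOW_DAYS = 14  # policy parameter the organization can tune

def auto_approve_return(purchase_date, return_date, has_receipt, id_number=None):
    """Decide whether a return is accepted automatically under the example
    policy: returns within the window are accepted, and a return without a
    receipt additionally requires a legal ID number. Hypothetical sketch only."""
    if return_date - purchase_date > timedelta(days=RETURN_WINDOW_DAYS):
        return False  # outside the window: escalate to employee judgment
    if not has_receipt and not id_number:
        return False  # best practice: no-receipt returns must record an ID
    return True

# Example: a receipted return nine days after purchase is auto-approved.
assert auto_approve_return(date(2019, 3, 1), date(2019, 3, 10), has_receipt=True)
```

Changing `RETURN_WINDOW_DAYS` in one place would change the policy enforced across every store in the chain at once, which is the consistency benefit in action.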
Business Process Re-engineering
As organizations look to manage their processes to gain a competitive advantage, it is also important to understand that existing ways of doing things may not be the most effective or efficient. A process developed in the 1950s is not going to be better just because it is now supported by technology.
In 1990 Michael Hammer published an article in the Harvard Business Review entitled “Reengineering Work: Don’t Automate, Obliterate.” This article suggested that simply automating a bad process does not make it better. Instead, companies should “blow up” their existing processes and develop new processes that take advantage of the new technologies and concepts. He states in the introduction to the article:
Many of our job designs, work flows, control mechanisms, and organizational structures came of age in a different competitive environment and before the advent of the computer. They are geared towards greater efficiency and control. Yet the watchwords of the new decade are innovation and speed, service, and quality.
It is time to stop paving the cow paths. Instead of embedding outdated processes in silicon and software, we should obliterate them and start over. We should “re-engineer” our businesses: use the power of modern information technology to radically redesign our business processes in order to achieve dramatic improvements in their performance.[1]
Business Process Re-engineering (BPR) is not just taking an existing process and automating it. BPR is fully understanding the goals of a process and then dramatically redesigning it from the ground up to achieve dramatic improvements in productivity and quality. But this is easier said than done. Most people think in terms of how to do small, local improvements to a process. Complete redesign requires thinking on a larger scale. Hammer provides some guidelines for how to go about doing business process re-engineering:
• Organize around outcomes, not tasks. This simply means design the process so that, if possible, one person performs all the steps. Instead of passing the task on to numerous people, one person does the entire process, resulting in greater speed and customer responsiveness.
• Have those who use the outcomes of the process perform the process. With the use of information technology many simple tasks are now automated so the person who needs the outcome should be empowered to perform it. Hammer provides the following example. Instead of having every department in the company use a purchasing department to order supplies, have the supplies ordered directly by those who need the supplies using an information system.
• Merge information processing work into the real work that produces the information. When one part of the company creates information, such as sales information or payment information, it should be processed by that same department. There is no need for one part of the company to process information created in another part of the company.
• Treat geographically dispersed resources as though they were centralized. With the communications technologies available today, physical location matters less than ever. A multinational organization does not need separate support departments (such as IT, purchasing, etc.) for each location anymore.
• Link parallel activities instead of integrating their results. Departments that work in parallel should be sharing data and communicating with each other during a process instead of waiting until each group is done and then comparing notes. The outdated concept of only linking outcomes results in re-work, increased costs, and delays.
• Put the decision points where the work is performed, and build controls into the process. The people who do the work should have decision making authority and the process itself should have built-in controls using information technology. Today’s workforce is more educated and knowledgeable than in the past so providing workers with information technology can result in the employees controlling their processes.
• Capture information at the source. Requiring information to be entered more than once causes delays and errors. With information technology, an organization can capture it once and then make it available whenever needed.
These principles may seem like common sense today, but in 1990 they took the business world by storm. Hammer gives example after example of how organizations improved their business processes by many orders of magnitude without adding any new employees, simply by changing how they did things (see sidebar).
Unfortunately, business process re-engineering got a bad name in many organizations. This was because it was used as an excuse for cost cutting that really had nothing to do with BPR. For example, many companies simply used it as a reason for laying off part of their workforce. However, today many of the principles of BPR have been integrated into businesses and are considered part of good business-process management.
Sidebar: Reengineering the College Bookstore
The process of purchasing the correct textbooks in a timely manner for college classes has always been problematic. Now with online bookstores competing directly with the college bookstore for students’ purchases, the college bookstore is under pressure to justify its existence.
But college bookstores have one big advantage over their competitors, namely they have access to students’ data. Once a student has registered for classes, the bookstore knows exactly what books that student will need for the upcoming term. To leverage this advantage and take advantage of new technologies, the bookstore wants to implement a new process that will make purchasing books through the bookstore advantageous to students. Though they may not be able to compete on price, they can provide other advantages such as reducing the time it takes to find the books and the ability to guarantee that the book is the correct one for the class. In order to do this, the bookstore will need to undertake a process redesign.
The goal of the process redesign is simple. Capture a higher percentage of students as customers of the bookstore. After diagramming the existing process and meeting with student focus groups, the bookstore comes up with a new process. In the new process the bookstore utilizes information technology to reduce the amount of work the students need to do in order to get their books. In this new process the bookstore sends the students an e-mail with a list of all the books required for their upcoming classes. By clicking a link in this e-mail the students can log into the bookstore, confirm their books, and complete the purchase. The bookstore will then deliver the books to the students. And there is an additional benefit to the faculty: Professors are no longer asked to delay start of semester assignments while students wait for books to arrive in the mail. Instead, students can be expected to promptly complete their assignments and the course proceeds on schedule.
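The heart of the redesigned process is joining registration data to the required-book list for each course. Here is an illustrative sketch with made-up data; the course names, titles, and e-mail address are hypothetical:

```python
# Hypothetical data linking registration records to required textbooks.
course_books = {
    "HIST 101": ["A History of the Modern World"],
    "CS 120":   ["Introduction to Programming", "Discrete Mathematics"],
}

registrations = {
    "student@example.edu": ["HIST 101", "CS 120"],
}

def books_for_student(email):
    """Assemble the book list the bookstore's e-mail to a student would contain."""
    books = []
    for course in registrations.get(email, []):
        books.extend(course_books.get(course, []))
    return books
```

From this list the bookstore can generate the e-mail, and the student's single click confirms the purchase of exactly the right books.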
Here are the changes to this process shown as data flow diagrams:
Sidebar: ISO Certification
Many organizations now claim that they are using best practices when it comes to business processes. In order to set themselves apart and prove to their customers, and potential customers, that they are indeed doing this, these organizations are seeking out an ISO 9000 certification. ISO refers to the International Organization for Standardization. This body defines quality standards that organizations can implement to show that they are, indeed, managing business processes in an effective way. The ISO 9000 certification is focused on quality management.
In order to receive ISO certification, an organization must be audited and found to meet specific criteria. In its most simple form, the auditors perform the following review.
• Tell me what you do (describe the business process).
• Show me where it says that (reference the process documentation).
• Prove that this is what happened (exhibit evidence in documented records).
Over the years, this certification has evolved and many branches of the certification now exist. ISO certification is one way to separate an organization from others. You can find out more about the ISO 9000 standard on the International Organization for Standardization’s website.
Summary
The advent of information technologies has had a huge impact on how organizations design, implement, and support business processes. From document management systems to ERP systems, information systems are tied into organizational processes. Using business process management, organizations can empower employees and leverage their processes for competitive advantage. Using business process reengineering, organizations can vastly improve their effectiveness and the quality of their products and services. Integrating information technology with business processes is one way that information systems can bring an organization lasting competitive advantage.
Study Questions
1. What does the term business process mean?
2. What are three examples of business process from a job you have had or an organization you have observed?
3. What is the value in documenting a business process?
4. What is an ERP system? How does an ERP system enforce best practices for an organization?
5. What is one of the criticisms of ERP systems?
6. What is business process re-engineering? How is it different from incrementally improving a process?
7. Why did BPR get a bad name?
8. List the guidelines for redesigning a business process.
9. What is business process management? What role does it play in allowing a company to differentiate itself?
10. What does ISO certification signify?
Exercises
1. Think of a business process that you have had to perform in the past. How would you document this process? Would a diagram make more sense than a checklist? Document the process both as a checklist and as a diagram.
2. Review the return policies at your favorite retailer, then answer this question. What information systems do you think would need to be in place to support their return policy?
3. If you were implementing an ERP system, in which cases would you be more inclined to modify the ERP to match your business processes? What are the drawbacks of doing this?
4. Which ERP is the best? Do some original research and compare three leading ERP systems to each other. Write a two- to three-page paper that compares their features.
Labs
1. Visit a fast food restaurant of your choice. Observe the processes used in taking an order, filling the order, and receiving payment. Create a flowchart showing the steps used. Then create a second flowchart indicating where you would recommend improvements to the processes.
2. Virginia Mason Medical Center, located in Seattle, Washington, needed to radically change some of their business processes. Download the case study. Then read the case study and respond to the following items.
1. Number of campuses
2. Number of employees
3. Number of physicians
4. Nature of the issue at Virginia Mason
5. “You cannot improve a process until…”
6. Discuss staff walking distance and inventory levels
7. How were patient spaces redesigned?
8. What happened to walking distance after this redesign?
9. Inventory was reduced by what percent?
10. Total cost savings =
1. Hammer, M. (1990). Reengineering work: don’t automate, obliterate. Harvard Business Review 68.4, 104–112.
Learning Objectives
Upon successful completion of this chapter, you will be able to:
• describe each of the different roles that people play in the design, development, and use of information systems;
• understand the different career paths available to those who work with information systems;
• explain the importance of where the information-systems function is placed in an organization; and
• describe the different types of users of information systems.
Introduction
The opening chapters of this text focused on the technology behind information systems, namely hardware, software, data, and networking. The last chapter covered business processes and the key role they can play in the success of a business. This chapter discusses people, the last component of an information system.
People are involved in information systems in just about every way. People imagine information systems, people develop information systems, people support information systems, and, perhaps most importantly, people use information systems.
The Creators of Information Systems
The first group of people to be considered play a role in designing, developing, and building information systems. These people are generally technical and have a background in programming, analysis, information security, or database design. Just about everyone who works in the creation of information systems has a minimum of a bachelor’s degree in computer science or information systems, though that is not necessarily a requirement. The process of creating information systems will be covered in more detail in Chapter 10.
The following chart shows the U. S. Bureau of Labor Statistics projections for computing career employment in 2020.
Systems Analyst
The systems analyst straddles the divide between identifying business needs and imagining a new or redesigned system to fulfill those needs. This individual works with a team or department seeking to identify business requirements and analyze the specific details of an existing system or a system that needs to be built. Generally, the analyst is required to have a good understanding of the business itself, the purpose of the business, the business processes involved, and the ability to document them well. The analyst identifies the different stakeholders in the system and works to involve the appropriate individuals in the analysis process.
Prior to analyzing the problem or the system of concern, the analyst needs to a) clearly identify the problem, b) gain approval for the project, c) identify the stakeholders, and d) develop a plan to monitor the project. The analysis phase of the project can be broken down into five steps.
1. Seek out and identify the details
2. Specify requirements
3. Decide which requirements are most important
4. Create a dialog showing how the user interacts with the existing system
5. Ask users to critique the list of requirements that have been developed
The analysis phase involves both the systems analyst and the users. It is important to realize the role the users take in the analysis of the system. Users can have significant insights into how well the current system functions as well as suggest improvements.
Once the requirements are determined, the analyst begins the process of translating these requirements into an information systems design. It is important to understand which different technological solutions will work and provide several alternatives to the client, based on the company’s budgetary constraints, technology constraints, and culture. Once the solution is selected, the analyst will create a detailed document describing the new system. This new document will require that the analyst understand how to speak in the technical language of systems developers.
The design phase results in the components of the new system being identified, including how they relate to one another. The designer needs to communicate clearly with software developers as well database administrators by using terminology that is consistent with both of these specialties. The design phase of the project can be broken down into six steps.
1. Design the hardware environment
2. Design the software
3. Design how the new system will interface with the users
4. Design hardware interfaces
5. Design database tables
6. Design system security
A systems analyst generally is not the one who does the actual development of the information system. The design document created by the systems analyst provides the detail needed to create the system and is handed off to a developer to actually write the software and to the database administrator to build the database and tables that will be in the database.
Sometimes the system may be assembled from off-the-shelf components by a person called a systems integrator. This is a specific type of systems analyst that understands how to get different software packages to work with each other.
To become a systems analyst, you should have a background both in the business analysis and in systems design. Many analysts first work as developers and have business experience before becoming system analysts. It is vital for analysts to clearly understand the purpose of the business of interest, realizing that all businesses are unique.
Programmer/Developer
Programmers spend their time writing computer code in a programming language. In the case of systems development, programmers generally attempt to fulfill the design specifications given to them by a systems analyst/designer. Many different styles of software development exist. A programmer may work alone for long stretches of time or work as part of a team with other developers. A programmer needs to be able to understand complex processes as well as the intricacies of one or more programming languages.
Computer Engineer
Computer engineers design the computing devices that are used every day. There are many types of computer engineers who work on a variety of different types of devices and systems. Some of the more prominent computer engineering jobs are as follows:
• Hardware engineer. A hardware engineer designs and tests hardware components such as microprocessors, memory devices, routers, and networks. Many times, a hardware engineer is at the cutting edge of computing technology, creating something brand new. Other times, the hardware engineer’s job is to re-engineer an existing component to work faster or use less power. Many times a hardware engineer’s job is to write code to create a program that will be implemented directly on a computer chip.
• Software engineer. Software engineers tend to focus on a specific area of software such as operating systems, networks, applications, or databases. Software engineers use three primary skill areas: computer science, engineering, and mathematics.
• Systems engineer. A systems engineer takes the components designed by other engineers and makes them all work together, focusing on the integration of hardware and software. For example, to build a computer, the motherboard, processor, memory, and hard disk all have to work together. A systems engineer has experience with many different types of hardware and software and knows how to integrate them to create new functionality.
• Network engineer. A network engineer understands the networking requirements of an organization and then designs a communications system to meet those needs, using the networking hardware and software, sometimes referred to as a network operating system. Network engineers design both local area networks as well as wide area networks.
There are many different types of computer engineers, and often the job descriptions overlap. While many may call themselves engineers based on a company job title, there is also a professional designation of “professional engineer” which has specific requirements. In the United States each state has its own set of requirements for the use of this title, as do different countries around the world. Most often, it involves a professional licensing exam.
Information Systems Operations and Administration
Another group of information systems professionals are involved in the day-to-day operations and administration of IT. These people must keep the systems running and up-to-date so that the rest of the organization can make the most effective use of these resources.
Computer Operator
A computer operator is the person who oversees the mainframe computers and data centers in organizations. Some of their duties include keeping the operating systems up to date, ensuring available memory and disk storage, providing for redundancy (think electricity, connectivity to the Internet, and database backups), and overseeing the physical environment of the computer. Since mainframe computers increasingly have been replaced with servers, storage management systems, and other platforms, computer operators’ jobs have grown broader and include working with these specialized systems.
Database Administrator
A Database Administrator (DBA) is the person who designs and manages the databases for an organization. This person creates and maintains databases that are used as part of applications or the data warehouse. The DBA also consults with systems analysts and programmers on projects that require access to or the creation of databases.
Help Desk/Support Analyst
Most mid-size to large organizations have their own information technology help desk. The help desk is the first line of support for computer users in the company. Computer users who are having problems or need information can contact the help desk for assistance. Many times a help desk worker is a junior-level employee who can resolve the basic issues users bring to them. Help desk analysts work with senior-level support analysts, or consult a computer knowledge base, to help them investigate the problem at hand. The help desk is a great place to break into working in IT because it exposes you to all of the different technologies within the company. A successful help desk analyst should have good communication skills and a sincere interest in helping users.
Trainer
A computer trainer conducts classes to teach people specific computer skills. For example, if a new ERP system is being installed in an organization, one part of the implementation process is to teach all of the users how to use the new system. A trainer may work for a software company and be contracted to come in to conduct classes when needed; a trainer may work for a company that offers regular training sessions. Or a trainer may be employed full time for an organization to handle all of their computer instruction needs. To be successful as a trainer you need to be able to communicate technical concepts clearly and demonstrate patience with learners.
Managing Information Systems
The management of information-systems functions is critical to the success of information systems within the organization. Here are some of the jobs associated with the management of information systems.
CIO
The Chief Information Officer (CIO) is the head of the information-systems function. This person aligns the plans and operations of the information systems with the strategic goals of the organization. Tasks include budgeting, strategic planning, and personnel decisions for the information systems function. The CIO must also be the face of the IT department within the organization. This involves working with senior leaders in all parts of the organization to ensure good communication, planning, and budgeting.
Interestingly, the CIO position does not necessarily require a lot of technical expertise. While helpful, it is more important for this person to have good management skills and understand the business. Many organizations do not have someone with the title of CIO. Instead, the head of the information systems function is called the Vice President of Information Systems or Director of Information Systems.
Functional Manager
As an information systems organization becomes larger, many of the different functions are grouped together and led by a manager. These functional managers report to the CIO and manage the employees specific to their function. For example, in a large organization there are a group of systems analysts who report to a manager of the systems analysis function. For more insight into how this might look, see the discussion later in the chapter of how information systems are organized.
ERP Management
Organizations using an ERP require one or more individuals to manage these systems. ERP managers make sure that the ERP system is completely up to date, work to implement any changes to the ERP that are needed, and consult with various user departments on needed reports or data extracts.
Project Managers
Information systems projects are notorious for going over budget and being delivered late. In many cases a failed IT project can spell doom for a company. A project manager is responsible for keeping projects on time and on budget. This person works with the stakeholders of the project to keep the team organized and communicates the status of the project to management. Gantt charts are commonly used to graphically illustrate a project’s schedule, tasks, and resources.
A project manager does not have authority over the project team. Instead, the project manager coordinates schedules and resources in order to maximize the project outcomes. This leader must be a good communicator and an extremely organized person. A project manager should also have good people skills. Many organizations require each of their project managers to become certified as a Project Management Professional (PMP).
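To illustrate the scheduling work a project manager does, the sketch below renders a minimal text-based Gantt chart. The task names and durations are hypothetical, chosen only for illustration:

```python
# Minimal text-based Gantt chart: each task is (name, start_week, duration_weeks).
# Task names and durations are hypothetical, for illustration only.
tasks = [
    ("Analysis", 0, 2),
    ("Design", 2, 3),
    ("Programming", 5, 4),
    ("Testing", 9, 2),
]

def gantt_rows(tasks):
    """Return one row per task: the padded name, then '#' bars on a week grid."""
    total = max(start + dur for _, start, dur in tasks)
    rows = []
    for name, start, dur in tasks:
        bar = " " * start + "#" * dur + " " * (total - start - dur)
        rows.append(f"{name:<12}|{bar}|")
    return rows

for row in gantt_rows(tasks):
    print(row)
```

A real project manager would use dedicated tooling, but even this toy version makes the overlap and sequencing of tasks visible at a glance.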
Information Security Officer
An information security officer is in charge of setting information security policies for an organization and then overseeing the implementation of those policies. This person may have one or more people reporting to them as part of the information security team. As information has become a critical asset, this position has become highly valued. The information security officer must ensure that the organization’s information remains secure from both internal and external threats.
Emerging Roles
As technology evolves many new roles are becoming more common as other roles diminish. For example, as we enter the age of “big data,” we are seeing the need for more data analysts and business intelligence specialists. Many companies are now hiring social media experts and mobile technology specialists. The increased use of cloud computing and Virtual Machine (VM) technologies also is increasing demand for expertise in those areas.
Career Paths in Information Systems
These job descriptions do not represent all possible jobs within an information systems organization. Larger organizations will have more specialized roles, while smaller organizations may combine some of these roles. Many of these roles may exist outside of a traditional information-systems organization, as we will discuss below.
Working with information systems can be a rewarding career choice. Whether you want to be involved in very technical jobs (programmer, database administrator), or you want to be involved in working with people (systems analyst, trainer, project manager), there are many different career paths available.
Many times those in technical jobs who want career advancement find themselves in a dilemma. A person can continue doing technical work, where sometimes their advancement options are limited, or become a manager of other employees and put themselves on a management career track. In many cases those proficient in technical skills are not gifted with managerial skills. Some organizations, especially those that highly value their technically skilled employees, create a technical track that exists in parallel to the management track so that they can retain employees who are contributing to the organization with their technical skills.
Sidebar: Are Certifications Worth Pursuing?
As technology becomes more important to businesses, hiring employees with technical skills is becoming critical. But how can an organization ensure that the person they are hiring has the necessary skills? Many organizations are including technical certifications as a prerequisite for getting hired.
Figure: Cisco Certified Internetwork Expert.
Certifications are designations given by a certifying body that someone has a specific level of knowledge in a specific technology. This certifying body is often the vendor of the product itself, though independent certifying organizations, such as CompTIA, also exist. Many of these organizations offer certification tracks, allowing a beginning certificate as a prerequisite to getting more advanced certificates. To get a certificate, you generally attend one or more training classes and then take one or more certification exams. Passing the exams with a certain score will qualify you for a certificate. In most cases, these classes and certificates are not free. In fact, a highly technical certification can cost thousands of dollars. Some examples of the certifications in highest demand include Microsoft (software certifications), Cisco (networking), and SANS (security).
For many working in IT, determining whether to pursue one or more of these certifications is an important question. For many jobs, such as those involving networking or security, a certificate will be required by the employer as a way to determine which potential employees have a basic level of skill. For those who are already in an IT career, a more advanced certificate may lead to a promotion. For those wondering about the importance of certification, the best solution is to talk to potential employers and those already working in the field to determine the best choice.
Organizing the Information Systems Function
In the early years of computing, the information-systems function (generally called “data processing”) was placed in the finance or accounting department of the organization. As computing became more important, a separate information-systems function was formed, but it still was generally placed under the Chief Financial Officer and considered to be an administrative function of the company. By the 1980s and 1990s, when companies began networking internally and then connecting to the Internet, the information systems function was combined with the telecommunications functions and designated as the Information Technology (IT) department. As the role of information technology continued to increase, its place in the organization became more important. In many organizations today, the head of IT (the CIO) reports directly to the CEO.
Where in the Organization Should IS Be?
Before the advent of the personal computer, the information systems function was centralized within organizations in order to maximize control over computing resources. When the PC began proliferating, many departments within organizations saw it as a chance to gain some computing resources for themselves. Some departments created an internal information systems group, complete with systems analysts, programmers, and even database administrators. These departmental IS groups were dedicated to the information needs of their own departments, providing quicker turnaround and higher levels of service than a centralized IT department. However, having several IS groups within an organization led to a lot of inefficiencies. There were now several people performing the same jobs in different departments. This decentralization also led to company data being stored in several places all over the company.
In some organizations a matrix reporting structure developed in which IT personnel were placed within a department and reported to both the department management and the functional management within IS. The advantages of dedicated IS personnel for each department must be weighed against the need for more control over the strategic information resources of the company.
For many companies, these questions are resolved by the implementation of the ERP system (see discussion of ERP in Chapter 8). Because an ERP system consolidates most corporate data back into a single database, the implementation of an ERP system requires organizations to find “silos” of data so that they can integrate them back into the corporate system. The ERP allows organizations to regain control of their information and influences organizational decisions throughout the company.
Outsourcing
Frequently an organization needs a specific skill for a limited period of time. Instead of training existing employees or hiring new staff, it may make more sense to outsource the job. Outsourcing can be used in many different situations within the information systems function, such as the design and creation of a new website or the upgrade of an ERP system. Some organizations see outsourcing as a cost-cutting move, contracting out a whole group or department.
New Models of Organizations
The integration of information technology has influenced the structure of organizations. The increased ability to communicate and share information has led to a “flattening” of the organizational structure due to the removal of one or more layers of management.
The network-based organizational structure is another change enabled by information systems. In a network-based organizational structure, groups of employees can work somewhat independently to accomplish a project. People with the right skills are brought together for a project and then released to work on other projects when that project is over. These groups are somewhat informal and allow for all members of the group to maximize their effectiveness.
Information Systems Users – Types of Users
Besides the people who work to create, administer, and manage information systems, there is one more extremely important group of people, namely, the users of information systems. This group represents a very large percentage of an organization’s employees. If the user is not able to successfully learn and use an information system, the system is doomed to failure.
Technology adoption user types
One tool that can be used to understand how users will adopt a new technology comes from a 1962 study by Everett Rogers. In his book Diffusion of Innovations,[1] Rogers studied how farmers adopted new technologies and noticed that the adoption rate started slowly and then dramatically increased once adoption hit a certain point. He identified five specific types of technology adopters:
• Innovators. Innovators are the first individuals to adopt a new technology. Innovators are willing to take risks, are the youngest in age, have the highest social class, have great financial liquidity, are very social, and have the closest contact with scientific sources and interaction with other innovators. Risk tolerance is high, so there is a willingness to adopt technologies that may ultimately fail. Financial resources help absorb these failures (Rogers, 1962, p. 282).
• Early adopters. The early adopters are those who adopt innovation soon after a technology has been introduced and proven. These individuals have the highest degree of opinion leadership among the other adopter categories, which means that these adopters can influence the opinions of the largest majority. Characteristics include being younger in age, having a higher social status, possessing more financial liquidity, having advanced education, and being more socially aware than later adopters. These adopters are more discreet in adoption choices than innovators, and realize judicious choice of adoption will help them maintain a central communication position (Rogers, 1962, p. 283).
• Early majority. Individuals in this category adopt an innovation after a varying degree of time. This time of adoption is significantly longer than the innovators and early adopters. This group tends to be slower in the adoption process, has above average social status, has contact with early adopters, and seldom holds positions of opinion leadership in a system (Rogers, 1962, p. 283).
• Late majority. The late majority will adopt an innovation after the average member of the society. These individuals approach an innovation with a high degree of skepticism, have below average social status, very little financial liquidity, are in contact with others in the late majority and the early majority, and show very little opinion leadership.
• Laggards. Individuals in this category are the last to adopt an innovation. Unlike those in the previous categories, individuals in this category show no opinion leadership. These individuals typically have an aversion to change agents and tend to be advanced in age. Laggards typically tend to be focused on “traditions,” are likely to have the lowest social status and the lowest financial liquidity, be the oldest of all adopters, and be in contact with only family and close friends.[2]
These five types of users can be translated into information technology adopters as well, and provide additional insight into how to implement new information systems within the organization. For example, when rolling out a new system, IT may want to identify the innovators and early adopters within the organization and work with them first, then leverage their adoption to drive the rest of the implementation to the other users.
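Rogers’ model assigns a standard share of the population to each category: roughly 2.5% innovators, 13.5% early adopters, 34% early majority, 34% late majority, and 16% laggards. A minimal sketch of using those shares to size phased rollout waves inside an organization:

```python
# Rogers (1962) adopter categories with their standard population shares.
ADOPTER_SHARES = {
    "innovators": 0.025,
    "early adopters": 0.135,
    "early majority": 0.34,
    "late majority": 0.34,
    "laggards": 0.16,
}

def rollout_waves(total_users):
    """Split a user population into rollout waves, ordered by adoption speed."""
    return [(group, round(total_users * share))
            for group, share in ADOPTER_SHARES.items()]

# e.g. planning a phased rollout for an organization of 2,000 users:
for group, count in rollout_waves(2000):
    print(f"{group}: {count} users")
```

Real adoption inside one organization will not match these population-level percentages exactly, but they give IT a rough starting point for how large each rollout wave might be.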
Summary
In this chapter we have reviewed the many different categories of individuals who make up the people component of information systems. The world of information technology is changing so fast that new roles are being created all the time and roles that existed for decades are being phased out. This chapter should have given you a good idea of, and appreciation for, the importance of the people component of information systems.
Study Questions
1. Describe the role of a systems analyst.
2. What are some of the different roles for a computer engineer?
3. What are the duties of a computer operator?
4. What does the CIO do?
5. Describe the job of a project manager.
6. Explain the point of having two different career paths in information systems.
7. What are the advantages and disadvantages of centralizing the IT function?
8. What impact has information technology had on the way companies are organized?
9. What are the five types of information-systems users?
10. Why would an organization outsource?
Exercises
1. Which IT job would you like to have? Do some original research and write a two-page paper describing the duties of the job you are interested in.
2. Spend a few minutes on Dice or Monster to find IT jobs in your area. What IT jobs are currently available? Write up a two-page paper describing three jobs, their starting salary (if listed), and the skills and education needed for the job.
3. How is the IT function organized in your school or place of employment? Create an organization chart showing how the IT organization fits into your overall organization. Comment on how centralized or decentralized the IT function is.
4. What type of IT user are you? Take a look at the five types of technology adopters and then write a one-page summary of where you think you fit in this model.
Lab
1. Define each job in the list, then ask 10 friends to identify which jobs they have heard about or know something about. Tabulate your results.
• Chief marketing technologist
• Developer evangelist
• Ethical hacker
• Business intelligence analyst
• Digital marketing manager
• Growth hacker
• UX designer
• Cloud architect
• Data detective
• Master of edge computing
• Digital prophet
• NOC specialist
• SEO/SEM specialist
1. Rogers, E. M. (1962). Diffusion of innovations. New York: Free Press.
2. Rogers, E. M. (1962). Diffusion of innovations. New York: Free Press.
Learning Objectives
Upon successful completion of this chapter, you will be able to:
• Explain the overall process of developing new software;
• Explain the differences between software development methodologies;
• Understand the different types of programming languages used to develop software;
• Understand some of the issues surrounding the development of websites and mobile applications; and
• Identify the four primary implementation policies.
Introduction
When someone has an idea for a new function to be performed by a computer, how does that idea become reality? If a company wants to implement a new business process and needs new hardware or software to support it, how do they go about making it happen? This chapter covers the different methods of taking those ideas and bringing them to reality, a process known as information systems development.
Programming
Software is created via programming, as discussed in Chapter 2. Programming is the process of creating a set of logical instructions for a digital device to follow using a programming language. The process of programming is sometimes called “coding” because the developer takes the design and encodes it into a programming language which then runs on the computer.
The process of developing good software is usually not as simple as sitting down and writing some code. Sometimes a programmer can quickly write a short program to solve a need, but in most instances the creation of software is a resource-intensive process that involves several different groups of people in an organization. In order to do this effectively, the groups agree to follow a specific software development methodology. The following sections review several different methodologies for software development, as summarized in the table below and more fully described in the following sections.
Systems Development Life Cycle
The Systems Development Life Cycle (SDLC) was first developed in the 1960s to manage the large software projects associated with corporate systems running on mainframes. This approach to software development is very structured and risk averse, designed to manage large projects that include multiple programmers and systems that have a large impact on the organization. It requires a clear, upfront understanding of what the software is supposed to do and is not amenable to design changes. This approach is roughly similar to an assembly line process, where it is clear to all stakeholders what the end product should do and that major changes are difficult and costly to implement.
Various definitions of the SDLC methodology exist, but most contain the following phases.
1. Preliminary Analysis. A request for a replacement or new system is first reviewed. The review includes questions such as: What is the problem to be solved? Is creating a solution possible? What alternatives exist? What is currently being done about it? Is this project a good fit for our organization? After addressing these questions, a feasibility study is launched. The feasibility study includes an analysis of the technical feasibility, the economic feasibility or affordability, and the legal feasibility. This step is important in determining if the project should be initiated and may be done by someone with a title of Requirements Analyst or Business Analyst.
2. System Analysis. In this phase one or more system analysts work with different stakeholder groups to determine the specific requirements for the new system. No programming is done in this step. Instead, procedures are documented, key players/users are interviewed, and data requirements are developed in order to get an overall impression of exactly what the system is supposed to do. The result of this phase is a system requirements document, and the work may be done by someone with a title of Systems Analyst.
3. System Design. In this phase, a designer takes the system requirements document created in the previous phase and develops the specific technical details required for the system. It is in this phase that the business requirements are translated into specific technical requirements. The design for the user interface, database, data inputs and outputs, and reporting are developed here. The result of this phase is a system design document. This document will have everything a programmer needs to actually create the system and may be done by someone with a title of Systems Analyst, Developer, or Systems Architect, based on the scale of the project.
4. Programming. The code finally gets written in the programming phase. Using the system design document as a guide, programmers develop the software. The result of this phase is an initial working program that meets the requirements specified in the system analysis phase and the design developed in the system design phase. These tasks are done by persons with titles such as Developer, Software Engineer, Programmer, or Coder.
5. Testing. In the testing phase the software program developed in the programming phase is put through a series of structured tests. The first is a unit test, which evaluates individual parts of the code for errors or bugs. This is followed by a system test in which the different components of the system are tested to ensure that they work together properly. Finally, the user acceptance test allows those that will be using the software to test the system to ensure that it meets their standards. Any bugs, errors, or problems found during testing are resolved and then the software is tested again. These tasks are done by persons with titles such as Tester, Testing Analyst, or Quality Assurance.
6. Implementation. Once the new system is developed and tested, it has to be implemented in the organization. This phase includes training the users, providing documentation, and data conversion from the previous system to the new system. Implementation can take many forms, depending on the type of system, the number and type of users, and how urgent it is that the system become operational. These different forms of implementation are covered later in the chapter.
7. Maintenance. This final phase takes place once the implementation phase is complete. In the maintenance phase the system has a structured support process in place. Reported bugs are fixed and requests for new features are evaluated and implemented. Also, system updates and backups of the software are made for each new version of the program. Since maintenance is normally an Operating Expense (OPEX) while much of development is a Capital Expense (CAPEX), funds normally come out of different budgets or cost centers.
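The unit testing described in the Testing phase can be made concrete with a short example. The sketch below uses Python’s built-in unittest module; the function being tested and its values are hypothetical, invented only to show what a unit test looks like:

```python
import unittest

def sales_tax(amount, rate=0.08):
    """Return the sales tax on a purchase; rejects negative amounts."""
    if amount < 0:
        raise ValueError("amount must be non-negative")
    return round(amount * rate, 2)

class SalesTaxTest(unittest.TestCase):
    # Each test checks one small piece of behavior in isolation.
    def test_typical_amount(self):
        self.assertEqual(sales_tax(100.00), 8.00)

    def test_zero_amount(self):
        self.assertEqual(sales_tax(0), 0)

    def test_negative_amount_rejected(self):
        with self.assertRaises(ValueError):
            sales_tax(-5)

if __name__ == "__main__":
    unittest.main()
```

System tests and user acceptance tests operate at a much larger scale, but they follow the same pattern: a defined input, an expected result, and an automated or documented check that the two match.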
The SDLC methodology is sometimes referred to as the waterfall methodology to represent how each step is a separate part of the process. Only when one step is completed can another step begin. After each step an organization must decide when to move to the next step. This methodology has been criticized for being quite rigid, allowing movement in only one direction, namely, forward in the cycle. For example, changes to the requirements are not allowed once the process has begun. No software is available until after the programming phase.
Again, SDLC was developed for large, structured projects. Projects using SDLC can sometimes take months or years to complete. Because of its inflexibility and the availability of new programming techniques and tools, many other software development methodologies have been developed. Many of these retain some of the underlying concepts of SDLC, but are not as rigid.
Rapid Application Development
Rapid Application Development (RAD) focuses on quickly building a working model of the software, getting feedback from users, and then using that feedback to update the working model. After several iterations of development, a final version is developed and implemented.
The RAD methodology consists of four phases.
1. Requirements Planning. This phase is similar to the preliminary analysis, system analysis, and design phases of the SDLC. In this phase the overall requirements for the system are defined, a team is identified, and feasibility is determined.
2. User Design. In the user design phase representatives of the users work with the system analysts, designers, and programmers to interactively create the design of the system. Sometimes a Joint Application Development (JAD) session is used to facilitate working with all of these various stakeholders. A JAD session brings all of the stakeholders for a structured discussion about the design of the system. Application developers also participate and observe, trying to understand the essence of the requirements.
3. Construction. In the construction phase the application developers, working with the users, build the next version of the system through an interactive process. Changes can be made as developers work on the program. This step is executed in parallel with the User Design step in an iterative fashion, making modifications until an acceptable version of the product is developed.
4. Cutover. Cutover involves switching from the old system to the new software. Timing of the cutover phase is crucial and is usually done when there is low activity. For example, IT systems in higher education undergo many changes and upgrades during the summer or between fall semester and spring semester. Approaches to the migration from the old to the new system vary between organizations. Some prefer to simply start the new software and terminate use of the old software. Others choose to use an incremental cutover, bringing one part online at a time. A cutover to a new accounting system may be done one module at a time such as general ledger first, then payroll, followed by accounts receivable, etc. until all modules have been implemented. A third approach is to run both the old and new systems in parallel, comparing results daily to confirm the new system is accurate and dependable. A more thorough discussion of implementation strategies appears near the end of this chapter.
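The parallel-run approach to cutover, running the old and new systems side by side and comparing results daily, can be sketched in code. The record layout (account name mapped to a daily total) and the 0.01 tolerance are assumptions made for illustration:

```python
# Compare daily totals produced by the old and new systems during a parallel run.
# The record layout and the 0.01 tolerance are assumptions for illustration.
def compare_parallel_run(old_totals, new_totals, tolerance=0.01):
    """Return the account keys whose old/new daily totals disagree."""
    mismatches = []
    for account in sorted(set(old_totals) | set(new_totals)):
        old = old_totals.get(account)
        new = new_totals.get(account)
        # An account missing from either system is also a mismatch.
        if old is None or new is None or abs(old - new) > tolerance:
            mismatches.append(account)
    return mismatches

old = {"general_ledger": 10500.00, "payroll": 8200.50}
new = {"general_ledger": 10500.00, "payroll": 8200.75}
print(compare_parallel_run(old, new))  # payroll differs by 0.25
```

Only when such daily comparisons come back clean for long enough would the organization retire the old system.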
As you can see, the RAD methodology is much more compressed than SDLC. Many of the SDLC steps are combined and the focus is on user participation and iteration. This methodology is much better suited for smaller projects than SDLC and has the added advantage of giving users the ability to provide feedback throughout the process. SDLC requires more documentation and attention to detail and is well suited to large, resource-intensive projects. RAD makes more sense for smaller projects that are less resource intensive and need to be developed quickly.
Agile Methodologies
Agile methodologies are a group of methodologies that utilize incremental changes with a focus on quality and attention to detail. Each increment is released in a specified period of time (called a time box), creating a regular release schedule with very specific objectives. While considered a separate methodology from RAD, the two methodologies share some of the same principles such as iterative development, user interaction, and flexibility to change. The agile methodologies are based on the “Agile Manifesto,” first released in 2001.
Agile and Iterative Development
Agile development puts iteration at the center of the process. The building blocks of the developing system move forward a block at a time, not as one entire project: each iteration delivers a small, usable piece of the system. Blocks that are not acceptable are returned through feedback, and the developers make the needed modifications. Agile development also means a daily review: constant evaluation, by both developers and customers (note the term “collaboration”), of each day’s work.
The characteristics of agile methodology include:
• Small cross-functional teams that include development team members and users;
• Daily status meetings to discuss the current state of the project;
• Short time-frame increments (from days to one or two weeks) for each change to be completed; and
• Working project at the end of each iteration which demonstrates progress to the stakeholders.
The goal of agile methodologies is to provide the flexibility of an iterative approach while ensuring a quality product.
Lean Methodology
One last methodology to discuss is a relatively new concept taken from the business bestseller The Lean Startup by Eric Ries. Lean focuses on taking an initial idea and developing a Minimum Viable Product (MVP). The MVP is a working software application with just enough functionality to demonstrate the idea behind the project. Once the MVP is developed, the development team gives it to potential users for review. Feedback on the MVP is generated in two forms: first, direct observation and discussion with the users; second, usage statistics gathered from the software itself. Using these two forms of feedback, the team determines whether they should continue in the same direction or rethink the core idea behind the project, change the functions, and create a new MVP. This change in strategy is called a pivot. Several iterations of the MVP are developed, with new functions added each time based on the feedback, until a final product is completed.
The biggest difference between the lean methodology and the other methodologies is that the full set of requirements for the system is not known when the project is launched. As each iteration of the project is released, the statistics and feedback gathered are used to determine the requirements. The lean methodology works best in an entrepreneurial environment where a company is interested in determining if its idea for a program is worth developing.
Sidebar: The Quality Triangle
When developing software or any sort of product or service, there exists a tension between the developers and the different stakeholder groups such as management, users, and investors. This tension relates to how quickly the software can be developed (time), how much money will be spent (cost), and how well it will be built (quality). The quality triangle is a simple concept. It states that for any product or service being developed, you can only address two of the following: time, cost, and quality.
So why can only two of the three factors in the triangle be addressed? Because each of these three components is in competition with the others! If you are willing and able to spend a lot of money, then a project can be completed quickly with high-quality results because you can provide more resources towards its development. If a project’s completion date is not a priority, then it can be completed at a lower cost with higher-quality results using a smaller team with fewer resources. Of course, these are just generalizations, and different projects may not fit this model perfectly. But overall, this model is designed to help you understand the trade-offs that must be made when you are developing new products and services.
There are other, fundamental reasons why low-cost, high-quality projects done quickly are so difficult to achieve.
1. The human mind is analog, while the machines the software runs on are digital. The mind works in context and nuance; the machine works in ones and zeroes. Things that seem obvious to the human mind are not so obvious when forced into a binary choice of 1 or 0.
2. Human beings leave their imprints on the applications or systems they design. This is best summed up by Conway’s Law (1968): “Organizations that design information systems are constrained to do so in a way that mirrors their internal communication processes.” Organizations with poor communication processes will find it very difficult to communicate requirements and priorities, especially for projects at the enterprise level (i.e., those that affect the whole organization).
Programming Languages
As noted earlier, developers create programs using one of several programming languages. A programming language is an artificial language that provides a way for a developer to create programming code to communicate logic in a format that can be executed by the computer hardware. Over the past few decades, many different types of programming languages have evolved to meet a variety of needs. One way to characterize programming languages is by their “generation.”
Generations of Programming Languages
Early languages were specific to the type of hardware that had to be programmed. Each type of computer hardware had a different low level programming language. In those early languages very specific instructions had to be entered line by line – a tedious process.
First generation languages were called machine code because programming was done in the format the machine/computer could read. So programming was done by directly setting actual ones and zeroes (the bits) in the program using binary code. Here is an example program that adds 1234 and 4321 using machine language:
```10111001 00000000
11010010 10100001
00000100 00000000
10001001 00000000
00001110 10001011
00000000 00011110
00000000 00011110
00000000 00000010
10111001 00000000
11100001 00000011
00010000 11000011
10001001 10100011
00001110 00000100
00000010 00000000```
Assembly language is the second generation language and uses English-like phrases rather than machine-code instructions, making it easier to program. An assembly language program must be run through an assembler, which converts it into machine code. Here is a sample program that adds 1234 and 4321 using assembly language.
```MOV CX,1234
MOV DS:[0],CX
MOV CX,4321
MOV AX,DS:[0]
MOV BX,DS:[2]
ADD AX,BX
MOV DS:[4],AX```
Third-generation languages are not specific to the type of hardware on which they run and are similar to spoken languages. Most third generation languages must be compiled. The developer writes the program in a form known generically as source code, then the compiler converts the source code into machine code, producing an executable file. Well-known third generation languages include BASIC, C, Python, and Java. Here is an example using BASIC:
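For comparison, here is the same addition written in Python, another of the third generation languages listed above:

```
# Add 1234 and 4321, mirroring the BASIC example
a = 1234
b = 4321
c = a + b
print(c)  # prints 5555
```

Like the BASIC version, this code says nothing about registers or memory addresses; the interpreter handles those details.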
```A=1234
B=4321
C=A+B
END```
Fourth generation languages are a class of programming tools that enable fast application development using intuitive interfaces and environments. Many times a fourth generation language has a very specific purpose, such as database interaction or report-writing. These tools can be used by those with very little formal training in programming and allow for the quick development of applications and/or functionality. Examples of fourth-generation languages include: Clipper, FOCUS, SQL, and SPSS.
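To see the jump in abstraction, here is the same addition expressed in SQL, one of the fourth-generation languages named above. This sketch uses Python's built-in sqlite3 module to run the SQL; the table and column names are made up for illustration:

```
import sqlite3

# SQL states *what* result is wanted; the database engine decides
# *how* to compute it. Here it adds 1234 and 4321, as in the
# earlier generation examples.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE numbers (value INTEGER)")
conn.executemany("INSERT INTO numbers VALUES (?)", [(1234,), (4321,)])
total = conn.execute("SELECT SUM(value) FROM numbers").fetchone()[0]
print(total)  # prints 5555
```

Notice that the SELECT statement never spells out a loop or an accumulator variable; that procedural detail is left to the database engine.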
Why would anyone want to program in a lower level language when they require so much more work? The answer is similar to why some prefer to drive manual transmission vehicles instead of automatic transmission, namely, control and efficiency. Lower level languages, such as assembly language, are much more efficient and execute much more quickly. The developer has finer control over the hardware as well. Sometimes a combination of higher and lower level languages is mixed together to get the best of both worlds. The programmer can create the overall structure and interface using a higher level language but use lower level languages for the parts of the program that are used many times, require more precision, or need greater speed.
Compiled vs. Interpreted
Besides identifying a programming language based on its generation, we can also classify it through the distinction of whether it is compiled or interpreted. A computer language is written in a human-readable form. In a compiled language the program code is translated into a machine-readable form called an executable that can be run on the hardware. Some well-known compiled languages include C, C++, and COBOL.
Interpreted languages require a runtime program to be installed in order to execute. Each time the user wants to run the software the runtime program must interpret the program code line by line, then run it. Interpreted languages are generally easier to work with but also are slower and require more system resources. Examples of popular interpreted languages include BASIC, PHP, PERL, and Python. The web languages of HTML and JavaScript are also considered interpreted because they require a browser in order to run.
The Java programming language is an interesting exception to this classification, as it is actually a hybrid of the two. A program written in Java is partially compiled to create a program that can be understood by the Java Virtual Machine (JVM). Each type of operating system has its own JVM which must be installed before any program can be executed. The JVM approach allows a single Java program to run on many different types of operating systems.
Procedural vs. Object-Oriented
A procedural programming language is designed to allow a programmer to define a specific starting point for the program and then execute sequentially. All early programming languages worked this way. As user interfaces became more interactive and graphical, it made sense for programming languages to evolve to allow the user to have greater control over the flow of the program. An object-oriented programming language is designed so that the programmer defines “objects” that can take certain actions based on input from the user. In other words, a procedural program focuses on the sequence of activities to be performed while an object oriented program focuses on the different items being manipulated.
Consider a human resources system where an “EMPLOYEE” object would be needed. If the program needed to retrieve or set data regarding an employee, it would first create an employee object in the program and then set or retrieve the values needed. Every object has properties, which are descriptive fields associated with the object. Collectively, the properties are known as the object’s schema — the logical view of the object (each property corresponds to a column in the underlying table, which is the physical view). The employee object has the properties “EMPLOYEEID”, “FIRSTNAME”, “LASTNAME”, “BIRTHDATE” and “HIREDATE”. An object also has methods, which can take actions related to the object. There are two methods in the example. The first is “ADDEMPLOYEE()”, which creates another employee record. The second is “EDITEMPLOYEE()”, which modifies an employee’s data.
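A minimal sketch of the EMPLOYEE object in Python follows. The property and method names mirror the example above; everything else (the in-memory directory standing in for the data store, the sample field values) is an assumption for illustration:

```
class Employee:
    """Sketch of the EMPLOYEE object described above."""
    def __init__(self, employee_id, first_name, last_name, birth_date, hire_date):
        # Properties: the descriptive fields associated with the object
        self.employee_id = employee_id
        self.first_name = first_name
        self.last_name = last_name
        self.birth_date = birth_date
        self.hire_date = hire_date


class EmployeeDirectory:
    """Stands in for the program's data store."""
    def __init__(self):
        self._employees = {}

    def add_employee(self, employee):
        # Corresponds to ADDEMPLOYEE(): create another employee record
        self._employees[employee.employee_id] = employee

    def edit_employee(self, employee_id, **changes):
        # Corresponds to EDITEMPLOYEE(): modify an employee's data
        employee = self._employees[employee_id]
        for field, value in changes.items():
            setattr(employee, field, value)


directory = EmployeeDirectory()
directory.add_employee(Employee(1, "Ada", "Lovelace", "1815-12-10", "2020-01-06"))
directory.edit_employee(1, last_name="King")
```

The program manipulates employee objects rather than stepping through a fixed sequence of activities — the object-oriented focus described above.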
Programming Tools
To write a program, you need little more than a text editor and a good idea. However, to be productive you must be able to check the syntax of the code, and, in some cases, compile the code. To be more efficient at programming, additional tools, such as an Integrated Development Environment (IDE) or computer-aided software-engineering (CASE) tools can be used.
Integrated Development Environment
For most programming languages an Integrated Development Environment (IDE) can be used to develop the program. An IDE provides a variety of tools for the programmer, and usually includes:
• Editor. An editor is used for writing the program. Commands are automatically color coded by the IDE to identify command types. For example, a programming comment might appear in green and a programming statement might appear in black.
• Help system. A help system gives detailed documentation regarding the programming language.
• Compiler/Interpreter. The compiler/interpreter converts the programmer’s source code into machine language so it can be executed/run on the computer.
• Debugging tool. Debugging assists the developer in locating errors and finding solutions.
• Check-in/check-out mechanism. This tool allows teams of programmers to work simultaneously on a program without overwriting another programmer’s code.
Examples of IDEs include Microsoft’s Visual Studio and Eclipse, from the Eclipse Foundation. Visual Studio is the IDE for all of Microsoft’s programming languages, including Visual Basic, Visual C++, and Visual C#. Eclipse can be used for Java, C, C++, Perl, Python, R, and many other languages.
CASE Tools
While an IDE provides several tools to assist the programmer in writing the program, the code still must be written. Computer-Aided Software Engineering (CASE) tools allow a designer to develop software with little or no programming. Instead, the CASE tool writes the code for the designer. CASE tools come in many varieties. Their goal is to generate quality code based on input created by the designer.
Sidebar: Building a Website
In the early days of the World Wide Web, the creation of a website required knowing how to use HyperText Markup Language (HTML). Today most websites are built with a variety of tools, but the final product that is transmitted to a browser is still HTML. At its simplest HTML is a text language that allows you to define the different components of a web page. These definitions are handled through the use of HTML tags with text between the tags or brackets. For example, an HTML tag can tell the browser to show a word in italics, to link to another web page, or to insert an image. The HTML code below selects two different types of headings (h1 and h2) with text below each heading. Some of the text has been italicized. The output as it would appear in a browser is shown after the HTML code.
```<h1>This is a first-level heading</h1>
Here is some text. <em>Here is some emphasized text.</em>
<h2>Here is a second-level heading</h2>
Here is some more text.```
HTML code
While HTML is used to define the components of a web page, Cascading Style Sheets (CSS) are used to define the styles of the components on a page. The use of CSS allows the style of a website to be set and stay consistent throughout. For example, a designer who wanted all first-level headings (h1) to be blue and centered could set the “h1” style to match. The following example shows how this might look.
```<style>
h1
{
color:blue;
text-align:center;
}
</style>
<h1>This is a first-level heading</h1>
Here is some text. <em>Here is some emphasized text.</em>
<h2>Here is a second-level heading</h2>
Here is some more text.```
HTML code with CSS added
The combination of HTML and CSS can be used to create a wide variety of formats and designs and has been widely adopted by the web design community. The standards for HTML are set by a governing body called the World Wide Web Consortium. The current version, HTML5, includes new standards for video, audio, and drawing.
When developers create a website, they do not write it out manually in a text editor. Instead, they use web design tools that generate the HTML and CSS for them. Tools such as Adobe Dreamweaver allow the designer to create a web page that includes images and interactive elements without writing a single line of code. However, professional web designers still need to learn HTML and CSS in order to have full control over the web pages they are developing.
Sidebar: Building a Mobile App
In many ways building an application for a mobile device is exactly the same as building an application for a traditional computer. Understanding the requirements for the application, designing the interface, and working with users are all steps that still need to be carried out.
Mobile Apps
So what’s different about building an application for a mobile device? There are five primary differences:
1. Breakthroughs in component technologies. Mobile devices require multiple components that are not only smaller but more energy-efficient than those in full-size computers (laptops or desktops). For example, low-power CPUs combined with longer-life batteries, touchscreens, and Wi-Fi enable very efficient computing on a phone, which needs to do much less actual processing than its full-size counterparts.
2. Sensors have unlocked the notion of context. The combination of sensors like GPS, gyroscopes, and cameras enables devices to be aware of things like time, location, velocity, direction, altitude, attitude, and temperature. Location in particular provides a host of benefits.
3. Simple, purpose-built, task-oriented apps are easy to use. Mobile apps are much narrower in scope than enterprise software and therefore easier to use. Likewise, they need to be intuitive and not require any training.
4. Immediate access to data extends the value proposition. In addition to the app providing a simpler interface on the front end, cloud-based data services provide access to data in near real-time, from virtually anywhere (e.g., banking, travel, driving directions, and investing). Having access to the cloud is needed to keep mobile device size and power use down.
5. App stores have simplified acquisition. Developing, acquiring, and managing apps has been revolutionized by app stores such as Apple’s App Store and Google Play. Standardized development processes and app requirements allow developers outside Apple and Google to create new apps with a built-in distribution channel. Low average app prices (many apps are free) have fueled demand.
In sum, these five factors are what distinguish building a mobile app from other types of software development.
Building a mobile app for both iOS and Android operating systems is known as cross platform development. There are a number of third-party toolkits available for creating your app. Many will convert existing code such as HTML5, JavaScript, Ruby, C++, etc. However, if your app requires sophisticated programming, a cross platform developer kit may not meet your needs.
Responsive Web Design (RWD) focuses on making web pages render well on every device: desktop, laptop, tablet, and smartphone. Through the concept of fluid layout, RWD automatically adjusts the content to the device on which it is being viewed. You can find out more about responsive design here.
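A common way RWD achieves a fluid layout is with CSS media queries, which apply different styles depending on the width of the screen. A minimal sketch (the class name and the 600-pixel breakpoint are assumptions for illustration):

```
<style>
/* One column on narrow screens... */
.column { width: 100%; }

/* ...two columns side by side once the screen is wide enough */
@media (min-width: 600px) {
  .column { width: 50%; float: left; }
}
</style>
```

The same HTML is sent to every device; the browser picks the rule set that matches the screen it is rendering on.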
Build vs. Buy
When an organization decides that a new program needs to be developed, they must determine if it makes more sense to build it themselves or to purchase it from an outside company. This is the “build vs. buy” decision.
There are many advantages to purchasing software from an outside company. First, it is generally less expensive to purchase software than to build it. Second, when software is purchased, it is available much more quickly than if the package is built in-house. Software can take months or years to build. A purchased package can be up and running within a few days. Third, a purchased package has already been tested and many of the bugs have already been worked out. It is the role of a systems integrator to make various purchased systems and the existing systems at the organization work together.
There are also disadvantages to purchasing software. First, the same software you are using can be used by your competitors. If a company is trying to differentiate itself based on a business process incorporated into purchased software, it will have a hard time doing so if its competitors use the same software. Another disadvantage to purchasing software is the process of customization. If you purchase software from a vendor and then customize it, you will have to manage those customizations every time the vendor provides an upgrade. This can become an administrative headache, to say the least.
Even if an organization determines to buy software, it still makes sense to go through the same analysis as if it was going to be developed. This is an important decision that could have a long-term strategic impact on the organization.
Web Services
Chapter 3 discussed how the move to cloud computing has allowed software to be viewed as a service. One option, known as web services, allows companies to license functions provided by other companies instead of writing the code themselves. Web services can greatly simplify the addition of functionality to a website.
Suppose a company wishes to provide a map showing the location of someone who has called their support line. By utilizing Google Maps API web services, the company can build a Google Map directly into their application. Or a shoe company could make it easier for its retailers to sell shoes online by providing a shoe sizing web service that the retailers could embed right into their website.
Web services can blur the lines between “build vs. buy.” Companies can choose to build an application themselves but then purchase functionality from vendors to supplement their system.
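The shoe-sizing idea above can be sketched in Python. The sizing rule and the response fields are invented for illustration; in a real deployment, the function body would sit behind an HTTP endpoint and return JSON for the retailer's page to embed:

```
# Hypothetical shoe-sizing web service, modeled as a local function.
# In practice this logic would live behind a URL the retailer's site
# calls, and the dict below would be serialized as JSON.
def shoe_size_service(foot_length_cm):
    """Map a foot length in centimeters to an (assumed) EU shoe size."""
    # A simplified sizing rule for illustration -- not a real sizing chart.
    return {"foot_length_cm": foot_length_cm,
            "eu_size": round(foot_length_cm * 1.5 + 2)}


# The retailer's site calls the service and embeds the result:
response = shoe_size_service(26.0)
print(response["eu_size"])  # prints 41
```

The retailer never sees the sizing logic — only the request it sends and the response it gets back, which is the essence of licensing functionality as a web service.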
End-User Computing (EUC)
In many organizations application development is not limited to the programmers and analysts in the information technology department. Especially in larger organizations, other departments develop their own department-specific applications. The people who build these applications are not necessarily trained in programming or application development, but they tend to be adept with computers. A person who is skilled in a particular program, such as a spreadsheet or database package, may be called upon to build smaller applications for use by their own department. This phenomenon is referred to as end-user development, or end-user computing.
End-user computing can have many advantages for an organization. First, it brings the development of applications closer to those who will use them. Because IT departments are sometimes backlogged, it also provides a means to have software created more quickly. Many organizations encourage end-user computing to reduce the strain on the IT department.
End-user computing does have its disadvantages as well. If departments within an organization are developing their own applications, the organization may end up with several applications that perform similar functions, which is inefficient, since it is a duplication of effort. Sometimes these different versions of the same application end up providing different results, bringing confusion when departments interact. End-user applications are often developed by someone with little or no formal training in programming. In these cases, the software developed can have problems that then have to be resolved by the IT department.
End-user computing can be beneficial to an organization provided it is managed. The IT department should set guidelines and provide tools for the departments who want to create their own solutions. Communication between departments can go a long way towards successful use of end-user computing.
Sidebar: Risks of EUC’s as “Shadow IT”
The Federal Home Loan Mortgage Corporation, better known as Freddie Mac, was fined over $100 million in 2003 in part for understating its earnings. This triggered a large-scale project to restate its financials, which involved automating financial reporting to comply with the Sarbanes-Oxley Act of 2002. The restatement project found that EUCs (such as spreadsheets and databases on individual laptops) were feeding into the general ledger. While EUCs were not the cause of Freddie Mac’s problems (they were a symptom of insufficient oversight), such poor IT governance in so large a company was a serious issue. It turns out these EUCs were built in part to streamline the time it took to make changes to business processes (a common complaint about IT departments in large corporations is that it takes too long to get things done). As such, these EUCs served as a form of “shadow IT” that had not been through a normal, rigorous testing process.
Implementation Methodologies
Once a new system is developed or purchased, the organization must determine the best method for implementation. Convincing a group of people to learn and use a new system can be a very difficult process. Asking employees to use new software as well as follow a new business process can have far reaching effects within the organization.
There are several different methodologies an organization can adopt to implement a new system. Four of the most popular are listed below.
• Direct cutover. In the direct cutover implementation methodology, the organization selects a particular date to terminate the use of the old system. On that date users begin using the new system and the old system is unavailable. Direct cutover has the advantage of being very fast and the least expensive implementation method. However, this method has the most risk. If the new system has an operational problem or if the users are not properly prepared, it could prove disastrous for the organization.
• Pilot implementation. In this methodology a subset of the organization known as a pilot group starts using the new system before the rest of the organization. This has a smaller impact on the company and allows the support team to focus on a smaller group of individuals. Also, problems with the new software can be contained within the group and then resolved.
• Parallel operation. Parallel operations allow both the old and new systems to be used simultaneously for a limited period of time. This method is the least risky because the old system is still being used while the new system is essentially being tested. However, this is by far the most expensive methodology since work is duplicated and support is needed for both systems in full.
• Phased implementation. Phased implementation provides for different functions of the new application to be gradually implemented with the corresponding functions being turned off in the old system. This approach is more conservative as it allows an organization to slowly move from one system to another.
Your choice of an implementation methodology depends on the complexity of both the old and new systems. It also depends on the degree of risk you are willing to take.
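The daily comparison at the heart of parallel operation can be sketched in Python. The two "systems" and the transaction format here are stand-ins invented for illustration — real systems would be far larger — but the checking logic is the point:

```
# Sketch of a daily parallel-operation check: run the same transactions
# through both the old and new systems and report any disagreements.
def old_system(transaction):
    # Stand-in for the legacy calculation
    return transaction["amount"] * transaction["quantity"]


def new_system(transaction):
    # Stand-in for the replacement calculation
    return transaction["quantity"] * transaction["amount"]


def daily_comparison(transactions):
    """Return (id, old result, new result) for every mismatch."""
    mismatches = []
    for t in transactions:
        old_result, new_result = old_system(t), new_system(t)
        if old_result != new_result:
            mismatches.append((t["id"], old_result, new_result))
    return mismatches


transactions = [{"id": 1, "amount": 19.99, "quantity": 3},
                {"id": 2, "amount": 5.00, "quantity": 10}]
print(daily_comparison(transactions))  # prints [] -- the systems agree
```

Only when the mismatch list stays empty over time would the organization gain the confidence to retire the old system.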
Change Management
As new systems are brought online and old systems are phased out, it becomes important to manage the way change is implemented in the organization. Change should never be introduced in a vacuum. The organization should be sure to communicate proposed changes before they happen and plan to minimize the impact of the change that will occur after implementation. Change management is a critical component of IT oversight.
Sidebar: Mismanaging Change
Target Corporation, which operates more than 1,500 discount stores throughout the United States, opened 133 similar stores in Canada between 2013 and 2015. The company decided to implement a new Enterprise Resource Planning (ERP) system that would integrate data from vendors and customers and perform currency calculations (US and Canadian dollars). This implementation coincided with Target Canada’s aggressive expansion plan and stiff competition from Wal-Mart. A two-year timeline – aggressive by any standard for an implementation of this size – did not account for data errors from multiple sources that resulted in erroneous inventory counts and financial calculations. Their supply chain became chaotic and stores were plagued by insufficient stock of common items, which undermined the key advantage of “one-stop shopping” for customers. In early 2015, Target Canada announced it was closing all 133 stores. In sum, “This implementation broke nearly all of the cardinal sins of ERP projects. Target set unrealistic goals, didn’t leave time for testing, and neglected to train employees properly.”[1]
Maintenance
After a new system has been introduced, it enters the maintenance phase. The system is in production and is being used by the organization. While the system is no longer actively being developed, changes need to be made when bugs are found or new features are requested. During the maintenance phase, IT management must ensure that the system continues to stay aligned with business priorities and continues to run well.
Summary
Software development is about so much more than programming. It is fundamentally about solving business problems. Developing new software applications requires several steps, from the formal SDLC process to more informal processes such as agile programming or lean methodologies. Programming languages have evolved from very low-level machine-specific languages to higher-level languages that allow a programmer to write software for a wide variety of machines. Most programmers work with software development tools that provide them with integrated components to make the software development process more efficient. For some organizations, building their own software does not make the most sense. Instead, they choose to purchase software built by a third party to save development costs and speed implementation. In end-user computing, software development happens outside the information technology department. When implementing new software applications, there are several different types of implementation methodologies that must be considered.
Study Questions
1. What are the steps in the SDLC methodology?
2. What is RAD software development?
3. What makes the lean methodology unique?
4. What are three differences between second-generation and third-generation languages?
5. Why would an organization consider building its own software application if it is cheaper to buy one?
6. What is responsive design?
7. What is the relationship between HTML and CSS in website design?
8. What is the difference between the pilot implementation methodology and the parallel implementation methodology?
9. What is change management?
10. What are the four different implementation methodologies?
Exercises
1. Which software-development methodology would be best if an organization needed to develop a software tool for a small group of users in the marketing department? Why? Which implementation methodology should they use? Why?
2. Doing your own research, find three programming languages and categorize them in these areas: generation, compiled vs. interpreted, procedural vs. object-oriented.
3. Some argue that HTML is not a programming language. Doing your own research, find three arguments for why it is not a programming language and three arguments for why it is.
4. Read more about responsive design using the link given in the text. Provide the links to three websites that use responsive design and explain how they demonstrate responsive-design behavior.
Labs
1. Here’s a Python program for you to analyze. The code below deals with a person’s weight and height. See if you can guess what will be printed and then try running the code in a Python interpreter such as https://www.onlinegdb.com/online_python_interpreter.
```measurements = (8, 20)
print("Original measurements:")
for measurement in measurements:
print(measurement)
measurements = (170, 72)
print("\nModified measurements:")
for measurement in measurements:
print(measurement)```
2. Here’s a broken Java program for you to analyze. The code below deals with calculating tuition, multiplying the tuition rate and the number of credits taken. The number of credits is entered by the user of the program. The code below is broken and gives the incorrect answer. Review the problem below and determine what it would output if the user entered “6” for the number of credits. How would you fix the program so that it would give the correct output?
```package calcTuition;
//import Scanner
import java.util.Scanner;
public class CalcTuition
{
public static void main(String[] args)
{
//Declare variables
int credits;
final double TUITION_RATE = 100;
double tuitionTotal;
//Get user input
Scanner inputDevice = new Scanner(System.in);
System.out.println("Enter the number of credits: ");
credits = inputDevice.nextInt();
//Calculate tuition
tuitionTotal = credits + TUITION_RATE;
//Display tuition total
System.out.println("Your total tuition is: " + tuitionTotal);
}
}```
1. Taken from ACC Software Solutions. "THE MANY FACES OF FAILED ERP IMPLEMENTATIONS (AND HOW TO AVOID THEM)" https://4acc.com/article/failed-erp-implementations/
• 11: Globalization and the Digital Divide
The Internet has wired the world. Today it is just as simple to communicate with someone on the other side of the world as it is to talk to someone next door. Keep in mind, however, that many businesses attempted to outsource their technology needs, only to discover that near-sourcing (outsourcing to countries to which your own is physically connected) offered greater advantages. This chapter looks at the implications of globalization and the impact it is having on the world.
• 12: The Ethical and Legal Implications of Information Systems
New technologies create new situations that we have never dealt with before. How do we handle the new capabilities that these devices empower us with? What new laws are going to be needed to protect us from ourselves? This chapter will kick off with a discussion of the impact of information systems on how we behave (ethics). This will be followed with the new legal structures being put in place, with a focus on intellectual property and privacy.
• 13: Trends in Information Systems
Information systems have evolved at a rapid pace ever since their introduction in the 1950s. Today devices you can hold in one hand are more powerful than the computers used to land a man on the moon in 1969. The Internet has made the entire world accessible to you, allowing you to communicate and collaborate like never before. This chapter examines current trends and looks ahead to what is coming next.
03: Information Systems Beyond the Organization
Learning Objectives
Upon successful completion of this chapter, you will be able to:
• explain the concept of globalization;
• describe the role of information technology in globalization;
• identify the issues experienced by firms as they face a global economy; and
• define the digital divide and explain Nielsen’s three stages of the digital divide.
Introduction
The Internet has wired the world. Today it is just as simple to communicate with someone on the other side of the world as it is to talk to someone next door. Keep in mind, however, that many businesses attempted to outsource their technology needs, only to discover that near-sourcing (outsourcing to countries to which your own is physically connected) offered greater advantages. This chapter looks at the implications of globalization and the impact it is having on the world.
What Is Globalization?
Globalization refers to the integration of goods, services, and culture among the nations of the world. Globalization is not necessarily a new phenomenon. In many ways globalization has existed since the days of European colonization. Further advances in telecommunication and transportation technologies accelerated globalization. The advent of the worldwide Internet has made all nations virtual next-door neighbors.
The Internet is truly a worldwide phenomenon. As of December 2017 the Internet was being used by over 4.1 billion people worldwide.[1] From its beginnings in the United States in the 1970s to the development of the World Wide Web in the 1990s to the social networks and e-commerce of today, the Internet has continued to increase the integration between countries, making globalization a fact of life for citizens all over the world.
The Network Society
In 1996 social-sciences researcher Manuel Castells published The Rise of the Network Society, in which he identified new ways economic activity was being organized around the networks that the new telecommunication technologies had provided. This new, global economic activity was different from the past, because “it is an economy with the capacity to work as a unit in real time on a planetary scale.”[2] Having a world connected via the Internet has some massive implications.
The World Is Flat
Thomas Friedman’s 2005 book The World Is Flat uses anecdotal evidence to present the impact that the personal computer, the Internet, and communication software have had on business, specifically their impact on globalization. Three eras of globalization are defined at the beginning of the book:[3]
• “Globalization 1.0″ occurred from 1492 until about 1800. In this era globalization was centered around countries. It was about how much horsepower, wind power, and steam power a country had and how creatively it was deployed. The world shrank from size “large” to size “medium.”
• “Globalization 2.0″ occurred from about 1800 until 2000, interrupted only by the two World Wars. In this era, the dynamic force driving change was multinational companies. The world shrank from size “medium” to size “small.”
• “Globalization 3.0″ is our current era, beginning in the year 2000. The convergence of the personal computer, fiber-optic Internet connections, and software has created a “flat-world platform” that allows small groups and even individuals to go global. The world has shrunk from size “small” to size “tiny.”
According to Friedman, this third era of globalization was brought about, in many respects, by information technology. Some of the specific technologies include:
• Graphical user interface for the personal computer popularized in the late 1980s. Before the graphical user interface, using a computer was relatively difficult, requiring users to type commands rather than click a mouse. By making the personal computer something that anyone could use, the computer became a tool of virtually every person, not just those intrigued by technology. Friedman says the personal computer made people more productive and, as the Internet evolved, made it simpler to communicate information worldwide.
• Build-out of the Internet infrastructure during the dot-com boom of the late 1990s. During the late 1990s, telecommunications companies laid thousands of miles of fiber-optic cable all over the world, turning network communications into a commodity. At the same time, Internet protocols such as SMTP (e-mail), HTML (web pages), and TCP/IP (network communications) became standards that were available for free and used by everyone through their e-mail programs and web browsers.
• Introduction of software to automate and integrate business processes. As the Internet continued to grow and become the dominant form of communication, it became essential to build on the standards developed earlier so that the websites and applications running on the Internet would work well together. Friedman calls this “workflow software,” by which he means software that allows people to work together more easily, and allows different software and databases to integrate with each other more easily. Examples include payment processing systems and shipping calculators.
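These open protocol standards are what make global communication interoperable: a message composed anywhere can be read by any standards-compliant client. As a minimal illustration, the sketch below uses Python's standard library to construct a standards-compliant e-mail message of the kind SMTP carries (the addresses are made up, and nothing is actually sent):

```python
# Build a standards-compliant e-mail message using only Python's
# standard library. The same message format (RFC 5322 / MIME) is
# understood by every mail client and server in the world, which is
# what turned e-mail into a global commodity.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "student@example.com"      # illustrative addresses only
msg["To"] = "professor@example.com"
msg["Subject"] = "Globalization question"
msg.set_content("How did standard protocols enable global collaboration?")

# as_string() serializes the message into the wire format that any
# SMTP server, anywhere, could relay.
print(msg.as_string())
```

Running this prints the raw message headers and body; because the format is a published standard, the output would be accepted unchanged by mail software on any continent.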
These three technologies came together in the late 1990s to create a “platform for global collaboration.” Once these technologies were in place, they continued to evolve. Friedman also points out a couple more technologies that have contributed to the flat-world platform, namely the open source movement discussed in Chapter 10 and the advent of mobile technologies.
Economist Pankaj Ghemawat authored the book World 3.0 in 2011 in an attempt to provide a more moderate and research-based analysis of globalization. While Friedman talked with individuals and produced an anecdotally-based book, Ghemawat’s approach was to research economic data, then draw conclusions about globalization. His research found the following:
• Mailed letters that cross international borders = 1%
• Telephone calling minutes that are international = 2%
• Internet traffic that is routed across international borders = 18%
• National, as opposed to international, TV news sources = 95%
• First generation immigrants as portion of world’s population = 3%
• People who at some time in their lives will cross an international border = 10%
• Global exports as a portion of the value of all goods produced in the world = 20%[4]
According to Ghemawat, while the Internet has had an impact on the world’s economy, domestic economies are likely to remain the main focus in most countries. You can watch Ghemawat’s TED Talk here. Current and future trends will be discussed in Chapter 13.
The Global Firm
The new era of globalization allows virtually any business to become international. By accessing this new platform of technologies, Castells’s vision of working as a unit in real time on a planetary scale can be a reality. Some of the advantages include:
• Ability to locate expertise and labor around the world. Instead of drawing employees from their local area, organizations can now hire people from the global labor pool. This also allows organizations to pay a lower labor cost for the same work based on the prevailing wage in different countries.
• Ability to operate 24 hours a day. With employees in different time zones all around the world, an organization can literally operate around the clock, handing off work on projects from one part of the world to another as the normal business day ends in one region and begins in another. A few years ago three people decided to open a web hosting company. They strategically relocated to three places in the world which were eight hours apart, giving their business 24-hour coverage while allowing each to work during the normal business day. Operating expenses were minimized and the business provided 24/7 support to customers worldwide.
• Larger market for their products. Once a product is being sold online, it is available for purchase from a worldwide customer base. Even if a company’s products do not appeal beyond its own country’s borders, being online has made the product more visible to consumers within that country.
In order to fully take advantage of these new capabilities, companies need to understand that there are also challenges in dealing with employees and customers from different cultures. Some of these challenges include:
• Infrastructure differences. Each country has its own infrastructure with varying levels of quality and bandwidth. A business cannot expect every country it deals with to have the same Internet speeds. See the sidebar titled “How Does My Internet Speed Compare?”
• Labor laws and regulations. Different countries (even different states in the United States) have different laws and regulations. A company that wants to hire employees from other countries must understand the different regulations and concerns.
• Legal restrictions. Many countries have restrictions on what can be sold or how a product can be advertised. It is important for a business to understand what is allowed. For example, in Germany, it is illegal to sell anything Nazi related.
• Language, customs, and preferences. Every country has its own unique culture which a business must consider when trying to market a product there. Additionally, different countries have different preferences. For example, in many parts of Europe people prefer to eat their french fries with mayonnaise instead of ketchup. In South Africa a hamburger comes delivered to your table with gravy on top.
• International shipping. Shipping products between countries in a timely manner can be challenging. Inconsistent address formats, dishonest customs agents, and prohibitive shipping costs are all factors that must be considered when trying to deliver products internationally.
Because of these challenges, many businesses choose not to expand globally, either for labor or for customers. Whether a business has its own website or relies on a third-party, such as Amazon or eBay, the question of whether or not to globalize must be carefully considered.
Sidebar: How Does My Internet Speed Compare?
How does your Internet speed compare with others in the world? Internet speeds vary widely from country to country; you can find a full ranking of countries by going to this article. You can also compare the evolution of Internet speeds among countries by using this tool.
So how does your own Internet speed compare? There are many online tools you can use to determine the speed at which you are connected. One of the most trusted sites is speedtest.net, where you can test both your download and upload speeds.
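Speed-test sites report throughput in megabits per second (Mbps); the arithmetic behind that figure is simple to reproduce. The sketch below shows the calculation such a site performs once it has timed a download (the function name and sample figures are illustrative, not taken from any real test):

```python
# Convert a timed download into the Mbps figure a speed-test site reports:
#   megabits per second = (bytes transferred * 8 bits/byte / 1,000,000) / seconds
def mbps(bytes_transferred: int, seconds: float) -> float:
    """Return throughput in megabits per second."""
    return (bytes_transferred * 8 / 1_000_000) / seconds

# A 25 MB test file downloaded in 4 seconds works out to 50 Mbps.
print(mbps(25_000_000, 4.0))   # prints 50.0
```

Note the bits-versus-bytes distinction: connection speeds are quoted in megabits, so a "50 Mbps" link moves only about 6.25 megabytes of data per second.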
The Digital Divide
As the Internet continues to make inroads across the world, it is also creating a separation between those who have access to this global network and those who do not. This separation is called the “digital divide” and is of great concern. An article in Crossroads puts it this way:
Adopted by the ACM Council in 1992, the ACM Code of Ethics and Professional Conduct focuses on issues involving the Digital Divide that could prevent certain categories of people — those from low-income households, senior citizens, single-parent children, the undereducated, minorities, and residents of rural areas — from receiving adequate access to the wide variety of resources offered by computer technology. This Code of Ethics positions the use of computers as a fundamental ethical consideration: “In a fair society, all individuals would have equal opportunity to participate in, or benefit from, the use of computer resources regardless of race, sex, religion, age, disability, national origin, or other similar factors.” This article summarizes the digital divide in its various forms, and analyzes reasons for the growing inequality in people’s access to Internet services. It also describes how society can bridge the digital divide: the serious social gap between information “haves” and “have-nots.”[5]
The digital divide can occur between countries, regions, or even neighborhoods. In many US cities, there are pockets with little or no Internet access, while just a few miles away high-speed broadband is common.
Solutions to the digital divide have had mixed success over the years. Many times just providing Internet access and/or computing devices is not enough to bring true Internet access to a country, region, or neighborhood.
A New Understanding of the Digital Divide
In 2006, web-usability consultant Jakob Nielsen wrote an article that got to the heart of our understanding of this problem. In his article he breaks the digital divide up into three stages: the economic divide, the usability divide, and the empowerment divide[6].
• Economic divide. This is what many call the digital divide. The economic divide is the idea that some people can afford to have a computer and Internet access while others cannot. Because of Moore’s Law (see Chapter 2), the price of hardware has continued to drop and, at this point, we can now access digital technologies, such as smartphones, for very little. Nielsen asserts that for all intents and purposes, the economic divide is a moot point and we should not focus our resources on solving it.
• Usability divide. Usability is concerned with the fact that “technology remains so complicated that many people couldn’t use a computer even if they got one for free.” And even for those who can use a computer, accessing all the benefits of having one is beyond their understanding. Included in this group are those with low literacy and seniors. According to Nielsen, we know how to help these users, but we are not doing it because there is little profit in doing so.
• Empowerment divide. Empowerment is the most difficult to solve. It is concerned with how we use technology to empower ourselves. Very few users truly understand the power that digital technologies can give them. In his article, Nielsen explains that his and others’ research has shown that very few users contribute content to the Internet, use advanced search, or can even distinguish paid search ads from organic search results. Many people will limit what they can do online by accepting the basic, default settings of their computer and not work to understand how they can truly be empowered.
Understanding the digital divide using these three stages provides a more nuanced view of how we can work to alleviate it. More work needs to be done to address the second and third stages of the digital divide for a more holistic solution.
Refining the Digital Divide
The Miniwatts Marketing Group, host of Internet World Stats, sought in 2018 to further clarify the meaning of the digital divide by acknowledging that the divide is more than just a matter of who does or does not have access to the Internet. In addition to Nielsen’s economic, usability, and empowerment divides, this group sees the following concerns.
• Social mobility. Lack of computer education works to the disadvantage of children with lower socioeconomic status.
• Democracy. Greater use of the Internet can lead to healthier democracies especially in participation in elections.
• Economic growth. Greater use of the Internet in developing countries could provide a shortcut to economic advancement. Using the latest technology could give companies in these countries a competitive advantage.
The focus on the continuing digital divide has led the European Union to create an initiative known as The European 2020 Strategy. Five major areas are being targeted: a) research and development, b) climate/energy, c) education, d) social inclusion, and e) poverty reduction.[7]
Sidebar: Using Gaming to Bridge the Digital Divide
Paul Kim, the Assistant Dean and Chief Technology Officer of the Stanford Graduate School of Education, designed a project to address the digital divide for children in developing countries. [8] In their project the researchers wanted to learn if children can adopt and teach themselves mobile learning technology, without help from teachers or other adults, and the processes and factors involved in this phenomenon. The researchers developed a mobile device called TeacherMate, which contained a game designed to help children learn math. The unique part of this research was that the researchers interacted directly with the children. They did not channel the mobile devices through the teachers or the schools. There was another important factor to consider. In order to understand the context of the children’s educational environment, the researchers began the project by working with parents and local nonprofits six months before their visit. While the results of this research are too detailed to go into here, it can be said that the researchers found that children can, indeed, adopt and teach themselves mobile learning technologies.
What makes this research so interesting when thinking about the digital divide is that the researchers found that, in order to be effective, they had to customize their technology and tailor their implementation to the specific group they were trying to reach. One of their conclusions stated the following:
Considering the rapid advancement of technology today, mobile learning options for future projects will only increase. Consequently, researchers must continue to investigate their impact. We believe there is a specific need for more in-depth studies on ICT [Information and Communication Technology] design variations to meet different challenges of different localities.
To read more about Dr. Kim’s project, locate the paper referenced here.
Summary
Information technology has driven change on a global scale. Technology has given us the ability to integrate with people all over the world using digital tools. These tools have allowed businesses to broaden their labor pools, their markets, and even their operating hours. But they have also brought many new complications for businesses, which now must understand regulations, preferences, and cultures from many different nations. This new globalization has also exacerbated the digital divide. Nielsen has suggested that the digital divide consists of three stages (economic, usability, and empowerment), of which the economic stage is virtually solved.
Study Questions
1. What does the term globalization mean?
2. How does Friedman define the three eras of globalization?
3. Which technologies have had the biggest effect on globalization?
4. What are some of the advantages brought about by globalization?
5. What are the challenges of globalization?
6. What perspective does Ghemawat provide regarding globalization in his book World 3.0?
7. What does the term digital divide mean?
8. What are Jakob Nielsen’s three stages of the digital divide?
9. What was one of the key points of The Rise of the Network Society?
10. Which country has the highest average Internet speed? How does your country compare?
Exercises
1. Compare the concept of Friedman’s “Globalization 3.0″ with Nielsen’s empowerment stage of the digital divide.
2. Do some original research to determine some of the regulations that a US company may have to consider before doing business in one of the following countries: China, Germany, Saudi Arabia, Turkey.
3. Give one example of the digital divide and describe what you would do to address it.
4. How did the research conducted by Paul Kim address the three levels of the digital divide?
Lab
1. Go to speedtest.net to determine your Internet speed. Compare your speed at home to the Internet speed at two other locations, such as your school, place of employment, or local coffee shop. Write a one-page summary that compares these locations.
1. Internet World Stats. (n.d.). World Internet Users and 2018 Population Stats. Retrieved from http://internetworldstats.com/
2. Castells, M. (2000). The Rise of the Network Society (2nd ed.). Cambridge, MA: Blackwell Publishers, Inc.
3. Friedman, T. L. (2005). The world is flat: A brief history of the twenty-first century. New York: Farrar, Straus and Giroux.
4. Ghemawat, P. (2011). World 3.0: Global Prosperity and How to Achieve It. Boston: Harvard Business School Publishing.
5. Kim, K. (2005, December). Challenges in HCI: digital divide. Crossroads 12, 2. DOI=10.1145/1144375.1144377. Retrieved from http://doi.acm.org/10.1145/1144375.1144377
6. Nielsen, J. (2006).Digital Divide: The 3 Stages. Nielsen Norman Group. Retrieved from http://www.nngroup.com/articles/digi...-three-stages/
7. Miniwatts Marketing Group. (2018, May 23). The Digital Divide, ICT, and Broadband Internet. Retrieved from https://www.internetworldstats.com/links10.htm
8. Kim, P., Buckner, E., Makany, T., and Kim, H. (2011). A comparative analysis of a game-based mobile learning model in low-socioeconomic communities of India. International Journal of Educational Development. Retrieved from https://doi.org/10.1016/j.ijedudev.2011.05.008.
Learning Objectives
Upon successful completion of this chapter, you will be able to:
• describe what the term information systems ethics means;
• explain what a code of ethics is and describe the advantages and disadvantages;
• define the term intellectual property and explain the protections provided by copyright, patent, and trademark; and
• describe the challenges that information technology brings to individual privacy.
Introduction
Information systems have had an impact far beyond the world of business. New technologies create new situations that have never had to be confronted before. One issue is how to handle the new capabilities that these devices provide to users. What new laws will be needed to protect users from the misuse of new technologies? This chapter begins with a discussion of the impact information systems have on user behavior, or ethics. This is followed by a look at the new legal structures being put in place, with a focus on intellectual property and privacy.
Information Systems Ethics
The term ethics means “a set of moral principles” or “the principles of conduct governing an individual or a group.”[1] Since the dawn of civilization, the study of ethics and their impact has fascinated mankind. But what do ethics have to do with information systems?
The introduction of new technology can have a profound effect on human behavior. New technologies give us capabilities that we did not have before, which in turn create environments and situations that have not been specifically addressed in an ethical context. Those who master new technologies gain new power while those who cannot or do not master them may lose power. In 1913 Henry Ford implemented the first moving assembly line to create his Model T cars. While this was a great step forward technologically and economically, the assembly line reduced the value of human beings in the production process. The development of the atomic bomb concentrated unimaginable power in the hands of one government, who then had to wrestle with the decision to use it. Today’s digital technologies have created new categories of ethical dilemmas.
For example, the ability to anonymously make perfect copies of digital music has tempted many music fans to download copyrighted music for their own use without making payment to the music’s owner. Many of those who would never have walked into a music store and stolen a CD find themselves with dozens of illegally downloaded albums.
Digital technologies have given us the ability to aggregate information from multiple sources to create profiles of people. What would have taken weeks of work in the past can now be done in seconds, allowing private organizations and governments to know more about individuals than at any time in history. This information has value, but also chips away at the privacy of consumers and citizens.
Sidebar: Data Privacy, Facebook, and Cambridge Analytica
In early 2018 Facebook acknowledged a data breach affecting 87 million users. The app “thisisyourdigitallife”, created by Global Science Research, informed users that they could participate in a psychological research study. About 270,000 people decided to participate in the research, but the app failed to tell users that the data of all of their friends on Facebook would be automatically captured as well. All of this data theft took place prior to 2014, but it did not become public until four years later.
In 2015 Facebook learned about Global Science Research’s collection of data on millions of friends of the users in the research. Global Science Research agreed to delete the data, but it had already been sold to Cambridge Analytica who used it in the 2016 presidential primary campaign. The ensuing firestorm resulted in Mark Zuckerberg, CEO of Facebook, testifying before the U.S. Congress in 2018 on what happened and what Facebook would do in the future to protect users’ data. Congress is working on legislation to protect user data in the future, a prime example of technology advancing faster than the laws needed to protect users. More information about this case of data privacy can be found at Facebook and Cambridge Analytica. [2]
Code of Ethics
A code of ethics is one method for navigating new ethical waters. A code of ethics outlines a set of acceptable behaviors for a professional or social group. Generally, it is agreed to by all members of the group. The document details different actions that are considered appropriate and inappropriate.
A good example of a code of ethics is the Code of Ethics and Professional Conduct of the Association for Computing Machinery,[3] an organization of computing professionals that includes academics, researchers, and practitioners. Here is a quote from the preamble:
Commitment to ethical professional conduct is expected of every member (voting members, associate members, and student members) of the Association for Computing Machinery (ACM).
This Code, consisting of 24 imperatives formulated as statements of personal responsibility, identifies the elements of such a commitment. It contains many, but not all, issues professionals are likely to face. Section 1 outlines fundamental ethical considerations, while Section 2 addresses additional, more specific considerations of professional conduct. Statements in Section 3 pertain more specifically to individuals who have a leadership role, whether in the workplace or in a volunteer capacity such as with organizations like ACM. Principles involving compliance with this Code are given in Section 4.
In the ACM’s code you will find many straightforward ethical instructions such as the admonition to be honest and trustworthy. But because this is also an organization of professionals that focuses on computing, there are more specific admonitions that relate directly to information technology:
• No one should enter or use another’s computer system, software, or data files without permission. One must always have appropriate approval before using system resources, including communication ports, file space, other system peripherals, and computer time.
• Designing or implementing systems that deliberately or inadvertently demean individuals or groups is ethically unacceptable.
• Organizational leaders are responsible for ensuring that computer systems enhance, not degrade, the quality of working life. When implementing a computer system, organizations must consider the personal and professional development, physical safety, and human dignity of all workers. Appropriate human-computer ergonomic standards should be considered in system design and in the workplace.
One of the major advantages of creating a code of ethics is that it clarifies the acceptable standards of behavior for a professional group. The varied backgrounds and experiences of the members of a group lead to a variety of ideas regarding what is acceptable behavior. While the guidelines may seem obvious, having these items detailed provides clarity and consistency. Explicitly stating standards communicates the common guidelines to everyone in a clear manner.
A code of ethics can also have some drawbacks. First, a code of ethics does not have legal authority. Breaking a code of ethics is not a crime in itself. What happens if someone violates one of the guidelines? Many codes of ethics include a section that describes how such situations will be handled. In many cases repeated violations of the code result in expulsion from the group.
In the case of ACM: “Adherence of professionals to a code of ethics is largely a voluntary matter. However, if a member does not follow this code by engaging in gross misconduct, membership in ACM may be terminated.” Expulsion from ACM may not have much of an impact on many individuals since membership in ACM is usually not a requirement for employment. However, expulsion from other organizations, such as a state bar organization or medical board, could carry a huge impact.
Another possible disadvantage of a code of ethics is that there is always a chance that important issues will arise that are not specifically addressed in the code. Technology is quickly changing and a code of ethics might not be updated often enough to keep up with all of the changes. A good code of ethics, however, is written in a broad enough fashion that it can address the ethical issues of potential changes to technology while the organization behind the code makes revisions.
Finally, a code of ethics could also be a disadvantage in that it may not entirely reflect the ethics or morals of every member of the group. Organizations with a diverse membership may have internal conflicts as to what is acceptable behavior. For example, there may be a difference of opinion on the consumption of alcoholic beverages at company events. In such cases the organization must make a choice about the importance of addressing a specific behavior in the code.
Sidebar: Acceptable Use Policies
Many organizations that provide technology services to a group of constituents or the public require agreement to an Acceptable Use Policy (AUP) before those services can be accessed. Similar to a code of ethics, this policy outlines what is allowed and what is not allowed while someone is using the organization’s services. An everyday example of this is the terms of service that must be agreed to before using the public Wi-Fi at Starbucks, McDonald’s, or even a university. Here is an example of an acceptable use policy from Virginia Tech.
Just as with a code of ethics, these acceptable use policies specify what is allowed and what is not allowed. Again, while some of the items listed are obvious to most, others are not so obvious:
• “Borrowing” someone else’s login ID and password is prohibited.
• Using the provided access for commercial purposes, such as hosting your own business website, is not allowed.
• Sending out unsolicited email to a large group of people is prohibited.
As with codes of ethics, violations of these policies have various consequences. In most cases, such as with Wi-Fi, violating the acceptable use policy will mean that you will lose your access to the resource. While losing access to Wi-Fi at Starbucks may not have a lasting impact, a university student getting banned from the university’s Wi-Fi (or possibly all network resources) could have a large impact.
Intellectual Property
One of the domains that has been deeply impacted by digital technologies is intellectual property. Digital technologies have driven a rise in new intellectual property claims and made it much more difficult to defend intellectual property.
Intellectual property is defined as “property (as an idea, invention, or process) that derives from the work of the mind or intellect.”[4] This could include creations such as song lyrics, a computer program, a new type of toaster, or even a sculpture.
Practically speaking, it is very difficult to protect an idea. Instead, intellectual property laws are written to protect the tangible results of an idea. In other words, just coming up with a song in your head is not protected, but if you write it down it can be protected.
Protection of intellectual property is important because it gives people an incentive to be creative. Innovators with great ideas will be more likely to pursue those ideas if they have a clear understanding of how they will benefit. In the US Constitution, Article 8, Section 8, the authors saw fit to recognize the importance of protecting creative works:
Congress shall have the power . . . To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.
An important point to note here is the “limited time” qualification. While protecting intellectual property is important because of the incentives it provides, it is also necessary to limit the amount of benefit that can be received and allow the results of ideas to become part of the public domain.
Outside of the US, intellectual property protections vary. You can find out more about a specific country’s intellectual property laws by visiting the World Intellectual Property Organization.
The following sections address three of the best known intellectual property protections: copyright, patent, and trademark.
Copyright
Copyright is the protection given to songs, computer programs, books, and other creative works. Any work that has an “author” can be copyrighted. Under the terms of copyright, the author of a work controls what can be done with the work, including:
• Who can make copies of the work.
• Who can make derivative works from the original work.
• Who can perform the work publicly.
• Who can display the work publicly.
• Who can distribute the work.
Many times a work is not owned by an individual but is instead owned by a publisher with whom the original author has an agreement. In return for the rights to the work, the publisher will market and distribute the work and then pay the original author a portion of the proceeds.
Copyright protection lasts for the life of the original author plus seventy years. In the case of a copyrighted work owned by a publisher or another third party, the protection lasts for ninety-five years from the original creation date. For works created before 1978, the protections vary slightly. You can see the full details on copyright protections by reviewing the Copyright Basics document available at the US Copyright Office’s website.
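The duration rules above can be sketched as a small calculation. This is an illustrative simplification only, not legal advice: actual terms also depend on publication date, renewal history, and other factors the sketch ignores.

```python
def copyright_expiry_year(death_year=None, creation_year=None, work_for_hire=False):
    """Approximate the year a US work enters the public domain.

    Simplified illustration of the rules described above:
    - individually owned work: life of the author plus 70 years
    - publisher-owned / work for hire: 95 years from creation
    Real copyright terms depend on additional factors not modeled here.
    """
    if work_for_hire:
        if creation_year is None:
            raise ValueError("creation_year required for works for hire")
        return creation_year + 95
    if death_year is None:
        raise ValueError("death_year required for individually owned works")
    return death_year + 70

# A song whose author died in 1990 is protected until 2060.
print(copyright_expiry_year(death_year=1990))                          # 2060
# A corporate work created in 1930 is protected until 2025.
print(copyright_expiry_year(creation_year=1930, work_for_hire=True))   # 2025
```

The two branches mirror the two ownership situations described in the paragraph above: individual authorship versus a work owned by a publisher or other third party.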
Obtaining Copyright Protection
In the United States a copyright is obtained by the simple act of creating the original work. In other words, when an author writes down a song, makes a film, or develops a computer program, the author has the copyright. However, for a work that will be used commercially, it is advisable to register for a copyright with the US Copyright Office. A registered copyright is needed in order to bring legal action against someone who has used a work without permission.
First Sale Doctrine
If an artist creates a painting and sells it to a collector who then, for whatever reason, proceeds to destroy it, does the original artist have any recourse? What if the collector, instead of destroying it, begins making copies of it and sells them? Is this allowed? The first sale doctrine is a part of copyright law that addresses this, as shown below[5]:
The first sale doctrine, codified at 17 U.S.C. § 109, provides that an individual who knowingly purchases a copy of a copyrighted work from the copyright holder receives the right to sell, display or otherwise dispose of that particular copy, notwithstanding the interests of the copyright owner.
Therefore, in our examples the copyright owner has no recourse if the collector destroys the artwork. But the collector does not have the right to make copies of the artwork.
Fair Use
Another important provision within copyright law is that of fair use. Fair use is a limitation on copyright law that allows for the use of protected works without prior authorization in specific cases. For example, if a teacher wanted to discuss a current event in class, copies of the copyrighted news story could be handed out in class without first getting permission. Fair use is also what allows a student to quote a small portion of a copyrighted work in a research paper.
Unfortunately, the specific guidelines for what is considered fair use and what constitutes copyright violation are not well defined. Fair use is a well-known and respected concept and will only be challenged when copyright holders feel that the integrity or market value of their work is being threatened. The following four factors are considered when determining if something constitutes fair use: [6]
1. The purpose and character of the use, including whether such use is of commercial nature or is for nonprofit educational purposes;
2. The nature of the copyrighted work;
3. The amount and substantiality of the portion used in relation to the copyrighted work as a whole;
4. The effect of the use upon the potential market for, or value of, the copyrighted work.
If you are ever considering using a copyrighted work as part of something you are creating, you may be able to do so under fair use. However, it is always best to check with the copyright owner to be sure you are staying within your rights and not infringing upon theirs.
Sidebar: The History of Copyright Law
As noted above, current copyright law grants copyright protection for seventy years after the author’s death, or ninety-five years from the date of creation for a work created for hire. But it was not always this way.
The first US copyright law, which only protected books, maps, and charts, provided protection for only 14 years with a renewable term of 14 years. Over time copyright law was revised to grant protections to other forms of creative expression, such as photography and motion pictures. Congress also saw fit to extend the length of the protections, as shown in the following chart. Today copyright has become big business, with many companies relying on the income from copyright-protected works.
Many now think that the protections last too long. The Sonny Bono Copyright Term Extension Act has been nicknamed the “Mickey Mouse Protection Act,” as it was enacted just in time to protect the copyright on the Walt Disney Company’s Mickey Mouse character. Because of this term extension, many works from the 1920s and 1930s that would have been available now in the public domain are still restricted.
The Digital Millennium Copyright Act
As digital technologies have changed what it means to create, copy, and distribute media, a policy vacuum has been created. In 1998, the US Congress passed the Digital Millennium Copyright Act (DMCA), which extended copyright law to take into consideration digital technologies. Two of the best-known provisions from the DMCA are the anti-circumvention provision and the “safe harbor” provision.
• The anti-circumvention provision makes it illegal to create technology to circumvent technology that has been put in place to protect a copyrighted work. This provision includes not just the creation of the technology but also the publishing of information that describes how to do it. While this provision does allow for some exceptions, it has become quite controversial and has led to a movement to have it modified.
• The “safe harbor” provision limits the liability of online service providers when someone using their services commits copyright infringement. This is the provision that allows YouTube, for example, not to be held liable when someone posts a clip from a copyrighted movie. The provision does require the online service provider to take action when they are notified of the violation (a “takedown” notice). For an example of how takedown works, here’s how YouTube handles these requests: YouTube Copyright Infringement Notification.
Many think that the DMCA goes too far and ends up limiting our freedom of speech. The Electronic Frontier Foundation (EFF) is at the forefront of this battle. In discussing the anti-circumvention provision, the EFF states:
Yet the DMCA has become a serious threat that jeopardizes fair use, impedes competition and innovation, chills free expression and scientific research, and interferes with computer intrusion laws. If you circumvent DRM [digital rights management] locks for non-infringing fair uses or create the tools to do so you might be on the receiving end of a lawsuit.
Sidebar: Creative Commons
Chapter 2 introduced the topic of open-source software. Open-source software has few or no copyright restrictions. The creators of the software publish their code and make their software available for others to use and distribute for free. This is great for software, but what about other forms of copyrighted works? If an artist or writer wants to make their works available, how can they go about doing so while still protecting the integrity of their work? Creative Commons is the solution to this problem.
Creative Commons is a nonprofit organization that provides legal tools for artists and authors. The tools offered make it simple to license artistic or literary work for others to use or distribute in a manner consistent with the author’s intentions. Creative Commons licenses are indicated with the CC symbol. It is important to note that Creative Commons and public domain are not the same. When something is in the public domain, it has absolutely no restrictions on its use or distribution. Works whose copyrights have expired are in the public domain.
By using a Creative Commons license, authors can control the use of their work while still making it widely accessible. By attaching a Creative Commons license to their work, a legally binding license is created. Here are some examples of these licenses:
• CC-BY. This is the least restrictive license. It lets others distribute and build upon the work, even commercially, as long as they give the author credit for the original work.
• CC-BY-SA. This license restricts the distribution of the work via the “share-alike” clause. This means that others can freely distribute and build upon the work, but they must give credit to the original author and they must share using the same Creative Commons license.
• CC-BY-NC. This license is the same as CC-BY but adds the restriction that no one can make money with this work. NC stands for “non-commercial.”
• CC-BY-NC-ND. This license is the same as CC-BY-NC but also adds the ND restriction, which means that no derivative works may be made from the original.
These are a few of the more common licenses that can be created using the tools that Creative Commons makes available. For a full listing of the licenses and to learn much more about Creative Commons, visit their web site.
Patent
Patents are another important form of intellectual property protection. A patent creates protection for someone who invents a new product or process. The definition of invention is quite broad and covers many different fields. Here are some examples of items receiving patents:
• circuit designs in semiconductors;
• prescription drug formulas;
• firearms;
• locks;
• plumbing;
• engines;
• coating processes; and
• business processes.
Once a patent is granted it provides the inventor with protection from others infringing on his or her patent. A patent holder has the right to “exclude others from making, using, offering for sale, or selling the invention throughout the United States or importing the invention into the United States for a limited time in exchange for public disclosure of the invention when the patent is granted.”[7]
As with copyright, patent protection lasts for a limited period of time before the invention or process enters the public domain. In the US, a patent lasts twenty years. This is why generic drugs are available to replace brand-name drugs after twenty years.
Obtaining Patent Protection
Unlike copyright, a patent is not automatically granted when someone has an interesting idea and writes it down. In most countries a patent application must be submitted to a government patent office. A patent will only be granted if the invention or process being submitted meets certain conditions.
• Must be original. The invention being submitted must not have been submitted before.
• Must be non-obvious. You cannot patent something that anyone could think of. For example, you could not put a pencil on a chair and try to get a patent for a pencil-holding chair.
• Must be useful. The invention being submitted must serve some purpose or have some use that would be desired.
The job of the patent office is to review patent applications to ensure that the item being submitted meets these requirements. This is not an easy job. In 2017 the US Patent Office granted 318,849 patents, an increase of 5.2% over 2016.[8] The current backlog for a patent approval is 15.6 months. Information technology firms apply for a significant number of patents each year. Here are the top five I.T. firms in terms of patent applications filed since 2009. The percentages indicate each firm's share of total I.T. patents filed since 2009. Notice that over half of patent filings come from just these five corporations.
• International Business Machines (IBM) 21.6%
• Microsoft Corporation 14.2%
• AT & T, Inc. 7.1%
• Alphabet (Google), Inc. 5.0%
• Sony Corporation 4.7%
You might have noticed that Apple is not in the top five listing. Microsoft holds the lead in Artificial Intelligence (AI) patents.[9]
Sidebar: What Is a Patent Troll?
The advent of digital technologies has led to a large increase in patent filings and therefore a large number of patents being granted. Once a patent is granted, it is up to the owner of the patent to enforce it. If someone is found to be using the invention without permission, the patent holder has the right to sue to force that person to stop and to collect damages.
The rise in patents has led to a new form of profiteering called patent trolling. A patent troll is a person or organization who gains the rights to a patent but does not actually make the invention that the patent protects. Instead, the patent troll searches for those who are illegally using the invention in some way and sues them. In many cases the infringement being alleged is questionable at best. For example, companies have been sued for using Wi-Fi or for scanning documents, technologies that have been on the market for many years.
Recently, the U.S. government has begun taking action against patent trolls. Several pieces of legislation are working their way through the U.S. Congress that will, if enacted, limit the ability of patent trolls to threaten innovation. You can learn a lot more about patent trolls by listening to a detailed investigation conducted by the radio program This American Life, by clicking this link.
Trademark
A trademark is a word, phrase, logo, shape or sound that identifies a source of goods or services. For example, the Nike “Swoosh,” the Facebook “f”, and Apple’s apple (with a bite taken out of it) are all trademarked. The concept behind trademarks is to protect the consumer. Imagine going to the local shopping center to purchase a specific item from a specific store and finding that there are several stores all with the same name!
Two types of trademarks exist – a common law trademark and a registered trademark. As with copyright, an organization will automatically receive a trademark if a word, phrase, or logo is being used in the normal course of business (subject to some restrictions, discussed below). A common law trademark is designated by placing “TM” next to the trademark. A registered trademark is one that has been examined, approved, and registered with the trademark office, such as the Patent and Trademark Office in the US. A registered trademark has the circle-R (®) placed next to the trademark.
While most any word, phrase, logo, shape, or sound can be trademarked, there are a few limitations. A trademark will not hold up legally if it meets one or more of the following conditions:
• The trademark is likely to cause confusion with a mark in a registration or prior application.
• The trademark is merely descriptive for the goods/services. For example, trying to register the trademark “blue” for a blue product you are selling will not pass muster.
• The trademark is a geographic term.
• The trademark is a surname. You will not be allowed to trademark “Smith’s Bookstore.”
• The trademark is ornamental as applied to the goods. For example, a repeating flower pattern that is a design on a plate cannot be trademarked.
As long as an organization uses its trademark and defends it against infringement, the protection afforded by it does not expire. Because of this, many organizations defend their trademark against other companies whose branding even only slightly copies their trademark. For example, Chick-fil-A has trademarked the phrase “Eat Mor Chikin” and has vigorously defended it against a small business using the slogan “Eat More Kale.” Coca-Cola has trademarked the contour shape of its bottle and will bring legal action against any company using a bottle design similar to theirs. Examples of trademarks that have been diluted and have now lost their protection in the US include: “aspirin” (originally trademarked by Bayer), “escalator” (originally trademarked by Otis), and “yo-yo” (originally trademarked by Duncan).
Information Systems and Intellectual Property
The rise of information systems has resulted in rethinking how to deal with intellectual property. From the increase in patent applications swamping the government’s patent office to the new laws that must be put in place to enforce copyright protection, digital technologies have impacted our behavior.
Privacy
The term privacy has many definitions, but for purposes here, privacy will mean the ability to control information about oneself. The ability to maintain our privacy has eroded substantially over the past few decades, largely due to information systems.
Personally Identifiable Information
Information about a person that can be used to uniquely establish that person’s identity is called personally identifiable information, or PII. This is a broad category that includes information such as:
• Name;
• Social Security Number;
• Date of birth;
• Place of birth;
• Mother‘s maiden name;
• Biometric records (fingerprint, face, etc.);
• Medical records;
• Educational records;
• Financial information; and
• Employment information.
Organizations that collect PII are responsible for protecting it. The Department of Commerce recommends that “organizations minimize the use, collection, and retention of PII to what is strictly necessary to accomplish their business purpose and mission.” They go on to state that “the likelihood of harm caused by a breach involving PII is greatly reduced if an organization minimizes the amount of PII it uses, collects, and stores.”[10] Organizations that do not protect PII can face penalties, lawsuits, and loss of business. In the US, most states now have laws in place requiring organizations that have had security breaches related to PII to notify potential victims, as does the European Union.
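The “collect and retain only what is strictly necessary” guidance can be illustrated with a small sketch. The field names and records below are hypothetical examples, not a real system’s schema.

```python
def minimize_record(record, needed_fields):
    """Keep only the fields strictly necessary for the business purpose.

    A minimal sketch of PII minimization: everything not on the
    allow-list is dropped before the record is stored.
    """
    return {k: v for k, v in record.items() if k in needed_fields}

def mask_ssn(ssn):
    """Retain only the last four digits of a Social Security Number."""
    return "***-**-" + ssn[-4:]

# Hypothetical customer record containing several PII fields.
customer = {
    "name": "Jane Doe",
    "ssn": "123-45-6789",
    "date_of_birth": "1980-01-01",
    "email": "jane@example.com",
}

# For shipping, only name and email are strictly necessary.
stored = minimize_record(customer, needed_fields={"name", "email"})
print(stored)                      # {'name': 'Jane Doe', 'email': 'jane@example.com'}
print(mask_ssn(customer["ssn"]))   # ***-**-6789
```

The point of the sketch is that a breach of the `stored` record exposes far less than a breach of the full `customer` record, which is exactly the harm-reduction argument made in the Department of Commerce guidance.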
Just because companies are required to protect your information does not mean they are restricted from sharing it. In the US, companies can share your information without your explicit consent (see the following sidebar), though not all do so. Companies that collect PII are urged by the FTC to create a privacy policy and post it on their website. The State of California requires a privacy policy for any website that does business with a resident of the state (see www.privacy.ca.gov/lawenforcement/laws.htm).
While the privacy laws in the US seek to balance consumer protection with promoting commerce, privacy in the European Union is considered a fundamental right that outweighs the interests of commerce. This has led to much stricter privacy protection in the EU, but also makes commerce more difficult between the US and the EU.
Non-Obvious Relationship Awareness
Digital technologies have given people many new capabilities that simplify and expedite the collection of personal information. Every time a person comes into contact with digital technologies, information about that person is being made available. From location to web-surfing habits, your criminal record to your credit report, you are constantly being monitored. This information can then be aggregated to create profiles of each person. While much of the information collected was available in the past, collecting it and combining it took time and effort. Today, detailed information about a person is available for purchase from different companies. Even information not categorized as PII can be aggregated in such a way that an individual can be identified.
This process of collecting large quantities of a variety of information and then combining it to create profiles of individuals is known as Non-Obvious Relationship Awareness, or NORA. First commercialized by big casinos looking to find cheaters, NORA is used by both government agencies and private organizations, and it is big business.
In some settings NORA can bring many benefits such as in law enforcement. By being able to identify potential criminals more quickly, crimes can be solved sooner or even prevented before they happen. But these advantages come at a price, namely, our privacy.
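The core mechanic of NORA is record linkage: joining separately collected datasets on shared attributes until an individual can be identified. The toy datasets below are invented for illustration; real NORA systems operate on far larger and messier data.

```python
# An "anonymous" dataset (no names) and an identified dataset, each
# harmless on its own. Both records are hypothetical examples.
purchases = [
    {"zip": "90210", "birth_date": "1975-03-02", "item": "blackjack strategy book"},
]
voter_roll = [
    {"name": "John Smith", "zip": "90210", "birth_date": "1975-03-02"},
    {"name": "Mary Jones", "zip": "10001", "birth_date": "1982-07-15"},
]

def link_records(anonymous, identified):
    """Join records that share quasi-identifiers (here zip + birth date).

    Neither dataset contains PII sufficient to identify the buyer on
    its own; combined, the purchase is tied to a named individual.
    """
    matches = []
    for a in anonymous:
        for b in identified:
            if a["zip"] == b["zip"] and a["birth_date"] == b["birth_date"]:
                matches.append({**b, **a})
    return matches

print(link_records(purchases, voter_roll))
# → [{'name': 'John Smith', 'zip': '90210', 'birth_date': '1975-03-02',
#     'item': 'blackjack strategy book'}]
```

This is why the paragraph above notes that even information not categorized as PII can, once aggregated, identify an individual: the linkage keys here are ordinary demographic attributes, not names or ID numbers.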
Restrictions on Data Collecting
In the United States the government has strict guidelines on how much information can be collected about its citizens. Certain classes of information have been restricted by laws over time, and the advent of digital tools has made these restrictions more important than ever.
Children’s Online Privacy Protection Act
Websites that collect information from children under the age of thirteen are required to comply with the Children’s Online Privacy Protection Act (COPPA), which is enforced by the Federal Trade Commission (FTC). To comply with COPPA, organizations must make a good-faith effort to determine the age of those accessing their websites and, if users are under thirteen years old, must obtain parental consent before collecting any information.
Family Educational Rights and Privacy Act
The Family Educational Rights and Privacy Act (FERPA) is a US law that protects the privacy of student education records. In brief, this law specifies that parents have a right to their child’s educational information until the child reaches either the age of eighteen or begins attending school beyond the high school level. At that point control of the information is given to the child. While this law is not specifically about the digital collection of information on the Internet, the educational institutions that are collecting student information are at a higher risk for disclosing it improperly because of digital technologies.
Health Insurance Portability and Accountability Act
The Health Insurance Portability and Accountability Act of 1996 (HIPAA) singles out records related to health care as a special class of personally identifiable information. This law gives patients specific rights to control their medical records, requires health care providers and others who maintain this information to get specific permission in order to share it, and imposes penalties on the institutions that breach this trust. Since much of this information is now shared via electronic medical records, the protection of those systems becomes paramount.
General Data Protection Regulation
The European Union, in an effort to help people take control over their personal data, passed the General Data Protection Regulation (GDPR) in May 2016. While this protection applies to the countries in the EU, it is having an impact on U.S. companies using the Internet as well. The regulation went into effect May 25, 2018.
EU and non-EU countries have different approaches to protecting the data of individuals. The focus in the U.S. has been on protecting data privacy so that it does not impact commercial interests.
In the EU the individual’s data privacy rights supersede those of business. Under GDPR, data cannot be transferred to countries that do not have adequate data protection for individuals. Currently, those countries include, but are not limited to, the United States, Korea, and Japan. While the GDPR applies to countries in the EU, it is having an impact around the world as businesses in other countries seek to comply with this regulation.[11]
One week prior to the effective date of May 25, 2018, only 60% of companies surveyed reported they would be ready by the deadline.[12]
Clearly, the message of GDPR has gone out around the world. It is likely that greater data protection regulations will be forthcoming from the U.S. Congress as well.
Sidebar: Do Not Track
When it comes to getting permission to share personal information, the US and the EU have different approaches. In the US, the “opt-out” model is prevalent. In this model the default agreement states that you have agreed to share your information with the organization and must explicitly tell them that you do not want your information shared. There are no laws prohibiting the sharing of your data, beyond some specific categories of data such as medical records. In the European Union the “opt-in” model is required to be the default. In this case you must give your explicit permission before an organization can share your information.
To combat this sharing of information, the Do Not Track initiative was created. As its creators explain[13]:
Do Not Track is a technology and policy proposal that enables users to opt out of tracking by websites they do not visit, including analytics services, advertising networks, and social platforms. At present few of these third parties offer a reliable tracking opt out and tools for blocking them are neither user-friendly nor comprehensive. Much like the popular Do Not Call registry, Do Not Track provides users with a single, simple, persistent choice to opt out of third-party web tracking.
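Technically, Do Not Track works by having the browser send an extra HTTP header, `DNT: 1`, with each request; a cooperating site checks for it before loading third-party trackers. The sketch below shows that check in plain Python; honoring the signal is voluntary for sites, which is part of the criticism quoted above.

```python
def honors_do_not_track(headers):
    """Return True if the request carries a Do Not Track signal.

    Browsers with DNT enabled send the header 'DNT: 1'. A cooperating
    site would then skip loading third-party tracking scripts. The
    lookup is case-insensitive since header names are case-insensitive.
    """
    normalized = {k.lower(): v for k, v in headers.items()}
    return normalized.get("dnt") == "1"

print(honors_do_not_track({"DNT": "1", "User-Agent": "ExampleBrowser"}))  # True
print(honors_do_not_track({"User-Agent": "ExampleBrowser"}))              # False
```

Note that the header only expresses the user’s preference; unlike the Do Not Call registry, there is no general legal requirement in the US that websites respect it.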
Summary
The rapid changes in information technology in the past few decades have brought a broad array of new capabilities and powers to governments, organizations, and individuals alike. These new capabilities have required thoughtful analysis and the creation of new norms, regulations, and laws. This chapter has covered the areas of intellectual property and privacy regarding how these domains have been affected by new information systems capabilities and how the regulatory environment has been changed to address them.
Study Questions
1. What does the term information systems ethics mean?
2. What is a code of ethics? What is one advantage and one disadvantage of a code of ethics?
3. What does the term intellectual property mean? Give an example.
4. What protections are provided by a copyright? How do you obtain one?
5. What is fair use?
6. What protections are provided by a patent? How do you obtain one?
7. What does a trademark protect? How do you obtain one?
8. What does the term personally identifiable information mean?
9. What protections are provided by HIPAA, COPPA, and FERPA?
10. How would you explain the concept of NORA?
11. What is GDPR and what was the motivation behind this regulation?
Exercises
1. Provide one example of how information technology has created an ethical dilemma that would not have existed before the advent of I.T.
2. Find an example of a code of ethics or acceptable use policy related to information technology and highlight five points that you think are important.
3. Do some original research on the effort to combat patent trolls. Write a two-page paper that discusses this legislation.
4. Give an example of how NORA could be used to identify an individual.
5. How are intellectual property protections different across the world? Pick two countries and do some original research, then compare the patent and copyright protections offered in those countries to those in the US. Write a two- to three-page paper describing the differences.
6. Knowing that GDPR had a deadline of May 25, 2018, provide an update on the status of compliance by firms in non-European countries.
Labs
1. Contact someone who has created a mobile device app, composed music, written a book, or created some other type of intellectual property. Ask them about the amount of effort required to produce their work and how they feel about being able to protect that work. Write a one or two page paper on your findings.
2. Research the intellectual property portion of the End User License Agreement (EULA) on a favorite computer program of yours. Explain what the EULA is saying about protection of this work.
Learning Objectives
Upon successful completion of this chapter, you will be able to:
• Describe current trends in information systems.
• Know how to think about the impacts of changes in technology on society and culture.
Introduction
Information systems have evolved at a rapid pace ever since their introduction in the 1950s. Today, devices you can hold in one hand are more powerful than the computers used to land a man on the moon in 1969. The Internet has made the entire world accessible to you, allowing you to communicate and collaborate like never before. This chapter examines current trends and looks ahead to what is coming next. As you read about technology trends in this chapter, think about how you might gain a competitive advantage in a future career through implementation of some of these devices.
Global
The first trend to note is the continuing expansion of globalization. The use of the Internet is growing all over the world, and with it the use of digital devices. Penetration rates, the percent of the population using the Internet, remains high in the developed world, but other continents are gaining.[1]
In addition to worldwide growth in Internet penetration, the number of mobile phones in use continues to increase. At the end of 2017, the world population of people over the age of 10 (those old enough to possibly have their own mobile phone) was about 5.7 billion, with an estimated 4.77 billion mobile phone users. This equates to over 80% of people in the world having a mobile phone. [2]
Worldwide mobile phone users (Source: Statista)
Social
Social media growth is another trend that continues at a strong rate. As of April 2018, there were about 2.18 billion Facebook users, a 14% increase from April 2017.[3]
Facebook users worldwide in June 2017 (Source: Internet World Stats)
In 2018, of the 2.2 billion users who regularly use Facebook, only half of them spoke English and only 10% were from the US.[4]
Besides Facebook, other social media sites are also seeing tremendous growth. Over 83% of YouTube’s users are outside the US, with the UK, India, Germany, Canada, France, South Korea, and Russia leading the way.[5] Pinterest gets over 57% of its users from outside the US, with over 9% residing in India. [6] Twitter now has over 330 million active users. [7] Social media sites not based in the US are also growing. China’s WeChat multipurpose messaging and social media app is the fifth most-visited site in the world.[8]
Personal
Ever since the advent of Web 2.0 and e-commerce, users of information systems have expected to be able to modify their experiences to meet their personal tastes. From custom backgrounds on computer desktops to unique ringtones on mobile phones, makers of digital devices provide the ability to personalize how we use them. More recently, companies such as Netflix have begun assisting their users with personalizations by viewing suggestions. In the future, we will begin seeing devices perfectly matched to our personal preferences, based upon information collected about us.
Sidebar: Mary Meeker and Internet Trends
Chapters such as this are difficult to maintain because the future is a moving target. The same goes for businesses looking to figure out where to develop new products and make investments. Enter Mary Meeker, up until 2018 a partner at the notable venture capital firm Kleiner Perkins Caufield & Byers and now forming her own investment group, Bond Capital. For the past several years, Ms. Meeker has presented the “Internet Trends” report at the Code Conference every May. The presentation consists of rapid-fire summaries of data that provides insights into all of the latest trends in digital technologies and their impact on economies, culture, and investing. For those wanting to keep up with technology, there is no better way than to unpack her annual presentation by watching a video of the presentation and reviewing the associated slide deck.
Here are the last few years of videos of her presentation: 2019 2018 2017
You can view her slide decks from previous years by going to the Bond Capital archive.
Mobile
Perhaps the most impactful trend in digital technologies in the last decade has been the advent of mobile technologies. Beginning with the simple cellphone in the 1990s and evolving into the smartphones of today, the growth of mobile has been overwhelming. Here are some key indicators of this trend:
• Mobile vs. Desktop. Minutes spent each day on a mobile device are 2.5 times the number of minutes spent on a desktop computer.
• Daytime vs. Evening. Desktop use dominates in the daytime hours, but mobile devices are dominant in the evening, with peak usage around 8:00 pm.
• Device usage. Smartphones are used more than any other technology. Laptops are in second place, followed by tablets holding a slight edge over desktops. [9]
• Smartphone sales decline. According to Gartner Group, worldwide smartphone sales declined in the fourth quarter of 2017 by 4.7% compared with the fourth quarter of 2016. This is the first decline in global smartphone sales since Gartner began tracking mobile phone sales in 2004. [10]
• The rise and fall of tablets. In 2012 the iPad sold more than three times as many units in its first twelve months as the iPhone did in its first twelve months. However, tablet sales dropped 20% from the fourth quarter 2015 to fourth quarter 2016. [11]
The decline in tablet sales continued into 2017 when first quarter sales dropped 8.5% to their lowest total since the third quarter of 2012, the year they were introduced. [12] In comparison, PC sales dropped only 1.7% in 2017 compared with tablet sales being down 10%. [13]
As discussed in chapter 5, the advent of 5G connection technologies will accelerate an “always-connected” state for a majority of people around the world.
Wearable
The average smartphone user looks at his or her smartphone 150 times a day for functions such as messaging (23 times), phone calls (22), listening to music (13), and social media (9). Many of these functions would be much better served if the technology were worn on, or even physically integrated into, our bodies. This technology is known as a “wearable.”
Wearables have been around for a long time, with technologies such as hearing aids and, later, Bluetooth earpieces. Now the product lines have expanded to include smartwatches, body cameras, sports watches, and various fitness monitors. The following table from the Gartner Group reports both historical and predicted sales.
Wearable Devices Worldwide (millions of units)
Notice the strong growth predicted by 2021. Total wearable devices are projected to increase by about 45% from 2018 to 2021.
Collaborative
As more people use smartphones and wearables, it will be simpler than ever to share data with each other for mutual benefit. Some of this sharing can be done passively, such as reporting your location in order to update traffic statistics. Other data can be reported actively, such as adding your rating of a restaurant to a review site.
The smartphone app Waze is a community-based tool that keeps track of the route you are traveling and how fast you are making your way to your destination. In return for providing your data, you can benefit from the data being sent from all of the other users of the app. Waze directs you around traffic and accidents based upon real-time reports from other users.
Yelp! allows consumers to post ratings and reviews of local businesses into a database, and then it provides that data back to consumers via its website or mobile phone app. By compiling ratings of restaurants, shopping centers, and services, and then allowing consumers to search through its directory, Yelp! has become a huge source of business for many companies. Unlike data collected passively, however, Yelp! relies on its users to take the time to provide honest ratings and reviews.
Printable
One of the most amazing innovations to be developed recently is the 3-D printer. A 3-D printer allows you to print virtually any 3-D object based on a model of that object designed on a computer. 3-D printers work by creating layer upon layer of the model using malleable materials, such as different types of glass, metals, or even wax.
3-D printing is quite useful for prototyping the designs of products to determine their feasibility and marketability. 3-D printing has also been used to create working prosthetic legs and an ear that can hear beyond the range of normal hearing. The US military now uses 3-D printed parts on aircraft such as the F-18.[14]
Here are more amazing productions from 3D printers.
• Buildings. Researchers at MIT in 2017 unveiled a 3D printing robot that can construct a building. It has a large arm and small arm. The large arm moves around the perimeter of the building while the small arm sprays a variety of materials including concrete and insulation. Total time to construct a dome-shaped building is just 14 hours.
• Musical Instruments. Flutes, fiddles, and acoustic guitars are being produced with 3D printing using both metal and plastic. You can click here for an example of making a violin.
• Medical Models. Medical models are being used to help doctors train in the areas of orthopedics, transplant surgery, and oncology. Using a 3D printed brain model similar to the one shown here, surgeons were able to save a patient from a cerebral aneurysm.
• Clothing. How would you like clothes that fit perfectly? Special software is used to measure a person, then 3D printing produces the clothing to the exact measurements. The result is well-fitting clothes that consume less raw materials. Initially the challenge was to find materials that would not break. You can read more about 3D printing of clothes and shoes.[15]
3-D printing is one of many technologies embraced by the “maker” movement. Chris Anderson, editor of Wired magazine, puts it this way[16]:
In a nutshell, the term “Maker” refers to a new category of builders who are using open-source methods and the latest technology to bring manufacturing out of its traditional factory context, and into the realm of the personal desktop computer. Until recently, the ability to manufacture was reserved for those who owned factories. What’s happened over the last five years is that we’ve brought the Web’s democratizing power to manufacturing. Today, you can manufacture with the push of a button.
Findable
The “Internet of Things” (IoT) refers to devices that have been embedded into a variety of objects including appliances, lamps, vehicles, lightbulbs, toys, thermostats, jet engines, etc. and then connected via Wi-Fi, Bluetooth, or LTE to the Internet. Principally, three factors have come together to give us IoT: inexpensive processors, wireless connectivity, and a new standard for addresses on the Internet known as IPv6. The result is that these small, embedded objects (things) are capable of sending and receiving data. Lights can be turned on or off remotely. Thermostats can be reset without anyone being present. And, perhaps on the downside, how you drive your car can be monitored and evaluated by your insurance company.
Processors have become both smaller and cheaper in recent years, leading to their being embedded in more devices. Consider technological advancements in your vehicles. Your car can now collect data about how fast you drive, where you go, radio stations you listen to, and your driving performance such as acceleration and braking. Insurance companies are offering discounts for the right to monitor your driving behavior. On the positive side, imagine the benefit of being informed instantly of anticipated traffic delays each time you adjust your route to work in the morning.
Think of IoT as devices that you wouldn’t normally consider being connected to the Internet. And, the connection is independent of human intervention. So a PC is not an IoT, but a fitness band could be. One keyword for IoT would be “independent”, not relying directly or constantly on human action.
Another keyword would be “interconnected”, in the sense that IoTs are connected to other IoTs and data collection points or data servers. This interconnectedness or uploading of data is virtually automatic.
“Ubiquitous” is also a good descriptor of IoTs, and so is “embeddedness.” It is reasonable to expect that devices through IoTs are reporting data about conditions and events that are not foremost in our thinking, at least not on a continuous basis. Today there are IoTs for monitoring traffic, air quality, soil moisture, bridge conditions, consumer electronics, autonomous vehicles, and the list seemingly never stops. The question that might come to mind is “How many IoTs are there today?”
The Gartner Group released a study in January 2017 which attempted to identify where IoTs exist. They reported that over half of all IoTs are installed in devices used by consumers. They also noted that growth in IoTs increased by over 30% from 2016 to the projected levels for 2017.[17]
Benefits from IoTs are virtually everywhere. Here is a quick list.
• Optimization of Processes. IoTs in manufacturing monitor a variety of conditions that impact production including temperature, humidity, barometric pressure – all factors which require adjustment in application of manufacturing formulas.
• Component Monitoring. IoTs are added to components in the manufacturing process, then monitored to see how each component is performing.
• Home Security Systems. IoTs make the challenge of monitoring activity inside and outside your home easier.
• Smart Thermostats. Remote control of home thermostats through the use of IoTs allows the homeowner to be more efficient in consumption of utilities.
• Residential Lighting. IoTs provide remote control of lighting, both interior and exterior, and at any time of day.[18]
Security issues need to be acknowledged and resolved, preferably before IoTs in the form of remote lighting, thermostats, and security systems are installed in a residence. Here are some security concerns that need monitoring.
• Eavesdropping. Smart speaker systems in residences have been hacked, allowing others to eavesdrop on conversations within the home.
• Internet-connected Smart Watches. These devices are sometimes used to monitor the location of children in the family. Unfortunately, hackers have been able to break in and, again, eavesdrop as well as learn where children are located.
• Lax Use by Owners. Devices such as smart thermometers, security systems, etc. come with a default password. Many owners fail to change the password, thereby allowing easy access by a hacker.
Autonomous
Another trend that is emerging is an extension of the Internet of Things: autonomous robots and vehicles. By combining software, sensors, and location technologies, devices that can operate themselves to perform specific functions are being developed. These take the form of creations such as medical nanotechnology robots (nanobots), self-driving cars, or unmanned aerial vehicles (UAVs).
A nanobot is a robot whose components are on the scale of about a nanometer, which is one-billionth of a meter. While still an emerging field, it is showing promise for applications in the medical field. For example, a set of nanobots could be introduced into the human body to combat cancer or a specific disease.
In March of 2012, Google introduced the world to their driverless car by releasing a video on YouTube showing a blind man driving the car around the San Francisco area. The car combines several technologies, including a laser radar system, worth about $150,000. While the car is not available commercially yet, three US states (Nevada, Florida, and California) have already passed legislation making driverless cars legal.
A UAV, often referred to as a “drone,” is a small airplane or helicopter that can fly without a pilot. Instead of a pilot, they are either run autonomously by computers in the vehicle or operated by a person using a remote control. While most drones today are used for military or civil applications, there is a growing market for personal drones. For around $300, a consumer can purchase a drone for personal use.
Secure
As digital technologies drive relentlessly forward, so does the demand for increased security. One of the most important innovations in security is the use of encryption, which we covered in chapter 6.
Summary
As the world of information technology moves forward, we will be constantly challenged by new capabilities and innovations that will both amaze and disgust us. As we learned in chapter 12, many times the new capabilities and powers that come with these new technologies will test us and require a new way of thinking about the world. Businesses and individuals alike need to be aware of these coming changes and prepare for them.
Study Questions
1. Which countries are the biggest users of the Internet? Social media? Mobile?
2. Which country had the largest Internet growth (in %) in the last five years?
3. How will most people connect to the Internet in the future?
4. What are two different applications of wearable technologies?
5. What are two different applications of collaborative technologies?
6. What capabilities do printable technologies have?
7. How will advances in wireless technologies and sensors make objects “findable”?
8. What is enhanced situational awareness?
9. What is a nanobot?
10. What is a UAV?
Exercises
1. If you were going to start a new technology business, which of the emerging trends do you think would be the biggest opportunity? Do some original research to estimate the market size.
2. What privacy concerns could be raised by collaborative technologies such as Waze?
3. Do some research about the first handgun printed using a 3-D printer and report on some of the concerns raised.
4. Write up an example of how IoT might provide a business with a competitive advantage.
5. How do you think wearable technologies could improve overall healthcare?
6. What potential problems do you see with a rise in the number of autonomous cars? Do some independent research and write a two-page paper that describes where autonomous cars are legal and what problems may occur.
7. Seek out the latest presentation by Mary Meeker on “Internet Trends” (if you cannot find it, the video from 2018 is available at Mary Meeker). Write a one-page paper describing what the top three trends are, in your opinion.
8. Select a business enterprise of interest to you, one that you may pursue following graduation. Select one or more of the technologies listed in this chapter, then write a one or two page paper about how you might use that technology to gain a competitive advantage.
Learning Objectives
Upon completion of this unit the learner should be able to:
• Describe functional organization of the computer
• Explain the basic principles of organization, operation and performance of modern-day computer systems
• Outline the architectural design of the computer system
This unit introduces learners to functional organization of a computer. The unit provides you with basic concepts and techniques that will get you started in understanding and analysis of hardware and software interaction in computer systems.
01: Functional Organization
Introduction
In this learning activity, the learner will study the register transfer language used to describe the internal operations of the computer and the transfer of data between its registers.
Activity Details
Register Transfer Language
A register transfer language is a notation used to describe the micro-operation transfers between registers. It is a system for expressing, in symbolic form, the micro-operation sequences among registers that are used to implement machine-language instructions. For any function of the computer, the register transfer language can be used to describe the (sequence of) micro-operations.
• Register transfer language
• A symbolic language
• A convenient tool for describing the internal organization of digital computers
• Can also be used to facilitate the design process of digital systems.
Registers and Register Transfer
• Registers are designated by capital letters, sometimes followed by numerals (e.g., A, R13, IR), for example:
○ MAR – Memory Address Register (holds addresses for the memory unit)
○ PC – Program Counter (holds the next instruction’s address)
○ IR – Instruction Register (holds the instruction being executed)
○ R1 – Register 1 (a CPU register)
• We can indicate individual bits by placing them in parentheses, e.g., PC(8-15), R2(5), etc.
• Often the names indicate function:
○ Registers and their contents can be viewed and represented in various ways
○ A register can be viewed as a single entity
○ Registers may also be represented showing the bits of data they contain
• Designation of a register
○ a register
○ portion of a register
○ a bit of a register
Common ways of drawing the block diagram of a register
Register Transfer
• Copying the contents of one register to another is a register transfer
• A register transfer is indicated as
R2 \(\leftarrow\) R1
• In this case the contents of register R1 are copied (loaded) into register R2
• A simultaneous transfer of all bits from the source R1 to the destination register R2, during one clock pulse
• Note that this is a non-destructive; i.e. the contents of R1 are not altered by copying (loading) them to R2
• A register transfer such as
R3 \(\leftarrow\) R5
• Implies that the digital system has
• the data lines from the source register (R5) to the destination register (R3)
• Parallel load in the destination register (R3)
• Control lines to perform the action
Register Transfer Language Instructions
• Register Transfer
R2 \(\leftarrow\) R1
• Simultaneous Transfer
R2 \(\leftarrow\) R1, R1 \(\leftarrow\) R2
• Conditional Transfer (Control Function)
P: R2 \(\leftarrow\) R1
or
If (P = 1) Then R2 \(\leftarrow\) R1
• Conditional, Simultaneous Transfer
T: R2 \(\leftarrow\) R1, R1 \(\leftarrow\) R2
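These transfer statements can be mimicked in ordinary code. Below is a minimal sketch (hypothetical Python helpers, not part of the original text) that models registers as dictionary entries and implements plain, simultaneous, and conditional transfers:

```python
# A minimal sketch that mimics register-transfer statements such as
# "P: R2 <- R1". Registers are modelled as dictionary entries holding
# integer contents; names like R1, R2 follow the notation above.

def transfer(regs, dst, src):
    """R_dst <- R_src: copy (non-destructive) the source register's contents."""
    regs[dst] = regs[src]

def simultaneous_swap(regs, a, b):
    """T: Ra <- Rb, Rb <- Ra -- both transfers occur on the same 'clock
    pulse', so both right-hand sides are read before either is written."""
    regs[a], regs[b] = regs[b], regs[a]

def conditional_transfer(regs, p, dst, src):
    """P: R_dst <- R_src -- performed only when control function P = 1."""
    if p == 1:
        transfer(regs, dst, src)

regs = {"R1": 7, "R2": 0}
transfer(regs, "R2", "R1")                 # R2 <- R1
print(regs)                                # both registers now hold 7

conditional_transfer(regs, 0, "R1", "R2")  # P = 0: no transfer happens
simultaneous_swap(regs, "R1", "R2")        # exchange contents in one step
```

Note that `transfer` is non-destructive toward the source, exactly as the text describes: copying R1 into R2 leaves R1 unchanged.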
Basic Symbols For Register Transfer
Symbol | Description | Example
Letters (and numerals) | Denotes a register | MAR, R2
Parentheses ( ) | Denotes a part of a register | R2(0-7), R2(L)
Arrow \(\leftarrow\) | Denotes transfer of information | R2 \(\leftarrow\) R1
Comma , | Separates two micro-operations | R2 \(\leftarrow\) R1, R1 \(\leftarrow\) R2
Conclusion
The learner was introduced to the register transfer language. In particular, how specific notations (symbols) are used to specify digital systems, rather than in words. Learners were also introduced to how registers can be viewed and represented.
Assessment
1. Briefly explain what can be used to store one or more bits of data and also accept and/or transfer information serially.
Shift registers
Shift registers are groups of flip-flops; each flip-flop in the register stores one bit only, i.e., 1 or 0.
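As an illustration, the flip-flop behaviour described in this answer can be simulated with a short sketch (hypothetical code; the class name and width are invented for illustration):

```python
# Minimal sketch of a 4-bit serial-in shift register built from flip-flops.
# Each element of `bits` stands for one flip-flop holding a single 0/1.

class ShiftRegister:
    def __init__(self, width=4):
        self.bits = [0] * width      # all flip-flops cleared

    def shift_in(self, bit):
        """On each clock pulse, every bit moves one position and the new
        serial input enters at the left; the rightmost bit is shifted out."""
        out = self.bits[-1]
        self.bits = [bit] + self.bits[:-1]
        return out

sr = ShiftRegister()
for b in (1, 0, 1, 1):               # clock in the serial stream 1, 0, 1, 1
    sr.shift_in(b)
print(sr.bits)                       # -> [1, 1, 0, 1]
```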
2. Which addressing mode has its address part pointing to the address of the actual data?
Direct addressing: the operand is stored in memory, and the memory address of the operand is specified in the instruction.
3. Which addressing mode does not require a fetch operation for the operand?
Immediate addressing: no separate operand fetch is required because the data is part of the instruction itself.
4. Which addressing mode is used in an instruction of the form ADD X, Y?
Absolute (direct) addressing is used.
5. Which register is used as a working area in the CPU?
The accumulator: a register in the computer’s central processing unit in which intermediate arithmetic and logic results are stored.
6. Which addressing mode is used in the instruction PUSH B?
Register addressing is used: the operand is held in a register (here, B) that is specified in the instruction.
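The addressing modes touched on in these questions can be contrasted with a toy model (entirely hypothetical; the memory contents and register names are invented for illustration):

```python
# Toy illustration of three addressing modes discussed above.
# `memory` and `regs` are invented structures for this sketch only.

memory = {100: 42, 200: 7}
regs = {"B": 100, "ACC": 0}

def load_immediate(value):
    """Immediate: the operand is part of the instruction -- no memory fetch."""
    return value

def load_direct(address):
    """Direct: the instruction holds the memory address of the operand."""
    return memory[address]

def load_register_indirect(reg):
    """Register indirect: a register holds the address of the operand."""
    return memory[regs[reg]]

print(load_immediate(5))             # -> 5
print(load_direct(200))              # -> 7
print(load_register_indirect("B"))   # -> 42 (register B holds address 100)
```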
Introduction
This section introduces the learners to the micro-architecture of the computer. That is the resources and methods used to achieve specifications.
Activity Details
Micro-architecture is the term used to describe the resources and methods used to achieve architecture specification. The term typically includes the way in which these resources are organized as well as the design techniques used in the processor to reach the target cost and performance goals. The micro-architecture essentially forms a specification for the logical implementation.
Micro-architecture, also called computer organization and sometimes abbreviated as μarch or uarch, is the way a given instruction set architecture (ISA) is implemented in a particular processor. A given ISA may be implemented with different micro-architectures; implementations may vary due to different goals of a given design or due to shifts in technology.
The micro-architecture is related to, but not the same as, the instruction set architecture. Micro- architectural elements may be everything from single logic gates, to registers, lookup tables, multiplexers, counters, etc., to complete ALUs, floating point units (FPU) and even larger elements.
A few important points:
• A single micro-architecture can be used to implement many different instruction sets, by means of changing the control store.
• Two machines may have the same micro-architecture, and so the same block diagram, but completely different hardware implementations. The differences span both the level of the electronic circuitry and, below that, the physical level of manufacturing (of both ICs and/or discrete components).
• Machines with different micro-architectures may have the same instruction set architecture, and so both are capable of executing the same programs. New micro-architectures and/or circuitry solutions, along with advances in semiconductor manufacturing, are what allow newer generations of processors to achieve higher performance.
The pipelined datapath is the most commonly used datapath design in micro-architecture today. This technique is used in most modern microprocessors, microcontrollers, and DSPs. The pipelined architecture allows multiple instructions to overlap in execution, much like an assembly line.
The pipeline includes several different stages which are fundamental in micro-architecture designs. Some of these stages include instruction fetch, instruction decode, execute, and write back. Some architectures include other stages such as memory access. The design of pipelines is one of the central micro-architectural tasks.
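The stage overlap described above can be visualised with a small scheduling sketch (hypothetical code; it assumes an idealised four-stage pipeline with one instruction issued per cycle and no stalls):

```python
# Sketch of how instructions overlap in a 4-stage pipeline
# (IF = fetch, ID = decode, EX = execute, WB = write back).
# Purely illustrative; it only computes the stage occupancy per clock cycle.

STAGES = ["IF", "ID", "EX", "WB"]

def pipeline_schedule(num_instructions):
    """Return, for each clock cycle, the list of (instruction, stage) pairs
    active in that cycle, assuming one issue per cycle and no stalls."""
    total_cycles = num_instructions + len(STAGES) - 1
    schedule = []
    for cycle in range(total_cycles):
        active = []
        for instr in range(num_instructions):
            stage_index = cycle - instr
            if 0 <= stage_index < len(STAGES):
                active.append((instr, STAGES[stage_index]))
        schedule.append(active)
    return schedule

for cycle, active in enumerate(pipeline_schedule(3)):
    print(f"cycle {cycle}: {active}")
# 3 instructions finish in 6 cycles instead of 3 * 4 = 12 sequential cycles.
```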
Execution units are also essential to micro-architecture. Execution units include arithmetic logic units (ALUs), floating point units (FPUs), load/store units, branch prediction units, and SIMD units.
These units perform the operations or calculations of the processor. The choice of the number of execution units, and of their latency and throughput, is a central micro-architectural design task.
The size, latency, throughput and connectivity of memories within the system are also micro- architectural decisions.
System-level design decisions such as whether or not to include peripherals, such as memory controllers, can be considered part of the micro-architectural design process. This includes decisions on the performance-level and connectivity of these peripherals.
Unlike architectural design, where achieving a specific performance level is the main goal, micro-architectural design pays closer attention to other constraints. Since micro-architecture design decisions directly affect what goes into a system, attention must be paid to such issues as:
• Chip area/cost.
• Power consumption.
• Logic complexity.
• Ease of connectivity.
• Manufacturability.
• Ease of debugging.
• Testability.
Conclusion
This section has highlighted the micro-architecture. It discussed the resources and methods used to achieve the architecture.
Assessment
1. Outline with an explanation the micro-architecture
Micro-architecture is used to describe the units that were controlled by the micro-program words. Micro-architecture is related to, but not the same as, the instruction set architecture.
The instruction set architecture corresponds to the programming model of a processor as seen by an assembly language programmer or compiler writer, which includes the execution model, processor registers, address and data formats, etc. The micro-architecture (or computer organization) describes a lower-level structure and therefore manages a large number of details that are hidden in the programming model. It describes the inside parts of the processor and how they work together in order to implement the architectural specification.
Micro-architectural elements may be everything from single logic gates, to registers, lookup tables, multiplexers, counters, etc., to complete ALUs, FPUs and even larger elements. The electronic circuitry level can, in turn, be subdivided into transistor-level details, such as which basic gate-building structures are used and what logic implementation types (static/dynamic, number of phases, etc.) are chosen, in addition to the actual logic design used built them. | textbooks/workforce/Information_Technology/Information_Technology_Hardware/01%3A_Functional_Organization/1.02%3A_Micro-architectures_-_Achievements_Connections_by_Wires_and_Microprogr.txt |
Introduction
This section introduces the learner to instruction pipelining and instruction-level parallelism (ILP): basically, how many of the operations in a computer program can be performed simultaneously.
Activity Details
Instruction-level parallelism is a measure of how many of the operations in a computer program can be performed simultaneously. The potential overlap among instructions is what makes this parallelism possible.
There are two approaches to instruction level parallelism:
• Hardware
• Software
The hardware level exploits dynamic parallelism, whereas the software level works on static parallelism.
The Pentium processor works on a dynamic sequence of parallel execution, while the Itanium processor works on static-level parallelism.
Example \(1\)
Consider the following program:
1. e = a + b
2. f = c + d
3. m = e * f
Operation 3 depends on the results of operations 1 and 2, so it cannot be calculated until both of them are completed. However, operations 1 and 2 do not depend on any other operation, so they can be calculated simultaneously. If we assume that each operation can be completed in one unit of time then these three instructions can be completed in a total of two units of time, giving an ILP of 3/2.
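Under the same one-unit-per-operation assumption, the ILP of this example can be computed from the dependency graph (a hypothetical sketch, not part of the original example):

```python
# Sketch that derives the ILP of the three-operation example above:
# operation 3 depends on the results of operations 1 and 2.
# Assuming each operation takes one unit of time,
# ILP = number of operations / length of the longest dependency chain.

deps = {1: [], 2: [], 3: [1, 2]}     # operation -> operations it depends on

def chain_length(op, deps, memo):
    """Longest dependency chain ending at `op` (its earliest finish time)."""
    if op not in memo:
        memo[op] = 1 + max((chain_length(d, deps, memo) for d in deps[op]),
                           default=0)
    return memo[op]

memo = {}
critical_path = max(chain_length(op, deps, memo) for op in deps)
ilp = len(deps) / critical_path
print(critical_path, ilp)            # -> 2 1.5, i.e. the ILP of 3/2 above
```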
A goal of compiler and processor designers is to identify and take advantage of as much ILP as possible.
Ordinary programs are typically written under a sequential execution model where instructions execute one after the other and in the order specified by the programmer. ILP allows the compiler and the processor to overlap the execution of multiple instructions or even to change the order in which instructions are executed.
How much ILP exists in programs is very application specific. In certain fields, such as graphics and scientific computing the amount can be very large. However, workloads such as cryptography may exhibit much less parallelism.
Micro-architectural techniques that are used to exploit ILP include:
• Instruction pipelining, where the execution of multiple instructions can be partially overlapped.
• Superscalar execution, VLIW, and the closely related explicitly parallel instruction computing concepts, in which multiple execution units are used to execute multiple instructions in parallel.
• Out-of-order execution where instructions execute in any order that does not violate data dependencies. Note that this technique is independent of both pipelining and superscalar. Current implementations of out-of-order execution dynamically (i.e., while the program is executing and without any help from the compiler) extract ILP from ordinary programs. An alternative is to extract this parallelism at compile time and somehow convey this information to the hardware. Due to the complexity of scaling the out-of-order execution technique, the industry has re-examined instruction sets which explicitly encode multiple independent operations per instruction.
• Register renaming which refers to a technique used to avoid unnecessary serialization of program operations imposed by the reuse of registers by those operations, used to enable out-of-order execution.
• Speculative execution, which allows the execution of complete instructions or parts of instructions before it is certain whether this execution should take place. A commonly used form of speculative execution is control flow speculation, where instructions past a control flow instruction (e.g., a branch) are executed before the target of the control flow instruction is determined. Several other forms of speculative execution have been proposed and are in use, including speculative execution driven by value prediction, memory dependence prediction and cache latency prediction.
• Branch prediction, which is used to avoid stalling while control dependencies are resolved. Branch prediction is used with speculative execution.
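To make the last two points concrete, the sketch below models a two-bit saturating counter, a common textbook branch-prediction scheme (this is an illustrative toy, not the scheme of any particular CPU): states 0-1 predict "not taken", states 2-3 predict "taken", and each observed outcome nudges the counter toward that outcome.

```python
# Minimal sketch of a 2-bit saturating-counter branch predictor.
# States 0-1 predict "not taken"; states 2-3 predict "taken".

def predict(state):
    return state >= 2  # True means "predict taken"

def update(state, taken):
    # move toward the observed outcome, saturating at 0 and 3
    return min(state + 1, 3) if taken else max(state - 1, 0)

state = 2                              # start in "weakly taken"
outcomes = [True, True, False, True]   # actual branch behaviour
correct = 0
for taken in outcomes:
    if predict(state) == taken:
        correct += 1
    state = update(state, taken)
print(correct, len(outcomes))
```

The two-bit counter tolerates a single anomalous outcome (the lone `False` here causes only one misprediction) instead of flipping its prediction immediately, which is why it outperforms a one-bit scheme on loop branches.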
Dataflow architectures are another class of architectures where ILP is explicitly specified. In recent years, ILP techniques have been used to provide performance improvements in spite of the growing disparity between processor operating frequencies and memory access times (early ILP designs such as the IBM System/360 Model 91 used ILP techniques to overcome the limitations imposed by a relatively small register file). Presently, a cache miss penalty to main memory costs several hundred CPU cycles. While in principle it is possible to use ILP to tolerate even such memory latencies, the associated resource and power dissipation costs are disproportionate. Moreover, the complexity and often the latency of the underlying hardware structures result in reduced operating frequency, further reducing any benefits. Hence, the aforementioned techniques prove inadequate to keep the CPU from stalling for off-chip data. Instead, the industry is heading towards exploiting higher levels of parallelism through techniques such as multiprocessing and multithreading.
Superscalar architectures
A superscalar architecture is a CPU design that implements a form of parallelism called instruction-level parallelism within a single processor; it is designed to improve the performance of the execution of scalar instructions. A scalar is a variable that can hold only one atomic value at a time, e.g., an integer or a real. A scalar architecture processes one data item at a time - the computers we have discussed up till now.
Examples of non-scalar variables:
• Arrays
• Matrices
• Records
In a superscalar architecture (SSA), several scalar instructions can be initiated simultaneously and executed independently. Pipelining also allows several instructions to be executed at the same time, but they have to be in different pipeline stages at a given moment. SSA includes all the features of pipelining but, in addition, there can be several instructions executing simultaneously in the same pipeline stage. SSA therefore introduces a new level of parallelism, called instruction-level parallelism.
Conclusion
This section covered instruction pipelining and instruction-level parallelism (ILP), that is, how many of the operations in a computer program can be performed simultaneously.
Assessment
1. Outline and give an example of instruction-level parallelism (ILP).
ILP is a measure of how many of the operations in a computer program can be performed simultaneously. The potential overlap among instructions is called instruction-level parallelism.
The basic idea is to execute several instructions in parallel. Parallelism exists in that we perform different operations (fetch, decode, ...) on several different instructions in parallel.
The amount of ILP is mostly determined by the number of true (data) dependencies and procedural (control) dependencies relative to the number of other instructions.
e.g.
• A: ADD R1 = R2 + R3
• B: SUB R4 = R1 – R5
ILP is traditionally “extracting parallelism from a single instruction stream working on a single stream of data”.
Introduction
This section introduces the learners to processor and system performance. Material on the characteristics of the processor and system performance, as well as the components that determine system performance, is provided.
Activity Details
The performance of a computer is dependent on how well it works together as a whole. Continually upgrading one part of the computer while leaving outdated parts installed will not improve performance much, if at all. The processor, memory and video card are the most important components when determining performance inside a computer.
The following are some of the most important parts of the computer regarding its speed and computing power;
1. Clock speed (Processor speed);
Clock speed is often played up to be the major factor in a computer’s overall performance. In rare cases this is true, but an average user rarely uses 100 percent of the Central Processing Unit’s (CPU) power. Things like encoding video or encrypting files, or anything that computes large, complex numbers, require a lot of processor power. Most users spend most of their time typing, reading email or viewing web pages. During this time, the computer’s CPU is probably hovering around 1 or 2 percent of its total speed. Startup time is probably the only time the CPU is under stress, and even then it’s often limited by the hard drive speed.
2. System RAM speed and size;
The amount and speed of the RAM in your computer make a huge difference in how your computer performs. If you are trying to run Windows XP with 64 MB of RAM, it probably won’t even work. When the computer uses up all available RAM, it has to start using the hard drive to cache data, which is much slower. The constant transfer of data between RAM and virtual memory (hard drive memory) slows a computer down considerably, especially when trying to load applications or files. RAM comes in two types, which differ in the technology they use to hold data, dynamic RAM being the more common. Dynamic RAM needs to be refreshed thousands of times per second.
Static RAM does not need to be refreshed, which makes it faster; but it is also more expensive than dynamic RAM. Both types of RAM are volatile, meaning that they lose their contents when the power is turned off.
3. Disk speed and size;
Hard disk speed is one of the biggest factors in your computer’s performance. How fast the hard drive can find (average seek time), read, write, and transfer data makes a big difference in the way your computer performs. Most hard drives today spin at 7,200 RPM; older models and laptop drives still spin at 5,400 RPM, which is one reason laptops often appear sluggish compared to a desktop equivalent. The size of your hard drive plays a very small role in the performance of a computer. As long as you have enough free space for virtual memory and keep the disk defragmented, it will perform well no matter what the size.
4. Video card - (onboard video RAM, chip type and speed);
Whenever your computer puts an image on the screen something has to render it. If a computer is doing this with software it is often slow and will affect the performance of the rest of the computer. Also, the image will not be rendered as crisp or as smoothly in the case of video. Even a low-end video card will significantly improve the performance of the computer by taking the large task of rendering the images on the screen from the CPU to the graphics card. If you work with large image files, video or play games you will want a higher end video card. Video cards use their own RAM called Video RAM. The more Video RAM a computer has the more textures and images the card can remember at a time. High end graphics cards for desktops now come with up to 64 megabytes of Video RAM, Laptops often only have 8 or 16 megabytes of Video RAM.
5. Others include memory and system buses
Latency memory, performance and efficiency
Latency is a time delay between the cause and the effect of some physical change in the system being observed. Latency is a result of the limited velocity with which any physical interaction can take place.
This velocity is always lower than or equal to the speed of light. Therefore, every physical system that has spatial dimensions different from zero will experience some sort of latency. The precise definition of latency depends on the system being observed and the nature of stimulation. In communications, the lower limit of latency is determined by the medium being used for communications. In reliable two-way communication systems, latency limits the maximum rate that information can be transmitted, as there is often a limit on the amount of information that is “in-flight” at any one moment. In the field of human-machine interaction, perceptible latency (delay between what the user commands and when the computer provides the results) has a strong effect on user satisfaction and usability.
Computers run sets of instructions called a process. In operating systems, the execution of the process can be postponed if other processes are also executing. In addition, the operating system can schedule when to perform the action that the process is commanding.
For example, suppose a process commands that a computer card’s voltage output be set high-low-high-low and so on at a rate of 1000 Hz. The operating system may choose to adjust the scheduling of each transition (high-low or low-high) based on an internal clock.
The latency is the delay between the process instruction commanding the transition and the hardware actually transitioning the voltage from high to low or low to high. System designers building real-time computing systems want to guarantee worst-case response. That is easier to do when the CPU has low interrupt latency and when it has deterministic response.
Caches
Cache memory, also called CPU memory, is random access memory (RAM) that a computer microprocessor can access more quickly than it can access regular RAM. This memory is typically integrated directly with the CPU chip or placed on a separate chip that has a separate bus interconnect with the CPU. The basic purpose of cache memory is to store program instructions that are frequently re-referenced by software during operation. Fast access to these instructions increases the overall speed of the software program. Most programs use very few resources once they have been opened and operated for a time, mainly because frequently re-referenced instructions tend to be cached. This explains why measurements of system performance in computers with slower processors but larger caches tend to be faster than measurements of system performance in computers with faster processors but more limited cache space.
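The benefit of caching frequently re-referenced instructions can be illustrated with a toy model. The sketch below simulates a tiny direct-mapped cache (an assumption of this illustration, one common organization; not a model of any specific CPU): each memory block maps to exactly one cache line, so after the first miss, repeated references to the same block hit in the cache.

```python
# Illustrative sketch of a direct-mapped cache: each memory block maps
# to exactly one line, so re-referenced blocks hit after the first miss.

NUM_LINES = 4
BLOCK_SIZE = 16  # bytes per cache line

def simulate(addresses):
    lines = [None] * NUM_LINES   # tag stored in each line (None = empty)
    hits = 0
    for addr in addresses:
        block = addr // BLOCK_SIZE
        index = block % NUM_LINES   # which line this block maps to
        tag = block // NUM_LINES    # identifies the block within that line
        if lines[index] == tag:
            hits += 1               # block already cached
        else:
            lines[index] = tag      # miss: fill the line
    return hits

# A tight loop re-executing the same three instructions four times:
trace = [0, 4, 8] * 4
hits = simulate(trace)
print(hits, len(trace))
```

All three addresses fall in the same 16-byte block, so only the very first access misses: 11 hits out of 12 references, which is exactly the re-reference behaviour the paragraph above describes.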
Conclusion
This section covered processor and system performance and the characteristics of the components that determine system performance.
Assessment
1. Discuss system performance
Computer performance is characterized by the amount of useful work accomplished by a computer system or computer network compared to the time and resources used. Depending on the context, high computer performance may involve one or more of the following:
• Short response time for a given piece of work
• High throughput (rate of processing work)
• Low utilization of computing resource(s)
• High availability of the computing system or application
• Fast (or highly compact) data compression and decompression
• High bandwidth
• Short data transmission time
1.05: Unit 1 Summary
At the end of this unit, you will be conversant with the advanced functional organization of a computer, having learned how data transfer occurs in the computer, the architecture of the microprocessor, the types of instruction transfer within the computer, and the processor and system performance of the computer.
Unit Assessment
The following section will test your understanding of this unit.
Instructions
Answer the following questions
1. Differentiate between computer architecture and computer organization.
2. Explain the significance of layered architecture.
3. Explain the various types of performance metrics.
Grading Scheme
The marks will be awarded as shown below
Question 1: any difference, award 2 marks each (8 marks)
Question 2: any significance listed, award 1.5 marks each, maximum 4 significances (6 marks)
Question 3: explanation, 2 marks; each significance listed, award 1 mark, maximum 4 (6 marks)
Total: 20 marks
Feedback
1. Difference between computer architecture and computer organization:
Computer architecture: includes emphasis on logical design, computer design and system design; it is concerned with the structure and behaviour of the computer as seen by the user.
Computer organization: includes emphasis on the system components, circuit design, logical design, structure of instructions, computer arithmetic, processor control, assembly language, programming methods and performance enhancement; it is concerned with the way the hardware components operate and the way they are connected together to form the computer system.
2. Explain the significance of layered architecture
Significance of layered architecture: In layered architecture, complex problems can be segmented into smaller and more manageable form. Each layer is specialized for specific functioning. Team development is possible because of logical segmentation: a team of programmers will build the system, and work has to be sub-divided along clear boundaries.
3. Explain the various types of performance metrics.
Performance metrics include availability, response time, channel capacity, latency, and completion time.
Learning Objectives
Upon completion of this unit the learner should be able to:
1. Describe multiprocessing
2. Explain how a processor can support concurrent running of programs
3. Describe how Amdahl's law determines the maximum improvement of a system
This unit introduces you to multiprocessing, which is the ability of a system to support more than one processor and/or the ability to allocate tasks between them. This will put you in a position to understand how several programs can run concurrently on the same computer.
02: Multiprocessing
Introduction
The section introduces the learners to Amdahl’s law; this law introduces the concept of how to find the maximum expected improvement to an overall system when only part of the system is improved.
Activity Details
Amdahl's law is also known as Amdahl's argument. It is used to find the maximum expected improvement to an overall system when only part of the system is improved. It is often used in parallel computing to predict the theoretical maximum speedup using multiple processors.
The law is named after computer architect Gene Amdahl.
Amdahl’s law is used to find the maximum expected improvement to an overall system when only part of the system is improved. It is often used in parallel computing to predict the theoretical maximum speedup using multiple processors.
The speedup of a program using multiple processors in parallel computing is limited by the time needed for the sequential fraction of the program. E.g. if a program needs 20 hours using a single processor core, and a particular portion of the program which takes one hour to execute cannot be parallelized, while the remaining 19 hours (95%) of execution time can be parallelized, then regardless of how many processors are devoted to a parallelized execution of this program, the minimum execution time cannot be less than that critical one hour. Hence, the theoretical speedup is limited to at most 20$\times$.
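The 20-hour example can be checked numerically. The short sketch below plugs the numbers into Amdahl's law (the function name `time_on` is just illustrative):

```python
# Numeric check of the example: P = 0.95 of a 20-hour job can be
# parallelized; the remaining hour cannot.

T = 20.0   # hours, serial execution time
P = 0.95   # parallelizable fraction

def time_on(n):
    # Amdahl's law: serial part + parallel part split across n processors
    return (1 - P) * T + (P * T) / n

print(time_on(1))          # 20.0 hours on one processor
print(time_on(1000))       # approaches, but never drops below, 1 hour
print(T / time_on(1000))   # speedup approaches the limit 1 / (1 - P) = 20
```

No matter how large `n` becomes, `time_on(n)` never falls below the one serial hour, so the speedup is capped at 20$\times$, exactly as stated above.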
Equation
A task which can be parallelized can be split up into two parts:
• A non-parallelizable part (or serial part) which cannot be parallelized;
• A parallelizable part which can be parallelized.
Example $1$
Consider a program that processes files from disk. A small part of that program may scan the directory and create a list of files internally in memory. After that, each file is passed to a separate thread for processing. The part that scans the directory and creates the file list cannot be parallelized, but processing the files can.
The time taken to execute the whole task in serial (not in parallel) is denoted $T$. The time $T$ includes the time of both the non-parallelizable and parallelizable parts. The portion of time to execute in serial the parallelizable part is denoted $P$. The portion of time to execute in serial the non-parallelizable part is then $1 - P$.
From this follows that : $T=(1-P)T+PT$
It is the parallelizable part $P$ that can be sped up by executing it in parallel. How much it can be sped up depends on how many subtasks are executed in parallel. The theoretical execution time of the parallelizable part on a system capable of executing $N$ subtasks in parallel is $\frac{P}{N}T$.
Amdahl’s law gives the theoretical execution time $T(N)$ of the whole task on a system capable of executing $N$ subtasks in parallel:

$T(N) = (1-P)T + \frac{P}{N}T$

Consequently, the best (with an infinite number of subtasks) theoretical execution time of the whole task is

$T(\infty) = (1-P)T$
In terms of theoretical overall speedup, Amdahl’s law is given as

$S(N) = \frac{T}{T(N)} = \frac{1}{(1-P) + \frac{P}{N}}$

and the best theoretical overall speedup is

$S(\infty) = \frac{1}{1-P}$
As an example, if $P$ is 90%, then $1 - P$ is 10%, and the task can be sped up by a maximum of a factor of 10, no matter how large the value of N used. For this reason, parallel computing is only useful for small numbers of processors and problems with very high values of $P$ (close to 100%): so-called embarrassingly parallel problems. A great part of the craft of parallel programming consists of attempting to reduce the component $1 - P$ to the smallest possible value.
In parallel computing, $P$ can be estimated by using the measured speedup $S(N)$ on a specific number of processors $N$:

$P = \frac{\frac{1}{S(N)} - 1}{\frac{1}{N} - 1}$
$P$ estimated in this way can then be used in Amdahl’s law to predict speedup for a different number of processors.
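This estimate-then-predict workflow is easy to sketch. The measured 3$\times$ speedup on 4 processors below is a made-up illustration, and the function names are ours:

```python
# Sketch: estimate the parallel fraction P from one measured speedup,
# then use it to predict speedup on more processors (numbers invented).

def estimate_p(speedup, n):
    # Rearranged Amdahl's law: P = (1/S(N) - 1) / (1/N - 1)
    return (1.0 / speedup - 1.0) / (1.0 / n - 1.0)

def predict_speedup(p, n):
    # Amdahl's law: S(N) = 1 / ((1 - P) + P/N)
    return 1.0 / ((1.0 - p) + p / n)

p = estimate_p(3.0, 4)                   # measured 3x speedup on 4 CPUs
print(round(p, 4))                       # estimated parallel fraction
print(round(predict_speedup(p, 16), 3))  # predicted speedup on 16 CPUs
```

For a measured 3$\times$ speedup on 4 processors the estimated fraction is $P = 8/9$, which predicts a 6$\times$ speedup on 16 processors and a hard ceiling of 9$\times$ no matter how many are added.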
Conclusion
This unit introduced the learner to the Amdahl’s law. Examples were used to teach the learner how to find the maximum expected improvement to an overall system when only part of the system is improved.
1. Briefly describe Amdahl's law on parallel computing
In computer architecture, Amdahl’s law (or Amdahl’s argument) gives the theoretical speedup in latency of the execution of a task at fixed workload that can be expected of a system whose resources are improved. It is named after computer scientist Gene Amdahl, and was presented at the AFIPS Spring Joint Computer Conference in 1967.
Amdahl’s law can be formulated the following way:

$S_{\text{latency}}(s) = \frac{1}{(1-p) + \frac{p}{s}}$

where

• $S_{\text{latency}}$ is the theoretical speedup in latency of the execution of the whole task;
• $s$ is the speedup in latency of the execution of the part of the task that benefits from the improvement of the resources of the system;
• $p$ is the fraction of the execution time of the whole task occupied, before the improvement, by the part that benefits from the improved resources of the system.
Introduction
This section introduces the learner to short vector processing (multimedia operations), which was initially developed for supercomputing applications. Today it is important for multimedia operations.
Activity Details
Vector processor
A vector processor, also called an array processor, is a central processing unit (CPU) that implements an instruction set containing instructions that operate on one-dimensional arrays of data called vectors, in contrast to scalar processors, whose instructions operate on single data items. Vector processors can greatly improve performance on certain workloads, notably numerical simulation and similar tasks.
Commonly found in supercomputers, vector processors are machines built primarily to handle large scientific and engineering calculations. Their performance derives from a heavily pipelined architecture which operations on vectors and matrices can efficiently exploit.
Vector processors have high- level operations that work on linear arrays of numbers: “vectors”.
Properties of Vector Processors
Vector processors have the following properties:
1. Single vector instruction implies lots of work (loop), i.e. it has fewer instruction fetches
2. Each result independent of previous result.
a. Multiple operations can be executed in parallel
b. Simpler design, high clock rate
c. Compiler (programmer) ensures no dependencies
3. Reduces branches and branch problems in pipelines
4. Vector instructions access memory with known pattern
a. Effective prefetching
b. Amortize memory latency of over large number of elements
c. Can exploit a high bandwidth memory system
d. No (data) caches required!
Styles of Vector Architectures
There are two styles:
1. Memory-memory vector processors
a. All vector operations are memory to memory
2. Vector-register processors
a. All vector operations are between vector registers (except vector load and store)
b. Vector equivalent of load-store architectures
c. Includes all vector machines since the late 1980s
d. We assume vector-register machines for the rest of this section
Components of a Vector Processor
• Scalar CPU: registers, data paths, instruction fetch logic
• Vector register
• Fixed length memory bank holding a single vector
• Typically 8-32 vector registers, each holding 1 to 8 Kbits
• Has at least 2 read and 1 write ports
• MM: Can be viewed as array of 64b, 32b, 16b, or 8b elements
• Vector functional units (FUs)
• Fully pipelined, start new operation every clock
• Typically 2 to 8 FUs: integer and FP
• Multiple data paths (pipelines) used for each unit to process multiple elements per cycle
• Vector load-store units (LSUs)
• Fully pipelined unit to load or store a vector
• Multiple elements fetched/stored per cycle
• May have multiple LSUs
• Cross-bar to connect FUs, LSUs, registers
Basic Vector Instructions
These are shown in Table 1 below:
Table 1
Instr. Operands Operation Comment
VADD.VV V1, V2, V3 V1 = V2 + V3 vector + vector
VADD.SV V1, R0, V2 V1 = R0 + V2 scalar + vector
VMUL.VV V1, V2, V3 V1 = V2$\times$V3 vector $\times$ vector
VMUL.SV V1, R0, V2 V1 = R0$\times$V2 scalar $\times$ vector
VLD V1, R1 V1 = M [R1..R1 + 63] load, stride = 1
VLDS V1, R1, R2 V1 = M [R1..R1 + 63*R2] load, stride = R2
VLDX V1, R1, V2 V1 = M [R1 + V2i, i = 0..63] indexed("gather")
VST V1, R1 M [R1..R1 + 63] = V1 store, stride = 1
VSTS V1, R1, R2 M [R1..R1 + 63*R2] = V1 store, stride = R2
VSTX V1, R1, V2 M [R1 + V2i, i = 0..63] = V1 indexed("scatter")
+ all the regular scalar instructions (RISC style)...
Vector Memory Operations
• Load/store operations move groups of data between registers and memory
• Three types of addressing
• Unit stride
• Fastest
• Non-unit (constant) stride
• Indexed (gather-scatter)
• Vector equivalent of register indirect
• Good for sparse arrays of data
• Increases number of programs that vectorize
• compress/expand variant also
• Support for various combinations of data widths in memory
• {.L,.W,.H.,.B} $\times$ {64b, 32b, 16b, 8b}
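The three addressing modes can be sketched with plain Python lists standing in for memory (an illustrative toy, not any real ISA; the function names mirror the instruction mnemonics in Table 1):

```python
# Sketch of the three vector addressing modes, with a Python list as
# "memory": memory[i] holds the value 100 + i for easy checking.

memory = list(range(100, 200))

def vld(base, n):                 # unit stride: consecutive elements
    return [memory[base + i] for i in range(n)]

def vlds(base, stride, n):        # constant (non-unit) stride
    return [memory[base + i * stride] for i in range(n)]

def vldx(base, index_vector):     # indexed load ("gather")
    return [memory[base + idx] for idx in index_vector]

print(vld(0, 4))           # [100, 101, 102, 103]
print(vlds(0, 3, 4))       # [100, 103, 106, 109]
print(vldx(0, [7, 2, 9]))  # [107, 102, 109]
```

The gather form is what lets sparse-array code vectorize: the index vector can point anywhere, so scattered elements are collected into one dense vector register.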
Vector Code Example
Y[0:63] = Y[0:63] + a*X[0:63]
See Tables 2 and 3 below.

Table 2: 64-element SAXPY, scalar code

      LD    R0, a
      ADDI  R4, Rx, #512
loop: LD    R2, 0(Rx)
      MULTD R2, R0, R2
      LD    R4, 0(Ry)
      ADDD  R4, R2, R4
      SD    R4, 0(Ry)
      ADDI  Rx, Rx, #8
      ADDI  Ry, Ry, #8
      SUB   R20, R4, Rx
      BNZ   R20, loop

Table 3: 64-element SAXPY, vector code

      LD      R0, a       #load scalar a
      VLD     V1, Rx      #load vector X
      VMUL.SV V2, R0, V1  #vector mult
      VLD     V3, Ry      #load vector Y
      VADD.VV V4, V2, V3  #vector add
      VST     Ry, V4      #store vector Y
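The same SAXPY computation (Y = a*X + Y) can be expressed in Python to show why the vector version needs so few instructions: each vector instruction does a whole array's worth of work.

```python
# SAXPY (Y = a*X + Y) two ways: an element-at-a-time scalar loop, and a
# single conceptual whole-array ("vector") operation. Illustrative only.

a = 2.0
X = [float(i) for i in range(64)]
Y = [1.0] * 64

# scalar style: one element per loop iteration (many instruction issues)
scalar = Y[:]
for i in range(64):
    scalar[i] = scalar[i] + a * X[i]

# "vector" style: conceptually one whole-array multiply-add
vector = [y + a * x for x, y in zip(X, Y)]

print(scalar == vector)  # same result either way
```

Both forms compute identical results; the difference is purely in how many instructions must be fetched and issued, which is exactly the advantage Table 3 shows.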
Vector Length
A vector register can hold some maximum number of elements for each data width (the maximum vector length, or MVL). What should be done when the application vector length is not exactly MVL? A vector-length (VL) register controls the length of any vector operation, including a vector load or store. E.g., vadd.vv with VL=10 performs: for (I=0; I<10; I++) V1[I]=V2[I]+V3[I]
The VL can be anything from 0 to MVL
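Handling application vectors longer than MVL is conventionally done by "strip mining": looping over MVL-sized strips and setting VL for the final, shorter strip. The sketch below models this in Python (the names and the software-modelled `vector_add` are illustrative assumptions, not a real ISA):

```python
# Sketch of strip mining: process an arbitrary-length vector on hardware
# whose maximum vector length (MVL) is fixed, setting VL for each strip.

MVL = 64

def vector_add(v2, v3, vl):
    # models one vadd.vv instruction executed with VL = vl
    return [v2[i] + v3[i] for i in range(vl)]

def strip_mined_add(a, b):
    n = len(a)
    out = []
    for start in range(0, n, MVL):
        vl = min(MVL, n - start)   # full strips use MVL; last strip uses less
        out += vector_add(a[start:], b[start:], vl)
    return out

a = list(range(150))
b = [1] * 150
result = strip_mined_add(a, b)
print(len(result), result[0], result[-1])
```

A 150-element vector is processed as two full 64-element strips plus one 22-element strip, with VL set to 64, 64 and 22 in turn.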
Conclusion
The section introduced the learner to short vector processing (multimedia operations), which provides high-level operations on linear arrays of numbers.
Assessment
1. State the properties of the vector processors
• Each result independent of previous result
=> Long pipeline, compiler ensures no dependencies
=> High clock rate
• Vector instructions access memory with known pattern
=> Highly interleaved memory
=> Amortize memory latency of over $\approx$ 64 elements
=> No (data) caches required! (Do use instruction cache)
• Reduces branches and branch problems in pipelines
• Single vector instruction implies lots of work ($\approx$ loop)
=> Fewer instruction fetches
Introduction
This section introduces the learner to the multicore and multiprocessor. It also highlights why computer architecture is moving towards multiprocessor architecture
Activity Details
A Central Processing Unit (CPU) is what is typically referred to as a processor. A processor contains many discrete parts within it, such as one or more memory caches for instructions and data, instruction decoders, and various types of execution units for performing arithmetic or logical operations.
Multicore: is a type of architecture where a single physical processor contains the core logic of two or more processors.
A multicore CPU has multiple execution cores on one CPU. This can mean different things depending on the exact architecture, but it basically means that a certain subset of the CPU’s components is duplicated, so that multiple “cores” can work in parallel on separate operations.
This is called CMP, Chip-level Multiprocessing.
A multiprocessor system contains more than one such CPU, allowing them to work in parallel. This is called SMP, or Simultaneous Multiprocessing. That is, multiprocessing simply means putting multiple processors in one system.
For example, a multicore processor may have a separate L1 cache and execution unit for each core, while it has a shared L2 cache for the entire processor. That means that while the processor has one big pool of slower cache, it has separate fast memory and arithmetic/logic units for each of several cores. This would allow each core to perform operations at the same time as the others.
Single-core CPU Chip
Figure 1 below illustrates a single-core CPU chip. It has a register file, an ALU, a bus interface, and the system bus. The figure also shows the single core clearly marked out.
Figure 1
Multi-core architectures
The cores fit on a single processor socket, also called CMP (Chip Multi-Processor)
Multicore System:
A Multicore system usually refers to a multiprocessor system that has all its processors on the same chip. It could also refer to a system where the processors are on different chips but use the same package (i.e., a multichip module). Multicore systems were developed primarily to enhance the system performance while limiting its power consumption. It consists of
1. General - purpose programmable cores,
2. Special - purpose accelerator cores,
3. Shared memory modules,
4. NoC (interconnection network), and
5. I/O interface.
Figure 2 below illustrates the Multi-core CPU chip
Figure 2
In Figure 2 all processors are on the same chip. The Multi-core processors are MIMD, i.e. different cores execute different threads (Multiple Instructions), operating on different parts of memory (Multiple Data). Also Multi-core is a shared memory multiprocessor, i.e. all cores share the same memory
NB
The main reason for which computer architecture is moving towards multicore systems is scalability. That is, as we increase the number of processors to enhance performance, multicore systems allow limiting power consumption and interprocessor communication overhead. A Multicore system can be scaled by adding more CPU cores and adjusting the interconnection network. More system programming work has to be done to be able to utilize the increased resources. It is one thing to increase the number of CPU resources. It is another to be able to schedule all of them to do useful tasks.
Multiprocessor
Multiprocessing is the use of two or more central processing units (CPUs) within a single computer system.
Multiprocessor System
A multiprocessor system has its processors residing in separate chips and processors are interconnected by a backplane bus. Multiprocessor systems were developed to enhance the system performance with little regard to power consumption. A multiprocessor system has good performance and its constituent processors are high - performing processors.
Main Differences between Multicore Systems and Multiprocessor Systems:
The differences are as expressed in Table 4
Table 4
Integration level: a multiprocessor system has each processor in its own chip; a multicore system has all processors on the same chip.
Processor performance: high in a multiprocessor system; lower in a multicore system.
System performance: very high in a multiprocessor system; high in a multicore system.
Processor power consumption: high in a multiprocessor system; low in a multicore system.
Total power consumption: relatively high in a multiprocessor system; relatively low in a multicore system.
Conclusion
This section introduced the main reason why computer architecture is moving towards multicore systems, which is scalability: it enables an increase in the number of processors. This enhances performance, while multicore systems allow limiting power consumption and interprocessor communication overhead.
Assessment
1. Distinguish between multicore and multiprocessor architectures.
A CPU, or Central Processing Unit, is what is typically referred to as a processor. A processor contains many discrete parts within it, such as one or more memory caches for instructions and data, instruction decoders, and various types of execution units for performing arithmetic or logical operations.
A multiprocessor system contains more than one such CPU, allowing them to work in parallel. This is called SMP, or Simultaneous Multiprocessing.
A multicore CPU has multiple execution cores on one CPU. Now, this can mean different things depending on the exact architecture, but it basically means that a certain subset of the CPU’s components is duplicated, so that multiple “cores” can work in parallel on separate operations.
This is called CMP, Chip-level Multiprocessing.
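As a small practical aside, most operating systems expose how much hardware parallelism is available. The sketch below uses Python's standard library to ask; note that the reported count is *logical* CPUs, which may combine multiple sockets (SMP) and multiple cores per socket (CMP).

```python
# Sketch: ask the OS how much hardware parallelism is available.
# os.cpu_count() reports logical CPUs; fall back to 1 if undetermined.

import os

logical_cpus = os.cpu_count() or 1
print(f"{logical_cpus} logical CPUs available")
```

A parallel program often uses this number to decide how many worker threads or processes to create.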
Introduction
This section introduces the learner to the designs of modern processors and their functionalities. Flynn Taxonomy is also discussed in the section.
Activity Details
Flynn’s taxonomy is a classification of computer architectures. It has been used as a tool in the design of modern processors and their functionalities. Flynn’s classification is based upon the number of concurrent instruction (or control) streams and data streams available in the architecture. The categories are:
1. SISD (Single instruction stream, single data stream)
A sequential computer which exploits no parallelism in either the instruction or data streams. Single control unit (CU) fetches single instruction stream (IS) from memory. The CU then generates appropriate control signals to direct single processing element (PE) to operate on single data stream (DS) i.e. one operation at a time.
Examples include traditional uniprocessor machines like a PC (currently manufactured PCs have multiple cores) or old mainframes.
2. Single instruction stream, multiple data streams (SIMD)
A computer which exploits multiple data streams against a single instruction stream to perform operations which may be naturally parallelized.
For example, an array processor or graphics processing unit (GPU)
3. Multiple instruction streams, single data stream (MISD)
Multiple instructions operate on one data stream. Uncommon architecture which is generally used for fault tolerance. Heterogeneous systems operate on the same data stream and must agree on the result.
Examples include the Space Shuttle flight control computer.
4. Multiple instruction streams, multiple data streams (MIMD)
Multiple autonomous processors simultaneously executing different instructions on different data. MIMD architectures include multi-core superscalar processors, and distributed systems, using either one shared memory space or a distributed memory space.
Diagram comparing classifications
These four architectures are shown below visually. Each processing unit (PU) is shown for a unicore or multi-core computer:
Note
As of 2006, all the top 10 and most of the TOP500 supercomputers are based on a MIMD architecture.
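The SIMD category above can be sketched in C using the vector extensions of GCC and Clang. Note that `__attribute__((vector_size(N)))` is a compiler extension, not standard C; it asks the compiler to pack the elements into one hardware vector register where possible:

```c
/* Four ints packed into one 128-bit vector (GCC/Clang extension). */
typedef int v4si __attribute__((vector_size(16)));

/* A single vector addition: one "instruction", four data elements. */
v4si add4(v4si a, v4si b) {
    return a + b;
}
```

On x86 this typically compiles to a single packed-add instruction, whereas the equivalent scalar code would need a four-iteration loop, which is exactly the SIMD idea of exploiting multiple data streams against one instruction stream.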
Some authors further divide the MIMD category into the two categories below, and even further subdivisions are sometimes considered.
5. Single program, multiple data streams (SPMD)
Multiple autonomous processors simultaneously executing the same program (but at independent points, rather than in the lockstep that SIMD imposes) on different data. SPMD is also sometimes termed "single process, multiple data", though this use of the terminology is technically incorrect: SPMD is a parallel execution model and assumes multiple cooperating processes executing a program. SPMD is the most common style of parallel programming. The SPMD model, and the term, was proposed by Frederica Darema.
Gregory F. Pfister was a manager of the RP3 project, and Darema was part of the RP3 team.
6. Multiple programs, multiple data streams (MPMD)
Multiple autonomous processors simultaneously executing at least two independent programs. Typically such systems pick one node to be the "host" ("the explicit host/node programming model") or "manager" (the "Manager/Worker" strategy), which runs one program that farms out data to all the other nodes, which all run a second program. Those other nodes then return their results directly to the manager. An example of this would be the Sony PlayStation 3 game console, with its SPU/PPU processor.
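The SPMD style described above can be sketched with POSIX threads: every thread runs the same function, but at independent points and on its own slice of the data. This is a minimal sketch; real SPMD programs more often use MPI or OpenMP:

```c
#include <pthread.h>

#define N        8
#define NTHREADS 4

static int  data[N] = {1, 2, 3, 4, 5, 6, 7, 8};
static long partial[NTHREADS];

/* Every thread executes the SAME program on a DIFFERENT slice of
   the data, the essence of SPMD. */
static void *worker(void *arg) {
    long id  = (long)arg;
    int  per = N / NTHREADS;
    long sum = 0;
    for (int i = (int)id * per; i < (int)(id + 1) * per; i++)
        sum += data[i];
    partial[id] = sum;
    return NULL;
}

long spmd_sum(void) {
    pthread_t t[NTHREADS];
    for (long id = 0; id < NTHREADS; id++)
        pthread_create(&t[id], NULL, worker, (void *)id);
    long total = 0;
    for (long id = 0; id < NTHREADS; id++) {
        pthread_join(t[id], NULL);
        total += partial[id];
    }
    return total;
}
```

Each thread proceeds at its own pace through the shared program, unlike SIMD lockstep, and the results are combined once all threads have joined.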
Conclusion
This section introduced the learner to the classification of computer architectures, which has been used as a tool in the design of modern processors and their functionalities.
Assessment
Discuss the multiple instruction streams, single data stream (MISD) architecture
This implies that several instructions are operating on a single piece of data. The same data flows through a linear array of processors executing different instruction streams. This architecture is also known as systolic array for pipelined execution of specific algorithms.
• Not much used in practice.
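The systolic-array idea can be sketched as a toy pipeline in C: one data stream flows through a chain of "processors", each executing a different instruction. The three stage operations below are invented purely for illustration:

```c
/* Toy "systolic" pipeline: ONE data stream flows through a chain of
   stages, each applying a DIFFERENT instruction (MISD-style). */
typedef int (*stage_fn)(int);

static int add_one(int x)     { return x + 1; }
static int times_two(int x)   { return x * 2; }
static int minus_three(int x) { return x - 3; }

int run_pipeline(int x) {
    stage_fn stages[] = { add_one, times_two, minus_three };
    for (int i = 0; i < 3; i++)
        x = stages[i](x);      /* datum moves to the next "processor" */
    return x;
}
```

Feeding the value 5 through yields ((5 + 1) * 2) - 3 = 9; in a real systolic array, many data items would be in flight at once, one per stage.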
Introduction
This section introduces the learner to multiprocessor scheduling, in which the workload, divided into tasks, can be spread across processors and thus executed much faster.
Activity Details
In computer science, multiprocessor scheduling is an NP-hard optimization problem. The problem statement is: “Given a set J of jobs where job ji has length li and a number of processors m, what is the minimum possible time required to schedule all jobs in J on m processors such that none overlap?”
The applications of this problem are numerous, but are, as suggested by the name of the problem, most strongly associated with the scheduling of computational tasks in a multiprocessor environment.
Multiprocessor schedulers have to schedule tasks which may or may not be dependent upon one another. For example, consider reading user credentials from the console, then using them to authenticate, then, if authentication is successful, displaying some data on the console. Clearly one task is dependent upon another; this is a clear case where some kind of ordering exists between the tasks. In fact, it can be modelled with a partial ordering. Then, by definition, the set of tasks constitutes a lattice structure.
The general multiprocessor scheduling problem is a generalization of the optimization version of the number partitioning problem, which considers the case of partitioning a set of numbers (jobs) into two equal sets (processors).
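Because the problem is NP-hard, practical schedulers rely on heuristics. A classic one is Graham's list scheduling: give each job, in order, to the currently least-loaded processor. This is a sketch of the heuristic, not an optimal scheduler:

```c
/* Greedy list scheduling: assign each job to the least-loaded
   processor.  Fast, but only an approximation of the optimum. */
int makespan(const int *len, int njobs, int m) {
    int load[16] = {0};            /* sketch assumes m <= 16 */
    for (int j = 0; j < njobs; j++) {
        int best = 0;
        for (int p = 1; p < m; p++)
            if (load[p] < load[best]) best = p;
        load[best] += len[j];
    }
    int ms = 0;
    for (int p = 0; p < m; p++)
        if (load[p] > ms) ms = load[p];
    return ms;                     /* completion time of the last job */
}
```

For jobs of lengths {2, 3, 4, 5} on two processors the greedy schedule finishes at time 8, while the optimal partition {2, 5} and {3, 4} finishes at 7, illustrating why the exact problem is hard even though a good schedule is easy to find.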
Processors purpose-specific graphics and GPU
General-purpose computing on graphics processing units (GPGPU, rarely GPGP) is the use of a graphics processing unit (GPU), which typically handles computation only for computer graphics, to perform computation in applications traditionally handled by the central processing unit (CPU). The use of multiple graphics cards in one computer, or large numbers of graphics chips, further parallelizes the already parallel nature of graphics processing. In addition, even a single GPU-CPU framework provides advantages that multiple CPUs on their own do not offer, due to the specialization in each chip.
A GPGPU pipeline is a kind of parallel processing between one or more GPUs and CPUs that analyzes data as if it were in image or other graphic form. While GPUs generally operate at lower frequencies, they usually have many times more cores to make up for it (up to hundreds at least) and can thus operate on pictures and graphical data much faster, dozens or even hundreds of times faster than a traditional CPU. Migrating data into graphical form and then using the GPU to "look" at it and analyze it can result in profound speedup.
Reconfigurable Logic and Purpose-specific Processors
Reconfigurable computing is a computer architecture combining some of the flexibility of software with the high performance of hardware by processing with very flexible high speed computing fabrics like field-programmable gate arrays (FPGAs). The principal difference when compared to using ordinary microprocessors is the ability to make substantial changes to the data path itself in addition to the control flow. On the other hand, the main difference with custom hardware, i.e. application-specific integrated circuits (ASICs) is the possibility to adapt the hardware during runtime by “loading” a new circuit on the reconfigurable fabric.
The concept of reconfigurable computing has existed since the 1960s, when Gerald Estrin’s paper proposed the concept of a computer made of a standard processor and an array of “reconfigurable” hardware. The main processor would control the behavior of the reconfigurable hardware. The latter would then be tailored to perform a specific task, such as image processing or pattern matching, as quickly as a dedicated piece of hardware. Once the task was done, the hardware could be adjusted to do some other task. This resulted in a hybrid computer structure combining the flexibility of software with the speed of hardware. Reconfigurable architectures can bring unique capabilities to computational tasks. They offer the performance and energy efficiency of hardware with the flexibility of software.
Conclusion
This section introduced the learner to multiprocessor scheduling as applied in computer architecture, where the jobs to be performed are partitioned among the available processors.
Assessment
1. Briefly describe multiprocessor scheduling
Multiprocessor scheduling is an NP-hard optimization problem. The problem statement is: “Given a set J of jobs where job ji has length li and a number of processors m, what is the minimum possible time required to schedule all jobs in J on m processors such that none overlap?” The applications of this problem are numerous, but are, as suggested by the name of the problem, most strongly associated with the scheduling of computational tasks in a multiprocessor environment.
Multiprocessor schedulers have to schedule tasks which may or may not be dependent upon one another. For example take the case of reading user credentials from console, then use it to authenticate, then if authentication is successful display some data on the console. Clearly one task is dependent upon another. This is a clear case of where some kind of ordering exists between the tasks. In fact it is clear that it can be modelled with partial ordering. Then, by definition, the set of tasks constitute a lattice structure.
The general multiprocessor scheduling problem is a generalization of the optimization version of the number partitioning problem, which considers the case of partitioning a set of numbers (jobs) into two equal sets (processors).
2.06: Unit 2 Summary
At the end of this unit, the learners will be able to describe Amdahl's law, Flynn's taxonomy, multiprocessing and scheduling. Short vector processing and multicore and multiprocessor architectures are also covered, by learning how several processors can be integrated into one system to solve and allocate given tasks among themselves.
Unit Assessment
The following section will test the learner's understanding of this unit.
Instructions
Answer the following questions
1. State Amdahl's law. What is it used for?
2. Explain two properties of vector processors
3. What is a multicore processor?
Grading Scheme
The marks will be awarded as shown below
Question  Marking scheme                                             Marks
1         Stating and explaining: award a mark each (maximum 6)      6
2         Any two properties and their explanations: 2 marks each    4
3         Stating only: 2 marks; each subsequent explanation
          listed: award 1 mark (maximum 3)                           5
Total                                                                15
Feedback
1. Also known as Amdahl's argument, it is used to find the maximum expected improvement to an overall system when only part of the system is improved. It is often used in parallel computing to predict the theoretical maximum speedup using multiple processors.
2. Single vector instruction implies lots of work (loop)
• Each result independent of previous result
• Reduces branches and branch problems in pipelines
• Vector instructions access memory with known pattern
3. It is a type of architecture where a single physical processor contains the core logic of two or more processors. A multicore CPU has multiple execution cores on one CPU.
This can mean different things depending on the exact architecture, but it basically means that a certain subset of the CPU’s components is duplicated, so that multiple “cores” can work in parallel on separate operations. | textbooks/workforce/Information_Technology/Information_Technology_Hardware/02%3A_Multiprocessing/2.05%3A_Scheduling_multiprocessor_systems.txt |
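The speedup predicted by Amdahl's law in feedback item 1 can be computed directly: if a fraction p of a program is parallelizable across n processors, the speedup is 1 / ((1 - p) + p/n):

```c
/* Amdahl's law: maximum expected speedup when fraction p of the work
   is parallelizable across n processors. */
double amdahl_speedup(double p, double n) {
    return 1.0 / ((1.0 - p) + p / n);
}
```

Note the cap the law implies: even with unlimited processors, the speedup can never exceed 1 / (1 - p). With p = 0.95, for example, no number of processors yields more than a 20x speedup.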
Learning Objectives
At the end of the unit the learner should be able to;-
1. Describe low-level programming
2. Distinguish between assembly and machine language
3. Understand the fundamental concepts of machine organization and the requirements of low-level language programming.
This unit introduces the learner to the organization and low-level programming that provides little or no abstraction which is basically about the machine and assembly language programming.
• 3.1: Structure of low-level programs
This section introduces the learner to the low-level programming languages. The learners should distinguish the low-level languages and their different uses in computer programming
• 3.2: Architecture support from Low level to High level languages
This section introduces learners to the various supports offered by programming languages starting with the low-level to the high level programming languages.
• 3.3: Unit 3 Summary
At the end of this unit, the learners will be conversant with Low-level programming, which is about machine and Assembly programming. It also looked at the language support offered.
03: Computer Organization and low-level Programming
Introduction
This section introduces the learner to the low-level programming languages. The learners should distinguish the low-level languages and their different uses in computer programming
Activity Details
In computer science, a low-level programming language is a programming language that provides little or no abstraction from a computer’s instruction set architecture—commands or functions in the language map closely to processor instructions. This refers to either machine code or assembly language. The word “low” refers to the small or nonexistent amount of abstraction between the language and machine language; because of this, low-level languages are sometimes described as being “close to the hardware.” Because of the close relationship between the language and the hardware architecture programs written in low-level languages tend to be relatively non-portable.
Low-level languages can convert to machine code without a compiler or interpreter— second- generation programming languages use a simpler processor called an assembler— and the resulting code runs directly on the processor. A program written in a low-level language can be made to run very quickly, with a small memory footprint. An equivalent program in a high- level language can be less efficient and use more memory. Low-level languages are simple, but considered difficult to use, due to numerous technical details that the programmer must remember. By comparison, a high-level programming language isolates execution semantics of computer architecture from the specification of the program, which simplifies development.
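The difference in abstraction can be seen by writing the same computation twice in C: once in the high-level style, and once in a style that mirrors the individual machine steps, with an explicit "address register" and "accumulator" (the low-level version is an illustration of style, not generated machine code):

```c
/* High-level style: the compiler handles addressing and bookkeeping. */
int sum_high(const int *a, int n) {
    int s = 0;
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Low-level style: explicit "registers" mirroring machine steps. */
int sum_low(const int *a, int n) {
    int acc = 0;                   /* accumulator register */
    const int *p = a;              /* address register */
    while (n-- > 0)
        acc += *p++;               /* load, add, advance address */
    return acc;
}
```

Both compute the same result; the low-level version simply exposes the load/add/advance steps that an assembly-language programmer would write out instruction by instruction.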
Machine codes
Machine code is the only language a computer can process directly without a previous transformation. Currently, programmers almost never write programs directly in machine code, because it requires attention to numerous details that a high-level language handles automatically, requires memorizing or looking up numerical codes for every instruction, and is extremely difficult to modify.
True machine code is a stream of raw, usually binary, data. A programmer coding in “machine code” normally codes instructions and data in a more readable form such as decimal, octal, or hexadecimal which is translated to internal format by a program called a loader or toggled into the computer’s memory from a front panel.
Assembly language
Second-generation languages provide one abstraction level on top of the machine code. In the early days of coding on computers like the TX-0 and PDP-1, the first thing MIT hackers did was write assemblers. Assembly language has little semantics or formal specification, being only a mapping of human-readable symbols, including symbolic addresses, to opcodes, addresses, numeric constants, strings and so on.
Typically, one machine instruction is represented as one line of assembly code. Assemblers produce object files that can link with other object files or be loaded on their own. Most assemblers provide macros to generate common sequences of instructions.
Limitations of low-level languages
• Very hard to read or learn for the uninitiated.
• Not very self-documenting like higher level languages.
• Harder to modify and maintain.
• Less support, than high level languages, in development and debug environments.
Conclusion
The learner should distinguish the various levels of low-level programming used in programming. They can also state the limitations arising from comparing the two low-level of programming languages
Assessment
Distinguish between machine and assembly languages
Machine language is the actual bits used to control the processor in the computer, usually viewed as a sequence of hexadecimal numbers (typically bytes). The processor reads these bits in from program memory, and the bits represent “instructions” as to what to do next.
Thus machine language provides a way of entering instructions into a computer (whether through switches, punched tape, or a binary file).
While assembly language is a more human readable view of machine language. Instead of representing the machine language as numbers, the instructions and registers are given names (typically abbreviated words, or mnemonics, like ld means “load”). Unlike a high level language, assembler is very close to the machine language. The main abstractions (apart from the mnemonics) are the use of labels instead of fixed memory addresses, and comments.
An assembly language program (ie a text file) is translated to machine language by an assembler. A disassembler performs the reverse function (although the comments and the names of labels will have been discarded in the assembler process).
Machine language is faster than assembly language, since even assembly language ultimately depends upon machine language: it must be translated into machine code before it can run.
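What an assembler does, mapping human-readable mnemonics to numeric opcodes, can be illustrated with a toy lookup table in C. The opcode values below are invented for illustration and do not belong to any real instruction set:

```c
#include <string.h>

/* Toy "assembler": translate a mnemonic into its numeric opcode.
   The encodings here are made up, not a real ISA. */
typedef struct { const char *mnemonic; unsigned char opcode; } OpEntry;

static const OpEntry table[] = {
    { "ld",  0x01 },   /* load  (hypothetical encoding) */
    { "st",  0x02 },   /* store (hypothetical encoding) */
    { "add", 0x03 },   /* add   (hypothetical encoding) */
};

int assemble(const char *mnemonic) {
    for (unsigned i = 0; i < sizeof table / sizeof table[0]; i++)
        if (strcmp(table[i].mnemonic, mnemonic) == 0)
            return table[i].opcode;
    return -1;  /* unknown mnemonic */
}
```

A real assembler additionally encodes operands, resolves labels into addresses, and emits object files, but the core translation step is this mnemonic-to-opcode mapping.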
Introduction
This section introduces learners to the various supports offered by programming languages starting with the low-level to the high level programming languages.
Activity Details
Architecture support from low level to high level languages is in the following ways;
Abstraction in software design;
1. Assembly-level abstraction
A programmer who writes directly with the raw machine instruction set. Expressing the program in terms of instructions, addresses, registers, bytes and words
2. High-level languages
Allows the programmer to think in terms of abstractions that are above the machine level. The programmer may not even know on which machine the program will ultimately run. The RISC philosophy focuses instruction set design on flexible primitive operations from which the compiler can build its high-level operations.
3. Data types
a) ARM support for characters
The ARM's support for handling characters is the unsigned byte load and store instructions.
b) ANSI (American National Standards Institute) C basic data types
Defines the following basic data types –
• Signed and unsigned characters of at least eight bits
• Signed and unsigned short integers of at least 16 bits
• Signed and unsigned integers of at least 16 bits
• Signed and unsigned long integers of at least 32 bits
• Floating-point, double and long double floating-point numbers
• Enumerated types
• Bit fields (sets of Boolean variables)
• The ARM C compiler adopts the minimum sizes for each of these types
• The standard integer uses 32-bit values
c) ANSI C derived data types
• Defines derived data types – Arrays, Functions, Structures, Pointers, Unions
• ARM pointers are 32 bits long and resemble unsigned integers
• The ARM C compiler aligns characters on byte boundaries, short integers at even addresses and all other types on word boundaries
d) ARM architectural support for C data types
• Provides native support for signed and unsigned 32-bit integers and for unsigned bytes, covering the C integer, long integer and unsigned character types
• For arrays and structures: base plus scaled index addressing
• Current versions of the ARM include signed byte and signed and unsigned 16-bit loads and stores, providing some native support for short integer and signed character types
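The ANSI minimum sizes listed above can be checked at run time with sizeof. Keep in mind these are minimums only; actual sizes are platform-dependent (for instance, long is 64 bits on most 64-bit Unix systems, while the ARM C compiler adopts the minimums):

```c
#include <limits.h>

/* Verify the ANSI C minimum type sizes listed above.
   CHAR_BIT is the number of bits in a byte (at least 8). */
int check_ansi_minimums(void) {
    return CHAR_BIT >= 8
        && sizeof(short) * CHAR_BIT >= 16
        && sizeof(int)   * CHAR_BIT >= 16
        && sizeof(long)  * CHAR_BIT >= 32;
}
```

A conforming C implementation must make this function return 1; what varies between platforms is only how far the actual sizes exceed these minimums.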
4. Expressions
a) Register use
• The key to the efficient evaluation of a complex expression is to get the required values into the registers in the right order and to ensure that frequently used values are normally resident in registers
• Optimizing this trade-off between the number of values that can be held in registers and the number of registers remaining is a major task for the complier
b) ARM support
• The 3-address instruction format used by the ARM gives the compiler the maximum flexibility
• Thumb instructions (generally 2-address) – restricts the compiler’s freedom to some extent – smaller number of general registers also makes its job harder
c) Accessing operands
• A procedure will normally work with operands that are presented in one of the following ways, and can be accessed as indicated as an argument passed through a register – The value is already in a register, so no further work is necessary as a argument passed on the stack – Stack pointer (r13) relative addressing with an immediate offset known at compile-time allows the operand to be collected with a single LDR
• As a constant in the procedure’s literal pool – PC-relative addressing, again with an immediate offset known at compile-time, gives access with a single LDR
• As a local variable – Local variables are allocated space on the stack and are accessed by
• a stack pointer relative LDR
• As a global variable – Global (and static) variables are allocated space in the static area and are accessed by static base (is usually in r9) relative addressing
d) Pointer arithmetic
• Arithmetic on pointers depends on the size of the data types that the pointers are pointing to
• If a variable is used as an offset it must be scaled at run-time
• If p is held in r0 and i in r1, the change to p may be compiled as: ADD r0, r0, r1, LSL #2 ; scale r1 to int
(e) Arrays
• The declaration: int a[10]; – a reference to a[i] is equivalent to the pointer-plus-offset form *(a+i)
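Both points can be demonstrated in C: a[i] and *(a + i) are interchangeable, and pointer arithmetic is scaled by the size of the pointed-to type, which is exactly the scaling the LSL #2 in the ARM instruction above performs for an int array:

```c
/* a[i] is defined as *(a + i). */
int element(const int *a, int i) {
    return *(a + i);
}

/* Pointer arithmetic is scaled: (a + i) lies i * sizeof(int) bytes
   past a, just as LSL #2 multiplies the index by 4 on ARM. */
long byte_offset(const int *a, int i) {
    return (const char *)(a + i) - (const char *)a;
}
```

So advancing an int pointer by 3 moves it 12 bytes on a platform with 4-byte ints, which is why an index used as an offset must be scaled at run time.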
Conclusion
The section lists several supports that can be got by using the various programming level operations. This can include abstraction, pointer arithmetic, arrays etc.
Assessment
1. Explain the following supports as obtainable in application of the various programming levels
• Abstraction: a technique for managing the complexity of computer systems. It works by establishing a level of complexity on which a person interacts with the system, suppressing the more complex details below the current level.
• Pointer arithmetic: another way to traverse through an array.
2. Describe a low-level programming language: it is a programming language that provides little or no abstraction from a computer's instruction set architecture—commands or functions in the language map closely to processor instructions. It refers to either machine code or assembly language. The word "low" refers to the small or nonexistent amount of abstraction between the language and machine language; because of this, low-level languages are sometimes described as being "close to the hardware." Because of the close relationship between the language and the hardware architecture, programs written in low-level languages tend to be relatively non-portable. Low-level languages can convert to machine code without a compiler or interpreter. A program written in a low-level language can be made to run very quickly, with a small memory footprint.
3.03: Unit 3 Summary
At the end of this unit, the learners will be conversant with Low-level programming, which is about machine and Assembly programming. It also looked at the language support offered.
Unit Assessment
The following section will test the learner's understanding of this unit, which is on low- and high-level programming and its architecture.
Instructions
Answer the following questions
1. Differentiate between low-level and high-level programming
2. Give 3 limitations of low-level architecture
3. Explain machine code
Grading Scheme
The marks will be awarded as shown below
Question  Marking scheme                                             Marks
1         Any difference: award 2 marks each (maximum 4)             8
2         Any limitation listed: award 2 marks each (maximum 4)      8
3         Explanation (maximum 4 marks)                              4
Total                                                                20
Feedback
1. A low-level programming language is a programming language that provides little or no abstraction from a computer's instruction set architecture—commands or functions in the language map closely to processor instructions; it refers to either machine code or assembly language. High-level programming languages, by contrast, are closer to human languages and further from machine languages.
2. Very hard to read or learn for the uninitiated.
• Not very self-documenting like higher level languages.
• Harder to modify and maintain.
• Less support, than high level languages, in development and debug environments.
3. Is a set of instructions executed directly by a computer’s central processing unit (CPU). Each instruction performs a very specific task, such as a load, a jump, or an Arithmetic logic unit (ALU) operation on a unit of data in a CPU register or memory. | textbooks/workforce/Information_Technology/Information_Technology_Hardware/03%3A_Computer_Organization_and_low-level_Programming/3.02%3A_Architecture_support_from_Low_level_to_High_le.txt |
Learning Objectives
At the end of this unit, the learners will
1. Explain the Strategies of Interface I/O
2. Distinguish between handshaking and buffering
3. Understand the programmed IO mode of data transfer
4. Describe a DMA transfer.
This section introduces the learners to the strategies of I/O interfaces. They include: polled, interrupt driven and DMA.
• 4.1: Fundamentals I/O- handshake and buffering
This section introduces the learner to the various strategies used in I/O interfaces and other operations possible on an interface
• 4.2: Mechanisms of interruption- recognition of vector, and interrupt priority
The following section introduces the learner to Interruptions that occur in Programmed I/O
• 4.3: Direct Memory Access (DMA)
This section introduces the learners to the DMA programmed I/O which provides access to the microprocessor between devices operating at different speeds
• 4.4: Unit 4 Summary
At the end of this unit, the learners will be conversant with the strategies of I/O interfaces. This involves accessibility of devices connected to the processor, where I/O transfers must take place between them and the processor, and the various access methods, e.g. polling, interrupt and DMA. The interrupt process is also learned in this section.
04: Strategies and Interface I O
Introduction
This section introduces the learner to the various strategies used in I/O interfaces and other operations possible on an interface
Activity details
The computer is useless without some kind of interface to the outside world. There are many different devices which we can connect to the computer system; keyboards, VDUs and disk drives are some of the more familiar ones. Irrespective of the details of how such devices are connected, we can say that all I/O is governed by three basic strategies.
• Programmed I/O
• Interrupt-driven I/O
• Direct Memory Access (DMA)
In programmed I/O all data transfers between the computer system and external devices are completely controlled by the computer program. Part of the program will check to see if any external devices require attention and act accordingly. This process is known as polling. Programmed I/O is probably the most common I/O technique because it is very cheap and easy to implement, and in general does not introduce any unforeseen hazards.
Programmed I/O
Is a method of transferring data between the CPU and a peripheral, such as a network adapter or an ATA storage device. In general, programmed I/O happens when software running on the CPU uses instructions that access I/O address space to perform data transfers to or from an I/O device.
The PIO interface is grouped into different modes that correspond to different transfer rates. The electrical signaling among the different modes is similar — only the cycle time between transactions is reduced in order to achieve a higher transfer rate
The PIO modes require a great deal of CPU overhead to configure a data transaction and transfer the data. Because of this inefficiency, the DMA (and eventually UDMA) interface was created to increase performance. The simple digital logic required to implement a PIO transfer still makes this transfer method useful today, especially if high transfer rates are not required like in embedded systems, or with FPGA chips where PIO mode can be used without significant performance loss.
Interrupt driven I/O
Is a way of controlling input/output activity in which a peripheral or terminal that needs to make or receive a data transfer sends a signal that causes a program interrupt to be set. At a time appropriate to the priority level of the I/O interrupt, relative to the total interrupt system, the processor enters an interrupt service routine (ISR). The function of the routine will depend upon the system of interrupt levels and priorities that is implemented in the processor.
In a single-level single-priority system there is only a single I/O interrupt – the logical OR of all the connected I/O devices. The associated interrupt service routine polls the peripherals to find the one with the interrupt status set.
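The flow of interrupt-driven I/O can be sketched with C signals, a software analogue of hardware interrupts: the handler plays the role of the ISR, asynchronously diverting control when the "device" raises the signal, and the main flow resumes once it returns. This is only an analogy; real ISRs run in kernel or interrupt context:

```c
#include <signal.h>

/* Flag the "ISR" sets; sig_atomic_t is safe to touch in a handler. */
static volatile sig_atomic_t got_interrupt = 0;

/* The handler does minimal work, like a real ISR: set a flag and return. */
static void isr(int signo) {
    (void)signo;
    got_interrupt = 1;
}

int simulate_interrupt(void) {
    signal(SIGINT, isr);    /* install the "interrupt service routine" */
    raise(SIGINT);          /* the "device" raises the interrupt */
    return got_interrupt;   /* main flow resumes and inspects the flag */
}
```

The key contrast with programmed I/O is visible here: the main flow does not poll; it is notified, and only then does it act on the event.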
Handshaking
Handshaking is an I/O control method used to synchronize I/O devices with the microprocessor. As many I/O devices accept or release information at a much slower rate than the microprocessor, this method is used to make the microprocessor work with an I/O device at the I/O device's data transfer rate.
Handshaking is an automated process of negotiation that dynamically sets parameters of a communications channel established between two entities before normal communication over the channel begins. It follows the physical establishment of the channel and precedes normal information transfer. The handshaking process usually takes place in order to establish rules for communication when a computer sets about communicating with a foreign device. When a computer communicates with another device like a modem, printer, or network server, it needs to handshake with it to establish a connection.
Handshaking can negotiate parameters that are acceptable to equipment and systems
at both ends of the communication channel, including information transfer rate, coding alphabet, parity, interrupt procedure, and other protocol or hardware features. Handshaking is a technique of communication between two entities. However, within TCP/IP RFCs, the term “handshake” is most commonly used to reference the TCP three-way handshake. For example, the term “handshake” is not present in RFCs covering FTP or SMTP. One exception is Transport Layer Security, TLS, setup, FTP RFC 4217. In place of the term “handshake”, FTP RFC 3659 substitutes the term “conversation” for the passing of commands.
A simple handshaking protocol might only involve the receiver sending a message meaning “I received your last message and I am ready for you to send me another one.” A more complex handshaking protocol might allow the sender to ask the receiver if it is ready to receive or for the receiver to reply with a negative acknowledgement meaning “I did not receive your last message correctly, please resend it” (e.g., if the data was corrupted en route).
Handshaking facilitates connecting relatively heterogeneous systems or equipment over a communication channel without the need for human intervention to set parameters.
Example: Suppose a printer is connected to a system. The printer can print 100 characters/second, but the microprocessor can send data to the printer much faster than that. When the printer has received as much data as it can handle, it places a logic 1 signal on its Busy pin, indicating that it is busy printing. The microprocessor tests this busy bit to decide whether the printer is busy or not. When the printer becomes free, it changes the busy bit, and the microprocessor again sends enough data to be printed. This process of interrogating the printer is called handshaking.
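The printer handshake described above can be sketched in software. The following simulation is illustrative only: the class, the chunk size, and the printing speed are invented for the example, and the Busy pin is modelled as a boolean flag.

```python
class SimulatedPrinter:
    """Toy peripheral that models a printer's Busy pin."""
    def __init__(self, chars_per_tick=2):
        self.busy = False              # logic 1 on the Busy pin while printing
        self.buffer = ""
        self.printed = ""
        self.chars_per_tick = chars_per_tick

    def send(self, data):
        assert not self.busy, "handshake violated: device was busy"
        self.buffer = data
        self.busy = True               # device raises Busy after accepting data

    def tick(self):
        # The slow device prints only a few characters per time step.
        done = self.buffer[:self.chars_per_tick]
        self.buffer = self.buffer[self.chars_per_tick:]
        self.printed += done
        if not self.buffer:
            self.busy = False          # Busy drops when the chunk is finished


def print_with_handshake(printer, text, chunk=4):
    """CPU side: poll the Busy bit and send a chunk only when the device is free."""
    chunks = [text[i:i + chunk] for i in range(0, len(text), chunk)]
    while chunks or printer.busy:
        if not printer.busy and chunks:
            printer.send(chunks.pop(0))   # handshake: test Busy, then transfer
        printer.tick()                    # time passes; the device keeps working


p = SimulatedPrinter()
print_with_handshake(p, "HELLO, WORLD")
print(p.printed)   # HELLO, WORLD
```

The key point is the polling loop: the fast side never pushes data while the Busy flag is raised, so the two sides stay synchronized despite their different speeds.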
Buffering
Buffering is the process of transferring data between a program and an external device. The process of optimizing I/O consists primarily of making the best possible use of the slowest part of the path between the program and the device. The slowest part is usually the physical channel, which is often slower than the CPU or a memory-to-memory data transfer. The time spent in I/O processing overhead can reduce the amount of time that a channel can be used, thereby reducing the effective transfer rate. The biggest factor in maximizing this channel speed is often the reduction of I/O processing overhead.
A buffer is a temporary storage location for data while the data is being transferred. A buffer is often used for the following purposes:
• Small I/O requests can be collected into a buffer, and the overhead of making many relatively expensive system calls can be greatly reduced.
• A collection buffer of this type can be sized and handled so that the actual physical I/O requests made to the operating system match the physical characteristics of the device being used.
• Many data file structures, such as the f77 and cos file structures, contain control words. During the write process, a buffer can be used as a work area where control words can be inserted into the data stream (a process called blocking). The blocked data is then written to the device. During the read process, the same buffer work area can be used to examine and remove these control words before passing the data on to the user (deblocking).
• When data access is random, the same data may be requested many times. A cache is a buffer that keeps old requests in the buffer in case these requests are needed again. A cache that is sufficiently large and/or efficient can avoid a large part of the physical I/O by having the data ready in a buffer. When the data is often found in the cache buffer, it is referred to as having a high hit rate. For example, if the entire file fits in the cache and the file is present in the cache, no more physical requests are required to perform the I/O. In this case, the hit rate is 100%.
• Running the disks and the CPU in parallel often improves performance; therefore, it is useful to keep the CPU busy while data is being moved. To do this when writing, data can be transferred to the buffer at memory-to-memory copy speed and an asynchronous I/O request can be made. The control is then immediately returned to the program, which continues to execute as if the I/O were complete (a process called write-behind). A similar process can be used while reading; in this process, data is read into a buffer before the actual request is issued for it. When it is needed, it is already in the buffer and can be transferred to the user at very high speed. This is another form or use of a cache.
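The first two uses of a buffer listed above (collecting small requests and matching the device's block size) can be sketched as follows. Here `device_write` stands in for an expensive system call; the class and its parameters are invented for illustration.

```python
class BufferedWriter:
    """Collection buffer: batches many small writes into few large transfers."""
    def __init__(self, device_write, bufsize=8):
        self.device_write = device_write   # the "expensive" call to the device
        self.bufsize = bufsize             # matches the device's block size
        self.buf = bytearray()

    def write(self, data: bytes):
        self.buf += data
        while len(self.buf) >= self.bufsize:
            # Flush exactly one device-sized block.
            self.device_write(bytes(self.buf[:self.bufsize]))
            del self.buf[:self.bufsize]

    def flush(self):
        if self.buf:
            self.device_write(bytes(self.buf))
            self.buf.clear()


calls = []
w = BufferedWriter(calls.append, bufsize=8)
for piece in [b"ab", b"cde", b"fgh", b"ij"]:   # four small writes...
    w.write(piece)
w.flush()
print(len(calls))        # ...become only two device transfers
print(b"".join(calls))   # b'abcdefghij'
```

Four small writes cost only two device transfers, which is exactly the overhead reduction the first bullet point describes.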
Conclusion
This section introduced the learner to the various ways interfaces access and pass data: polling, interrupts, and DMA. In each of these methods, the speeds of the different devices connected to the CPU are synchronized so that they can communicate effectively.
Assessment
1. What is the difference between programmed-driven I/O and interrupt-driven I/O?
In programmed-driven I/O, the program polls (repeatedly checks) some hardware item, e.g. a mouse, within a loop.
In interrupt-driven I/O, the same mouse triggers a signal that tells the program to process the mouse event.
2. What is one advantage and one disadvantage of each?
Advantage of Programmed Driven: easy to program and understand
Disadvantages: slow and inefficient
Advantage of Interrupt Driven: fast and efficient
Disadvantage: Can be tricky to write if you are using a low-level language.
It can be tough to get the various pieces to work well together. This is usually done by the hardware manufacturer or the OS maker, e.g. Microsoft.
Introduction
The following section introduces the learner to the interrupts that occur in programmed I/O.
Activity Details
In system programming, an interrupt is a signal to the processor emitted by hardware or software indicating an event that needs immediate attention. An interrupt is a signal from a device attached to a computer or from a program within the computer that causes the main program that operates the computer (the operating system) to stop and figure out what to do next. An interrupt alerts the processor to a high-priority condition requiring the interruption of the current code the processor is executing. The processor responds by suspending its current activities, saving its state, and executing a function called an interrupt handler (or an interrupt service routine, ISR) to deal with the event. This interruption is temporary, and, after the interrupt handler finishes, the processor resumes normal activities. There are two types of interrupts: hardware interrupts and software interrupts.
Hardware interrupts
Hardware interrupts are used by devices to communicate that they require attention from the operating system. Internally, hardware interrupts are implemented using electronic alerting signals that are sent to the processor from an external device, which is either a part of the computer itself, such as a disk controller, or an external peripheral.
For example, pressing a key on the keyboard or moving the mouse triggers hardware interrupts that cause the processor to read the keystroke or mouse position. Unlike the software type (described below), hardware interrupts are asynchronous and can occur in the middle of instruction execution, requiring additional care in programming. The act of initiating a hardware interrupt is referred to as an interrupt request (IRQ).
Software interrupt
A software interrupt is caused either by an exceptional condition in the processor itself, or by a special instruction in the instruction set which causes an interrupt when it is executed. The former is often called a trap or exception and is used for errors or events occurring during program execution that are exceptional enough that they cannot be handled within the program itself.
For example, if the processor’s arithmetic logic unit is commanded to divide a number by zero, this impossible demand will cause a divide-by-zero exception, perhaps causing the computer to abandon the calculation or display an error message. Software interrupt instructions function similarly to subroutine calls and are used for a variety of purposes, such as to request services from low-level system software such as device drivers. For example, computers often use software interrupt instructions to communicate with the disk controller to request data be read or written to the disk.
Each interrupt has its own interrupt handler. The number of hardware interrupts is limited by the number of interrupt request (IRQ) lines to the processor, but there may be hundreds of different software interrupts. Interrupts are a commonly used technique for computer multitasking, especially in real-time computing. Such a system is said to be interrupt-driven.
Interrupts can be categorized into these different types:
• Maskable interrupt (IRQ): a hardware interrupt that may be ignored by setting a bit in an interrupt mask register’s (IMR) bit-mask.
• Non-maskable interrupt (NMI): a hardware interrupt that lacks an associated bit- mask, so that it can never be ignored. NMIs are used for the highest priority tasks such as timers, especially watchdog timers.
• Inter-processor interrupt (IPI): a special case of interrupt that is generated by one processor to interrupt another processor in a multiprocessor system.
• Software interrupt: an interrupt generated within a processor by executing an instruction. Software interrupts are often used to implement system calls because they result in a subroutine call with a CPU ring level change.
• Spurious interrupt: a hardware interrupt that is unwanted. They are typically generated by system conditions such as electrical interference on an interrupt line or through incorrectly designed hardware.
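The difference between a maskable interrupt and an NMI can be illustrated with a toy dispatcher. This is not a model of any real interrupt controller; the vector table, the mask register, and the line numbers are all invented for the sketch.

```python
class InterruptController:
    """Toy controller: a handler (vector) table plus an IMR-style bit-mask.
    Maskable IRQs can be suppressed by the mask; the NMI line cannot."""
    NMI = 2   # hypothetical line number reserved for the non-maskable interrupt

    def __init__(self):
        self.handlers = {}   # interrupt vector table: line -> handler
        self.mask = 0        # IMR bit-mask: a set bit means "ignore this line"

    def register(self, line, handler):
        self.handlers[line] = handler

    def set_mask_bit(self, line):
        self.mask |= (1 << line)

    def raise_irq(self, line):
        masked = bool(self.mask & (1 << line))
        if masked and line != self.NMI:
            return False            # maskable interrupt ignored
        self.handlers[line]()       # dispatch to the interrupt handler
        return True


log = []
ic = InterruptController()
ic.register(0, lambda: log.append("keyboard"))
ic.register(InterruptController.NMI, lambda: log.append("watchdog"))
ic.set_mask_bit(0)
ic.set_mask_bit(InterruptController.NMI)   # masking has no effect on the NMI
ic.raise_irq(0)                            # suppressed by the mask
ic.raise_irq(InterruptController.NMI)      # delivered regardless of the mask
print(log)   # ['watchdog']
```

The masked keyboard line is silently dropped, while the watchdog NMI is delivered even though its mask bit is set, mirroring the definitions in the list above.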
Conclusion
This section has introduced learners to the different categories of I/O interrupts, that is, hardware and software interrupts.
Assessment
Briefly describe hardware and software interrupts
Hardware interrupts
Hardware interrupts are generated by certain events that come up during the execution of a program. Interrupts of this type are managed entirely by the hardware, and it is not possible to modify them.
A clear example of this type of interrupt is the one that updates the counter of the computer's internal clock: the hardware calls this interrupt several times a second in order to keep the time up to date.
External interrupts
External interrupts are generated by peripheral devices, such as keyboards, printers, and communication cards. They are also generated by coprocessors. It is not possible to deactivate external interrupts.
These interrupts are not sent directly to the CPU; they are sent to an integrated circuit whose exclusive function is to handle this type of interrupt.
Software interrupts
Software interrupts can be activated directly in assembler by invoking the number of the desired interrupt with the INT instruction.
The use of interrupts helps in the creation of programs: by using them, our programs get shorter, easier to understand, and they usually have better performance, mostly due to their smaller size. This type of interrupt can be separated into two categories: the operating system (DOS) interrupts and the BIOS interrupts.
Introduction
This section introduces the learners to DMA I/O, which manages access between the microprocessor and devices operating at different speeds.
Activity Details
Direct Memory Access and DMA-controlled I/O
The DMA I/O technique is used in personal computer systems, including those using the Intel family of microprocessors. The direct memory access (DMA) I/O technique provides direct access to the memory while the microprocessor is temporarily disabled. A DMA controller temporarily borrows the address bus, data bus, and control bus from the microprocessor and transfers the data bytes directly between an I/O port and a series of memory locations. The DMA transfer is also used to do high-speed memory-to-memory transfers. Two control signals are used to request and acknowledge a DMA transfer in the microprocessor-based system. The HOLD signal is a bus request signal which asks the microprocessor to release control of the buses after the current bus cycle. The HLDA signal is a bus grant signal which indicates that the microprocessor has indeed released control of its buses by placing the buses at their high-impedance states. The HOLD input has a higher priority than the INTR or NMI interrupt inputs.
Special hardware writes to / reads from memory directly (without CPU intervention) and saves the timing associated with op-code fetch and decoding, increment and test addresses of source and destination. The DMA controller may both stop the CPU and access the memory (cycle stealing DMA) or use the bus while the CPU is not using it (hidden cycle DMA). The DMA controller has some control lines (to do a handshake with the CPU negotiating to be a bus master and to emulate the CPU behaviour while accessing the memory), an address register which is auto-incremented (or auto-decremented) at each memory access, and a counter used to check for final byte (or word) count.
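The pointer-and-counter mechanism can be illustrated by simulating a DMA controller in software. The register names and the cycle-stealing loop below are illustrative, not the programming interface of any real DMAC.

```python
class SimulatedDMAC:
    """Toy DMA controller: programmed with source/destination pointers and a
    byte counter, it moves data through shared 'memory' one stolen bus cycle
    at a time, without the CPU touching each byte."""
    def __init__(self, memory: bytearray):
        self.memory = memory
        self.src = 0       # source address register (auto-incremented)
        self.dst = 0       # destination address register (auto-incremented)
        self.count = 0     # remaining-byte counter

    def program(self, src, dst, count):
        # The CPU is involved only here, at the start of the transfer.
        self.src, self.dst, self.count = src, dst, count

    def cycle(self):
        """One stolen bus cycle: move a byte, bump pointers, decrement count."""
        if self.count > 0:
            self.memory[self.dst] = self.memory[self.src]
            self.src += 1
            self.dst += 1
            self.count -= 1
        return self.count == 0   # transfer complete -> would raise an interrupt


mem = bytearray(b"DMA DEMO" + b"\x00" * 8)
dmac = SimulatedDMAC(mem)
dmac.program(src=0, dst=8, count=8)   # memory-to-memory block transfer
cpu_work = 0
while not dmac.cycle():               # CPU is free between stolen cycles
    cpu_work += 1                     # ...so it can do other work here
print(bytes(mem[8:16]))               # b'DMA DEMO'
```

In real hardware the "cycle" happens on the bus while the CPU's buses are in their high-impedance states; here the loop simply makes the auto-increment and byte-count behaviour visible.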
Conclusion
This section has introduced the learners to DMA and its operation of synchronizing I/O devices with the microprocessor.
Assessment
Describe how DMA helps in the synchronization of different devices in accessing the microprocessor
The direct memory access (DMA) I/O technique provides direct access to the memory while the microprocessor is temporarily disabled. A DMA controller temporarily borrows the address bus, data bus, and control bus from the microprocessor and transfers the data bytes directly between an I/O port and a series of memory locations. The DMA transfer is also used to
do high-speed memory-to-memory transfers. Two control signals are used to request and acknowledge a DMA transfer in the microprocessor-based system. The HOLD signal is a bus request signal which asks the microprocessor to release control of the buses after the current bus cycle. The HLDA signal is a bus grant signal which indicates that the microprocessor has indeed released control of its buses by placing the buses at their high-impedance states. The HOLD input has a higher priority than the INTR or NMI interrupt inputs.
4.04: Unit 4 Summary
At the end of this unit, the learners will be conversant with the strategies of I/O interfaces. This involves the accessibility of devices connected to the processor, where I/O transfers must take place between them and the processor, and the various access methods, e.g. polling, interrupts, and DMA. The interrupt process is also covered in this unit.
Unit Assessment
The following section will test the learners' understanding of this unit
Instructions
Answer the following questions
1. Explain two strategies that govern I/O transfers
2. What is handshaking and how is it carried out?
Grading Scheme
The marks will be awarded as shown below
Question 1: explanations of any two strategies, 4 marks each, for 8 marks
Question 2: definition, 2 marks; explanation of how it works, 4 marks; total 6 marks
Total: 14 marks
Feedback
1. Explain any two from the following
a. Programmed I/O
Programmed I/O (PIO) refers to data transfers initiated by a CPU under driver software control to access registers or memory on a device. The CPU issues a command then waits for I/O operations to be complete. As the CPU is faster than the I/O module, the problem with programmed I/O is that the CPU has to wait a long time for the I/O module of concern to be ready for either reception or transmission of data. The CPU, while waiting, must repeatedly check the status of the I/O module, and this process is known as Polling. As a result, the level of the performance of the entire system is severely degraded.
b. Interrupt driven I/O
The CPU issues commands to the I/O module, then proceeds with its normal work until interrupted by the I/O device on completion of its work.
For input, the device interrupts the CPU when new data has arrived and is ready to be retrieved by the system processor. The actual actions to perform depend on whether the device uses I/O ports or memory mapping.
For output, the device delivers an interrupt either when it is ready to accept new data or to acknowledge a successful data transfer. Memory-mapped and DMA-capable devices usually generate interrupts to tell the system they are done with the buffer.
Although interrupts relieve the CPU of having to wait for devices, interrupt-driven I/O is still inefficient for transferring large amounts of data, because the CPU has to transfer the data word by word between the I/O module and memory.
c. Direct Memory Access (DMA)
Direct Memory Access (DMA) means the CPU grants the I/O module authority to read from or write to memory without CPU involvement. The DMA module controls the exchange of data between main memory and the I/O device. Because a DMA device can transfer data directly to and from memory, rather than using the CPU as an intermediary, it can relieve congestion on the bus. The CPU is involved only at the beginning and end of the transfer and is interrupted only after the entire block has been transferred.
Direct Memory Access needs special hardware called a DMA controller (DMAC) that manages the data transfers and arbitrates access to the system bus. The controllers are programmed with source and destination pointers (where to read/write the data), counters to track the number of transferred bytes, and settings, which include the I/O and memory types, interrupts, and states for the CPU cycles.
DMA increases system concurrency by allowing the CPU to perform tasks while the DMA system transfers data via the system and memory busses. Hardware design is complicated because the DMA controller must be integrated into the system, and the system must allow the DMA controller to be a bus master. Cycle stealing may also be necessary to allow the CPU and DMA controller to share use of the memory bus.
2. Handshaking is an I/O control method used to synchronize I/O devices with the microprocessor. This method makes the microprocessor work with an I/O device at the I/O device's data transfer rate. Handshaking is an automated process of negotiation that dynamically sets the parameters of a communications channel established between two entities before normal communication over the channel begins.
Learning Objectives
At the end of the lesson the learners should be able to;-
1. Define peripheral devices
2. Understand the meaning of digitization, coding-decoding and compression
3. Understand how to convert analog to digital data
This unit introduces peripheral devices, which are connected to the computer. It mainly deals with what needs to be done to the data that flows between the processor and the peripheral devices. The peripherals include the computer mouse, keyboard, image scanners, tape drives, microphones, loudspeakers, webcams, and digital cameras.
• 5.1: Representation of digital and analog values - Sampling and Quantization
This section introduces the learner to sampling and quantization. This is where signals are represented into horizontal and vertical values (axes).
• 5.2: Sound and Audio, Image and Graphics, Animation and Video
The following sections describe various types of data that you might find, in addition to static graphics data, in multimedia files.
• 5.3: Coding and decoding multimedia systems
This section introduces the learner to the fact that multimedia data, which is voluminous in nature, needs to be coded and decoded so that it can be transmitted quickly over existing media.
• 5.4: Unit 5 Summary
This unit introduced the learner to the peripheral devices that can be connected to the computer. It dealt with the conversion of analog data to digital data, and the different waveforms representing that conversion were also introduced, as well as multimedia data and what needs to be done for it to be transmitted across networks. The peripherals include the computer mouse, keyboard, image scanners, tape drives, microphones, loudspeakers, webcams, and digital cameras.
05: The Peripheral Devices
This section introduces the learner to sampling and quantization. This is where signals are represented into horizontal and vertical values (axes).
Activity Details
Analog and Digital Signals
Digitization of an analog signal involves two operations:
1. Sampling, and
2. Quantization
Analog signals consist of continuous values for both axes. Consider an electrical signal whose horizontal axis represents time in seconds and whose vertical axis represents amplitude in volts. The horizontal axis has a range of values from zero to infinity with every possible value in between. This makes the horizontal axis continuous. The vertical axis is also continuous allowing the signal’s amplitude to assume any value from zero to infinity. For every possible value in time there is a corresponding value in amplitude for the analog signal.
An analog signal exists throughout a continuous interval of time and/or takes on a continuous range of values. A sinusoidal signal (also called a pure tone in acoustics) has both of these properties.
Fig 1: Analog signal. This signal $v(t)=\cos (2\pi ft)$ could be a perfect analog recording of a pure tone of frequency $f$ Hz. If $f=440$ Hz, this tone is the musical note $A$ above middle $C$, to which orchestras often tune their instruments. The period $T=1/f$ is the duration of one full oscillation.
In reality, electrical recordings suffer from noise that unavoidably degrades the signal. The more a recording is transferred from one analog format to another, the more it loses fidelity to the original.
Fig. 2: Noisy analog signal. Noise degrades the sinusoidal signal in Fig. 1. It is often impossible to recover the original signal exactly from the noisy version.
Digital signals on the other hand have discrete values for both the horizontal and vertical axes. The axes are no longer continuous as they were with the analog signal. In this discussion, time
will be used as the quantity for the horizontal axis and volts will be used for the vertical axis.
A digital signal is a sequence of discrete symbols. If these symbols are zeros and ones, we call them bits. As such, a digital signal is neither continuous in time nor continuous in its range of values. And, therefore, cannot perfectly represent arbitrary analog signals. On the other hand, digital signals are resilient against noise.
Fig. 3: Analog transmission of a digital signal. Consider a digital signal 100110 converted to an analog signal for radio transmission. The received signal suffers from noise, but given sufficient bit duration $T_b$, it is still easy to read off the original sequence 100110 perfectly.
Digital signals can be stored on digital media (like a compact disc) and manipulated on digital systems (like the integrated circuit in a CD player). This digital technology enables a variety of digital processing unavailable to analog systems. For example, the music signal encoded on a CD includes additional data used for digital error correction. In case the CD is scratched and some of the digital signal becomes corrupted, the CD player may still be able to reconstruct the missing bits exactly from the error correction data. To protect the integrity of the data despite being stored on a damaged device, it is common to convert analog signals to digital signals using steps called sampling and quantization.
Introduction to Sampling
The motivation for sampling and quantizing is the need to store a signal in a digital format. In order to convert an analog signal to a digital signal, the analog signal must be sampled and quantized. Sampling takes the analog signal and discretizes the time axis. After sampling, the time axis consists of discrete points in time rather than continuous values in time. The resulting signal after sampling is called a discrete signal, sampled signal, or a discrete-time signal. The resulting signal after sampling is not a digital signal. Even though the horizontal axis has discrete values the vertical axis is not discretized. This means that for any discrete point in time, there are an infinite number of allowed values for the signal to assume in amplitude. In order for the signal to be a digital signal, both axes must be discrete.
Sampling is the process of recording an analog signal at regular discrete moments of time. The sampling rate $f_s$ is the number of samples per second. The time interval between samples is called the sampling interval $T_s = 1/f_s$.
Fig. 4: Sampling. The signal $v(t)=\cos (2\pi ft)$ in Fig. 1 is sampled uniformly with 3 sampling intervals within each signal period $T$. Therefore, the sampling interval $T_s=T/3$ and the sampling rate $f_s=3f$. Another way to see that $f_s=3f$ is to notice that there are three samples in every signal period $T$.
To express the samples of the analog signal $v(t)$, we use the notation $v[n]$ (with square brackets), where integer values of $n$ index the samples. Typically, the $n = 0$ sample is taken from the $t=0$ time point of the analog signal. Consequently, the $n=1$ sample must come from the $t=T_s$ time point, exactly one sampling interval later; and so on. Therefore, the sequence of samples can be written as $v[0] = v(0), v[1] = v(T_s), v[2] = v(2T_s)$,...
$v[n] = v(nT_s) \quad \text{for integer } n \tag{1}$
In the example of Fig. 4, $v(t) = \cos (2 \pi ft)$ is sampled with sampling interval $T_s = T/3$
to produce the following $v[n]$.
$\begin{array} {rclcl} {v[n]} & = & {\cos (2 \pi f n T_s)} & \quad & {\text{by substituting } t = nT_s \quad (2)} \\ {} & = & {\cos (2 \pi f n T/3)} & \quad & {\text{since } T_s = T/3 \quad (3)} \\ {} & = & {\cos (2 \pi n/3)} & \quad & {\text{since } T = 1/f \quad (4)} \end{array}$
This expression for $v[n]$ evaluates to the sample values depicted in Fig. 4 as shown below.
$v[0] = \cos (0) = 1$
$v[1] = \cos (2\pi/3) = -0.5$
$v[2] = \cos (4\pi/3) = -0.5$
$v[3] = \cos (2\pi) = 1$
Fig. 5: Samples. The samples from Fig. 4 are shown as the sequence $v[n]$ indexed by integer values of $n$.
Quantization
Since a discrete signal has discrete points in time but still has continuous values in amplitude, the amplitude of the signal must be discretized in order to store it in digital format. The values of the amplitude must be rounded off to discrete values. If the vertical axis is divided into small windows of amplitudes, then every value that lies within that window will be rounded off (or quantized) to the same value.
For example, consider a waveform with window sizes of 0.5 volts starting at –4 volts and ending at +4 volts. At a discrete point in time, any amplitude between 4.0 volts and 3.5 volts will be recorded as 3.75 volts. In this example the center of each 0.5-volt window (or quantization region) was chosen to be the quantization voltage for that region.
In this example the dynamic range of the signal is 8 volts. Since each quantization region is 0.5 volts there are 16 quantization regions included in the dynamic range. It is important that there are 16 quantization regions in the dynamic range. Since a binary number will represent the value of the amplitude, it is important that the number of quantization regions is a power of two. In this example, 4 bits will be required to represent each of the 16 possible values in the signal’s amplitude.
A sequence of samples like $v[n]$ in Fig. 5 is not a digital signal because the sample values can potentially take on a continuous range of values. In order to complete analog-to-digital conversion, each sample value is mapped to a discrete level (represented by a sequence of bits) in a process called quantization. In a $B$-bit quantizer, each quantization level is represented with $B$ bits, so that the number of levels equals $2^B$.
Fig. 6: 3-bit quantization. Overlaid on the samples $v[n]$ from Fig. 5 is a 3-bit quantizer with 8 uniformly spaced quantization levels. The quantizer approximates each sample value in $v[n]$ to its nearest level value (shown on the left), producing the quantized sequence $vQ[n]$. Ultimately the sequence $vQ[n]$ can be written as a sequence of bits using the 3-bit representations shown on the right.
Observe that quantization introduces a quantization error between the samples and their quantized versions given by $e[n]=v[n]−vQ[n]$. If a sample lies between quantization levels, the maximum absolute quantization error $|e[n]|$ is half of the spacing between those levels. For the quantizer in Fig. 6, the maximum error between levels is 0.15 since the spacing is uniformly 0.3. Note, however, that if the sample overshoots the highest level or undershoots the lowest level by more than 0.15, the absolute quantization error will be that difference larger than 0.15.
The table below completes the quantization example in Fig. 6 for $n=0, 1, 2, 3$. The 3-bit representations in the final row can finally be concatenated into the digital signal 110001001110.
Table 1: Quantization example.
Sequence                        $n = 0$   $n = 1$   $n = 2$   $n = 3$
Samples $v[n]$                  1         -0.5      -0.5      1
Quantized samples $vQ[n]$       0.9       -0.6      -0.6      0.9
Quantization error $|e[n]|$     0.1       0.1       0.1       0.1
3-bit representations           110       001       001       110
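As a numerical cross-check of the sampling and quantization example (Table 1), the sketch below samples the Fig. 1 tone at $f_s = 3f$ and quantizes it with a 3-bit quantizer. The exact level values (8 levels spaced 0.3 apart, from -0.9 to 1.2) are an assumption read off Fig. 6, since the figure itself is not reproduced here.

```python
import math

f = 440.0    # signal frequency in Hz (the A above middle C from Fig. 1)
fs = 3 * f   # sampling rate from Fig. 4: three samples per signal period
Ts = 1 / fs  # sampling interval

# 3-bit quantizer: 8 uniformly spaced levels (values assumed from Fig. 6).
levels = [-0.9 + 0.3 * k for k in range(8)]

def sample(n):
    """v[n] = v(n*Ts) = cos(2*pi*f*n*Ts), i.e. the sampled Fig. 1 tone."""
    return math.cos(2 * math.pi * f * n * Ts)

def quantize(x):
    """Map a sample to its nearest level; return the level and its 3-bit code."""
    k = min(range(len(levels)), key=lambda i: abs(levels[i] - x))
    return levels[k], format(k, "03b")

bits = ""
for n in range(4):
    v = sample(n)
    vq, code = quantize(v)
    bits += code
    print(f"n={n}: v[n]={v:+.2f}  vQ[n]={vq:+.2f}  code={code}")
print(bits)   # 110001001110
```

The concatenated codes match the digital signal 110001001110 given in the text, and the per-sample quantization error never exceeds half the 0.3 level spacing.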
Conclusion
This section has made the learners learn how analog (continuous) data can be digitized
Assessment
1. What is the difference between analogue and digital data?
Analogue data is continuous, allowing for an infinite number of possible values. Digital data is discrete, allowing for a finite set of values.
2. Why is it difficult to save analogue sound waves in a digital format?
Analogue sound is continuous data; converting continuous data to discrete values may lose some of the accuracy.
3. Differentiate between analog and digital data
Analog refers to circuits in which quantities such as voltage or current vary at a continuous rate. When you turn the dial of a potentiometer, for example, you change the resistance by a continuously varying rate. The resistance of the potentiometer can be any value between the minimum and maximum allowed by the pot. In digital electronics, quantities are counted rather than measured. There’s an important distinction between counting and measuring. When you count something, you get an exact result. When you measure something, you get an approximate result.
Introduction
The following sections describe various types of data that you might find, in addition to static graphics data, in multimedia files.
Activity Details
Computer animation lies somewhere between the motionless world of still images and the real-time world of video images. All of the animated sequences seen in educational programs, motion CAD renderings, and computer games are computer-animated (and in many cases, computer-generated) animation sequences.
Traditional cartoon animation is little more than a series of artwork cells, each containing a slight positional variation of the animated subjects. When a large number of these cells is displayed in sequence and at a fast rate, the animated figures appear to the human eye to move.
A computer-animated sequence works in exactly the same manner, i.e. a series of images is created of a subject; each image contains a slightly different perspective on the animated subject. When these images are displayed (played back) in the proper sequence and at the proper speed (frame rate), the subject appears to move.
Computerized animation is actually a combination of both still and motion imaging. Each frame, or cell, of an animation is a still image that requires compression and storage. An animation file, however, must store the data for hundreds or thousands of animation frames and must also provide the information necessary to play back the frames using the proper display mode and frame rate.
Animation file formats are only capable of storing still images and not actual video information. It is possible, however, for most multimedia formats to contain animation information, because animation is actually a much easier type of data than video to store.
The image-compression schemes used in animation files are also usually much simpler than most of those used in video compression. Most animation files use a delta compression scheme, which is a form of Run-Length Encoding that stores and compresses only the information that is different between two images (rather than compressing each image frame entirely). RLE is relatively easy to decompress on the fly.
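A toy version of such a delta scheme can make the idea concrete: only the runs of bytes that changed between two frames are stored, and the previous frame plus those runs reconstruct the new frame. The run format here is invented for illustration and does not correspond to any real animation file format.

```python
def delta_encode(prev: bytes, frame: bytes):
    """Store only what changed between two equal-sized frames.
    Each run is (start_offset, changed_bytes)."""
    runs, i = [], 0
    while i < len(frame):
        if frame[i] == prev[i]:
            i += 1               # unchanged byte: nothing to store
            continue
        start = i
        while i < len(frame) and frame[i] != prev[i]:
            i += 1               # extend the run of changed bytes
        runs.append((start, frame[start:i]))
    return runs

def delta_decode(prev: bytes, runs):
    """Rebuild the new frame from the previous frame plus the change runs."""
    out = bytearray(prev)
    for start, changed in runs:
        out[start:start + len(changed)] = changed
    return bytes(out)


frame1 = b"AAAAAAAAAA"
frame2 = b"AAABBAAACA"
runs = delta_encode(frame1, frame2)
print(runs)                                  # [(3, b'BB'), (8, b'C')]
print(delta_decode(frame1, runs) == frame2)  # True
```

A mostly static scene compresses very well under this scheme, since only the few changed runs are stored per frame, and decoding is a handful of slice assignments, which is why delta schemes are cheap to decompress during playback.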
Storing animations using a multimedia format also produces the benefit of adding sound to the animation (what’s a cartoon without sound?). Most animation formats cannot store sound directly in their files and must rely on storing the sound in a separate disk file which is read by the application that is playing back the animation.
Animations are not only for entertaining children and adults. Animated sequences are used by CAD programs to rotate 3D objects so they can be observed from different perspectives; mathematical data collected by an aircraft or satellite may be rendered into an animated fly-by sequence. Movie special effects benefit greatly from computer animation.
Digital Video
One step beyond animation is broadcast video. Your television and video tape recorder are a lot more complex than an 8mm home movie projector and your kitchen wall. There are many complex signals and complicated standards that are involved in transmitting those late-night reruns across the airwaves and cable. Only in the last few years has a personal computer been able to work with video data at all.
Video data normally occurs as continuous, analog signals. In order for a computer to process this video data, we must convert the analog signals to a non-continuous, digital format. In a digital format, the video data can be stored as a series of bits on a hard disk or in computer memory.
The process of converting a video signal to a digital bitstream is called analog-to-digital conversion (A/D conversion), or digitizing. A/D conversion occurs in two steps:
1. Sampling captures data from the video stream.
2. Quantizing converts each captured sample into a digital format.
Each sample captured from the video stream is typically stored as a 16-bit integer. The rate at which samples are collected is called the sampling rate. The sampling rate is measured in the number of samples captured per second (samples/second). For digital video, it is necessary to capture millions of samples per second.
Quantizing converts the level of a video signal sample into a discrete, binary value. This value approximates the level of the original video signal sample. The value is selected by comparing the video sample to a series of predefined threshold values. The value of the threshold closest to the amplitude of the sampled signal is used as the digital value.
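The threshold comparison described above can be sketched in a few lines of Python. The levels chosen here are invented for illustration; a real video quantizer uses many more levels, but the "pick the nearest predefined value" logic is the same:

```python
def quantize(sample, levels):
    """Map an analog sample value to the nearest predefined threshold level."""
    return min(levels, key=lambda level: abs(level - sample))

# Five predefined threshold values (illustrative only)
levels = [0.0, 0.25, 0.5, 0.75, 1.0]

print(quantize(0.23, levels))   # 0.25  (closest threshold)
print(quantize(0.90, levels))   # 1.0
```

Every continuous input value collapses onto one of the discrete levels, which is exactly the information loss that quantization introduces.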
A video signal contains several different components which are mixed together in the same signal. This type of signal is called a composite video signal and is not really useful in high-quality computer video. Therefore, a standard composite video signal is usually separated into its basic components before it is digitized.
The composite video signal format defined by the NTSC (National Television Standards Committee) color television system is used in the United States. The PAL (Phase Alternation Line) and SECAM (Séquentiel Couleur à Mémoire) color television systems are used in Europe and are not compatible with NTSC. Most computer video equipment supports one or more of these system standards.
The components of a composite video signal are normally decoded into three separate signals representing the three channels of a color space model, such as RGB, YUV, or YIQ. Although the RGB model is quite commonly used in still imaging, the YUV, YIQ, or YCbCr models are more often used in motion-video imaging. TV practice uses YUV or similar color models because the U and V channels can be downsampled to reduce data volume without materially degrading image quality.
Once the video signal is converted to a digital format, the resulting values can be represented on a display device as pixels. Each pixel is a spot of color on the video display, and the pixels are arranged in rows and columns just as in a bitmap. Unlike a static bitmap, however, the pixels in a video image are constantly being updated for changes in intensity and color. This updating is called scanning, and it occurs 60 times per second in NTSC video signals (50 times per second for PAL and SECAM).
A video sequence is displayed as a series of frames. Each frame is a snapshot of a moment in time of the motion-video data, and is very similar to a still image. When the frames are played back in sequence on a display device, a rendering of the original video data is created. In real-time video the playback rate is 30 frames per second. This is the minimum rate necessary for the human eye to successfully blend each video frame together into a continuous, smoothly moving image.
A single frame of video data can be quite large in size. A video frame with a resolution of 512 x 482 will contain 246,784 pixels. If each pixel contains 24 bits of color information, the frame will require 740,352 bytes of memory or disk space to store. Assuming there are 30 frames per second for real-time video, a 10-second video sequence would be more than 222 megabytes in size! It is clear there can be no computer video without at least one efficient method of video data compression.
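The arithmetic above is easy to verify. This short Python fragment recomputes the figures from the text:

```python
width, height = 512, 482
pixels = width * height                    # 246,784 pixels per frame
bytes_per_frame = pixels * 24 // 8         # 24 bits of color -> 740,352 bytes
ten_seconds = bytes_per_frame * 30 * 10    # 30 frames/second for 10 seconds

print(pixels)                  # 246784
print(bytes_per_frame)         # 740352
print(ten_seconds / 1_000_000) # about 222 megabytes
```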
There are many encoding methods available that will compress video data. The majority of these methods involve the use of a transform coding scheme, usually employing a Fourier or Discrete Cosine Transform (DCT). These transforms physically reduce the size of the video data by selectively throwing away unneeded parts of the digitized information.
Transform compression schemes usually discard 10 percent to 25 percent or more of the original video data, depending largely on the content of the video data and upon what image quality is considered acceptable.
Usually a transform is performed on an individual video frame. The transform itself does not produce compressed data. It discards only data not used by the human eye. The transformed data, called coefficients, must have compression applied to reduce the size of the data even further. Each frame of data may be compressed using a Huffman or arithmetic encoding algorithm, or even a more complex compression scheme such as JPEG. This type of intraframe encoding usually results in compression ratios between 20:1 and 40:1 depending on the data in the frame. However, even higher compression ratios may result if, rather than looking at single frames as if they were still images, we look at multiple frames as temporal images.
In a typical video sequence, very little data changes from frame to frame. If we encode only the pixels that change between frames, the amount of data required to store a single video frame drops significantly. This type of compression is known as interframe delta compression, or in the case of video, motion compensation. Typical motion compensation schemes that encode only frame deltas (data that has changed between frames) can, depending on the data, achieve compression ratios upwards of 200:1. This is only one possible type of video compression method. There are many other types of video compression schemes, some of which are similar and some of which are different.
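A toy sketch of interframe delta encoding follows; the frames are reduced to short pixel lists and the helper names are invented for illustration. Real motion compensation is far more sophisticated (it searches for *moved* blocks, not just changed pixels), but the core idea of storing only what changed is the same:

```python
def frame_delta(prev, curr):
    """Record only the (index, pixel) pairs that changed between two frames."""
    return [(i, p) for i, (a, p) in enumerate(zip(prev, curr)) if a != p]

def apply_delta(prev, delta):
    """Reconstruct the current frame from the previous frame plus a delta."""
    frame = list(prev)
    for i, p in delta:
        frame[i] = p
    return frame

prev = [9, 9, 9, 9, 9, 9]
curr = [9, 9, 5, 9, 9, 2]

d = frame_delta(prev, curr)
print(d)                            # [(2, 5), (5, 2)] - only 2 of 6 pixels stored
assert apply_delta(prev, d) == curr # decoding restores the frame exactly
```

When most of the scene is static, the delta is tiny, which is where the very high interframe compression ratios come from.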
Digital Audio
All multimedia file formats are capable, by definition, of storing sound information. Sound data, like graphics and video data, has its own special requirements when it is being read, written, interpreted, and compressed. Before looking at how sound is stored in a multimedia format we must look at how sound itself is stored as digital data. All of the sounds that we hear occur in the form of analog signals. An analog audio recording system, such as a conventional tape recorder, captures the entire sound wave form and stores it in analog format on a medium such as magnetic tape.
Because computers are now digital devices, it is necessary to store sound information in a digitized format that computers can readily use. A digital audio recording system does not record the entire wave form as analog systems do (the exception being Digital Audio Tape [DAT] systems). Instead, a digital recorder captures a wave form at specific intervals, called the sampling rate. Each captured wave-form snapshot is converted to a binary integer value and is then stored on magnetic tape or disk.
Storing audio as digital samples is known as Pulse Code Modulation (PCM). PCM is a simple quantizing or digitizing (audio to digital conversion) algorithm, which linearly converts all analog signals to digital samples. This process is commonly used on all audio CD-ROMs.
Differential Pulse Code Modulation (DPCM) is an audio encoding scheme that quantizes the difference between samples rather than the samples themselves. Because the differences are easily represented by values smaller than those of the samples themselves, fewer bits may be used to encode the same sound (for example, the difference between two 16-bit samples may only be four bits in size). For this reason, DPCM is also considered an audio compression scheme.
One other audio compression scheme, which uses difference quantization, is Adaptive Differential Pulse Code Modulation (ADPCM). DPCM is a non-adaptive algorithm. That is, it does not change the way it encodes data based on the content of the data. DPCM uses the same number of bits to represent every signal level. ADPCM, however, is an adaptive algorithm and changes its encoding scheme based on the data it is encoding. ADPCM specifically adapts by using fewer bits to represent lower-level signals than it does to represent higher-level signals. Many of the most commonly used audio compression schemes are based on ADPCM.
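A minimal, non-adaptive DPCM sketch in Python shows why difference encoding helps: the deltas are much smaller numbers than the samples themselves and can therefore be stored in fewer bits. The function names are invented for this example, and real DPCM codecs add prediction and bit-packing on top of this idea:

```python
def dpcm_encode(samples):
    """Store the first sample, then only the differences between successive samples."""
    deltas = [samples[0]]
    for prev, curr in zip(samples, samples[1:]):
        deltas.append(curr - prev)
    return deltas

def dpcm_decode(deltas):
    """Rebuild the samples by accumulating the differences."""
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

samples = [1000, 1004, 1007, 1005, 1010]
enc = dpcm_encode(samples)
print(enc)                          # [1000, 4, 3, -2, 5] - small deltas
assert dpcm_decode(enc) == samples  # fully reversible
```

After the first value, every stored number fits comfortably in a few bits, whereas each raw sample needed a full 16-bit word.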
Digital audio data is simply a binary representation of a sound. This data can be written to a binary file using an audio file format for permanent storage much in the same way bitmap data is preserved in an image file format. The data can be read by a software application, can be sent as data to a hardware device, and can even be stored as a CD-ROM.
The quality of an audio sample is determined by comparing it to the original sound from which it was sampled. The more identical the sample is to the original sound, the higher the quality of the sample. This is similar to comparing an image to the original document or photograph from which it was scanned.
The quality of audio data is determined by three parameters:
• Sample resolution
• Sampling rate
• Number of audio channels sampled
The sample resolution is determined by the number of bits per sample. The larger the sampling size, the higher the quality of the sample. Just as the apparent quality (resolution) of an image is reduced by storing fewer bits of data per pixel, so is the quality of a digital audio recording reduced by storing fewer bits per sample. Typical sampling sizes are eight bits and 16 bits.
The sampling rate is the number of times per second the analog wave form was read to collect data. The higher the sampling rate, the greater the quality of the audio. A high sampling rate collects more data per second than a lower sampling rate, therefore requiring more memory and disk space to store. Common sampling rates are 44.100 kHz (higher quality), 22.050 kHz (medium quality), and 11.025 kHz (lower quality). Sampling rates are usually measured in the signal processing terms hertz (Hz) or kilohertz (kHz), but the term samples per second (samples/second) is more appropriate for this type of measurement.
A sound source may be sampled using one channel (monaural sampling) or two channels (stereo sampling). Two-channel sampling provides greater quality than mono sampling and, as you might have guessed, produces twice as much data by doubling the number of samples captured. Sampling one channel for one second at 11,000 samples/second produces 11,000 samples. Sampling two channels at the same rate, however, produces 22,000 samples/second.
The amount of binary data produced by sampling even a few seconds of audio is quite large. Ten seconds of data sampled at low quality (one channel, 8-bit sample resolution, 11,025 samples/second sampling rate) produces about 108K of data (88.2 Kbits/second).
Adding a second channel doubles the amount of data to produce nearly a 215K file (176 Kbits/second). If we increase the sample resolution to 16 bits, the size of the data doubles again to 430K (352 Kbits/second). If we now increase the sampling rate to 22.05 Ksamples/second, the amount of data produced doubles again to 860K (705.6 Kbits/second). At the highest quality generally used (two channels, 16-bit sample resolution, 44.1 Ksamples/second sampling rate), our 10 seconds of audio now requires 1.72 megabytes (1411.2 Kbits/second) of disk space to store.
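These figures can be checked with a short calculation. The helper below is invented for this illustration; note that the text's "K" figures correspond to dividing the raw byte counts by 1,024:

```python
def audio_bytes(seconds, channels, bits, rate):
    """Raw PCM size in bytes for a recording with the given parameters."""
    return seconds * channels * (bits // 8) * rate

low  = audio_bytes(10, 1, 8, 11025)     # 110,250 bytes, about 108K
high = audio_bytes(10, 2, 16, 44100)    # 1,764,000 bytes, about 1.72 MB

print(low)    # 110250
print(high)   # 1764000
```

Doubling any one parameter (channels, sample resolution in bytes, or sampling rate) doubles the storage requirement, which is exactly the progression the text walks through.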
Consider how little information can really be stored in 10 seconds of sound. The typical musical song is at least three minutes in length. Music videos are from five to 15 minutes in length. A typical television program is 30 to 60 minutes in length. Movie videos can be three hours or more in length. We’re talking a lot of disk space here.
One solution to the massive storage requirements of high-quality audio data is data compression. For example, the CD-DA (Compact Disc-Digital Audio) standard performs mono or stereo sampling using a sample resolution of 16 bits and a sampling rate of 44.1 Ksamples/second, making it a very high-quality format for both music and language applications. Storing five minutes of CD-DA information requires approximately 25 megabytes of disk space, only half the amount of space that would be required if the audio data were uncompressed.
Audio data, in common with most binary data, contains a fair amount of redundancy that can be removed with data compression. Conventional compression methods used in many archiving programs (zoo and pkzip, for example) and image file formats don’t do a very good job of compressing audio data (typically 10 percent to 20 percent). This is because audio data is organized very differently from either the ASCII or binary data normally handled by these types of algorithms.
Audio compression algorithms, like image compression algorithms, can be categorized as lossy and lossless. Lossless compression methods do not discard any data. The decompression step produces exactly the same data as was read by the compression step. A simple form of lossless audio compression is to Huffman-encode the differences between each successive 8-bit sample. Huffman encoding is a lossless compression algorithm and, therefore the audio data is preserved in its entirety.
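A quick experiment shows why differencing before Huffman encoding helps: the differences cluster around a few values, producing exactly the skewed frequency distribution that Huffman coding exploits with short bit codes. The sample values here are invented for illustration:

```python
from collections import Counter

# A slowly varying waveform, typical of audio between successive samples
samples = [100, 101, 101, 102, 103, 103, 104, 104, 104, 105]
deltas = [b - a for a, b in zip(samples, samples[1:])]

print(Counter(samples))  # six distinct values, fairly spread out
print(Counter(deltas))   # only two distinct values: 0 and 1
```

A Huffman coder can assign one-bit codes to the two delta values, whereas the raw samples would need codes for six distinct values.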
Lossy compression schemes discard data based on the perceptions of the psychoacoustic system of the human brain. Parts of sounds that the ear cannot hear, or the brain does not care about, can be discarded as useless data.
An algorithm must be careful when discarding audio data. The ear is very sensitive to changes in sound. The eye is very forgiving about dropping a video frame here or reducing the number of colors there. The ear, however, notices even slight changes in sounds, especially when specifically trained to recognize audible infidelities and discrepancies. However, the higher the quality of an audio sample, the more data will be required to store it. As with lossy image compression schemes, at times you’ll need to make a subjective decision between quality and data size.
Audio
There is currently no “audio file interchange format” that is widely used in the computer-audio industry. Such a format would allow a wide variety of audio data to be easily written, read, and transported between different hardware platforms and operating systems.
Most existing audio file formats, however, are very machine-specific and do not lend themselves to interchange very well. Several multimedia formats are capable of encapsulating a wide variety of audio formats, but do not describe any new audio data format in themselves.
Many audio file formats have headers just as image files do. Their header information includes parameters particular to audio data, including sample rate, number of channels, sample resolution, type of compression, and so on. An identification field (“magic” number) is also included in several audio file format headers.
Several formats contain only raw audio data and no file header. Any parameters these formats use are fixed in value and therefore would be redundant to store in a file header. Stream-oriented formats contain packets (chunks) of information embedded at strategic points within the raw audio data itself. Such formats are very platform-dependent and would require an audio file format reader or converter to have prior knowledge of just what these parameter values are.
Most audio file formats may be identified by their file types or extensions. Some common sound file formats are:
• .AU Sun Microsystems
• .SND NeXT
• .HCOM Apple Macintosh
• .VOC SoundBlaster
• .WAV Microsoft Waveform
• .AIFF Apple/SGI
• .8SVX Amiga
A multimedia format may choose to either define its own internal audio data format or simply encapsulate an existing audio file format. Microsoft Waveform files are RIFF files with a single Waveform audio file component, while Apple QuickTime files contain their own audio data structures unique to QuickTime files.
MIDI Standard
Musical Instrument Digital Interface (MIDI) is an industry standard for representing sound in a binary format. MIDI is not an audio format, however. It does not store actual digitally sampled sounds. Instead, MIDI stores a description of sounds, in much the same way that a vector image format stores a description of an image and not image data itself.
Sound in MIDI data is stored as a series of control messages. Each message describes a sound event using terms such as pitch, duration, and volume. When these control messages are sent to a MIDI-compatible device (the MIDI standard also defines the interconnecting hardware used by MIDI devices and the communications protocol used to interchange the control information) the information in the message is interpreted and reproduced by the device.
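As a concrete sketch, a MIDI Note On message occupies three bytes: a status byte (0x9n, where n is the channel number 0–15), the note number (0–127), and the key velocity (0–127). The helper functions below are invented for this illustration:

```python
def note_on(channel, note, velocity):
    """Build a 3-byte MIDI Note On message (channel 0-15, note/velocity 0-127)."""
    return bytes([0x90 | channel, note & 0x7F, velocity & 0x7F])

def note_off(channel, note):
    """Build a 3-byte MIDI Note Off message (velocity 0)."""
    return bytes([0x80 | channel, note & 0x7F, 0])

# Middle C (note 60) played moderately loud on channel 0 (transmitted as channel 1)
msg = note_on(0, 60, 100)
print(msg.hex())   # 903c64
```

Note how little data this is: three bytes describe an entire musical event, whereas digitally sampling the same note would take thousands of bytes per second.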
MIDI data may be compressed, just like any other binary data, and does not require special compression algorithms in the way that audio data does.
Conclusion
This activity introduced the various data formats that are possible in a multimedia system. It also explained the conversions involved, e.g. sampling, quantization, and animation.
Assessment
1. What is digital conversion? Analog-to-digital conversion is a very useful process that converts an analog voltage on a pin to a digital number. By converting from the analog world to the digital world, we can begin to use electronics to interface to the analog world around us.
For example, analog-to-digital conversion is an electronic process in which a continuously variable (analog) signal is changed, without altering its essential content, into a multi-level (digital) signal.
The input to an analog-to-digital converter (ADC) consists of a voltage that varies among a theoretically infinite number of values. Examples are sine waves, the waveforms representing human speech, and the signals from a conventional television camera. The output of the ADC, in contrast, has defined levels or states. The number of states is almost always a power of two, that is, 2, 4, 8, 16, etc. The simplest digital signals have only two states, and are called binary. All whole numbers can be represented in binary form as strings of ones and zeros.
2. Explain MIDI.
MIDI (Musical Instrument Digital Interface) is a protocol designed for recording and playing back music on digital synthesizers that is supported by many makes of personal computer sound cards. Originally intended to control one keyboard from another, it was quickly adopted for the personal computer. Rather than representing musical sound directly, it transmits information about how music is produced. The command set includes note-ons, note-offs, key velocity, pitch bend and other methods of controlling a synthesizer. The sound waves produced are those already stored in a wavetable in the receiving instrument or sound card.
Introduction
This section introduces the learner to the fact that multimedia data, which is voluminous in nature, needs to be encoded and decoded so that it can be transmitted quickly over existing media.
Activity Details
In multimedia system design, storage and transport of information play a significant role. Multimedia information is inherently voluminous and therefore requires very high storage capacity and very high bandwidth transmission capacity. Two approaches are possible: one is to develop technologies that provide higher bandwidth (of the order of gigabits per second or more); the other is to find ways and means by which the number of bits to be transferred can be reduced without compromising the information content, i.e. data compression.
Data compression is often referred to as coding, whereas coding is a general term encompassing any special representation of data that achieves a given goal. Information coded (compressed) at the source end has to be correspondingly decoded at the receiving end. Coding can be done in such a way that the information content is not lost; that means it can be recovered fully on decoding at the receiver. However, media such as image and video (meant primarily for human consumption) provide opportunities to encode more efficiently, but with a loss. Coding (and consequently compression) of multimedia information is subject to certain quality constraints. For example, the quality of a picture should be the same when coded and, later on, decoded.
Coding and compression techniques are critical to the viability of multimedia at both the storage level and at the communication level. Some of the multimedia information has to
be coded in continuous (time dependent) format and some in discrete (time independent) format. In multimedia context, the primary motive in coding is compression. By nature, the audio, image, and video sources have built-in redundancy, which make it possible to achieve compression through appropriate coding. As the image data and video data are voluminous and act as prime motivating factors for compression, our references in this chapter will often be to images even though other data types can be coded (compressed).
Conclusion
This section has explained to the learner why compression is necessary for multimedia data, and therefore why coding and decoding are necessary in multimedia data transmission. Processes such as coding, compression, and decoding were covered.
Assessment
1. What is multimedia? Multimedia is content that uses a combination of different content forms such as text, audio, images, animation, video, and interactive content.
2. Define the terms;
Coding: the process of putting a sequence of characters (letters, numbers, punctuation, and certain symbols) into a specialized format for efficient transmission or storage.
Decoding: the conversion of an encoded format back into the original sequence of characters.
Compression: a reduction in the number of bits needed to represent data. Compressing data can save storage capacity, speed file transfer, and decrease costs for storage hardware and network bandwidth.
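A lossless coding/decoding round trip can be demonstrated with Python's standard zlib module: redundant data compresses to far fewer bits, and decompression restores the original exactly:

```python
import zlib

text = b"multimedia " * 100           # 1,100 bytes of highly redundant data
packed = zlib.compress(text)          # coding (compression) at the source

assert zlib.decompress(packed) == text  # decoding restores the original exactly
print(len(text), len(packed))           # 1100 versus far fewer compressed bytes
```

This is the whole point made in this section: fewer bits to store and transmit, with the information content fully recoverable at the receiver.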
5.04: Unit 5 Summary
This unit introduced the learner to the peripheral devices that can be connected to the computer. It dealt with the conversion of analog data to digital data, and introduced the different waveforms that represent this conversion. It also covered multimedia data and what needs to be done for it to be transmitted across networks. The peripherals include the computer mouse, keyboard, image scanners, tape drives, microphones, loudspeakers, webcams, and digital cameras.
Unit Assessment
The following section will test the learner's understanding of this unit
Instructions
Answer the following questions
1. Explain the digitization of an analog signal.
2. What is sampling in digitization?
3. Explain the term animation
Grading Scheme
The marks will be awarded as shown below
Question 1: Explanation, award 2 marks; extras such as examples and diagrams award 2 extra marks each (maximum 2 extras), giving a total of 6 marks.
Question 2: Definition, award 2 marks.
Question 3: Explanation only, 2 marks.
Total: 10 marks
Feedback
1. Digitization of an analog signal involves two operations:
1) Sampling, and
2) Quantization
Analog signals consist of continuous values for both axes. An analog signal exists throughout a continuous interval of time and/or takes on a continuous range of values. A sinusoidal signal (also called a pure tone in acoustics) has both of these properties.
2. Sampling is the process of recording an analog signal at regular discrete moments of time.
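For example, sampling a 1 Hz sine wave at 8 samples per second (a toy rate chosen purely for illustration) records its value at regular discrete moments:

```python
import math

rate = 8    # samples per second (toy rate for illustration)
freq = 1    # 1 Hz sine wave

# One second of samples taken at regular discrete moments of time
samples = [math.sin(2 * math.pi * freq * n / rate) for n in range(rate)]
print([round(s, 2) for s in samples])
```

Each list element is one discrete snapshot of the continuous waveform; quantization would then round each value to the nearest representable digital level.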
3. Computer animation lies somewhere between the motionless world of still images and the real-time world of video images.
Introduction
This assessment consists of an assignment, a sit-in CAT, and a final exam. The assignment has a weight of 20 percent, while the CAT has a weight of 30 percent. The final exam has a weight of 50 percent, giving a total of 100 percent.
06: Ancillary Materials
Final Exam (50 %)
Instructions
1. Answer question one and any other two
2. Question one carries 30 marks
3. Other questions carry 20 marks each
Questions
1. (a). What is cache memory? Explain the working of cache memory. (10 marks)
(b). What is pipelining? Explain instruction pipelining. (10 marks)
(c). Describe interrupt driven I/O. (10 marks)
2. Explain the working of DMA data transfer. Compare it with programmed I/O and interrupt-driven data transfer. (20 Marks)
3. Explain the difference between hardwired control and micro programmed control. Is it possible to have a hardwired control associated with a control memory? (20 Marks)
4. Explain the block diagram of an I/O interface unit. (20 marks)
5. Using a diagram, explain the following steps in the execution of a program
1. Fetch
2. Decode
3. Execute (20 marks)
Grading Scheme
Marks to be distributed as indicated against each question in the answers students will provide
Feedback
1.(a). Cache memory
Cache memory is memory that stores program instructions that are frequently re-referenced by software during operation. Fast access to these instructions increases the overall speed of the software program.
(b). Pipelining
A form of computer organization in which successive steps of an instruction sequence are executed in turn by a sequence of modules able to operate concurrently, so that another instruction can be begun before the previous one is finished.
Instruction pipelining is a technique that implements a form of parallelism called instruction-level parallelism within a single processor. It therefore allows faster CPU throughput (the number of instructions that can be executed in a unit of time) than would otherwise be possible at a given clock rate.
(c) . Interrupt driven I/O
The CPU works on its given tasks continuously. When an input is available, such as when someone types a key on the keyboard, then the CPU is interrupted from its work to take care of the input data.
2. Working of DMA data transfer
• First the CPU programs the DMA controller by setting its registers so it knows what to transfer where
• It also issues a command to the disk controller telling it to read data from the disk into its internal buffer and verify the checksum.
• When valid data are in the disk controller’s buffer, DMA can begin. The DMA controller initiates the transfer by issuing a read request over the bus to the disk controller. This read request looks like any other read request, and the disk controller does not know (or care) whether it came from the CPU or from a DMA controller. Typically, the memory address to write to is on the bus’ address lines, so when the disk controller fetches the next word from its internal buffer, it knows where to write it. The write to memory is another standard bus cycle.
• When the write is complete, the disk controller sends an acknowledgement signal to the DMA controller, also over the bus. The DMA controller then increments the memory address to use and decrements the byte count. If the byte count is still greater than 0, steps 2 through 4 are repeated until the count reaches 0.
• At that time, the DMA controller interrupts the CPU to let it know that the transfer is now complete. When the operating system starts up, it does not have to copy the disk block to memory; it is already there.
Comparing DMA with programmed I/O and interrupt driven data transfer
Programmed I/O (PIO) refers to data transfers initiated by the CPU under driver software control to access registers or memory on a device. With interrupt-driven I/O, by contrast, the device interrupts the CPU when new data has arrived and is ready to be retrieved by the system processor. The actual actions to perform depend on whether the device uses I/O ports or memory mapping.
3. Micro programmed control is a control mechanism to generate control signals by using a memory called control storage (CS), which contains the control signals. Although micro programmed control seems to be advantageous to CISC machines, since CISC requires systematic development of sophisticated control signals, there is no intrinsic difference between these two control mechanisms.
Hardwired control is a control mechanism to generate control signals by using appropriate finite state machine (FSM). The pair of “microinstruction-register” and “control storage address register” can be regarded as a “state register” for the hardwired control. Note that the control storage can be regarded as a kind of combinational logic circuit. We can assign any 0, 1 values to each output corresponding to each address, which can be regarded as the input for a combinational logic circuit.
4.
5. Program execution is the process by which a computer or a virtual machine performs the instructions of a computer program. The instructions in the program trigger sequences of simple actions on the executing machine.
Fetch: the first step the CPU carries out is to fetch some data and instructions (the program) from main memory and store them in its own internal temporary memory areas, called ‘registers’. The computer fetches each instruction from its memory in turn and then executes it. This is done repeatedly from when the computer is booted up to when it is shut down.
Decode: the next step is for the CPU to make sense of the instruction it has just fetched. This process is called ‘decode’. The CPU is designed to understand a specific set of commands, called the ‘instruction set’ of the CPU. Each make of CPU has a different instruction set. The CPU decodes the instruction and prepares various areas within the chip in readiness for the next step.
Execute: this is the part of the cycle when data processing actually takes place. The instruction is carried out upon the data (executed). The result of this processing is stored in yet another register. Once the execute stage is complete, the CPU sets itself up to begin another cycle once more.
Mid Term Exam 1(20 %)
Instructions
Answer the following question; be as detailed as possible.
Question: Explain the concept of interrupts and DMA.
Grading Scheme
Marks to be awarded based on the key issues mentioned in the explanation. The maximum is 10 marks for a correct answer.
Feedback
When an interrupt occurs, the CPU issues commands to the I/O module and then proceeds with its normal work until interrupted by the I/O device on completion of its work.
If an interrupt occurs due to an input device, the device interrupts the CPU when new data has arrived and is ready to be retrieved by the system processor. The actual actions to perform depend on whether the device uses I/O ports or memory mapping.
If it occurs due to an output device, the device delivers an interrupt either when it is ready to accept new data or to acknowledge a successful data transfer. Memory-mapped and DMA-capable devices usually generate interrupts to tell the system they are done with the buffer.
An interrupt relieves the CPU of having to wait for the devices, but it is still inefficient for transferring large amounts of data, because the CPU has to transfer the data word by word between the I/O module and memory. Below are the basic operations of interrupt-driven I/O:
• CPU issues read command
• I/O module gets data from peripheral whilst CPU does other work
• I/O module interrupts CPU
• CPU requests data
• I/O module transfers data
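The inefficiency noted above (the CPU still moves every word itself) can be sketched as a toy model; the buffer contents and variable names are illustrative assumptions, not a real driver.

```python
# Toy sketch of interrupt-driven I/O: after the interrupt, the CPU
# copies each word between the I/O module and memory itself.
device_buffer = [10, 20, 30, 40]   # data sitting in the I/O module
memory = []
cpu_transfers = 0

# 1. CPU issues the read command.
# 2. I/O module fills its buffer while the CPU does other work.
# 3. I/O module interrupts the CPU when the data is ready.
# 4-5. CPU now requests and moves the data word by word:
for word in device_buffer:
    memory.append(word)    # each word passes through the CPU
    cpu_transfers += 1

print(cpu_transfers)  # 4 (one CPU transfer per word)
```

For a four-word buffer this costs four CPU transfers; for a large block it is this per-word involvement that DMA, described next, removes.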
Direct Memory Access (DMA)
Direct Memory Access (DMA) means the CPU grants the I/O module authority to read from or write to memory without CPU involvement. The DMA module controls the exchange of data between main memory and the I/O device. Because a DMA device can transfer data directly to and from memory, rather than using the CPU as an intermediary, it can relieve congestion on the bus. The CPU is involved only at the beginning and end of the transfer, and is interrupted only after the entire block has been transferred.
Direct Memory Access needs special hardware called a DMA controller (DMAC) that manages the data transfers and arbitrates access to the system bus. The controller is programmed with source and destination pointers (where to read and write the data), counters to track the number of transferred bytes, and settings, which include the I/O and memory types, interrupts, and states for the CPU cycles.
DMA increases system concurrency by allowing the CPU to perform tasks while the DMA system transfers data via the system and memory busses. Hardware design is complicated because the DMA controller must be integrated into the system, and the system must allow the DMA controller to be a bus master. Cycle stealing may also be necessary to allow the CPU and DMA controller to share use of the memory bus.
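The programming model described above can be sketched with a made-up `ToyDMAController` class. The register names mirror the description (source, destination, count), but the class itself is an illustrative assumption, not real hardware.

```python
# Sketch of programming a hypothetical DMA controller: the CPU writes the
# source, destination, and count registers, then is free until a single
# interrupt arrives at the end of the whole block.
class ToyDMAController:
    def __init__(self, memory):
        self.memory = memory

    def program(self, source, dest, count):
        # CPU involvement: setup only.
        self.source, self.dest, self.count = source, dest, count

    def run(self):
        # The controller moves the whole block without CPU involvement,
        # counting down the transferred words as it goes.
        for i in range(self.count):
            self.memory[self.dest + i] = self.memory[self.source + i]
        return "interrupt"  # one interrupt after the entire block

memory = [1, 2, 3, 4, 0, 0, 0, 0]
dmac = ToyDMAController(memory)
dmac.program(source=0, dest=4, count=4)
signal = dmac.run()

print(memory)  # [1, 2, 3, 4, 1, 2, 3, 4]
```

Contrast this with the interrupt-driven sketch earlier: here the CPU touches the transfer twice (setup and final interrupt) regardless of block size.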
6.03: Mid Term Exam 2
Mid Term Exam 2 (30%)
Instructions
Answer all the questions in this paper
1. (a). Draw the block diagram of a DMA controller (10 marks)
(b). Explain its functioning (5 marks)
2. Describe the architecture of a shared memory multiprocessor (5 marks)
Grading Scheme
1.(a). Correct diagram 10 marks
(b). Correct function 2 marks each (max 10)
2. Any correct answer 2 marks each (max 10)
Feedback
1. (a). Draw the block diagram of a DMA controller
(b). Explain its functioning
During any given bus cycle, one of the system components connected to the system bus is given control of the bus. This component is said to be the master during that cycle and the component it is communicating with is said to be the slave. The CPU with its bus control logic is normally the master, but other specially designed components can gain control of the bus by sending a bus request to the CPU. After the current bus cycle is completed the CPU will return a bus grant signal and the component sending the request will become the master.
2. Describe the architecture of a shared memory multiprocessor
• Processors have their own connection to memory
• Processors are capable of independent execution and control
• Have a single OS for the whole system, support both processes and threads, and appear as a common multiprogrammed system
• Can be used to run multiple sequential programs concurrently or parallel programs
• Suitable for parallel programs where threads can follow different code (task-level-parallelism)
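Task-level parallelism over shared memory can be sketched with ordinary threads. This minimal Python example is illustrative only; it shows two threads following different code while reading and writing the same shared data, not any particular multiprocessor.

```python
# Sketch of task-level parallelism: two threads, different code,
# one shared memory area protected by a lock.
import threading

shared = {"produced": [], "consumed": 0}
lock = threading.Lock()

def producer():
    # Task 1: append items to shared memory.
    for i in range(5):
        with lock:
            shared["produced"].append(i)

def consumer_count():
    # Task 2: a different code path over the same shared data.
    with lock:
        shared["consumed"] = len(shared["produced"])

t1 = threading.Thread(target=producer)
t1.start()
t1.join()                    # wait so the count below is deterministic
t2 = threading.Thread(target=consumer_count)
t2.start()
t2.join()

print(shared["consumed"])  # 5
```

The lock stands in for the hardware cache-coherence and synchronization support that makes shared-memory programming safe on a real multiprocessor.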
6.04: Module Summary
In this module, unit 1 introduced the advanced functional organization of the computer, including how data transfers occur in the computer, the architecture of the microprocessor, the types of instruction transfer within the computer, and processor and system performance. In unit 2, Amdahl’s law, Flynn’s taxonomy, multiprocessing, and scheduling were introduced, along with short vector processing and multicore and multiprocessor systems. In unit 3, low-level programming was introduced, including machine and assembly programming. In unit 4, strategies for I/O interfaces were presented together with the various access methods, e.g. polling, interrupts, and DMA. Finally, in unit 5, the peripheral devices that can be connected to the computer were introduced, along with waveforms representing analog-to-digital conversion and multimedia data.
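As a quick numeric recap of Amdahl’s law mentioned above: the speedup from n processors when only a fraction p of the program is parallelizable is 1 / ((1 - p) + p/n). The function name below is an assumption for illustration.

```python
# Amdahl's law: upper bound on speedup with n processors,
# given parallelizable fraction p of the workload.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even with 16 processors, a 90%-parallel program falls well
# short of a 16x speedup:
print(round(amdahl_speedup(0.9, 16), 2))  # 6.4
```

The serial fraction (1 - p) dominates as n grows, which is why the speedup saturates instead of scaling with processor count.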
• 1.1: The Language of Lines
It would be almost impossible for an engineer, designer, or architect to describe in words the shape, size, and relationship of a complex object. Therefore, drawings have become the universal language used by engineers, designers, technicians, as well as craftsmen, to communicate the information necessary to build, assemble and service the products of industry.
• 1.2: Visualization
Now that you have learned about the kinds of lines found on prints, the next step is to develop your visualization abilities. The ability to ”see” technical drawings, that is, to ”think in three dimensions,” is the most important part of this course. Since most engineering and architectural prints utilize some form of orthographic projection (multi-view drawing), that type of drawing will be emphasized.
• 1.3: Technical Sketching
On many occasions in your work it will most likely be necessary for you to make a sketch. Perhaps your boss can’t visualize a particular problem without one, or you find it’s necessary to make a dimensioned sketch to show an apprentice how to complete a job. In any case knowing how to sketch can make you more effective, and therefore more valuable, as a tradesperson.
• 1.4: Scaling
The ability to make accurate measurements is a basic skill needed by everyone who reads and uses blueprints. This section is intended as a review of the fundamental principles of measurement. Since some students have had little need to measure accurately, these exercises will provide the practice they need. Others, who have had more experience, may find these exercises a worthwhile review.
• 1.5: Dimensioning
If a drawing is to be complete, so that the object represented by the drawing can be made as intended by the designer, it must tell two complete stories. It tells this with views, which describe the shape of the object, and with dimensions and notes, which give sizes and other information needed to make the object.
• 1.6: Auxiliary Views
When an object has a slanted or inclined surface, it usually is not possible to show the inclined surface in an orthographic drawing without distortion. To present a more accurate description of any inclined surface, an additional view, known as an auxiliary view, is usually required.
• 1.7: Sectional Views
You have learned that when making a multiview sketch, hidden edges and surfaces are usually shown with hidden (dash) lines. When an object becomes more complex, as in the case of an automobile engine block, a clearer presentation of the interior can be made by sketching the object as it would look if it were cut apart. In that way, the many hidden lines on the sketch are eliminated.
• 1.8: Machined Features
The machined features in this section are common terms related to basic industry processes. These terms are often found on prints. For a better understanding of these processes, look at the models of machined features in the Print Reading Lab.
• 1.9: Print Interpretation
This final section introduces basic print reading. Because machine drawings are used to some extent in nearly every trade, the working drawings used in this section are all machine drawings. The purpose of this package is to provide an opportunity to put your fundamental knowledge of print reading to use before you go on to more specialized and advanced print reading activities.
You have heard the saying, “A picture is worth a thousand words.” This statement is particularly true in regard to technical drawings.
It would be almost impossible for an engineer, designer, or architect to describe in words the shape, size, and relationship of a complex object. Therefore, drawings have become the universal language used by engineers, designers, technicians, as well as craftsmen, to communicate the information necessary to build, assemble and service the products of industry.
It is important to remember, as you study Print Reading, that you are learning to communicate with the graphic language used by industry; lines are part of that language.
Since technical drawings are made of lines, it is logical that the first step in learning to “read” a drawing is to learn the meaning of each kind of line. Generally, there are 11 basic types of lines. Each kind of line has a definite form and “weight”. Weight refers to line thickness or width. When combined in a drawing, lines provide part of the information needed to understand the print.
Being able to interpret a blueprint and accurately build objects is a needed skill to become successful in all trade crafts. It is a skill, like many others you will learn, and it will take time and practice to fully understand and become proficient.
Object Line
A visible line, or object line, is a thick continuous line used to outline the visible edges or contours of an object.
Hidden Line
A hidden line, also known as a hidden object line, is a medium-weight line made of short dashes about 1/8” long with 1/16” gaps, used to show edges, surfaces and corners which cannot be seen. Sometimes they are used to make a drawing easier to understand. Often they are omitted in an isometric view.
Section Line
Section lines are used to show the cut surfaces of an object in section views. They are fine, dark lines. Various types of section lines may indicate the type of material cut by the cutting plane line.
Center Line
Center lines are used to indicate the centers of holes, arcs, and symmetrical objects. They are very thin, long-short-long lines.
Dimension Line
Dimension lines are thin and are used to show the actual size of an object. There are arrowheads at both ends that terminate at the extension lines.
Extension Line
Extension lines are also thin lines, showing the limits of dimensions. Dimension line arrowheads touch extension lines.
Leader Line
Leaders are thin lines used to point to an area of a drawing requiring a note for explanation. They are preferably drawn at a 45° angle.
Cutting Plane Line
A cutting plane line (very heavy) helps to show the internal shape of a part or assembly by slicing through the object.
Break Line
There are three kinds of break lines used in drawings. They are used to remove, or “break out”, part of a drawing for clarity, and also to shorten objects which have the same shape throughout their length and may be too long to place on the drawing.
Short and long break lines are used for flat surfaces; cylindrical break lines are used on rods, dowels, etc.
Phantom Line
Phantom lines are long-short-short-long lines most often used to show the travel or movement of an object or a part in alternate positions. They can also be used to show adjacent objects or features.
Border Line
Border lines are very thick, continuous lines used to show the boundary of the drawing or to separate different objects drawn on one sheet. They are also used to separate the title block from the rest of the drawing.
Quiz….
Directions: Name the types of lines shown below. Check your own answers
Identify the various line types used in this drawing. (instructor will provide a copy of this drawing)
Name the types of lines shown below. Check your own answers.
Directions: Draw and identify the lines needed to complete the figures as indicated.
Now that you have learned about the kinds of lines found on prints, the next step is to develop your visualization abilities. The ability to ”see” technical drawings, that is, to ”think in three dimensions,” is the most important part of this course. Since most engineering and architectural prints utilize some form of orthographic projection (multi-view drawing), that type of drawing will be emphasized.
Before going into a study of orthographic projection, you should be able to recognize several other types of drawings. They are: 1. perspective drawing, 2. oblique drawing, and 3. isometric drawing. As a group, they are called “pictorial drawings”. They are found on prints and are easy to visualize, so let’s look at their differences.
Perspective
Perspective is the most realistic form of drawing. Artists use one-point, two-point (shown here), and three-point perspective to create visual depth. Perspectives are used by architects and for industrial pictorials of plan layouts, machinery, and other subjects where realism is required. Objects drawn in perspective grow smaller as they recede toward the horizon.
Oblique
Oblique drawings are drawn with one plane (the front) of the object parallel to the drawing surface. The side, or other visible part of the object, is generally drawn at 30° or 45°. Note that only the side is on an angle. Many times these types of drawings are not drawn to scale. The receding lines are drawn at 45° or 30° and may be drawn at a different scale than the vertical and horizontal lines. This makes the drawing seem “out of shape”. This type of drawing is not used very often in industry.
Isometric
Isometric drawings have less distortion than oblique drawings, and are used more frequently by industry for that reason. An isometric drawing has both visible surfaces drawn at 30°. These are the most used type of drawing in the piping industry and take a good deal of practice to fully understand how to draw. They best represent what is being built and what it will look like from the different sides with one drawing.
Directions: Name the types of drawings shown below. Check your own answers.
Single View
A single view of an object is sometimes all that is needed for a complete visual explanation. When dimensions, material, and other information is Included, an object requiring only a single view is easy to understand.
Most one-view drawings are of flat objects, made from materials such as sheet metal and gasket stock. Spherical objects, such as a cannonball, would require only one view and a note indicating the material and diameter of the sphere.
The object shown in the one-view drawing below could be made of any appropriate material that might be specified. In appearance, it is much like the gasket used as part of the cooling system on many cars. All that would need to be noted is the material type and thickness required.
Two View
Sometimes “two-view” drawings are used on prints. Two views may be all that is needed to show the shape of an object. Objects that are cylindrical, such as a length of pipe, are usually shown on a print with two views. In such a case, two views are sufficient to explain the shape. Notice in the two-view drawing shown below that the length of the pipe is shown in one view, while the diameter is called out in the other. Without the view on the right, what might this shape be mistaken for? Square tube, channel…
Orthographic Projection
Orthographic projection is a name given to drawings that usually have three views. Often, the three views selected are the top, front, and right side. It is possible, of course, to select other views such as the left side or bottom. Generally, though, it’s the top, front and right side that are traditionally seen by the person reading prints.
Since most prints make use of the orthographic projection system, and because the top, front, right side views are most often used, it is important that you have their order, or arrangement on the print fixed in your mind. To help you understand this system, think of a chalkboard eraser, a short length of 2″ x 4″ lumber, or a common brick. It looks like this:
When seen on a print, using orthographic projection, it would appear like this.
This system of orthographic projection may be difficult to understand or visualize at first, but you will grasp it with some practice. Here’s a basic example of how it works, using a simple object.
Orthographic projection does not show depth, so the object shown above will appear flat. With practice, however, you will learn to scan the three views and “read” depth into them. Remember that the location of the top, front and right side views does not change. The projection lines between the orthographic views below show the height, width, and depth relationship that exists between each view and the other two views.
In case you did not understand the three-view drawing on the last page, let’s take another look at the same thing. This time, numbers will be used to identify the surfaces.
Using orthographic projection, the object with the surfaces numbered appears like this:
Notice that the front view (1) is the key to the drawing, because it most clearly shows the shape of the object. It tells you the object is “L” shaped from the front. The other two views don’t tell you much by themselves. By looking at surface 1, however, you can see that 2 is taller than 3. Therefore, in “reading” the surfaces, 2 should appear to be closer to you than 3. Now look at 4 and 5. Which surface is projected closest to you?
Answer: Surface 5 (rotate and place at bottom of layout)
Now draw a simple box and tape all sides together to form a cube. The cube will be 2” x 2” x 2”. Once the instructor has approved your drawing, you will proceed to cut it out and tape the edges together to form a cube.
Visualization Quiz
Directions:
All visible surfaces on the objects shown are numbered. To complete this quiz, you are to place those numbers on the corresponding surfaces of the orthographic drawings.
You may be wondering at this point why something like orthographic projection is used on prints when isometric or oblique drawings are so much easier to visualize. The answer is that both of those types of pictorials are used for relatively uncomplicated drawings. When an object is complex, however, neither can equal the orthographic system for clear presentation of dimensions, notes, and configuration details.
Hidden Surfaces
Another advantage of orthographic projection is that it allows the person reading the print to see the inside surfaces of an object which normally could not be seen.
With complicated objects this can become very useful.
In the drawing below, the hidden line in the right side view represents the entire surface of the flat area between the two higher sides.
In this example, the hidden lines result from a square hole through the middle of the object.
The hidden lines in this example are there because a part of one corner of the front surface was cut away, or “recessed”.
Hidden Surfaces
Directions: Draw the hidden lines which are missing in the views below. Each problem has one incomplete view.
Curved Surfaces
Curved surfaces are perhaps tricky to “see” until you remember that the curve is only shown in one view. You must put the curve in the other views yourself, through visualization. Remember that a sharp change of direction, as at a corner, produces a line visible in another view; when the change of direction is smooth, like a curve, no line will be seen.
Here’s another example of curved surfaces:
Curved surfaces exercise.
Directions: Draw the lines which are missing in the views below. Each problem has one incomplete view. Do not draw center lines.
Inclined Surfaces
Inclined surfaces are those which are at an angle, or slanted. In other words, they are surfaces which are neither horizontal nor vertical. In viewing orthographic drawings you need to be alert to angles and inclined surfaces, for they are often found on the prints you will be reading later.
Notice the hidden line in the right view created by the inclined surface on this object:
Here is an object with two inclined surfaces.
Inclined surfaces exercise
Directions: Draw the lines which are missing in the views below. Each problem has one incomplete view.
Many times out in the field you will be working from sketches drawn on anything from napkins, cardboard, and wood scraps to any flat surface. Don’t get me wrong, you will work projects with professionally produced drawings, but sometimes you have to work with what you have.
Being able to make technical sketches doesn’t mean you need to be an artist; and sketching isn’t difficult if you follow a few simple rules. You may be a little slow at first, but with some practice you will be able to turn out reasonably good sketches without too much effort.
You are not going to be judged as a professional draftsperson or architect, but you need to be able to describe, with lines, what you need built, repaired, or modified. This takes practice and is a very important skill to develop.
Sketching Techniques
All you need to start is a pencil and some paper. A soft pencil works best for most people, so try a #2 or an F. Keep the pencil sharp, but not too sharp: hold it with a grip firm enough for control, but not so tight that your arm isn’t relaxed. Don’t draw heavily at first. That way it is easier to erase without smudging. Darken the sketch when it begins to shape up the way you want it.
It’s generally best to begin sketching with plain paper, although some people like to use grid paper. On the job, you may find yourself sketching on the back of a work order or piece of packing crate. In any case, it’s learning to sketch quickly and effectively that’s important.
Here are some limbering-up exercises to get you started. To keep your pencil sharper longer, and for more even line widths, try turning your pencil slowly while completing the lines in the exercises below.
Next, try sketching the objects on this page. Make your sketches as much like the examples as possible. Remember; sketching means freehand drawing. No Straightedges, compass, coins, etc!
Lettering
Now that you are warmed up, we will take the straight and curved lines from the sketching exercise and use them to form letters. The entire alphabet can be formed from the straight and curved lines you have practiced.
Look at the lettering below. If your printing is similar, and is easily readable, you can skip this exercise and go on to oblique sketching. If not, do some practicing. Some of the work ahead (and tests) require good lettering.
Remember, the most important requirement of good lettering is legibility. There is no use making a drawing if the person looking at the drawing cannot read your writing.
Oblique Sketching
Oblique sketches are a type of pictorial having one plane parallel to the drawing surface, and the visible side sketched at an angle. Usually, that angle works best at 30° to 45°, or somewhat in between. Beginners often have trouble keeping the 30° or 45° lines at the same angle. If that happens, your sketch will look distorted.
Here’s how to sketch an oblique cube in three steps:
Possibly you might want to show the left side of the cube, or perhaps draw the hidden lines, as at the right;
In the spaces below, sketch oblique cubes as indicated.
Sketch these objects in oblique, as shown:
In this more difficult exercise, you are to make oblique sketches of the objects shown. The third problem is drawn in isometric. You are to sketch it in oblique. Convert problem four from orthographic to an oblique sketch.
Isometric Sketching
Isometric sketches, unlike oblique, must maintain an angle very close to 30°.
Therefore, to get the “feel” of an isometric, try sketching 30° angles in this exercise:
Sketching in isometric can be done in different ways. Generally, it’s recommended that you start and the bottom of the object and “box it in”, thereby enclosing it within a rectangular framework. For example, if we were to take a simple object like this,
The steps needed to sketch it in isometric would be;
The unnecessary lines are then removed, leaving the object. Once you gain some practice at this, it will be possible for you to make an ISO drawing without the “guide” lines. It is important to use them early on to help train your brain.
To the beginner, building a “frame” before sketching the object often seems unnecessary. That may be true with simple objects. However, when things become more complex, a frame gives you a means of developing the various parts in an organized way. Without such guidelines you can easily “lose” your sketch.
Sketch the examples given, using guidelines as shown in the first exercise. Draw the guidelines lightly. Notice that 3 and 5 require the use of non-isometric lines. Since those lines are not at 30°, it is best to connect the end points of those lines after the sketch nears completion.
Circles and arcs, when sketched in isometric, become elliptical in shape. If a circle is used, it will appear distorted, as in this example:
In sketching isometric circles and arcs, there are three positions in which they are normally sketched, depending upon the surface where the circular feature is located. Those surfaces, or picture planes are:
On the cube shapes below sketch ellipses as on the examples above.
You may also want to practice sketching ellipses on the other surfaces of the cubes.
Orthographic Sketching
Of all the methods of making drawings, orthographic projection is the most commonly used by draftspersons. Although the other methods serve their purposes, they cannot always show the parts of an object as well as orthographic representation.
Orthographic projection is a system of projecting from view to view to graphically describe the object. As a way of reviewing, study the views of the small garage in this drawing. Notice the location and relationship of each view to the other views.
The important thing to remember in orthographic sketching is the alignment of views. The top view is projected directly above the front view. The front and side view also line up with each other. The height, width, and length of the object must remain the same from view to view. There should be enough distance between the views to prevent crowding; and to leave room for dimensions.
Produce a top view in the drawing below.
Sketch each object in orthographic in the spaces on the right. Remember, the idea is to project! Front, top and right side view.
“On-THE-SPOT” Sketching
Now that you have learned the three different methods of sketching, it is time to make sketches of objects you can place in front of you and touch, rather than sketching from drawings on paper.
Sketching “ON-THE-SPOT” is a standard industry practice. A piece of machinery needs to be changed, a support needs to be added, or pictorial information in some way requires a sketch.
Instructor will give you an object to sketch…
You will need to produce an isometric of the object and also an orthographic drawing with as many views as required to show the details of the object to be built.
Whether or not you need to review these fundamentals, there is one important thing to remember about getting measurements from a print. If you need a dimension that is unclear or is not given, do not measure the print! Since prints shrink, stretch, and may not be drawn to scale, you can easily come up with some very inaccurate dimensions.
Scale Measurement
A drawing of an object may be the same size as the object (full size), or it may be larger or smaller than the object. In most cases, if it is not drawn full size, the drawing is made smaller than the object. This is done primarily for the convenience of the users of the drawings. After all, who wants to carry around a full-size drawing of a locomotive? Obviously, with an object as small as a wristwatch, it would be necessary to draw to a larger scale.
A machine part, for example, may be half size (1/2”=1”); a building may be drawn 1/48 size (1/4”=1’-0”); a map may be drawn 1/1200 size (1”=100’-0”); and a gear in that wristwatch may be ten times size (10”=1”).
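The scale arithmetic above can be checked with a short sketch; the helper name is an assumption introduced here for illustration.

```python
# Sketch: length on the drawing = real length x scale ratio.
from fractions import Fraction

def drawing_length(real_inches, scale):
    """scale is drawing size over real size, e.g. Fraction(1, 2) for half size."""
    return real_inches * scale

print(drawing_length(1, Fraction(1, 2)))          # 1/2 (half size: 1" drawn as 1/2")
building = 48 * 12                                 # a 48-foot wall, in inches
print(drawing_length(building, Fraction(1, 48)))   # 12 (1/4"=1'-0": drawn as 12")
```

Using exact fractions keeps the results in the same fractional units that appear on the scales themselves.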
There are numerous scales for different needs. Since each occupational group has their own frequently used scales, some practice or basics review will help you to work with the scales used in your technology.
Full Scale
Full scale is simply letting one inch on a ruler, steel rule, or draftsman’s scale equal one inch on the actual object. Rules of this kind are usually divided into 1/16” or 1/32” units. The first measurement exercise will be with full size. If you can measure accurately in full scale, you may want to skip ahead.
Here is a “big inch”. Each space equals 1/32”. If you have not worked with accurate measurement, spend some time studying it.
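A quick way to cross-check readings like these is to snap a decimal measurement to the nearest 1/32” graduation; the helper name below is an assumption for illustration.

```python
# Sketch: round a decimal inch value to the nearest 1/32" graduation.
from fractions import Fraction

def nearest_32nd(value):
    thirty_seconds = round(value * 32)    # count whole 1/32 graduations
    return Fraction(thirty_seconds, 32)   # reduce, e.g. 16/32 -> 1/2

print(nearest_32nd(0.47))   # 15/32
print(nearest_32nd(0.5))    # 1/2
```

Note that Fraction reduces automatically, which matches how measurements are normally written (1/2 rather than 16/32).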
Measurement practice: on the scale above, locate the following fractions:
Directions:
Each of the fractions below is numbered. Write that number above the scale and point with an arrow where the fraction is located. Number 1 has been completed.
Half Size
The principle of half-size measurement on a drawing is simply letting a unit, such as 1/2” on the scale, represent a larger unit, such as 1” on the object. If the drawing is properly labeled, the words HALF SIZE or 1/2″ = 1″ will appear in the title block.
Using the half-size scale is not difficult, but it does take some practice. To measure a distance of 2-3/16” you look first for the 2, then go backwards to the zero and count off another 3/16. You measure this way for each dimension that has a fraction. Whole numbers (numbers without fractions) are measured in the usual way.
Next, locate a half-size scale (available in the lab) and measure the lines below to the nearest 1/32 of an inch. Write the length of each line in the space provided.
Because paper is dimensionally unstable due to humidity, exact answers to this half size measurement practice cannot be given. That is another reminder that it’s poor practice to measure from a piece of paper.
Quarter Size
Quarter size is used and read in a similar way to half size, except that each unit, such as a quarter of an inch, represents a larger unit, such as one inch. If the drawing is properly labeled, the words QUARTER SIZE, QUARTER SCALE, or 1/4″ = 1″ will appear in the title block.
The quarter size scale is used in a similar manner as the half size scale.
For quarter size practice, draw lines in the area provided to the required length. Have another student or the lab instructor check your lines for accuracy. (A ¼”=1” scale is available in the lab.)
Quiz
For this quiz, you will be given an object to measure with a ruler or tape measure. You will record measurements in full scale and then draw each length in ½ and ¼ scale.
If a drawing is to be complete, so that the object represented by the drawing can be made as intended by the designer, it must tell two complete stories. It tells this with views, which describe the shape of the object, and with dimensions and notes, which give sizes and other information needed to make the object.
Therefore, your next step is to learn the basics of dimensioning. In that way you will understand not only how to interpret a drawing to get the information you need, but also how to dimension your sketches so that they can be used to communicate size information to others.
Numerals
It may seem a bit basic, but a few exercises with the shapes of numbers come before dimensioning. The reason for such review is simply that incorrectly or carelessly made numbers on a drawing or sketch can easily be misinterpreted by someone on the job. That can be costly.
Therefore, the study of number forms is justified.
The number forms presented here have been determined to be the most legible, and are used by industry nationwide. The United States standardized 1/8” vertical numbers are correctly formed as follows:
Dimension Lines
The dimension line is a fine, dark, solid line with arrowheads on each end. It indicates direction and extent of a dimension. In machine sketches and drawings, in which fractions and decimals are used for dimensions, the dimension line is usually broken near the middle to provide open space for the dimension numerals. In architectural and structural sketches and drawings, the numerals are usually above an unbroken dimension line.
In either case, the dimension line which is closest to the object should be placed approximately 1/2″ away. The other dimensions beyond the first dimension (if any) should be approximately 3/8″ apart. You do not necessarily have to remember this, but you should remember not to crowd your dimension lines and to keep them a uniform distance apart.
The most important thing is that the drawing needs to be “clean” and dimensions need to be located in a space where they cannot be confused with a surface they are not intended to be used for.
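The spacing rule above can be sketched numerically. The helper name is invented for illustration: the first dimension line sits about 1/2″ from the object, and each further line sits about 3/8″ beyond the previous one.

```python
# Illustrative helper: offsets (in inches) of successive dimension
# lines from the object outline, per the spacing rule described above.
def dimension_offsets(n):
    """First line ~1/2 in. away; each additional line ~3/8 in. beyond the last."""
    return [0.5 + i * 0.375 for i in range(n)]

print(dimension_offsets(3))  # [0.5, 0.875, 1.25]
```

For three stacked dimensions, the lines would fall roughly 1/2″, 7/8″, and 1-1/4″ from the object.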
Here is how dimension lines should be sketched:
Note: Dimensions less than six feet (72 in.) are given in inches. Dimensions over six feet are usually shown in feet and inches. Be sure that it is clear how dimensions are called out. When calling out dimensions that are over 12″, make sure ALL dimensions are called out in total inches, or in feet and inches, consistently throughout the entire drawing. Either 4’-5″ or 53″ means the same thing, but if dimensioning styles are mixed it becomes easy to look at 4’-8″ and read it as 48″.
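The 4’-5″ = 53″ equivalence in the note is plain arithmetic. A small, hypothetical helper makes the conversion in both directions explicit:

```python
# Illustrative conversion between total inches and feet-and-inches callouts.
def to_feet_inches(total_inches):
    """Convert a dimension in total inches to a feet-and-inches callout."""
    feet, inches = divmod(total_inches, 12)
    return f"{feet}'-{inches}\""

def to_inches(feet, inches):
    """Convert a feet-and-inches callout back to total inches."""
    return feet * 12 + inches

print(to_feet_inches(53))  # 4'-5"
print(to_inches(4, 5))     # 53
```

Consistent use of one style, as the note requires, avoids having to do this conversion by eye while reading a print.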
Extension Lines
Extension lines on a drawing are fine, dark, solid lines that extend outward from a point on a drawing to which a dimension refers. Usually, the dimension line meets the extension line at right angles. There should be a gap of about 1/16″ where the extension line would meet the outline of the object, and the extension line should go beyond the outermost arrowhead approximately 1/8″. Also, there should be no gaps where extension lines cross. Notice in this example the larger dimensions are correctly placed outside, or beyond, the shorter dimensions, and that the dimensions are preferably not drawn on the object itself. Sometimes, however, it is necessary to dimension on the object.
It is important to remember to place dimensions on the views, in a two or three view drawing, where they will be the most easily understood. Avoid dimensioning to a hidden line and avoid the duplication of dimensions. Use common sense; keep dimensions as clear and simple as possible. Remember, the person reading your drawing needs to clearly understand, beyond question, how to proceed. Otherwise, costly time and material will be wasted.
There are two basic methods of placing dimensions on a sketch. They may be placed so they read from the bottom of the sketch (unidirectional dimensions) or from the bottom and right side (aligned dimensions). The unidirectional system is usually best, because it is more easily read by workmen.
When dimensions will not fit in a space in the usual way, other methods are used to dimension clearly, when those crowded conditions exist.
Arrowheads
Arrowheads are placed at each end of dimension lines, on leader lines, etc. Correctly made, arrows are about 1/8” to 3/16” in length, and are about three times as long as they are wide. Usually they have a slight barb, much like a fishhook.
To make your drawing look clean, use the same style throughout your drawing or sketch.
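The 3:1 length-to-width proportion described above is simple arithmetic. This tiny helper is illustrative only, not from the text:

```python
# Illustrative helper: arrowheads are about three times as long as
# they are wide, so width is roughly a third of the chosen length.
def arrowhead_width(length):
    """Approximate arrowhead width (inches) for a given length."""
    return length / 3

# For the recommended 1/8 in. to 3/16 in. lengths:
print(arrowhead_width(0.1875))  # 0.0625 (1/16 in.)
```

A 3/16″ arrowhead, for example, works out to about 1/16″ wide.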
Dimension numerals
Numerals used to dimension an object are normally about 1/8” in height.
When a dimension includes a fraction, the fraction is approximately 1/4″ in height, making the fractional numbers slightly smaller to allow for space above and below the fractional line.
Again, it is particularly important that the numbers and fractions you may put on a sketch or drawing be legible. Sloppy numbers can cause expensive mistakes.
Notes
Notes are used on drawings to provide supplementary information. They should be brief and carefully worded to avoid being misinterpreted, and located on the sketch in an uncrowded area. The leader lines going to the note should be kept short. Notes are usually added after a sketch has been dimensioned to avoid interference with dimensions.
Quiz
Directions: Dimension the examples as indicated.
Dimension this 3 ¼ x 6 15/32 rectangle unidirectionally on the top and right sides.
With a note, show a 5/16 drilled hole.
Dimension this object. The shorter lines are 3 inches in length.
Dimension this object. Use a ruler or scale to determine the line lengths.
Oblique Dimensioning
Oblique dimensioning is mostly a matter of remembering to avoid dimensioning on the object itself (when possible) and of using common-sense dimensioning principles. It is also usually best to have dimensions read from the bottom (unidirectional) as shown here.
Although it is best not to dimension on the view itself, it is usually accepted practice to place diameter and radius dimensions on the views if space permits.
Sometimes space and time are limited and you might have to bend the typical rules of drawing and dimensioning. The most important thing is to keep the drawing clean and concise; try not to repeat dimensions, but give all required ones.
Directions: Complete as indicated.
Dimension this three inch cube.
The shorter section of this rod is 5/8 inches in diameter by 2 1/8 inches long. The longer section is 7/8 inches in diameter by 3 ½ inches long. Dimension the drawing.
Isometric Dimensioning
When dimensioning an isometric sketch, it is important to keep dimensions away from the object itself, and to place the dimension on the same plane as the surface of the object being dimensioned. You will probably find that dimensioning well in isometric takes some practice.
Place notes on an isometric drawing without regard to placing them on the same plane, as with dimensions. It is easier to do, and easier to read.
Isometric notes do not have to be on the same plane.
Notice in the example above that part of each leader line to the notes is sketched at an approximate angle of 15, 30, 45, 60, or 75 degrees. This is done to avoid confusion with other lines. Never draw leader lines entirely horizontal or vertical.
Quiz
Directions: complete as indicated.
Dimension this drawing. The dimensions are 3” long, 2 1/8” wide, 1 5/8” high with a 45° angle ½” deep. The angle begins at the midpoint of the 3” long dimension.
Dimension this drawing. The base is ½” x 1 ½” square. The cylinder is ∅1” x 1-1/8” long. The drilled-through hole is ∅5/8”.
Quiz
Directions: You will be given an object to sketch and dimension.
Orthographic Dimensioning
When you look at the dovetailed object several pages back, it is easy to see that an isometric sketch can quickly become cluttered with dimensions. Because of this, more complicated sketches and drawings are dimensioned in orthographic. This method provides the best way to dimension clearly and in detail.
Here are seven general rules to follow when dimensioning.
• Show enough dimensions so that the intended sizes can be determined without having a workman calculate or assume any distances.
• State each dimension clearly, so it is understood in only one way.
• Show dimensions between points, lines or surfaces which have a necessary relationship to each other or which control the location of other components or mating parts.
• Select or arrange dimensions to avoid accumulations of dimensions that may cause unsatisfactory mating of parts. (In other words, guard against a buildup of tolerances, as in the example below.)
• Show each dimension only once. (Do not duplicate dimensions).
• Where possible, dimension each feature in the view where it appears most clearly, and where its true shape appears.
• Whenever possible, specify dimensions to make use of readily available materials, parts and tools.
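The fourth rule above concerns tolerance buildup. As a hedged illustration (the values are invented, not from the text), when dimensions are chained end to end, their individual tolerances accumulate on the overall length:

```python
# Illustration of tolerance buildup: chained dimensions add their
# tolerances, so the overall length can vary by the sum of them all.
def chained_tolerance(tolerances):
    """Worst-case tolerance on the overall length of chained dimensions."""
    return sum(tolerances)

# Three chained segments, each toleranced to +/- 0.005 in.:
print(round(chained_tolerance([0.005, 0.005, 0.005]), 3))  # 0.015
```

Dimensioning each feature from a common datum instead keeps every location within its own tolerance, which is one way drawings guard against this accumulation.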
Notice the dimensions are correctly placed between the views, rather than around the outside edges of the drawing.
Quiz
Directions: on a separate piece of paper, make a dimensioned orthographic sketch of this object.
Directions: on a separate piece of paper, make a dimensioned orthographic sketch of the object.
Quiz
Directions: You will be given an object to sketch and dimension.
When an object has a slanted or inclined surface, it usually is not possible to show the inclined surface in an orthographic drawing without distortion. To present a more accurate description of any inclined surface, an additional view, known as an auxiliary view, is usually required.
An auxiliary view is simply a “helper” view, which shows the slanted part of the object as it actually is. It turns, or projects, the object so that the true size and shape of the surface (or surfaces) are seen as they actually are.
Auxiliary views are commonly found on many types of industrial drawings.
Front View Auxiliaries
There are three basic types of auxiliary views. In the first type, the auxiliary view is projected from the front view of a three-view (orthographic) drawing. In the second and third types of drawings, the auxiliary views are projected from the top and side views.
Here is a front view auxiliary of a simple object with an inclined surface.
Notice that the projection lines are perpendicular to the slanted surface of the first view, and that only the slanted surface of the object is shown in the auxiliary view. The rest of the object is omitted; however, for clarification, portions of the adjacent surfaces are sometimes shown. Also, notice that the slanted surfaces of the top and side views are shortened because of distortion, whereas the surface of the auxiliary view is true, or actual, size.
To sketch an auxiliary view, you begin with orthographic views of the object and add projection lines perpendicular (90°) to the slanted surface, adding a reference line any convenient distance from the view with the slanted surface.
Next, the distance CB on the auxiliary view is made the same length as the related distance in one of the orthographic views; in this example it’s the side view. This completes the auxiliary view.
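The foreshortening described above is plane geometry: if the adjacent orthographic views give the horizontal run and vertical rise of the slanted edge, its true length follows from the Pythagorean theorem. The values below are illustrative, not taken from the figures:

```python
import math

# Illustrative sketch: a slanted edge appears foreshortened in the
# principal views; its true length is recovered from its run and rise.
def true_length(run, rise):
    """True length of a slanted edge from its two foreshortened projections."""
    return math.hypot(run, rise)

print(true_length(3.0, 4.0))  # 5.0
```

An auxiliary view shows this true length directly, which is exactly why it is drawn.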
Top View Auxiliaries
A top view auxiliary is developed in the same way as a front view auxiliary, except that the auxiliary is projected from the top view.
Whether the auxiliary view is to be projected from the front, top, or side view depends on the position of the object, or which surface of the object is slanted. In this example, the top view is slanted. Therefore the auxiliary view must be projected from the top view.
Again, notice how the angled surfaces shown in the front and side views are not shown in true length.
Side View Auxiliary
Side view auxiliaries are drawn in the same way as front and top view auxiliaries. Again, where the auxiliary view is to be projected depends upon the position of the object or which surface of the object is slanted.
Obviously, these are very basic auxiliary view examples and are presented to introduce you to the concept of auxiliary views.
As objects with inclined surfaces become more complex, auxiliary views provide a means of presenting objects in their true size and shape.
Sketching Auxiliary Views
The following problems require an auxiliary view to be complete. Sketch the auxiliary views required in the spaces provided.
Drawing practice 1
Drawing practice 2
In this problem, a round hole is centered on the slanted surface and drilled through the object. The hole appears elliptical in the front and side views because of distortion. It will appear in its true shape on the auxiliary view. Remember that the auxiliary is developed from the view with the slanted surface. Complete the auxiliary view.
Drawing practice 3
In this problem, a square hole has been cut part way into the object. Complete the auxiliary view.
Quiz
Directions: Complete the auxiliary view in the space provided.
You have learned that when making a multiview sketch, hidden edges and surfaces are usually shown with hidden (dash) lines.
When an object becomes more complex, as in the case of an automobile engine block, a clearer presentation of the interior can be made by sketching the object as it would look if it were cut apart. In that way, the many hidden lines on the sketch are eliminated.
The process of sketching the internal configuration of an object by showing it cut apart is known as sectioning. Sectioning is used frequently on a wide variety of industrial drawings.
In this example, blocks A and B result after the block in figure 1 has been “Sectioned”. When you cut an apple in half you have sectioned it. Just as an apple can be sectioned any way you choose, so can an object in a sectional view of a drawing or sketch.
Cutting Plane
A surface cut by the saw in the drawing above is a cutting plane. Actually, it is an imaginary cutting plane taken through the object, since the object is imagined as being cut through at a desired location.
Cutting Plane Line
A cutting plane is represented on a drawing by a cutting plane line. This is a heavy long-short-short-long kind of line terminated with arrows. The arrows show the direction of view.
Once again, here is a graphic example of a cutting plane line and the section that develops from it.
Section Lining
The lines in the figure above, which look like saw marks, are called section lining. They are found on most sectional views, and indicate the surface which has been exposed by the cutting plane. Notice that the square hole in the object has no section lining, since it was not changed by sectioning.
Different kinds of section lining are used to identify different materials. When an object is made of a combination of materials, a variety of section lining symbols makes materials identification easier. Here are a few examples:
Section lines are very light. When sketching an object or part that requires a sectional view, they are drawn by eye at an angle of approximately 45 degrees, and are spaced about 1/8” apart. Since they are used to set off a section, they must be drawn with care.
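As a geometric aside (a simplified, hypothetical helper, not part of the text), 45-degree section lines spaced about 1/8″ apart cross the baseline of the sectioned area at intervals of 1/8″ × √2, since the 1/8″ spacing is measured perpendicular to the lines:

```python
import math

# Simplified illustration: where 45-degree hatch lines, spaced 1/8 in.
# apart (perpendicular distance), cross the bottom edge of a region.
def hatch_intercepts(width, spacing=0.125):
    """X-intercepts (inches) of 45-degree section lines across 'width' inches."""
    step = spacing * math.sqrt(2)  # horizontal distance between intercepts
    xs = []
    x = 0.0
    while x < width:
        xs.append(round(x, 4))
        x += step
    return xs

print(hatch_intercepts(1.0))  # six intercepts across a 1 in. wide region
```

In practice the lines are drawn by eye, as the text says; the computation only shows why the hatching looks evenly spaced when sketched carefully.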
It is best to use the symbol for the material being shown as a section on a sketch. If that symbol is not known, you may use the general purpose symbol, which is also the symbol for cast iron.
Full Sections
When a cutting plane line passes entirely through an object, the resulting section is called a full section. Fig. 7 illustrates a full section.
It is possible to section an object whenever a closer look at the interior is desired. Here is an object sectioned from two different directions.
Half Sections
If the cutting plane is passed halfway through an object, and one-quarter of the object is removed, the resulting section is a half section. A half section has the advantage of showing both inside and outside configurations.
It is frequently used for symmetrical objects. Hidden lines are usually not shown on the un-sectioned half unless they are needed for clearness or for dimensioning purposes. As in all sectional drawings, the cutting plane takes precedence over the center line.
Here is another example of a half section. Remember that only one fourth of the object is removed with a half section, whereas half of the object is generally removed with a full section.
This manufacturer’s drawing, using both full and half section, illustrates the advantages of sectional views. The different line directions indicate different parts and materials used in the assembly of this valve.
Quiz
Directions: On a separate sheet of paper, complete the section view.
Broken Out Sections
In many cases only a small part of a view needs to be sectioned in order to show some internal detail. In the figure below, the broken-out section is limited by a freehand break line. A cutting plane line does not need to be shown, since the location of the cut is obvious.
Revolved Sections
A revolved section shows the shape of an object by rotating a section 90 degrees to face the viewer. The three revolved sections illustrated in the spear-like object of figure 12 show the changes that take place in its shape.
Offset Sections
An offset section is a means of including in a single section several features of an object that are not in a straight line. To do this, the cutting plane line is bent, or “OFFSET” to pass through the features of the part.
Removed Sections
A section removed from its normal projected position in the standard arrangement of views is called a “removed” section. Such sections are labeled SECTION A-A, SECTION B-B, etc., corresponding to the letter designation at the ends of the cutting plane line. Removed sections may be partial sections and are often drawn to a different scale.
Quiz
Directions: Complete the half section view on a separate sheet of paper.
The machined features in this section are common terms related to basic industry processes. These terms are often found on prints. For a better understanding of these processes, look at the models of machined features in the Print Reading Lab.
Bevel
A surface cut at an angle. In regard to welding, a bevel will normally end up being a surface prep for a weld.
Boss
A circular pad on forgings or castings, which projects out from the body of the part. The surface of the boss is machined smooth for a bolt head to seat on, and it has a hole drilled through to accommodate the bolt shank.
Chamfer
A process of cutting away a sharp external corner or edge. Not for welding.
Counterbore
To enlarge a drilled hole to a given diameter and depth. Usually done for recessing a bolt head.
Countersink
To machine a conical depression in a drilled hole for recessing flathead screws or bolts.
Dovetail
A slot of any depth and width, which has angled sides.
Quiz
Directions: Name the machined features shown below.
Fillet
A small radius filling formed between the inside angle of two surfaces.
Kerf
The narrow slot formed by removing material while sawing or other machining.
Keyway
A narrow groove or slot cut in the shaft hole of a sleeve or hub for accommodating a key.
Keyseat
A narrow groove or slot cut in a shaft for accommodating a key.
Knurl
To uniformly roughen with a diamond or straight pattern a cylindrical or flat surface.
Lug
A piece projecting out from the body of a part. Usually rectangular in cross section with a hole or slot in it.
Neck
To machine a narrow groove on a cylindrical part or object.
Quiz
Directions: Name the machined features shown below. Check your answer.
Pad
A slightly raised surface projecting out from the body of a part. The pad surface can be of any size or shape. (Remember, bosses can only be round)
Round
A small radius rounded outside corner formed between two surfaces.
Spline
A gear-like serrated surface on a shaft. Takes the place of a key when more torque strength is required.
Spotface
A round surface on a casting or forging for a bolt head. Usually about 1/16” deep.
T-Slot
A slot of any dimensions to resemble a “T”.
Quiz
Directions: Name the machined features shown below. Check your answers.
1.09: Print Interpretation
This final section introduces basic print reading. Because machine drawings are used to some extent in nearly every trade, the working drawings used in this section are all machine drawings.
The purpose of this package is to provide an opportunity to put your fundamental knowledge of print reading to use before you go on to more specialized and advanced print reading activities.
Exercise 1
Study the print below and fill in the related dimensions.
Exercise 2
Study the print below and fill in the related dimensions.
Exercise 3
Study the print below and fill in the related dimensions.
Exercise 4
Study the print below and fill in the related dimensions.
Exercise 5
Study the print below and fill in the related dimensions.
Traditionally, drafters sat at drafting boards and used pencils, pens, compasses, protractors, triangles, and other drafting devices to prepare a drawing manually. Today, however, most professional drafters use computer-aided drafting (CAD) systems to prepare drawings. Although drafters use CAD extensively, it is only a tool. Drafters and tradespersons still need knowledge of traditional drafting tools and techniques.
1: Describe the drafting tools and materials used in drawing plans
Drafting tools are needed to lay out the different shapes and lines used to create drawings and sketches. Basic knowledge of the available tools and how to use them will assist you in your drawing.
Drafting board or table
The drafting board is an essential tool. Paper will be attached and kept straight and still, so the surface of the drafting board must be smooth and true, with no warps or twists. The surfaces of most drafting boards are covered with vinyl because it is smooth and even.
The drafting board or table should have two parallel outside working edges made of either hardwood or steel.
Most drafting table tops can be set at different heights from the floor and at any angle from vertical to horizontal. Other drafting tables may not have the same adjustments and may be limited to being raised only from horizontal to a low slope.
To reduce back strain, use an adjustable drafting stool when working at a drafting table. Tables or boards should be a minimum of 1.2 m (4 ft.) in width and 0.9 m (3 ft.) in height.
T-square
The fixed head T-square is used for most work. It should be made of durable materials and have a transparent edge on the blade. To do accurate work, the blade must be perfectly square and straight, which should be checked regularly.
The T-square is used to draw horizontal lines and align other drawing instruments. If you are right-handed, you hold it tight against the left edge of the drawing board and move it up and down as required. When you make close adjustments, your fingers should be on top of the square, and you should use your thumb to control the T-square’s movement, Figure \(1\).
When drawing horizontal lines, incline your pencil in the direction you are drawing the line. Hold the pencil point as close as possible to the blade. Roll the pencil between your fingers to prevent the point from becoming flat on one side.
Triangle
A triangle (set-square) is made of clear plastic. Some triangles have rabbeted edges (Figure \(2\)) so that when you draw lines, the corner of the edge is set away from the paper to help prevent smudges and ink blotches.
Triangles are available in 45°-90°-45° or 30°-60°-90° combinations. For most work, triangles should be about 200 mm to 250 mm (8” to 10”) long. Triangles should be stored flat to prevent warping and not stored underneath other objects to prevent any pressure from causing them to deform.
Check a triangle for accuracy by drawing a perpendicular line, then reversing the triangle and drawing another perpendicular line (Figure \(3\)).
Triangles are used to draw vertical lines and other lines at set angles. Rest the triangle on the T-square blade and slide it along the blade to the desired location. Draw the full length of the vertical line in one pass if possible. Hold the blades of the T-square and the triangle together to prevent movement when you are drawing, and hold the pencil point as close as possible to the triangle. You can also draw 15° and 75° angles by using both a 45°-90°-45° and a 30°-60°-90° in combination. Figure \(4\) shows how triangles are placed to draw angles that are every multiple of 15°.
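The claim that the two standard triangles yield every multiple of 15° can be checked by enumerating sums and differences of their angles. This short script is only a verification sketch:

```python
# Combining a 45-90-45 and a 30-60-90 triangle lets you add or
# subtract their angles; enumerate what is reachable.
base = {30, 45, 60, 90}
angles = set(base)
for a in base:
    for b in base:
        angles.add(a + b)
        if a > b:
            angles.add(a - b)
angles = sorted(x for x in angles if 0 < x < 180)
print(angles)  # [15, 30, 45, 60, 75, 90, 105, 120, 135, 150]
```

For example, 15° comes from 45° minus 30°, and 75° from 45° plus 30°, matching the combinations shown in Figure \(4\).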
Protractor
A protractor, Figure \(5\), is an instrument used to measure angles. It is typically made of transparent plastic or glass. Protractors can be used for checking and transferring angles to and from a drawing sheet.
Drafting machine
A drafting machine (Figure \(6\)) is a device that is mounted to the drawing board. The drafting machine replaces the T-square and triangles, as it has rulers with angles that can be precisely adjusted with a controlling mechanism. A drafting machine allows easy drawing of parallel lines over the paper. The adjustable angle between the rulers allows the lines to be drawn at various accurate angles. The rulers are replaceable and can be replaced with scale rulers. Rulers may also be used as a support for separate special rulers and letter templates.
Drawing pencils
Both wood and mechanical pencils are used for drafting (Figure \(7\)). Manufacturers grade drawing pencils using numbers and letters. These range from 6B (very soft and black) to 9H (the hardest). From 6B, the pencils progress through 5B, 4B, 3B, 2B, B, and HB, and then to F, the medium grade. After that, they move to the harder graphite: H, 2H, 3H, 4H, 5H, 6H, 7H, 8H, and finally 9H. The softer grades are used for sketching and rendering drawings. The harder grades are used for instrument drawings. Mechanical pencils do not require sharpening and are made to hold leads (they are actually made of graphite) that are bought separately. Thin-lead mechanical pencils, with leads as small as 0.5 mm, are available in different grades of lead. Most draftspersons use four or five different mechanical pencils with a different lead in each. The pencils come in different colors, so it is easy to track which lead is in each.
Erasers and erasing shields
The best eraser to use on drawings is either a soft pink eraser with beveled ends or a white plastic eraser. Electric rotary erasers are also available. They permit easy erasure of small errors without erasing adjacent lines. A metal erasing shield helps to confine erasures to the desired area. Erasing shields are made from very thin stainless steel and have holes of various shapes to accommodate the sections to be erased. Figure \(8\) shows two erasers and an erasing shield.
Templates
Templates (Figure \(9\)) are available for many different trades. Templates incorporate cut-outs of symbols and fixtures commonly used in that trade. These cut-outs make it easy to trace shapes onto drawing paper.
French curves and splines
A French curve (Figure \(10\)) is a plastic template designed to help you draw curves. The French curve contains many different curves, but each is represented over a very short distance only. One radius of the curve blends into another radius. It takes a lot of practice to use French curves effectively.
A spline or flexible curve (Figure \(11\)) can be used instead of a French curve to draw most curves. A spline is a plastic or rubber rod that is reinforced with metal. To use a spline, bend it to the shape of the curve you need. The design of the spline lets you hold a pencil against an edge and draw an accurate line without smudging. A spline cannot be used to draw curves with a very short radius because the spline will not bend tightly.
Compass
A compass can be used for drawing circles, bisecting lines, or dividing angles. For very large circles, you can use a beam compass. The four types of compasses are shown in Figure \(12\) through Figure \(14\). Most compasses can be fitted with leads, pens, or points.
When using the compass, tilt it in the direction of the line, as shown in Figure \(13\).
Dividers
Dividers (Figure \(16\)) are used for transferring dimensions from a drawing to a measuring device such as a ruler or scale. They are also used when scribing directly on material like metal.
Dusting cloth or brush
It is essential to keep your drawings and drafting surface clean. When equipment gets dirty from the lead pencils, you should clean it regularly so that it does not smudge your drawings. Any soft, clean cloth is suitable. You may want to wash your board occasionally with a spray cleaner.
Use a brush like the one in Figure \(17\) to clean your table before placing paper down and sweep away any debris as you draw. If you use your hand to brush, you could leave marks on the paper. After sharpening a pencil, wipe off any dust clinging to the pencil’s point to prevent smudging.
Scale rulers
Scale rulers let you draw diagrams at a reduced scale. They also let you obtain dimensions from a scaled drawing. Scale rulers come in various types to meet the requirements of many kinds of work. Most scale rulers have three edges and six different scales. The scales are read from either end of the rule. A typical combination of metric scales is 1:20, 1:50, 1:100, 1:25, 1:75, and 1:125.
Because of the decimal basis of metric measurements, metric scale rulers are both applicable and easy to use at any scale. Figure \(18\) shows the two scales from both ends of the same side.
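What a reduction scale does can be sketched numerically (the function name is invented for illustration): at 1:50, each real millimetre is drawn as 1/50 of a millimetre on paper, so a 3.6 m wall is drawn 72 mm long.

```python
# Illustrative sketch of a metric reduction scale: divide the real
# dimension by the scale denominator to get the drawn dimension.
def drawn_mm(real_mm, scale_denominator):
    """Length on paper (mm) for a real length at a 1:n reduction scale."""
    return real_mm / scale_denominator

print(drawn_mm(3600, 50))  # a 3.6 m wall drawn at 1:50 -> 72.0 mm
```

Reading a scale ruler runs the same arithmetic in reverse: multiply a measured drawing length by the scale denominator to recover the real dimension.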
Imperial scale rulers may be an architect’s ruler, a mechanical engineer’s ruler, or a civil engineer’s ruler (Figure 17). The architect’s scale ruler is the most common and is in inches and fractions of inches. A mechanical engineer’s scale ruler comes in inches and decimals of inches. A civil engineer’s scale ruler comes in feet and decimals of feet.
The most common support for drawing is paper. Even though the original creative surface has changed from the drafting table to the computer screen, on the work site drawings are still primarily in printed form.
Drawing paper
There is a wide variety of drawing paper available in many sizes and of different qualities.
Good quality drawing paper is acid-free and will not turn yellow with age. Light-colored drawing papers are available in pale yellow or buff, but these should be used only when it is not necessary to make copies.
Tracing paper
Tracing paper, which is transparent, can be used to make copies of drawings. It is thin enough to allow the light of photocopy machines to shine through the unmarked areas, and only the lines and figures will block the light. Materials used for tracing include tracing paper, vellum, tracing cloth, glass cloth, and polyester film with a matte finish.
Standard paper sizes
Paper sizes typically comply with one of two different standards: ISO (world standard) or ANSI/ASME Y14 (American).
The standard ISO series of paper sizes are as follows:
• A0 841 mm × 1189 mm
• A1 594 mm × 841 mm
• A2 420 mm × 594 mm
• A3 297 mm × 420 mm
• A4 210 mm × 297 mm
• A5 148 mm × 210 mm
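The A series above follows a halving rule: each size is the previous one cut in half along its long edge, rounded down to whole millimetres. A short sketch reproduces the list:

```python
# Generate the ISO A series from A0 (841 x 1189 mm): each size is
# the previous one halved along its long edge (floored to whole mm).
def a_series(n):
    """Return (width, height) in mm for ISO sizes A0 through An."""
    sizes = [(841, 1189)]
    for _ in range(n):
        w, h = sizes[-1]
        sizes.append((h // 2, w))
    return sizes

for i, (w, h) in enumerate(a_series(5)):
    print(f"A{i}: {w} mm x {h} mm")
```

This is why two A4 sheets laid side by side make an A3, and so on up the series.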
The standard ANSI/ASME series of paper sizes is as follows:
• E 34 inch × 44 inch
• D 22 inch × 34 inch
• C 17 inch × 22 inch
• B 11 inch × 17 inch
• A 8.5 inch × 11 inch
The 8 1/2" × 11" standard letter paper corresponds to 216 mm × 279 mm. You can buy precut sheets that have a border and a preprinted title block in the lower right-hand corner. These are available in many standard sizes.
If the paper you use does not have a border and title block, you will have to draw them in. The left-hand border should be wider than the right-hand border and should be at least 50 mm wide to allow room for the prints to be bound. Figure \(1\) shows a title block with suitable dimensions added.
Paper rolls
Many grades of paper rolls are available in different widths that can be cut to any length required.
Drafting or masking tape
Use drafting or masking tape to hold the paper on the drafting surface. The tape should be attached at the corners to hold the sheet firmly stretched with no wrinkles. Only short pieces of tape are required.
1.3: Computer drafting printing
Computer drafting programs are used effectively for all manner of drafting and have virtually replaced manual drafting. Small size computer-generated drawings can be printed on normal computer printers. However, larger drawings require a plotter. Older plotters used pencils, pens, or felt pens, but the new plotters are laser-based or jet printers and are capable of multiple colors. They are made to print all the sizes of drawings. Plotters also print well on vellum and some other non-paper media.
1.E: Self Test 1
Self-Test 1
1. What are most drafting tables covered with?
1. Vinyl
2. Wood
3. Metal
4. Plastic wrap
2. Drafting tables are adjustable in height and angle to the floor.
1. True
2. False
3. What are T-squares used for?
1. Drawing angled lines
2. Drawing vertical lines
3. Setting paper on a table
4. Drawing horizontal lines
4. Why might a set square have rabbeted edges?
1. To help you prevent smudges
2. To help keep your pencil sharp
3. To help keep your pencil aligned
4. To allow you to draw straight lines
5. Checking a triangle should be done periodically.
1. True
2. False
6. What is a set square used for?
1. Drawing circles
2. Drawing curved lines
3. Drawing vertical and angled lines
4. Drawing horizontal and angled lines
1. When using a 45°-90°-45° and a 30°-60°-90° triangle, angles can be drawn every 10°.
1. True
2. False
2. What is a protractor used for?
1. Measuring lines
2. Measuring angles
3. Drawing angled lines
4. Drawing straight lines
3. What kind of line is drawn to check a triangle?
1. Straight
2. Parallel
3. Oblique
4. Perpendicular
4. What is a compass used for?
1. Drawing angled lines
2. Drawing straight lines
3. Drawing arcs and circles
4. Drawing irregular curves
5. What is an erasing shield used for?
1. To erase mistakes
2. To hold the eraser
3. To erase in a desired area
4. To prevent the need for erasing
6. A spline is a plastic or rubber rod reinforced with metal used for drawing curves.
1. True
2. False
1. What is the tool called a divider used for?
1. Drawing circles
2. Drawing a diameter
3. Scribing arcs on metal
4. Drawing arcs and curves
2. What is the purpose of a scale ruler?
1. To draw straight lines
2. To enlarge the scale of a drawing
3. To create drawings at a reduced scale
4. To convert between imperial and metric measures
The purpose of engineering drawings is to convey objective facts, whereas artistic drawings convey emotion or artistic sensitivity in some way.
Engineering drawings and sketches need to display simplicity and uniformity, and they must be executed with speed. Engineering drawing has evolved into a language that uses an extensive set of conventions to convey information very precisely, with very little ambiguity.
Standardization is also very important, as it aids internationalization; that is, people from different countries who speak different languages can read the same engineering drawing and interpret it the same way. To that end, drawings should be as free of notes and abbreviations as possible so that the meaning is conveyed graphically.
2: Describe lines lettering and dimensioning in drawings
Standard lines have been developed so that every drawing or sketch conveys the same meaning to everyone. In order to convey that meaning, the lines used in technical drawings have both a definite pattern and a definite thickness. Some lines are complete, and others are broken. Some lines are thick, and others are thin. A visible line, for example, is used to show the edges (or “outline”) of an object and to make it stand out for easy reading. This line is made thick and dark. On the other hand, a center line, which locates the precise center of a hole or shaft, is drawn thin and made with long and short dashes. This makes it easily distinguishable from the visible line.
When you draw, use a fairly sharp pencil of the correct grade and try to maintain an even, consistent pressure to make it easier for you to produce acceptable lines (Figure \(1\)). Study the line thicknesses (or “line weights”) shown in Figure \(2\) and practice making them.
In computer drafting, the line shape remains the same, but line thickness may not vary as it does in manually created drawings. Some lines, such as center lines, may not cross in the same manner as in a manual drawing. For most computer drafting, line thickness is not important.
To properly read and interpret drawings, you must know the meaning of each line and understand how each is used to construct a drawing. The ten most common are often referred to as the “alphabet of lines.” Let’s look at an explanation and example of each type.
Object lines
Object lines (Figure \(3\)) are the most common lines used in drawings. These thick, solid lines show the visible edges, corners, and surfaces of a part. Object lines stand out on the drawing and clearly define the outline and features of the object.
Hidden lines
Hidden lines (Figure \(4\)) are used to show edges and surfaces that are not visible in a view. These lines are drawn as thin, evenly spaced dashes. A surface or edge that is shown in one view with an object line will be shown in another view with a hidden line.
Center lines
Center lines (Figure \(5\)) are used in drawings for several different applications. The meaning of a center line is normally determined by how it is used. Center lines are thin, alternating long and short dashes that are generally used to show hole centers and center positions of rounded features, such as arcs and radii. Arcs are sections of a circle, and radii are rounded corners or edges of a part. Center lines can also show the symmetry of an object.
Dimension and extension lines
Dimension and extension lines (Figure \(6\)) are thin, solid lines that show the direction, length, and limits of the dimensions of a part. Dimension lines are drawn with an arrowhead at both ends.
Extension lines are drawn close to, but never touching, the edges or surface they limit. They should be perpendicular, or at right angles, to the dimension line. The length of extension lines is generally suited to the number of dimensions they limit.
Leader lines
Leader lines (Figure \(7\)) show information such as dimensional notes, material specifications, and process notes. These lines are normally drawn as thin, solid lines with an arrowhead at one end. They are bent or angled at the start but should always end horizontally at the notation. When leader lines reference a surface, a dot is used instead of an arrowhead.
Note that the symbol ø is used to indicate a diameter rather than the abbreviation “DIA.” The number that immediately follows this symbol is the diameter of the hole, followed by the number of holes that must be drilled to that dimension.
Phantom lines
Like center lines, phantom lines (Figure \(8\)) are used for several purposes in blueprints. Phantom lines are used to show alternate positions for moving parts and the positions of related or adjacent parts and to eliminate repeated details. Phantom lines are drawn as thin, alternating long dashes separated by two short dashes.
Cutting plane lines
Cutting plane lines (Figure \(9\)) show the location and path of imaginary cuts made through parts to reveal internal details. In most cases, sectional views (views that show complicated internal details of a part) are indicated by using a cutting plane line. These lines are thick, alternating long dashes separated by two short dashes. The arrowheads at each end show the viewing direction of the related sectional view. The two main types of cutting plane lines are straight and offset.
Section lines
Section lines, also known as sectional lining (Figure \(10\)), indicate the surfaces in a sectional view as they would appear if the part were actually cut along the cutting plane line. These are solid lines that are normally drawn at 45-degree angles. Different symbols are used to represent different types of materials.
Break lines
Break lines are drawn to show that a part has been shortened to reduce its size on the drawing. The two variations of break lines common to blueprints are the long break line and the short break line (Figure \(11\)). Long break lines are thin solid lines that have zigzags to indicate a break. Short break lines are thick, wavy solid lines that are drawn freehand. When either of these break lines is used to shorten an object, you can assume that the section removed from the part is identical to the portions shown on either side of the break.
The letters and numbers on a drawing or sketch are as important as the lines. Scribbled, smudged, or badly written letters and numbers can become impossible to read. This may lead to time-consuming and costly errors. Lettering is necessary to describe:
• the name or title of a drawing
• when it was made
• the scale
• who sketched it
• the dimensions
• the special notations that describe the size
• the materials to be used
• the construction methods
The American Standard Vertical letters, Figure \(1\), have become the most accepted style of lettering used in the production of manual drafting. This lettering is a Gothic sans serif script, formed by a series of short strokes.
Font styles and sizes may vary in computer drafting. Note that all letters are written as capital (upper case) letters. Practice these characters, concentrating on forming the correct shape. Remember that letters and numbers must be black so that they will stand out and be easy to read. Lettering and figures should have the same weight and darkness as hidden lines.
Figure \(2\) shows a simple drawing. Notice that the dimensions are given between arrows that point to extension lines. By using this method, the dimensions do not get in the way of the drawing. One extension line can be used for several dimensions. Notice also that the titles require larger letter sizes than those used for dimensions and notations. It is important that the title and sketch number stand out, as shown in Figure \(2\). When you begin lettering, you may wish to use very light lettering guidelines to ensure uniformity in lettering size and alignment.
2.3 Abbreviations
Abbreviations are commonly used to help simplify a drawing and conserve space. Although many fields share common abbreviation conventions, there are also field- or trades-specific conventions that you will see as you become more specialized. Here is a common list of abbreviations that are used on drawings. Each trade will have specific abbreviations from this list, and therefore a set of drawings will usually include an abbreviation key.
AB anchor bolt HLS holes REF reference
ABT about HSS hollow structural steel REQ'D required
AUX auxiliary ID inside diameter REV revision
BC bolt circle IN inches RF raised face
BBE bevel both ends INT internal RH right hand
BCD bolt circle diameter ISO International Standards Organization SCH schedule
BOE bevel one end KP kick plate SI International System of Units
BE both ends LH left hand SPECS specifications
BL baseline LAT lateral SQ square
BM benchmark LR long radius SM seam
Btm bottom LG long SMLS seamless
BP base plate MB machine bolt S/S seam to seam
B/P blueprint MS mild steel SO slip on
BLD blind MIN minimum SEC section
C/C center to center MAX maximum STD standard
COL column MAT'L material SS stainless steel
CPLG coupling MISC miscellaneous SYM symmetrical
CS carbon steel NC national course T top
C/W complete with NF national fine T&B top and bottom
CYL cylinder NO number T&C threaded and coupled
DIA diameter NOM nominal THD threaded
DIAG diagonal NTS not to scale TBE threaded both ends
DIM dimension NPS nominal pipe size TOE threaded one end
DWG drawing NPT national pipe thread THK thick
EA each O/C on center TOL tolerance
EL elevation OA overall TOC top of concrete
EXT external OD outside diameter TOS top of steel
F/F face to face OR outside radius TYP typical
FF flat face OPP opposite U/N unless noted
FLG flange PAT pattern VERT vertical
FW fillet weld PBE plain both ends WD working drawing
Ga gauge POE plain one end WP working point
Galv galvanized PSI pounds per square inch WT weight
HVY heavy PROJ project W/O without
HH hex head RD running dimension XH extra heavy
HR hot rolled R or RAD radius XS extra strong
HT heat treatment RND round
A good sketch of an object is one that you can use as a blueprint to manufacture the object. Your sketch must show all the necessary dimensions of the part, locate any features it may have (such as holes and slots), give information on the material it is to be made from, and if necessary, stipulate the processes to be used in the manufacture of the object.
Three principles of dimensioning must be followed:
1. Do not leave any size, shape, or material in doubt.
2. To avoid confusion and the possibility of error, no dimension should be repeated on any sketch or drawing.
3. Dimensions and notations must be placed on the sketch where they can be clearly and easily read.
Consider Figure \(1\) and note whether these three dimensioning principles have been followed.
Although the dimensions and notations are clear and easy to read in Figure \(1\), the following points should be made:
• Leg and rail sizes have not been shown.
• The thickness of the top has not been given.
• The material has not been given as a notation.
• The 600 dimension has been repeated.
• The type of finish to be used has not been given.
• Note 2 is redundant.
The sketch of the shop table is far from complete, and the table could not be made without a lot of guesswork. Figure \(2\), on the other hand, shows a completed sketch that, along with the necessary notes and dimension information, can be readily used for construction purposes.
Rules of dimensioning
For most objects, there are three types of dimensions:
• size dimensions
• location dimensions
• notation dimensions
Figure \(3\) illustrates the difference between size and location dimensions. (S = size dimension and L = location dimension).
Size dimensions are necessary so that the material size of the object can be determined. Location dimensions are necessary so that parts, holes, or other features can be positioned in or on the object. Notation dimensions describe the part, hole, or other feature with a short note, such as the “ø20 2 holes” notation (see Figure \(3\)). Keep these points in mind:
• Keep all dimension lines at least 10 mm (3/8") clear of object lines wherever possible.
• Try to group related dimensions rather than scattering them.
• Try to keep dimensions off the views themselves.
• Separate one line of dimensions from another line of dimensions or from a notation by a space of at least 10 mm (3/8").
• Leave a space of approximately 3 mm (1/8") between the object outline and the beginning of any extension line.
• Keep arrowheads slim and neat.
• Never dimension to a hidden line.
• Draw leader lines at an angle when intersecting object lines to avoid confusing them with extension lines.
Figure \(4\) illustrates good placement of dimensions and notations. Note the placement of extension lines and the use of center lines to locate features such as holes. Also, note the shape and size of arrowheads.
Dimensioning systems
Two systems are used for dimensioning drawings. They are aligned and unidirectional systems. Figure \(5\) shows examples of both systems. As you can see, the aligned system requires that you turn the drawing on its side, whereas the unidirectional system may be read from the normal reading position. For most drawings, the unidirectional system is preferred, as it is easier to read; however, architectural drawings still use the aligned system.
Systems of measurement
You may be required to sketch or read drawings constructed with either metric (SI) or imperial dimensions. You may also encounter drawings that are dual-dimensioned and contain both systems of measurement on the same drawing.
SI system of measurement
The SI system of measurement has become the official standard in Canada. It is common practice on shop drawings to express all metric dimensions in millimeters. Figure \(6\) shows a detail drawing of a connector arm using metric measurements. All metric drawings should contain a note specifying that all dimensions are in millimeters.
Imperial system of measurement
An imperial drawing may use the decimal-inch system, the fractional-inch system, or feet and inches.
• In the decimal-inch system, very accurate dimensions for items such as machine parts are expressed as decimals of an inch, such as 0.005". In words, this reads as five one-thousandths of an inch.
• In the fraction-inch system, dimensions for things such as steel and lumber sizes are expressed as inches and fractions of an inch from as small as 1/64" (Figure \(7\)). Most drawings that are dimensioned in the imperial system will use the fraction-inch system.
• In the feet-inch system (Figure \(8\)), the dimensions of large structures such as machine frames and buildings are expressed in feet and inches, such as 2'-6" (two feet, six inches).
Dimensioning orthographic sketches
The following are rules and procedures for dimensioning single- and multi-view sketches:
• Place dimensions on views that show parts of features as solid outlines. Avoid dimensioning hidden lines wherever possible.
• Try to keep dimensions between views. Leave adequate room between views when you begin your sketch.
• Keep the smallest dimensions nearest to the object outline.
• Diameters in metric measurement should be denoted on a sketch using the symbol ø (e.g., ø20 – 2 holes). A radius should be denoted using the letter R (e.g., R 25).
• Diameters in imperial measurements may be denoted on a sketch by the symbol ø or the abbreviation DIA (e.g., 3" ø DRILL or 4½" DIA). A radius may be denoted using the letter R or the abbreviation RAD (e.g., 3" R or 6½" RAD).
• Arrows carrying notations should always point toward the center of circular objects.
• Arrows should always point toward a circle center when dimensioning a diameter and away from the center when dimensioning a radius.
2.E: Self-Test 2
Self-Test 2
1. Which line in a drawing should be the darkest and thickest?
1. Center line
2. Hidden line
3. Object line
4. Dimension line
2. This line in a drawing is a broken line of alternating short and long dashes.
1. Center line
2. Hidden line
3. Phantom line
4. Compression line
3. What is the name of a line in a drawing that shows a hidden feature?
1. Buried line
2. Missing line
3. Hidden line
4. Concealed line
4. A break line shows where part of an object in a drawing has been removed.
1. True
2. False
5. A drawing should have all dimensions shown in every view.
1. True
2. False
6. Which line is used to show notes or specifications in a drawing?
1. Leader line
2. Object line
3. Extension line
4. Phantom line
1. How are alternate positions of moving parts shown in a drawing?
1. With a break line
2. With a hidden line
3. With an object line
4. With a phantom line
2. What does a sectional view in a drawing normally show?
1. Outside of a part
2. Inside dimensions
3. Internal holes and slots
4. Internal features of a part
3. What do the arrows that locate a sectional view in a drawing indicate?
1. Side the part is cut on
2. Internal holes and slots
3. Direction of the standard view
4. Direction of observation when the section is drawn
4. In the drawing below, indicate whether the dimensions shown by letters D2 and D4 are size dimensions, no dimensions, location dimensions, or notation dimensions.
1. Size dimensions
2. No dimensions
3. Notation dimensions
4. Location dimensions
Learning Task 3
Use scale rulers to determine actual dimensions from drawings
Scale drawings are accurate and convenient visual representations made and used by engineers, architects, and people in the construction trades. The accuracy is achieved because the drawing is proportional to the real thing. The convenience comes from the size of the drawing. It is large enough to provide the desired detail but small enough to be handy.
The flexibility to draw proportionally in different sizes is provided by scales. For the purposes of representation, we will only be concerned with reduction scales. Reduction scales make the drawing smaller than the object. The kinds of rulers we will be discussing for making scaled drawings are the architect’s scale and the metric scale, both shown in Figure 1.
1. Architect’s and metric rulers
The scale of the drawing is always written on the drawing, unless the drawing is not drawn to scale. In the latter case, this will be indicated by the “not to scale” abbreviation (NTS). The scale is the ratio of the size of the drawing to the object. For drawings smaller than the object, the ratio is that of a smaller distance to a larger one.
The architect’s scales use ratios of inches to a foot. The most common architect’s scale used is 1/4 inch to the foot, written on drawings as:
Scale 1/4" = 1'-0"
This means that a line 1/4" long on the drawing represents an object that is one foot long. At the same scale, a line 1½" long represents an object 6' long, because 1½" contains 6 quarter-inches.
Metric scale ratios use the same units in both ratio terms, resulting in an expression of how many times smaller than the object the drawing is. For example, the standard metric scale ratio that corresponds approximately to ¼" = 1'-0" is written on drawings as "Scale 1:50."
This means that the object is 50 times as large as the drawing, so that 50 mm on the object is represented by 1 mm on the drawing. For another example, 30 mm on the drawing represents 50 × 30 mm = 1500 mm (or 1.5 metres) on the object.
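Both worked examples above reduce to simple arithmetic: divide by the architect's scale, or multiply by the metric ratio. The function names in this sketch are my own shorthand, not drafting terminology:

```python
def feet_from_drawing(drawing_inches, inches_per_foot):
    """Architect's scale: inches measured on the drawing -> feet on the object."""
    return drawing_inches / inches_per_foot

def object_mm(drawing_mm, ratio):
    """Metric scale 1:ratio -> millimeters on the object."""
    return drawing_mm * ratio

# 1 1/2" at 1/4" = 1'-0" represents 6 feet (six quarter-inches).
print(feet_from_drawing(1.5, 0.25))  # -> 6.0
# 30 mm at 1:50 represents 1500 mm (1.5 metres).
print(object_mm(30, 50))             # -> 1500
```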
Figure 2 lists the scale ratios used for building plans and construction drawings in both metric and the approximate equivalent architectural scale ratios.
Type of drawing, with common metric ratios, approximate imperial equivalents and ratios, and use:
• Site plan: 1:500 (1" = 40'-0", i.e., 1:480) and 1:200 (1/16" = 1'-0", i.e., 1:192). Used to locate the building, services, and reference points on the site.
• Sketch plans: 1:200 (1/16" = 1'-0", i.e., 1:192). Used to show the overall design of the building and to indicate the juxtaposition of the rooms and locate the positions of piping systems and components.
• General location drawings: 1:100 (1/8" = 1'-0", i.e., 1:96) and 1:50 (1/4" = 1'-0", i.e., 1:48).
• Construction details: 1:20 (1/2" = 1'-0", i.e., 1:24), 1:10 (1" = 1'-0", i.e., 1:12), 1:5 (3" = 1'-0", i.e., 1:4), and 1:1 (full size). Used to show the detail of system components and assemblies.
2. Preferred scales for building drawings
3.2: Metric Scales
A triangular metric scale is similar to the architectural scale in that it has six edges, but it has only one scale ratio per edge. The ratio is marked at the left end of the scale. For example, the scale of 1:50 means that 1 mm on the drawing represents 50 mm on the object. This means that the object is 50 times larger than the drawing of it. An object 450 mm long would be represented by a line 9 mm long (450 mm/50).
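The 450 mm example can be checked in the other direction, from object size to line length on the drawing. This is a sketch with an assumed helper name, not standard tooling:

```python
def line_length_mm(object_size_mm, ratio):
    """Metric scale 1:ratio -> length of the line on the drawing, in mm."""
    return object_size_mm / ratio

# A 450 mm object at 1:50 is drawn as a 9 mm line (450 / 50).
print(line_length_mm(450, 50))  # -> 9.0
```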
Figure 7 shows one of the three sides of a metric scale. The scale labelled 1:50 is read from left to right, from 0 to 15 m. The 1:5 scale (on the bottom) can also be read from left to right (0 to 600 mm) by turning the scale around.
1. One side of a metric ruler
2. Metric scales marked at 250 mm
3.3: Obtain dimensions from drawings
The best way to get exact dimensions from drawings is to use the explicit dimensions (in millimeters or in feet and inches) written between the dimension lines. Any measurements that you need should be somewhere on the drawings. Drawings normally only give each dimension once. If there are a number of parallel lengths, only one will have a measurement. To find the dimension you need, you may need to refer to other views or you may have to add or subtract other dimensions.
Measuring lines on a drawing to determine the measurement is not an accurate way to extract dimensions. This is because the drawing is only a representation and may not be exact. Photocopies of drawings may not be to the scale of the original.
The scale of the drawing and your accuracy in measuring will lead to inaccuracies. For example, if the scale of a drawing is 1/8" = 1'-0", an error of 1/32" in measuring the plan amounts to 3" of error in the object measured. Detail drawings permit more exactness because they are proportionately larger. Details, however, often require more exactness and usually contain any needed dimensions.
When accuracy is not required and approximate dimensions are adequate, measuring plans is a quick method of taking off material for estimating the cost of a job. In such cases, 10% is usually added for cut-off and waste allowance.
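The two numerical claims above can be made concrete: a small measuring error is amplified by the scale factor, and quantities taken off by measuring plans typically carry a 10% waste allowance. The helper names here are mine:

```python
def object_error_inches(drawing_error_in, inches_per_foot):
    """Amplify an error measured on the plan to inches on the object.

    At a scale of inches_per_foot = 1'-0", one drawing inch represents
    12 / inches_per_foot inches on the object.
    """
    return drawing_error_in * 12 / inches_per_foot

# A 1/32" slip at 1/8" = 1'-0" amounts to 3" of error on the object.
print(object_error_inches(1 / 32, 1 / 8))  # -> 3.0

def with_waste(quantity, allowance=0.10):
    """Add the usual 10% cut-off and waste allowance when estimating."""
    return quantity * (1 + allowance)

print(round(with_waste(100), 1))  # -> 110.0
```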
If you use the scale of the drawing, it will be simple to read off the measurements. However, in the field you will often need approximate measurements and the only measuring tool at hand will be a measuring tape.
A steel pocket tape measure has a movable hook on the end that allows accurate measuring both when butted against a surface or when hooked on the end of an object (Figure 9). The end of the flexible tape itself is shortened to allow for the hook.
1. Tape measure with movable hook
2. Measuring metric drawings
Scale: 3/32, 1/8, 3/16, 1/4, 3/8, 1/2, 3/4, 1, 1 1/2 (3/2), 3
Reciprocal: 32/3, 8, 16/3, 4, 8/3, 2, 4/3, 1, 2/3, 1/3
3. Reciprocals of standard scales
4. Reading lengths of piping runs
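The reciprocals tabulated above give a one-step conversion: multiply the inches measured on the plan by the scale's reciprocal to get feet on the object. The dictionary below is simply that table transcribed (a sketch, not standard tooling):

```python
from fractions import Fraction

# Reciprocals of the standard architect's scales (feet per measured inch).
RECIPROCAL = {
    "3/32": Fraction(32, 3), "1/8": 8, "3/16": Fraction(16, 3),
    "1/4": 4, "3/8": Fraction(8, 3), "1/2": 2, "3/4": Fraction(4, 3),
    "1": 1, "1 1/2": Fraction(2, 3), "3": Fraction(1, 3),
}

def feet_on_object(measured_inches, scale):
    """Convert a plan measurement (inches) to object feet at a given scale."""
    return measured_inches * RECIPROCAL[scale]

# 4 1/2" on a 1/4" = 1'-0" plan represents 18 feet.
print(feet_on_object(4.5, "1/4"))    # -> 18.0
# 6 3/8" on a 1/8" = 1'-0" plan represents 51 feet.
print(feet_on_object(6.375, "1/8"))  # -> 51.0
```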
Self-Test 3
1. Scale rulers are available in both imperial and metric.
1. True
2. False
2. Which scale ruler would be most likely to have a ¼" to 1' scale on it?
1. Metric scale ruler
2. Architect’s scale ruler
3. Civil engineer’s scale ruler
4. Mechanical engineer’s scale ruler
3. How many scale ratios per edge do metric scale rulers have?
1. 1
2. 2
3. 3
4. 4
4. What is the best way to get exact dimensions from a drawing?
1. Measure using a tape measure.
2. Exact dimensions aren’t important.
3. Scale it with your combination square.
4. Use the dimension written between the dimension lines.
5. If a line measures 4½", what is the equivalent in 1/4"=1' scale?
1. 9"
2. 18'
3. 9'
4. 18"
6. If a line measures 6 3/8", what is the equivalent in 1/8" = 1' scale?
1. 27"
2. 48'
3. 4'3"
4. 51'
1. What is the measurement of the line below?
1. 68'
2. 6'3"
3. 80'
4. 12'6"
2. What is the measurement of the line below?
1. 1'3"
2. 1'6"
3. 27'3"
4. 27'6"
3. What is the measurement of the line below?
1. 16'2"
2. 16'4"
3. 16'6"
4. 16'9"
1. What is the measurement of the line below?
1. 3'4"
2. 3'7"
3. 9'4"
4. 13'7"
2. What is the measurement of the line below?
1. 35 mm
2. 3.5 m
3. 35 m
4. 350 m
3. What is the measurement of the line below?
1. 0.19 m
2. 1.9 m
3. 19 m
4. 190 m
1. What is the measurement of the piping run below in 1/2"=1' scale?
1. 8'6"
2. 10'6"
3. 17'
4. 34'
Architect’s (imperial) scales
Traditional architectural measurements of length are written very precisely in feet and inches using the appropriate symbols for feet and inches separated by a dash (e.g., 4'-3½" and 7'-0"). This is the way that all imperial measurements are written on construction drawings.
Listed below are the scales found on the architect’s triangular scale ruler.
1. 3/32" =1'-0"
2. 3/16" = 1'-0"
3. 1/8" = 1'-0"
4. ¼" = 1'-0"
5. ¾" = 1'-0"
6. 3/8" = 1'-0"
7. 1" =1'-0"
8. ½" = 1'-0"
9. 1½" = 1'-0"
10. 3" = 1'-0"
11. 1" = 1" (full size—use the scale labelled 16)
Figure 3 shows one face of an architect’s imperial triangular scale ruler. There are two edges on each face and each edge contains two scales that run in opposite directions. At each end of an edge, a number or fraction indicates the distance in inches that represents one foot. The top edge is in eighths of an inch from left to right, and in quarters of an inch from right to left. Note that the 1/8" scale from 0 to the right end represents 95 feet, and the ¼" scale from 0 to the left end represents 47 feet.
1. One face of an architect’s ruler (NTS)
2. Units in an architect’s scale ruler (NTS)
3. Reading dimensions using an architect’s ruler (NTS)
4. Reading dimensions using an architect’s ruler (NTS)
4.1: Types of views used in drawings
Architectural drawings are made according to a set of conventions, which include particular views (floor plan, section, etc.), sheet sizes, units of measurement and scales, annotation, and cross-referencing.
4: Describe drawing projections
Types of views used in drawings
The two main types of views (or “projections”) used in drawings are:
• pictorial
• orthographic
Pictorial views
Pictorial views show a 3-D view of how something should look when completed. There are three types of pictorial views:
• perspective
• isometric
• oblique
Perspective view
A perspective view presents a building or an object just as it would look to you. A perspective view has a vanishing point; that is, lines that move away from you come together in the distance. For example, in Figure 1, we see a road and line of telephone poles. Even though the poles get smaller in their actual measurement, we recognize them as being the same size but more distant.
1. Perspective view
2. An isometric view
3. Oblique view of the object in Figure 2
4. Multi-view through a glass box
5. Box opened to produce orthographic views
6. Drawing with the glass box flattened out
7. Orthographic views of the object in Figure 2
8. Main floor plan of a house
9. Left elevation of house in Figure 8
10. Section A-A
11. Section A-A
4.E: Self Test 4
Self-Test 4
1. A perspective drawing is one form of which type of view?
1. Oblique
2. Pictorial
3. Isometric
4. Orthographic
2. At what angle should isometric drawings have horizontal lines drawn?
1. 15°
2. 30°
3. 45°
4. 60°
3. What do perspective drawings always have?
1. Scale
2. Dimensions
3. Hidden lines
4. Vanishing points
4. At what angles should oblique drawings have lines drawn?
1. 0°–15°
2. 15°–30°
3. 30°–45°
4. 45°–60°
5. Orthographic projection drawings are three-dimensional drawings.
1. True
2. False
6. What is a common name for a top view in an orthographic drawing?
1. Plan view
2. Down view
3. Ceiling view
4. Elevation view
1. In orthographic projection, how many views are most commonly shown?
1. 1
2. 2
3. 3
4. 4
2. In the diagrams below, match letters A to L with numbers 1 to 12.
3. What is a top view called in a construction drawing?
1. Plan view
2. Floor plan
3. Floor detail
4. Building plan
4. What are drawings called that show door and window locations, and other exterior finishes of a building?
1. Wall drawings
2. Front drawings
3. Exterior drawings
4. Elevation drawings
Interpreting drawings requires the ability to visualize and the ability to interpret what is being drawn and written. The drawing should be studied carefully before beginning any work. The reader should attempt to visualize what is being shown on the drawing and how it will be built. Mistakes can be made when the tradesperson does not take the time to become fully familiar with the drawing.
5: Interpret mechanical drawings
The layout of most drawings is similar in that the drawing format has some standard features or components. A typical drawing format will include some or all of these features:
• title block
• bill of materials or material list
• area where the job specifications are listed
• general notes
• reference drawing list
• revision chart
These drawing components are common for some types of drawings; however, other components may be used to show the necessary information for the complete design.
Each of the listed drawing components serves a specific purpose and contains information about the job and its specifications (Figure \(1\)).
The following information is a guideline to use each time a new drawing is observed.
Title block
The title block is located in the lower right corner of the drawing and is separated from the main drawing. The contents of the title block will vary from company to company and often differ from drawing to drawing by the same company. As a standard feature, a company will have its name and logo in the title block along with other standard company information, plus particular information such as the customer’s name.
The following information can be located in the title block; however, note that not all of the items listed below are necessarily included:
• job title
• name of the item to be fabricated or installed
• name of the customer
• name of the designing engineer or firm
• name or initial of draftsperson and checker
• date drawn
• drawing number
• contract number or job number
• number of revisions, if any, for the drawings
• scale of the drawing
Bill of materials
The bill of materials is usually located in the upper right corner of the drawings, above the title block. As with the title block, the bill of materials is separated from the rest of the drawing and is essentially a table with partitioned rows and columns showing:
• item number or mark number
• quantity
• material description
• material grade
• material weight
• remarks
Revision chart
The revision chart is often located to the left of the title block and is bordered off from the rest of the drawing. The revision chart, like the bill of materials, is divided into rows and columns. These columns identify the revision number and give a general description of the revision, who checked the revision, and the date of the revision. It is essential to make sure that you are working from the latest revision, as it is not uncommon for changes to be made from the original drawing.
Drawing specifications
The specifications section of a drawing is used to list all the design information of the item being built or installed. This section is often located to the left of the revision chart. If drawing space is a problem, it may be located elsewhere. A common location is the area below the bill of materials. (The contents of specifications are covered in more detail later.)
Drawing notes
Drawings will often contain two types of notes: general and specific. The general and specific notes should not be confused with the information found in the bill of materials, title block, revision chart, or the drawing specifications.
The general notes are usually located in the upper left corner of the drawing. A general note is information about the fabrication that refers to similar items or procedures throughout the drawing. Specific notes can be found anywhere on the drawing as needed and are most often written with a leader line pointing to the relevant part or area.
Reference drawings
A set of two or more drawings for a single job is called a drawing set. It is common practice to include a list of all of the drawings that make up the drawing set. This listing is referred to as the reference drawings for the project.
Often work being shown on one drawing requires you to look at or reference other drawings in the set. There is no specific area where the reference drawings are listed; however, two common locations are near the bottom of the print on the far left-hand side, and just below the bill of materials.
5.2: Specifications
Specifications
Specifications in North America form part of the contract documents that accompany and govern the construction of a building. Specifications are written descriptions of the materials and procedures that must be used in constructing a building or system.
These specifications translate working drawings into words to ensure that systems will be neither overdesigned nor underdesigned. They tell the contractor exactly which materials must be used.
Aside from serving as a manual on how to do the job, the book of specifications has another function: it is a legal document outlining each contractor’s obligations. These obligations may include the need to provide fire insurance, to pay for municipal inspections, or to complete the job by a certain deadline.
Specifications are divided into 50 divisions of construction information as defined by the Construction Specifications Institute’s (CSI’s) MasterFormat. Before 2004, MasterFormat consisted of 16 divisions. MasterFormat is the most widely used standard for organizing specifications and other written information for commercial and institutional building projects in the United States and Canada. It provides a master list of divisions, and section numbers and titles within each division, to follow in organizing information about a facility’s construction requirements and associated activities. Standardizing the presentation of such information improves communication among all parties involved in construction projects.
5.E: Self-Test 5
Self-Test 5
1. Where is the scale of a drawing found?
1. In the revisions
2. In the title block
3. In the specifications
4. In the bill of materials
2. The specifications of a drawing serve as a legal contract for the job.
1. True
2. False
3. Where on a drawing would information on material grade be found?
1. In the title block
2. In the specifications
3. In the revision chart
4. In the bill of materials
4. Specifications are a written description of a construction project.
1. True
2. False
5. What should the title block on a drawing always include?
1. Title
2. Date
3. Name of the draftsperson
4. All of the above
6: Creating drawings and sketches
Freehand sketching is a very useful skill that can be mastered with practice and by following a few guidelines. The ability to interpret drawings is complemented by the ability to sketch information from a drawing to take to your work location. Sketching is also a valuable tool when no drawing is available and you must communicate job information to someone else.
For freehand sketching, you require a pad of graph paper (8 ½ " × 11" sheets with a 5 mm or ¼" grid), a sharp HB pencil, and an eraser. Do not begin any sketch with a dull pencil.
Sketching technique
Sketching provides a quick and straightforward way to express ideas and communicate an object's shape and general size.
Sketching parallel lines
Start by drawing lines parallel to the paper's edges, such as a border line and title block. Use your finger as a guide when you draw along the grid line on the sketch pad (Figure \(1\)). Letting the end of your little finger run down the edge of the paper pad as you draw will steady your hand and make it easier to get a straight line.
Sketching non-parallel lines
When sketching lines that are not parallel to the sides of the paper, turn the paper around so that the line you wish to draw is either straight up and down in front of you or straight across the sheet of paper.
Drawing lines this way rather than at an angle across the sheet is much easier. Let the side of your little finger rest on the paper as you draw. This will help you steady your hand (Figure \(2\)).
Sketching a rectangle
Locate the corners of the rectangle first. Then place your paper in a comfortable position for sketching and sketch downward for vertical lines and left to right for horizontal lines. Use the grid lines as a guide to help keep your lines parallel and at 90 degrees to each other (Figure \(3\)).
Sketching a circle
First, locate the center of the circle (Figure \(4\)), and then very lightly box in the size of the circle (using the diameter as a guide), as in the top right. Sketch in the circle, one quarter at a time, as shown in the bottom row, left to right. You may find it necessary at first to add light points along the projected circumference to help guide you through each quarter. Remember to move your sketch pad to maintain a comfortable sketching position.
Sketching to approximate scale
The full-size square is on the left in Figure \(5\). The center square is half size, and the right square is quarter size. Note that the center and right squares are the same shape as the left square, only smaller.
When sketching freehand, your sketches should reflect the actual shapes of objects as much as possible. If you use grid paper, sketching to an approximate scale is not difficult. Assume that the object in Figure \(6\) is shown full size. As it is necessary to show all orthographic views on the same sheet of paper, the views must be scaled. Figure \(7\) shows the views at approximately one-half the original size.
Isometric sketches are helpful because they are easy to draw and clearly represent an object or system. This clarity comes from using directional lines to represent the three dimensions of length, width, and height, much like a picture.
Construction methods
The following steps explain how to draw an isometric cube. The three dimensions of length, width, and height are drawn along the isometric axes shown in Figure \(1\). The lengths of objects running parallel to these axes can be drawn to scale. Lines at other angles will not be to scale.
Draw a small star-shaped axis on the bottom corner of your grid paper. The sloping axes should be drawn at a 30-degree angle from the horizontal grid line. The vertical axis of the star indicates height (H) or depth (D), and the two sloping axes indicate the length (L) and the width (W) of the rectangle. The vertical axis can be used as a guide when making lines on your drawing. Notice we have labeled the points on the star in Figure \(2\). When drawing a stationary object, these labels can change depending on your desired view. The bottom two horizontal points indicate the view that is being drawn. In this case, we would be creating a front-right view.
Sketch the top of the block by drawing two lines, one parallel to L and one parallel to W (Figure \(3\)).
Sketch two lines, one parallel to L and one parallel to D, as shown in Figure \(4\).
Sketch two lines, one parallel to W and one parallel to D, to complete the outline of the rectangular block as shown in Figure \(5\). Begin with light construction lines so that you can make any necessary adjustments before darkening them. Figure \(6\) shows the finished isometric sketch.
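For readers who want to check a sketch numerically, the 30° isometric axes can be expressed as a simple coordinate mapping. The short sketch below is supplemental (it is not a construction from the text): length and width run along the two sloping axes and height runs straight up the vertical axis.

```python
import math

# A numeric view of the isometric axes: length and width run along the two
# 30-degree sloping axes, height runs straight up the vertical axis.
COS30, SIN30 = math.cos(math.radians(30)), 0.5

def iso_point(length, width, height):
    """Paper (x, y) position of a point measured along the L, W, H axes."""
    x = (length - width) * COS30
    y = (length + width) * SIN30 + height
    return x, y

# Far top corner of a 5 x 3 x 3 block, as used later in this section:
x, y = iso_point(5, 3, 3)
print(round(x, 3), round(y, 3))  # -> 1.732 7.0
```

For example, the far top corner of a 5 × 3 × 3 block lands about 1.73 grid units to the right of, and 7 units above, the near bottom corner.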
Sketching irregular shapes with isometric lines
Not all rectangular objects are as simple as the block you have just sketched. Sometimes the shapes are irregular and have cut-out sections, or some sides longer than others. All rectangular objects can be fitted into a box having the maximum length (L), width (W), and depth (D). Begin by sketching a light outline of a basic box that is the size of the object to be drawn.
Consider the object shown in the three-view orthographic sketch in Figure \(7\). To produce an isometric sketch of this object, you need to find the maximum L, W, and D for the containing box (Figure \(7\)). In this case:
• L = 5 grid spaces
• W = 3 grid spaces
• D = 3 grid spaces
Sketch a light outline of the basic rectangular box to the required size, as shown in Figure \(8\).
The front view shows the outline most clearly. Place this view on the front surface of the isometric box. Use the dimension given in the front view of Figure \(7\) and mark the number of units indicated along the axes L and D (Figure \(9\)).
Lightly sketch lines parallel to the L and D axes from the marked points on the front surface (Figure \(10\)). Once you are sure your sketch is correct, the step outline is drawn more heavily to emphasize the object's profile.
Sketch in a series of lines parallel to the axes (L, W, and D) from the corners numbered 1 to 7 (Figure \(11\)). These lines establish the stepped outline as shown in Figure \(12\).
When you are sure your isometric sketch is correct, erase all unnecessary construction lines and darken the object lines. Your completed sketch of the rectangular object should be similar to that in Figure \(13\).
6.3: Sketching Figures with Non-isometric Lines
Figure \(1\) shows an object that is basically rectangular but has one face machined at an angle. You can easily construct an isometric sketch of the basic rectangular block. To show the machined face, it is necessary to plot the appropriate points of intersection and join those points to produce the correct angle.
Sketch a light outline of the basic rectangular block using the size measurements given in Figure \(1\). Mark the number of units indicated along the length (L) and the depth (D), as shown in Figure \(2\).
Lightly sketch lines parallel to the original block outlines from the marked points on the front and side surfaces, as shown in Figure \(3\).
Join the two points on the front face and the two ends of the lines you have just sketched across the object (Figure \(4\)). Once you are sure your sketch is correct, erase the light lines that originally outlined the block and darken the outline of the completed block as shown in Figure \(5\).
Use sketching techniques to produce orthographic sketches of the following figures. For the purposes of these exercises, do not be concerned with dimensions. Concentrate on producing good, dark outlines, good circular shapes, and correctly drawn hidden and center lines.
1. Sketch one orthographic view of the object shown pictorially in Figure \(1\). Remember that the holes in the gasket are circular (not elliptical as they appear in the three-dimensional sketch shown). Add a title block with the following:
• Title: Gasket
• Sk. No.: D-1/031
• Sk. by: (your name)
• Date: (today’s date)
2. Sketch two orthographic views (front view and top view), in the approximate scale of the object shown pictorially in Figure \(2\). Remember to place the views correctly and make sure that all hidden lines are clearly shown. Add a title block with the following:
• Title: Stop piece
• Sk. No.: D-1/032
• Sk. by: (your name)
• Date: (today’s date)
7.02: Practice 2: Make an orthographic three-view fully dimensioned sketch of a simple object
Sketch the necessary views of the object in the pictorial drawing below and dimension fully. Sketch to approximate scale to suit your sheet size.
Add a title block with the following:
• Title: Clamp Bracket
• Sk. No.: D1/PC3
• Sk. by: (your name)
• Date: (today’s date)
7.03: Practice 3: Make Isometric Sketches
Given the orthographic sketches shown of the two objects in 1 and 2, make isometric sketches of each object to the same scale as the object shown. Borders and title blocks are not necessary for these sketches.
7.04: Answers
Self-Test 1
1. a. Vinyl
2. a. True
3. d. Drawing horizontal lines
4. a. To help you prevent smudges
5. a. True
6. c. Drawing vertical and angled lines
7. b. False
8. b. Measuring angles
9. d. Perpendicular
10. c. Drawing arcs and circles
11. c. To erase in a desired area
12. a. True
13. c. Scribing arcs on metal
14. c. To create drawings at a reduced scale
Self-Test 2
1. c. Object line
2. a. Centre line
3. c. Hidden line
4. a. True
5. b. False
6. a. Leader line
7. d. With a phantom line
8. d. Internal features of a part
9. d. Direction of observation when the section is drawn
10. a. Size dimensions
Self-Test 3
1. a. True
2. b. Architect’s scale ruler
3. a. 1
4. d. Use the dimension written between the dimension lines.
5. b. 18'
6. d. 51'
7. d. 12'6"
8. a. 1'3"
9. a. 16'2"
10. a. 3'4"
11. b. 3.5 m
12. b. 1.9 m
13. a. 8'6"
Self-Test 4
1. b. Pictorial
2. b. 30°
3. d. Vanishing points
4. c. 30°–45°
5. b. False
6. a. Plan view
7. c. 3
8. A. 10
B. 11
C. 5
D. 6
E. 1
F. 12
G. 7
H. 2
I. 8
J. 4
K. 9
L. 3
9. b. Floor plan
10. d. Elevation drawings
Self-Test 5
1. b. In the title block
2. a. True
3. d. In the bill of materials
4. a. True
5. d. All of the above
We will start with a flat pan template build. This is one of the simplest templates we will construct, but it has key components that will carry through to more complex templates.
Here we have both the perspective view and an orthographic drawing. Using this format, it is easy to see what the finished product will look like. What we have to do here is reverse the process to see what shape we have to start with to build the pan. The one thing we need now is dimensions.

Now that we have the part's overall size, we can determine how to lay this part out flat and work with the given dimensions to create the pan.

By inspecting the drawing we can see that the flat portion of the pan, or the inside dimension, is 8" x 12". Each side has an inside height of 1". By adding these together we can determine the material we will need to build the pan; this is commonly called the STRETCH-OUT. 8" + 1" + 1" = 10" and 12" + 1" + 1" = 14", giving an overall sheet size of 10" x 14".
Before we get too far, let's just draw the inside base of the pan.

A simple rectangle that measures 8" x 12".

Now we need to add the 1" sides to all four edges; these will become the sides of the pan. First, extend each line by 1" past each corner.

Now we simply connect a line on each side between the extended lines.

Now you can see that in each corner there is a square notch. These notches allow each 1" portion of the sheet to be bent up 90° to form the pan.

After the sides are bent up 90°, the finished product should look like this.
Notes about flat sheet layout and dimensions.
You need to pay attention to how the parts are dimensioned: inside or outside. If parts are dimensioned to the outside, you will need to account for material thickness at each formed edge to properly determine the stretch-out. The stretch-out is based on the inside surface. For each 90° bend, .12" is gained in overall length.
For example:
The outside dimensions are 6.24" x 8.24" x 1" tall. To figure out the inside dimension for the bottom of the pan for layout, we subtract 2 x the sheet thickness from each given outside dimension. The inside dimension of the pan is 6" x 8" with an inside height of .88".

To figure the stretch-out, we add 6" + .88" + .88" = 7.76" and 8" + .88" + .88" = 9.76".
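The inside-dimension conversion and stretch-out arithmetic above can be sketched as a short calculation. This is only an illustration of the example's numbers; the .12" sheet thickness is the value implied by the bend allowance given earlier.

```python
SHEET_THICKNESS = 0.12  # inches; matches the bend allowance in this example

def inside_dims(out_l, out_w, out_h, t=SHEET_THICKNESS):
    """Outside pan dimensions -> inside dimensions used for layout."""
    return out_l - 2 * t, out_w - 2 * t, out_h - t

def stretch_out(in_l, in_w, in_h):
    """Flat sheet size: inside base plus one side height on each edge."""
    return in_l + 2 * in_h, in_w + 2 * in_h

l, w, h = inside_dims(6.24, 8.24, 1.0)
print(round(l, 2), round(w, 2), round(h, 2))  # -> 6.0 8.0 0.88
so_l, so_w = stretch_out(l, w, h)
print(round(so_l, 2), round(so_w, 2))         # -> 7.76 9.76
```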
The flat Stretch-out will look like this:
1.02: Rectangular Sleeve
For this example we want to build a template for a “chimney” or sleeve that will rest on a sloped roof surface. The roof surface will have a slope of 30° and the sleeve will have the dimensions of the drawing below.
Now that we have the needed measurements, we can begin to draw the stretch-out.

To help keep everything in order, it helps to number points on the stretch-out so they match up to each corner. Since this part will have a seam, we can use "0" to indicate the seam. Looking at the side and top views, we can see that points 1 and 4 have the same height, as do points 2 and 3, since each pair is in the same plane. The blue lines show how the lines extend to help develop the stretch-out.
Next we will transfer the measurements from the side view to the stretch-out to determine the shape of the template.
We know that at points 0, 1, and 4 the height is the overall height of 9.46", since all those points are in the same plane in the side view. For points 2 and 3, we can see that the overall height there is 6". We need to transfer this to the stretch-out, and there are a few ways to do it. You can simply measure the 6" height on the stretch-out at points 2 and 3 and mark it with a tick mark, or you can transfer the line 90° to the right and place a tick mark where the horizontal line intersects lines 2 and 3.

Now we can add a couple of lines connecting the base of the stretch-out to vertical lines 2 and 3 to show what the final shape of the stretch-out will be. We can also remove any lines we used to transfer features from the side view to the stretch-out. The green lines indicate the sheet size and shape; the red lines indicate where the sheet will be formed.
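The two heights in this example are related by the roof slope: the tall side equals the short side plus the rise across the sleeve. Below is a minimal sketch of that relationship. The 6" width in the slope direction and 6" short-side height are assumptions inferred from the drawing's 6" and 9.46" figures.

```python
import math

# Corner heights of a sleeve sitting on a sloped roof. The tall side equals
# the short side plus the rise across the sleeve width:
#   rise = width * tan(slope)
# Assumed sizes: 6" across in the slope direction, 6" short-side height.

def tall_side_height(short_side, width_up_slope, roof_slope_deg):
    return short_side + width_up_slope * math.tan(math.radians(roof_slope_deg))

print(round(tall_side_height(6.0, 6.0, 30.0), 2))  # -> 9.46
```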
Circumference and how to find it
There are a few ways to find the circumference of pipe and round tube. Knowing the circumference is key to building accurate templates for use with pipe. The more accurate your numbers are when developing these templates, the better the fit will be. As with anything, practice and experience in building templates will also increase accuracy.

The most common method for figuring circumference is the formula pi x diameter. 6" OD round tube has a circumference of 18.85" (6" x pi = 18.85"). When it comes to working with pipe, you need to be aware that pipe is measured nominally. This means that 6" pipe is not 6" OD. Luckily, all pipe manufacturers follow a standard, and countless tables and charts list all pipe sizes and even include the circumference of each size. Please refer to the index to locate these tables, charts, and other information.

Over the next several sections we will begin to develop templates for use with pipe. No matter the complexity of the template, there are several key concepts that are used with all of them, including determining the circumference. We will only introduce these concepts in detail once; if you need help, please refer back to the earlier sections.
We will be talking about dividing the circumference of pipe into several equal parts that will help develop the template. We will refer to these lines as element lines. The more element lines you have the more accurate the fit.
As was stated earlier, there are many ways that craftspeople have figured out solutions to complex problems; this book offers one of them.

Below we have some 4" pipe. By referring to the chart, we can see that the OD of that pipe is 4.50". Also on the chart we see that it has a total circumference of 14.125". For this book, we will divide all circumferences into 16 equal spaces, which will become the element lines. The handy chart in the back also shows us the spacing for dividing a circumference into 16 parts, as well as 12, 8, 6, 4, and 2. Remember, the more element lines you have, the more accurate your template will be. If you do not have the chart, you will have to divide the total circumference by the number of spaces needed. Some rounding will be required when doing this, but be aware that being off 1/16", 16 times, will end up being off by a full 1".

If you divide 14.125 by 16, you end up with .883. The chart in the back states .875. .875, or 7/8", is much easier to work with on a tape measure than .883. The difference is about 1/132" per line, which is acceptable.
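The arithmetic above (circumference from OD, then equal spacing for the element lines) can be sketched as follows. Note that the computed value differs slightly from the rounded chart value, which is exactly the kind of small discrepancy the paragraph above warns about.

```python
import math

# Circumference and element-line spacing from pipe OD. Computed values will
# differ slightly from rounded chart values: 4" pipe (4.50" OD) computes to
# about 14.14", while the chart lists 14.125".

def circumference(od):
    return math.pi * od

def element_spacing(od, n_lines=16):
    """Even spacing between element lines around the circumference."""
    return circumference(od) / n_lines

print(round(circumference(4.50), 3))    # -> 14.137
print(round(element_spacing(4.50), 3))  # -> 0.884
```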
When working with pipe and developing templates you will need to brush up on some basic geometry and the bisecting of angles. We will use a compass for this task.
Below we have angle CAB with a vertex of point A.
First, we draw an arc from vertex A that crosses the lines near points C and B. From those two intersections we then draw two additional arcs to the right.

From vertex A, draw a line that crosses point D, where the arcs intersect.

This method splits the angle into two angles of the same measurement, and it can be repeated if necessary to split the angle again into 4 equal parts.
Let's try a similar method on a circle. The drawn circle has a diameter of 3", so a radius of 1 1/2".

By setting your compass to the radius of the circle, 1 1/2", and drawing an arc first from point A and then from point B, the arcs intersect the circle at points 1 and 2. If you then draw a line from the center of the circle to each point, you have divided that quarter of the circle into three equal parts. See below.

If you do this again between A, 1, 2, and B, you can divide that quarter of the circle into 6 equal parts.
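The radius trick above works because the chord of a 60° arc equals the radius. More generally, the divider setting for stepping a circle into any number of equal arcs can be computed; this is a supplemental calculation, not a construction from the text.

```python
import math

# Divider setting (straight-line chord) for stepping a circle into n equal
# arcs. Setting the compass to the radius marks off 60-degree arcs, which is
# why the construction above splits a quarter circle into three 30-degree parts.

def chord_for_divisions(radius, n):
    return 2 * radius * math.sin(math.pi / n)

print(round(chord_for_divisions(1.5, 6), 3))   # -> 1.5 (chord of 60 deg = radius)
print(round(chord_for_divisions(1.5, 16), 3))  # -> 0.585
```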
As we covered briefly earlier, we use numbers to help line up where lines will connect. These lines will be used when we are joining two or more parts together to make one assembly. As we keep going, you will better understand how this numbering system works.
The size of pipe you are working with will determine what size of paper you need. Typically you will work with the paper in a landscape (sideways) orientation. Once it is confirmed your template works, you can trace your working template onto heavier paper or sheet material that you can easily reuse time and time again.
We will be constructing a template to help us generate a 2-piece 90° elbow. With some basic geometry, we know that each of the two pieces that make up a 90° elbow is cut at 45°. To start, we will draw a view that shows the pipe and the miter-cut joint.

Let's remove one half of the elbow to begin making our template.

Now let's draw a half circle off the bottom end and divide it into 8 equal parts. Number each place where a line intersects the arc of the half circle.

Now, at each point on the arc, draw a straight line from the arc to where it intersects the 45° line. These are your element lines.

Now we will construct the stretch-out of the template. Extend the bottom and top lines to the right from the first view drawn. The overall length of your stretch-out will be determined by the circumference of your pipe. The green lines indicate how we transfer the overall height of the object to the stretch-out.
Now divide your stretch-out into 16 equal parts.
Now we have both the stretch-out and the side view in alignment and can start to transfer elements over to the stretch-out.
The numbers correlate from one drawing to the other. Draw a straight horizontal line from the mitered line over to the stretch-out, and where the numbers match up, make a tick mark.

Now connect the tick marks with a curved line.

You can now remove the top section of lines above the curve on the stretch-out. Cut your template out and it is ready to use.

Now you can take the stretch-out, wrap it around your pipe, trace the curved line onto the pipe, and proceed to cut.
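The graphical projection above can also be done numerically: each element line's length follows a cosine curve between the long and short sides of the miter. The sketch below is an alternative to the book's graphical method, under assumed sizes (4.5" OD pipe, 6" length at the longest element).

```python
import math

# Element-line lengths for a single miter cut, computed instead of projected.
# For a 2-piece 90, the miter angle is 45 degrees. Assumed sizes for
# illustration: 4.5" OD pipe with a 6" length at the longest element.

def miter_ordinates(od, back_length, miter_deg=45.0, n=16):
    r = od / 2.0
    drop = r * math.tan(math.radians(miter_deg))
    # element i sits at angle i * (360 / n) around the pipe; the cosine term
    # sweeps from the long side (i = 0) to the short side (i = n / 2)
    return [back_length - drop * (1 - math.cos(2 * math.pi * i / n))
            for i in range(n + 1)]

ords = miter_ordinates(od=4.5, back_length=6.0)
print(round(ords[0], 3), round(ords[8], 3))  # -> 6.0 1.5
```

Note that the longest and shortest elements differ by twice the drop, which for a 45° miter equals the pipe OD.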
1.05: 3 Piece 90
Three-piece turn
We know that when building a two piece 90° turn we needed to cut two pieces at 45°. When making a three piece turn we will need to add one more piece of pipe and thus change the angle we need to cut.
The plan view of the joint will look like this. We have bisected the joint to show what the finished angle will be, once our template is completed.
We will use the center portion of our elbow to make our stretch-out and begin to develop the template.

As before, we will draw a half circle off one end and also draw our stretch-out. The green lines show how to transfer elements from one drawing to another. You will need to calculate the stretch-out depending on what size pipe you are working with. Since we will be making a template that can be used to cut two ends at the same time, the dark line on the stretch-out marks the center of the stretch-out.
Next we will divide the half circle into 8 equal parts and the stretch-out into 16.
Now we can transfer elements to the stretch-out. The dashed lines show how they reference the numbered lines on each drawing.

Now we can draw in our curved line through all the tick marks, giving the template its shape.
Now remove all the non-essential lines and what is left is the final shape of the template.
You can take your template, wrap it around the pipe, trace the pattern, and then cut and fit.
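The cut angle generalizes to any number of pieces: each joint turns 90/(N - 1) degrees, and each pipe end is cut at half of that joint angle. A quick sketch of that rule:

```python
# Miter cut angle for an N-piece turn: each joint turns total/(N - 1) degrees,
# and each pipe end is cut at half of that joint angle.

def miter_angle(total_turn_deg, n_pieces):
    return total_turn_deg / (2 * (n_pieces - 1))

print(miter_angle(90, 2))  # -> 45.0
print(miter_angle(90, 3))  # -> 22.5
```

This reproduces the 45° cut of the two-piece turn and the 22.5° cut used for the three-piece turn above.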
1.06: Branch and Header Connections
Many times in the piping industry there is a need to make a connection to a pipe where it is not practical or feasible to use a manufactured fitting. These types of connections are sometimes called branch and header connections. The "header" is the main line being tied into, and the "branch" is the line coming off of the header. The branch is either the same size as the header or smaller.

We will go over three types of templates for making these connections, using all of the tools from the two- and three-piece 90° turns covered in previous chapters.
1.07: Concentric 90 Branch on Header
Concentric 90° branch
This fitting is concentric, meaning the branch and header share the same centerline.

We will begin by drawing the header pipe from an end view and the branch connection on the same centerline directly above. Remember, pipe of a given size has the same O.D. regardless of wall thickness, so we will only draw the O.D. lines and not worry about the I.D.

Now we will draw the half circle on the end of the branch, extend out the height of the branch, and draw the stretch-out. You will have to verify the size of pipe or refer to the chart to find the total circumference.

The green lines show the extension of the branch used to develop the stretch-out.

Now we can divide the stretch-out into 16 equal spaces and divide up the half circle as well. Note: Since this is a concentric branch and everything is on center, we can get away with dividing only half of the half circle; it is a mirror of the other side.
Remember to number each line…
Now we can transfer lines from the end view to the stretch-out, match the numbers, and place our tick marks.

Now we can draw our curved line between the tick marks and remove the transfer lines.

Remove the non-essential lines and the template is ready to use.
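The branch template ordinates can also be computed directly: each element must reach down to the curved header surface, so elements nearer the header centerline land higher on the header and are shorter. The sizes below (4.5" OD branch on a 6.625" OD header, branch end 8" above the header centerline) are assumptions purely for illustration.

```python
import math

# Element-line lengths for a concentric 90-degree set-on branch, measured
# from the open end of the branch down to the header surface.
# Assumed sizes: 4.5" OD branch, 6.625" OD header, 8" height to centerline.

def branch_ordinates(branch_od, header_od, height_to_centerline, n=16):
    r, R = branch_od / 2.0, header_od / 2.0
    lengths = []
    for i in range(n + 1):
        x = r * math.sin(2 * math.pi * i / n)  # offset of element i from centerline
        lengths.append(height_to_centerline - math.sqrt(R * R - x * x))
    return lengths

ords = branch_ordinates(4.5, 6.625, 8.0)
print(round(ords[0], 3), round(ords[4], 3))  # shortest (on center) and longest
```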
The procedures for the eccentric and concentric branches are nearly identical. The only real difference is that the eccentric branch is off center. If you remember back when we divided up the half circle for the concentric branch, we could get away with laying out only half because it was a mirror image. The eccentric branch is off center, so to show the details of where the lines intersect, we will divide up the entire half circle.

Let's get started by drawing the end view of the header and branch. You will notice that the branch is off of the centerline of the header. This offset is determined by the layout needed to make up this fitting and may vary depending on what the situation dictates.

Now we will draw the half circle on the end of the branch, extend out the height of the branch, and draw the stretch-out. You will have to verify the size of pipe or refer to the chart to find the total circumference.

The green lines show the extension of the branch used to develop the stretch-out.

Now we can divide the stretch-out into 16 equal spaces and divide up the half circle as well. Note: Since this is an eccentric branch and it is not on center, we have to divide up the entire half circle.

Now we can transfer lines from the end view to the stretch-out, match the numbers, and place our tick marks. Remember to number each line as well.

Now we can draw our curved line between the tick marks and remove the transfer lines.

Remove the non-essential lines and the template is ready to use.
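Numerically, the eccentric case is the concentric formula with every element shifted off the header centerline by the offset, which is why the mirror symmetry is lost and all 16 element lines must be laid out. The sizes below (4.5" OD branch, 8.625" OD header, 1" offset, 8" height) are assumptions for illustration only.

```python
import math

# Eccentric branch element lengths: the concentric formula with each element
# shifted by the offset, so the mirror symmetry across the centerline is lost.
# Assumed sizes: 4.5" OD branch, 8.625" OD header, 1" offset, 8" height.

def eccentric_ordinates(branch_od, header_od, height, offset, n=16):
    r, R = branch_od / 2.0, header_od / 2.0
    return [height - math.sqrt(R ** 2 - (offset + r * math.sin(2 * math.pi * i / n)) ** 2)
            for i in range(n + 1)]

ords = eccentric_ordinates(4.5, 8.625, 8.0, 1.0)
print(round(min(ords), 3), round(max(ords), 3))  # shortest and longest elements
```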
1.09: 45 lateral branch
45 degree lateral
Now, as before, we will draw a half circle on the ends of the pipe and divide them up into equal parts to help develop our element lines. Remember to number the lines as well so they can be matched when we draw out the stretch-out.
Now we can draw lines horizontally between the two views to help develop the first set of lines we will need. Line up and draw from point 1 to point 1 and so on through point 5. You will notice how the lines go straight through lines 6, 7, 8, and 9. Since this is concentric from the end view, the points will be the same on both sides of the pipe. Place a tick mark at each point of intersection.
When you draw a curved line from tick mark to tick mark it will look like this.
Now we can make our stretch-out from the 45° lateral and divide it up as we have on past templates. Divide the stretch-out up into 16 equal spaces.
Now we can transfer lines at a 45° angle down from the side view and connect them to the same-numbered lines on the stretch-out. Place a tick mark on each point that intersects with the same number as that of the line it originated from.
Now we can connect between tick marks and draw in our curved line.
Getting rid of all the non-essential lines, our template is ready for use.
Description
The milling machine is one of the most versatile machines in the shop. Usually they are used to mill flat surfaces, but they can also be used to machine irregular surfaces. Additionally, the milling machine can be used to drill, bore, cut gears, and produce slots into a workpiece.
The milling machine uses a multi-toothed cutter to remove metal from moving stock. There is also a quill feed lever on the mill head to feed the spindle up and down. The bed can also be manually fed in the X, Y, and Z axes. Best practices are to adjust the Z axis first, then Y, then X.
When an axis is properly positioned and is no longer to be fed, use the gib locks to lock it in place.
It is common for milling machines to have a power feed on one or more axes. Normally, a forward/reverse lever and speed control knob is provided to control the power feed. A power feed can produce a better surface finish than manual feeding because it is smoother. On long cuts, a power feed can reduce operator fatigue.
Safety
The following procedures are suggested for the safe operation of a milling machine.
1. Have someone assist you when placing a heavy machine attachment like a rotary table, dividing head, or vise.
2. Always refer to speed and feed tables.
3. Always use cutting tools that are sharp and in good condition.
4. Seat the workpiece against parallel bars or the bottom of the vice using a soft hammer or mallet. Check that the work is firmly held and mounted squarely.
5. Remove the wrench after tightening the vice.
6. Most operations require a FORWARD spindle direction. There may be a few exceptions.
7. Make sure there is enough clearance for all moving parts before starting a cut.
8. Make sure to apply only the amount of feed that is necessary to form a clean chip.
9. Before a drill bit breaks through the backside of the material, ease up on the drilling pressure.
10. Evenly apply and maintain cutting fluids to prevent warping.
11. Withdraw drill bits frequently when drilling a deep hole. This helps to clear out the chips that may become trapped within the hole.
12. Do not reach near, over, or around a rotating cutter.
13. Do not attempt to clean the machine or part when the spindle is in motion.
14. Stop the machine before attempting to make adjustments or measurements.
15. Use caution when using compressed air to remove chips and shavings. The flying particles may injure you or those around you.
16. Use a shield or guard for protection against chips.
17. Remove drill bits from the spindle before cleaning to prevent injury.
18. Clean drill bits using a small brush or compressed air.
19. Properly store arbors, milling cutters, collets, adapters, etc., after using them. They can be damaged if not properly stored.
20. Make sure the machine is turned off and clean before leaving the workspace.
01.2: Unit 1: Tramming the Head
Objective
After completing this unit, you should be able to:
• Describe how to tram the mill head.
• Explain how to indicate the vise.
• Explain the use of spring collets.
• Describe the difference between climb vs. conventional milling.
• Explain how to use an edge finder.
• Describe how to set the quick change gearbox correctly.
• Describe how to square the stock.
• Describe face milling.
• Describe advanced workholding.
Tools For Tramming
A dial indicator is a precision tool used to measure minute amounts of deflection between two surfaces.
When tramming, a dial indicator attached to the chuck is used to determine the orientation of the mill head to the mill table. The same wrench used to tighten and loosen the quill can be used to adjust the various bolts on the mill head.
Dial indicator used for tramming the head.
Tramming the Mill Head
Tramming ensures that the mill head is perpendicular to the mill table’s X and Y axis. This process ensures that cutting tools and the milling surfaces are perpendicular to the table. Proper tramming also prevents irregular patterns from forming when milling.
A dial indicator attached to the spindle for precise mill head alignment.
A vertical mill’s head is able to tilt from front to back and side to side. Occasionally these adjustments can drift. The mill head should be checked and adjusted periodically, ensuring that the spindle is perpendicular to the table.
1. Remove the vice from the milling table.
2. Attach a dial indicator to the spindle and offset the dial six inches from the spindle’s axis. Make sure the indicator probe is facing down.
3. Raise the mill table so that when it contacts the indicator, the indicator reads between 0.005 inches to 0.010 inches. This reading is called the preload.
4. Position the dial indicator so that it is visible, then set the bezel to zero.
5. Hand-turn the spindle while watching the indicator.
6. If the reading on the dial indicator stays at zero, the spindle is aligned.
7. If the reading is not zero, continue tramming the head as shown below.
Tramming Process for the X-Axis
1. To tram around the x-axis (the left-to-right direction of the mill bench when facing the front of the mill), loosen the six bolts (three on each side of the mill) using the mill wrench.
Location of the bolts to be loosened to allow the head to rotate about the X-axis.
1. After loosening the bolts, re-tighten them by hand plus a ¼ of a turn using the mill wrench.
2. The adjustment bolt that moves the mill head up and down around the x-axis is located at the back of the mill.
Adjustment bolt used to position the mill head vertically around the X-axis.
1. Two protractors are used to indicate general alignment. The larger protractor on the mill head has a red indicating arrow that should align with the zero marker on the curved protractor on the body of the mill. This only provides a general guide; the dial indicator reading is required for precise alignment.
2. Position the dial indicator to the rear of the table. Zero the dial indicator (preloaded at 0.005″ to 0.010″). Be sure to measure on a pristine surface of the mill table. It may be necessary to shift the table to avoid the gaps that are in the table.
Dial indicating around the mill head X-axis.
1. With the dial zeroed and the spindle in neutral, rotate the spindle so that the dial indicator is now on the front of the table, ideally a 180 degree turn. Be sure to grab the clamp that is attached to the spindle (to avoid altering the dial’s vertical configuration).
2. Note the direction that the dial rotates to determine the direction that the mill head needs to travel. A clockwise movement requires that the mill head will need to be adjusted up, while a counter-clockwise reading requires that the mill head will need to be adjusted downward.
Mill head adjustment about the X-axis.
1. The diagram above shows how movement of the adjustment bolt correlates to movement in the mill head. Once confident in the correct direction the adjustment bolt needs to be turned, adjust the mill head so that ½ the difference between the back and front measurements is reached. For example, if the rear reading is zero and the front reading is 0.010″, adjust the mill head so that the dial reads 0.005″ closer to zero.
2. After the first adjustment is complete, again zero the dial indicator. It is recommended to zero off the same position to avoid confusion, however, it is not necessary. Continue the adjustment process until the difference between the front and the rear is no greater than 0.002 inches.
3. Once satisfied with the readings, begin re-tightening the bolts that were loosened, tightening them evenly in rotation to prevent change in the alignment. Recheck the measurement between the front and the rear to ensure that the mill head did not move significantly from tightening.
Tramming Process for the Y-Axis
1. To begin tramming about the y-axis, there are four bolts on the front of the mill that need to be loosened to allow movement of the mill head. The bolts should be loosened, then re-tightened to just beyond hand-tight (about ¼ turn past hand-tight with the appropriate wrench).
Location of the bolts to be loosened to allow the head to rotate about the Y-axis.
1. The adjustment bolt to move the mill head left and right about the y-axis is shown in the figure below. By twisting this bolt clockwise and counter-clockwise the mill head will move accordingly.
Adjustment bolt used to position the mill head around the Y-axis.
1. The indicating arrow on the protractors for tramming around the y-axis is located on a standalone plate that is in contact with the vertical protractor. This indicating arrow and the zero on the vertical protractor can be used to estimate a starting point for tramming.
Mill head adjustment about the Y-axis.
1. The figure above shows how the adjustment bolt for tramming about the y-axis affects the mill head. Use the same process as described for tramming about the x-axis, however, use locations left and right of the mill head as your reference points in contrast to the front and the rear as done previously.
2. Once the adjustments are complete, tighten the bolts on the head of the mill and re-check the measurements about the x-axis and the y-axis. It is possible that the tram in either direction may have been altered by the re-tightening of the bolts. Ensure that all measurements are within 0.002 inches. If the measurements are not within tolerance, the tramming process will have to be redone.
Indicating the Vise
1. Most workpieces are held in a vice that is clamped to the table.
2. It is important to line the vice up with the feed axes on the machine in order to machine features that are aligned with the stock’s edges.
3. Fix the vice on the bed by using T-bolts and secure it snugly, while still allowing adjustment to the vice.
4. Install a dial indicator in the machine’s spindle with the probe facing away from the operator.
5. Bring the spindle down then position the table’s bed until the fixed jaw on the vice is touching the indicator. Continue until the indicator has registered half of a revolution.
6. Set the dial indicator’s bezel to zero.
7. Run the indicator across the vice’s face with the cross feed.
8. The indicator will stay at zero if the vice is squared.
9. If the indicator does not stay at zero, realign the vice by lightly tapping with a soft hammer until the indicator reads half of its previous value.
10. Repeat the process until the dial indicator shows zero through a complete travel from one side of the vice to the other.
11. Fasten the T-bolts securely, while not changing the orientation of the vice. Recheck the alignment of the vice.
Types of Milling Cutters
An assortment of milling cutters.
1. Milling cutters that have solid shafts are usually used in vertical mills.
2. Milling cutters that have keyed holes are usually used in horizontal mills.
3. End mills are used to cut pockets, keyways, and slots.
4. Two-flute end mills can be used to plunge into a workpiece like a drill.
5. Two- and three-flute end mills are generally used for aluminum; four-flute end mills are better for stainless steel. More flutes give a better cut, but come at a higher price.
6. End mills with more than two flutes should not be plunged into the work.
7. Fillets can be produced with ball end mills.
8. Multiple features like round edges can be made by formed milling cutters.
Methods of retaining an end mill.
Spring Collets
1. If a tool needs to be removed, lock the quill at the highest position.
2. Next, loosen the drawbar with a wrench while using the brake.
3. Make sure that the threads of the draw bar remain engaged in the collet. If they are not engaged, the cutter will fall and potentially be damaged when the collet is released from the spindle.
4. To release the collet from the spindle, tap on the end of the draw bar.
5. Finally, unscrew the drawbar off of the collet.
6. To install a different cutter, place the cutter in a collet that fits the shank.
7. Insert the collet into the spindle while making sure that the keyway aligns properly with the key in the spindle.
8. Begin threading the draw bar into the collet while holding the cutter with one hand. Afterwards, use a wrench to tighten the drawbar while engaging the brake.
Climb vs. Conventional Milling
It is important to know the difference between conventional and climb milling. Using the wrong procedure may result in broken cutters and scrapped workpieces.
Conventional Milling
1. The workpiece is fed against the rotation of the cutter.
2. Conventional milling is usually preferred for roughing cuts.
3. Conventional milling requires less force than climb milling.
4. Does not require a backlash eliminator and tight table gibs.
5. Recommended when machining castings and hot-rolled steel.
6. Also recommended when there is a hard surface that has resulted from scale or sand.
Shown above: Conventional Milling
Climb Milling
1. The workpiece is fed with the rotation of the cutter.
2. This method results in a better finish. Chips are not carried into the workpiece, thus not damaging the finish.
3. Fixtures cost less. Climb milling forces the workpiece down, so simple holding devices can be utilized.
4. The chip thickness tends to get smaller the closer it is to an edge, so there is a less chance of an edge breaking, especially with brittle materials.
5. Increases tool life. The tool life can be increased by up to 50% due to chips piling up behind the tool.
6. Chips can be removed easier since the chips fall behind the cutter.
7. Reduces the power needed by 20%. This is due to the use of a higher rake angle cutter.
8. Not recommended if the workpiece cannot be held securely or if the machine cannot support high forces.
9. Cannot be used to machine castings and hot-rolled steel.
10. This method may pull the workpiece into the cutter and away from the holding device, resulting in broken cutters and scrapped workpieces.
Shown above: Climb Milling
Setting Spindle Speed
1. Spindle speed changes depending on the geometry of the drive train.
2. A hand crank can be used to adjust the spindle speed on newer machines.
3. To change the speed, the spindle has to be rotating.
4. The speed (in RPM) is shown on the dial indicator.
5. There are two scales on the dial indicator for the low and high ranges.
6. A lever is used to change the machine’s range.
7. Occasionally, slight rotation of the spindle is necessary for the gears to mate correctly.
Using an Edge Finder
1. The edges of a workpiece must be located before doing mill work that requires great accuracy. An edge finder helps in finding the edges.
2. 800-1200 spindle rpm is recommended.
3. To use an edge finder, slightly offset the two halves so they wobble as they spin.
4. Slowly move the workpiece towards the edge finder.
5. The edge finder will center itself, then suddenly lose concentricity.
6. The digital readout tells you the position of the spindle.
7. The diameter of the edge finder is 0.200″, so adding or subtracting half of that (0.100″) locates the tool center.
8. If centering on the top left, add 0.100″ to the X-axis and subtract 0.100″ from the Y-axis. If centering on the top right, subtract 0.100″ from the X-axis and subtract 0.100″ from the Y-axis.
9. Part Reference Zero is when the bit is zeroed on the X and Y axes.
10. A pointed edge finder is a lot easier, but not as precise. Only use a pointed edge finder if precision is not necessary.
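The offset arithmetic in steps 7–8 can be sketched as follows. This is an illustrative sketch only — the function name is made up, and the 0.200″ diameter is the value given above:

```python
# Locating the true part edge with a 0.200" edge finder: offset the
# readout by half the edge-finder diameter toward the part.
EDGE_FINDER_DIA = 0.200  # diameter given in the text

def true_edge(readout, direction):
    """readout: axis readout value when the edge finder loses concentricity.
    direction: +1 if the part lies in the positive axis direction from
    the edge finder, -1 if it lies in the negative direction."""
    return readout + direction * EDGE_FINDER_DIA / 2

# Centering on the top-left corner: add 0.100" on X, subtract 0.100" on Y
x_edge = true_edge(0.0, +1)
y_edge = true_edge(0.0, -1)
```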
Using the Micrometer Dials
1. Most manual feeds on a milling machine have micrometer dial indicators.
2. If the length of the feed is known, the dial indicator should be set to that number (thousandths of an inch).
3. To free the dial indicator, rotate the locking ring counterclockwise. Set the dial and re-tighten.
4. Before setting the dial indicator, ensure that the table-driving mechanism backlash is taken up.
5. It is common for newer machines to have digital readouts, which are preferable because they directly measure table position. When using a digital readout, backlash concerns are negated.
Squaring Stock
1. When making a square corner, vertically orient a completed edge in the vice and clamp it lightly to the part.
2. Place machinist’s square against the completed edge and the base of the vice.
3. Align the workpiece with the square by tapping it lightly with a rubber mallet.
4. Firmly clamp the vice.
5. The top edge of the part is ready to be milled.
Face Milling
1. It is frequently necessary to mill a flat surface on a large workpiece. This is done best using a facing cutter.
2. A cutter that is about an inch wider than the workpiece should be selected in order to finish the facing in one pass.
Shown above: Face milling
Milling Slots
1. Square slots can be cut using end mills.
2. In one pass, slots can be created to within two one-thousandths of an inch.
3. Use an end mill that is smaller than the desired slot for more accuracy.
4. Measure the slot and make a second pass to open the slot to the desired dimension.
5. The depth of cut should not exceed the cutter diameter.
Advanced Workholding
1. Use a v-block to secure round stock in a vice. It can be used both horizontally and vertically.
2. Clamping round stock in a v-block usually damages the stock.
• Collet blocks are made to hold round workpieces.
• To mill features at 90 degree increments, use a square collet block.
• To mill features at 60 degree increments, use a hexagonal block.
3. It is easiest to set up stock when the features are perpendicular or parallel to the edges of the workpiece. It is more difficult to set up a workpiece when features are not parallel or perpendicular to the edges. Sometimes, an angle plate can be used to mill stock at any desired angle.
4. Parts that don’t fit well in a vise can be directly secured to the table with hold-down clamps.
5. Use parallels to create a gap between the work and bed.
6. Slightly tilt the clamps down into the work.
7. Rotary tables can be put on the bed to make circular features.
• Rotary tables allow rotation of the workpiece.
• Use a dial indicator to precisely control the angle of rotation.
8. Use a ball for irregularly shaped workpieces. Make sure to take only small cuts to avoid throwing the workpiece out of the vice.
UNIT TEST
1. What tool is used for tramming the head?
2. Explain the process for the X-axis tramming.
3. Explain the process for the Y-axis tramming.
4. What is the purpose of indicating the vise?
5. Name three types of milling cutters.
6. Explain how a spring collet works.
7. What is the difference between conventional and climb milling?
8. Describe briefly how a rotary table may be centered with the vertical mill spindle.
9. Describe briefly how to set spindle speed on the milling machine.
10. What tool is used for milling large workpiece surfaces? | textbooks/workforce/Manufacturing/Book%3A_Manufacturing_Processes_4-5_(Virasak)/01%3A_Milling_Machines/01.1%3A_Milling_Machines.txt |
Objective
After completing this unit, you should be able to:
• Identify and select vertical milling machine setups and operations for a variety of machining tasks.
• Select a proper cutting speed for different types of materials.
• Calculate cutting speeds and feeds for end milling operations.
• Explain how to correctly set up for power feed tapping.
Cutting Speed
Cutting speed is defined as the speed at the outside edge of the tool as it is cutting. This is also known as surface speed. Surface speed, surface footage, and surface area are all directly related. If two tools of different sizes are turning at the same revolutions per minute (RPM), the larger tool has a greater surface speed. Surface speed is measured in surface feet per minute (SFM). All cutting tools work on the surface footage principle. Cutting speeds depend primarily on the kind of material you are cutting and the kind of cutting tool you are using. The hardness of the work material has a great deal to do with the recommended cutting speed. The harder the work material, the slower the cutting speed. The softer the work material, the faster the recommended cutting speed (See Figure 1).
Figure 1: Increasing cutting speed based on work material hardness (steel → iron → aluminum → lead, slowest to fastest)
The hardness of the cutting tool material will also have a great deal to do with the recommended cutting speed. The harder the drill, the faster the cutting speed. The softer the drill, the slower the recommended cutting speed (See Figure 2).
Figure 2: Increasing cutting speed based on cutting tool hardness (carbon steel → high speed steel → carbide, slowest to fastest)
Table 1: Cutting Speeds for Material Types
Type of Material Cutting Speed (SFM)
Low Carbon Steel 40-140
Medium Carbon Steel 70-120
High Carbon Steel 65-100
Free-machining Steel 100-150
Stainless Steel, C1 302, 304 60
Stainless Steel, C1 310, 316 70
Stainless Steel, C1 410 100
Stainless Steel, C1 416 140
Stainless Steel, C1 17-4 PH 50
Alloy Steel, SAE 4130, 4140 70
Alloy Steel, SAE 4030 90
Tool Steel 40-70
Cast Iron–Regular 80-120
Cast Iron–Hard 5-30
Gray Cast Iron 50-80
Aluminum Alloys 300-400
Nickel Alloy, Monel 400 40-60
Nickel Alloy, Monel K500 30-60
Nickel Alloy, Inconel 5-10
Cobalt Base Alloys 5-10
Titanium Alloy 20-60
Unalloyed Titanium 35-55
Copper 100-500
Bronze–Regular 90-150
Bronze–Hard 30-70
Zirconium 70-90
Brass and Aluminum 200-350
Silicon Free Non-Metallics 100-300
Silicon Containing Non-Metallics 30-70
Spindle Speed
Once the SFM for a given material and tool is determined, the spindle can be calculated since this value is dependent on cutting speed and tool diameter.
RPM = (CS x 4) / D
Where:
• RPM = Revolutions per minute.
• CS = Cutter speed in SFM.
• D = Tool Diameter in inches.
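As a quick check of the formula, a minimal sketch (using 90 SFM, the cutting speed the text later assigns to mild steel):

```python
def rpm(cutting_speed_sfm, tool_diameter_in):
    """Simplified shop formula from the text: RPM = (CS x 4) / D."""
    return cutting_speed_sfm * 4 / tool_diameter_in

# 3/8" HSS end mill in mild steel (CS = 90 SFM)
print(rpm(90, 0.375))  # 960.0
```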
Milling Feed
The feed (milling machine feed) can be defined as the distance in inches per minute that the work moves into the cutter.
On the milling machines we have here at LBCC, the feed is independent of the spindle speed. This is a good arrangement and it permits faster feeds for larger, slowly rotating cutters.
The feed rate used on a milling machine depends on the following factors:
1. The depth and width of cut.
2. The type of cutter.
3. The sharpness of the cutter.
4. The workpiece material.
5. The strength and uniformity of the workpiece.
6. The finish required.
7. The accuracy required.
8. The power and rigidity of the machine, the holding device, and the tooling setup.
Feed per Tooth
Feed per tooth, is the amount of material that should be removed by each tooth of the cutter as it revolves and advances into the work.
As the work advances into the cutter, each tooth of the cutter advances into the work an equal amount producing chips of equal thickness.
This chip thickness or feed per tooth, along with the number of teeth in the cutter, form the basis for determining the rate of feed.
The ideal feed rate for milling is measured in inches per minute (IPM) and is calculated by this formula:
IPM = F x N x RPM
Where:
• IPM = feed rate in inches per minute
• F = feed per tooth
• N = number of teeth
• RPM = revolutions per minute
For Example:
Feeds for end mills used in vertical milling machines range from .001 to .002 in. feed per tooth for very small diameter cutters on steel work material to .010 in. feed per tooth for large cutters in aluminum workpieces. Since the cutting speed for mild steel is 90, the RPM for a 3/8” high-speed, two flute end mill is
RPM = CS x 4 / D = 90 x 4 / (3/8) = 360 /.375 = 960 RPM
To calculate the feed rate, we will select .002 in. feed per tooth: IPM = F x N x RPM = .002 x 2 x 960 = 3.84 in/min.
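The feed-rate calculation can be sketched as below (the function name is hypothetical; the values are from the worked example above):

```python
def feed_rate_ipm(feed_per_tooth, num_teeth, spindle_rpm):
    """IPM = F x N x RPM, the feed formula given in the text."""
    return feed_per_tooth * num_teeth * spindle_rpm

# Two-flute 3/8" end mill at 960 RPM with 0.002" feed per tooth
print(round(feed_rate_ipm(0.002, 2, 960), 2))  # 3.84
```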
Machine Feed
The machine movement that causes a cutting tool to cut into or along the surface of a workpiece is called feed.
The amount of feed is usually measured in thousandths of an inch in metal cutting.
Feeds are expressed in slightly different ways on various types of machines.
Drilling machines that have power feeds are designed to advance the drill a given amount for each revolution of the spindle. If we set the machine to feed at .006” the machine will feed .006” for every revolution of the spindle. This is expressed as (IPR) inches per revolution
Tapping Procedures
Good Practices:
Using Tap Guides
Tap guides are an integral part in making a usable and straight thread. When using the lathe or the mill, the tap is already straight and centered. When manually aligning a tap, be careful, as a 90° tap guide is much more accurate than the human eye.
Using Oil
When drilling and tapping, it is crucial to use oil. It keeps the bits from squealing, makes the cut smoother, cleans out the chips, and keeps the drill and stock from overheating.
Pecking
Pecking helps ensure that bits don’t overheat and break when using them to drill or tap. Peck drilling involves drilling partway through a part, then retracting it to remove chips, simultaneously allowing the piece to cool. Rotating the handle a full turn then back a half turn is common practice. Whenever the bit or tap is backed out, remove as many chips as possible and add oil to the surface between the drill or tap and the workpiece.
Hand Tapping Procedure
1. Select a drill size from the chart.
When choosing a tap size, this chart is the first place to look.
1. If necessary, add a chamfer to the hole before tapping.
Chamfers and countersinks are additional features that are sometimes desired for screws. For best results, the speed of the spindle should be between 150 and 250 rpm.
2. Get a tap guide.
The hole is now ready to tap. To do this, use the taps and guide blocks near the manual mills. The guide blocks will have several holes for different sized taps. Select the one closest to the size of the tap being used and place it over the drilled hole.
3. Tap the threads.
Peck tap using the tap wrenches. Apply gentle pressure while turning the wrench a complete turn in, then a half-turn out. Peck tap to the desired depth.
4. Complete the tap.
If the tap does not go any further or the desired depth has been reached, release pressure on the tap; it has likely bottomed out. Remove the tap from the hole. Applying any more pressure is likely to break the tap. The smaller the tap, the more likely it is to break.
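Step 1 says to select the drill size from a chart, and the chart is the authoritative source. A common rule of thumb for inch taps at roughly 75% thread engagement is tap drill ≈ major diameter − 1/TPI; a hedged sketch (function name made up for illustration):

```python
def approx_tap_drill(major_dia_in, threads_per_inch):
    """Rule-of-thumb tap drill size (~75% thread engagement):
    drill = major diameter - 1/TPI. Verify against a tap drill chart."""
    return major_dia_in - 1 / threads_per_inch

# 5/16-24 tap: 0.3125 - 1/24 ~= 0.271"
print(round(approx_tap_drill(0.3125, 24), 3))
```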
Power Feed Tapping Procedure (Vertical Mill)
1. Power feed tapping is similar to hand tapping. Instead of tapping by hand, however, use the vertical mill to tap the workpiece.
2. Before starting the machine, change the mill to low gear.
3. Release the quill lock and move the quill to the lowest it can go. This ensures that there is sufficient space to tap to the desired depth.
4. Turn the spindle on FORWARD and set the spindle speed to 60 RPM.
5. Feed the tap down. When the tap grabs the stock, it will automatically feed itself into the hole.
6. When the desired depth has been reached, quickly flip the spindle direction switch from forward to reverse. This will reverse the direction of the tap and remove it from the hole. Reversing the direction in one fluid motion will prevent damage to the tapped hole and the tap.
7. Turn off the machine.
8. Clean the tapped hole, tap, and power feed machine before leaving.
UNIT TEST
1. Explain cutting speeds for harder and softer materials.
2. What is the cutting speed for Tool Steel and Aluminum?
3. Calculate the RPM for a ½ in. diameter HSS end mill to machine aluminum.
4. Calculate the feed rate for a three-flute tool. Use the RPM from Question 3.
5. Calculate the RPM for a ¾ in. diameter HSS end mill to machine bronze.
6. Calculate the feed rate for two-flute ½ in. diameter carbide end mill to machine low-carbon steel.
7. What is the purpose of pecking when using them to drill or tap?
8. Select a proper drill size for 5/16 – 24 tap.
9. Why are cutting fluids used?
10. Describe the difference between hand and power feed tapping.
Objective
After completing this unit, you should be able to:
• Understand the principle of the sine bar.
• Explain how to use a sine bar correctly.
• Understand slip gauge blocks and wringing.
• Calculate gauge block height.
The Sine Bar
A sine bar is used in conjunction with slip gauge blocks for precise angular measurement. A sine bar is used either to measure an angle very accurately or to locate any work to a given angle. Sine bars are made from high chromium, corrosion-resistant steel, and are hardened, precision ground, and stabilized.
Figure 1. The Sine Bar
Two cylinders of equal diameter are placed at the ends of the bar. The axes of these two cylinders are mutually parallel to each other, and are also parallel to, and at equal distance from, the upper surface of the sine bar. Accuracy up to 0.01mm/m of length of the sine bar can be obtained.
A sine bar is generally used with slip gauge blocks. The sine bar forms the hypotenuse of a right triangle, while the slip gauge blocks form the opposite side. The height of the slip gauge block is found by multiplying the sine of the desired angle by the length of the sine bar: H = L * sin(θ).
For example, to find the gauge block height for a 13˚ angle with a 5.000″ sine bar, multiply sin(13˚) by 5.000″: H = 5.000″ * sin(13˚) = 1.1248″. Slip gauge blocks stacked to a height of 1.1248″ would then be used to elevate the sine bar to the desired angle of 13˚.
Sine Bar Principles
• The application of trigonometry applies to sine bar usage.
• A surface plate, sine bar, and slip gauges are used for the precise formation of an angle.
• It is possible to set up any angle ϴ by using the standard length of side AB, and calculating the height of side BC using BC = AB * sin(ϴ).
• The angle ϴ is given by ϴ = asin(BC/AB).
• Figure 1 shows a typical sine bar set up on a surface plate with slip gauge blocks of the required height BC to form a desired angle ϴ.
Figure 2: Forming an Angle with a Sine Bar and Gauge Blocks
Wringing
The term wringing refers to a condition of intimate and complete contact by tight adhesion between measuring faces. Wringing is done by hand using sliding and twisting motions. One gauge is placed perpendicular to the other using standard gauging pressure, then a rotary motion is applied until the blocks are lined up. In this way air is expelled from between the gauge faces, causing the blocks to adhere. This adherence is caused partially by molecular attraction and partially by atmospheric pressure. Similarly, for separating slip gauges, a combined sliding and twisting motion should be used.
1. To set an angle on any sine bar, you must first determine the center distance of the sine bar (C), the angle you wish to set (A), and whether the angle is given in degrees-minutes-seconds or decimal degrees.
2. If the angle is in degrees-minutes-seconds, convert it to decimal degrees.
3. Calculate the gauge block stack height G = C * sin(A) and assemble a stack of gauge blocks (G) equal to that size. The units of the stack will match the units of the center distance (i.e., if the center distance is 5 for a 5 inch sine plate, the gauge block stack will also be in inches).
4. Place these slip gauge blocks under the gauge block roll of the sine device and the desired angle is set.
5. Tighten the locking mechanism on those devices that have one and you’re ready to go.
Figure 3: Sine bar set-up using the formula G = C * sin(A)
If you just want to set an angle with a sine bar and stack of blocks, then take the sine of the desired angle on your calculator and multiply the result by the distance between the centers of the cylinders in the sine bar. Assemble a stack of blocks equal to this value and put it under one of the cylinders.
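The procedure above reduces to a single formula, G = C * sin(A), plus, when the angle is given in degrees-minutes-seconds, a conversion to decimal degrees. A minimal Python sketch of both steps (the function names are illustrative, not from the text):

```python
import math

def dms_to_decimal(degrees, minutes=0.0, seconds=0.0):
    """Convert an angle in degrees-minutes-seconds to decimal degrees."""
    return degrees + minutes / 60.0 + seconds / 3600.0

def gauge_block_stack(center_distance, angle_deg):
    """Gauge block stack height G = C * sin(A).
    The units of the stack match the units of the center distance."""
    return center_distance * math.sin(math.radians(angle_deg))

# The 13-degree example with a 5.000 in sine bar:
print(round(gauge_block_stack(5.000, 13), 4))  # 1.1248
```

For a 30° setting the same call returns 2.5000, matching the worked example later in this unit.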
Sine Bar Set-Up Calculation
To calculate the gauge block height needed to set up a sine bar to a specific angle, take the sine of the angle and multiply it by the sine bar length. The length of the sine bar is the distance between the centers of the sine bar gauge pins.
Figure 4. Sine Bar
Example:
Set up a 5.0” sine bar or sine plate to 30°
SIN (30˚) = 0.5000
0.5000 x 5.0″ (sine Bar Length) = 2.5000″
Rounded to 4 decimal places, the gauge block height is 2.5000″.
Table 1 Common Angles and heights for a 5-inch sine bar:
Angle Height
5° 0.4358″
10° 0.8682″
15° 1.2941″
20° 1.7101″
25° 2.1131″
30° 2.5000″
35° 2.8679″
40° 3.2139″
45° 3.5355″
50° 3.8302″
55° 4.0958″
60° 4.3301″
Sine Bar Usage
To measure a known angle or locate any work to a given angle:
1. Always use a perfectly flat and clean surface plate.
2. Place one roller on the surface plate and the other roller on the slip gauge block stack of height H.
3. Let the sine bar be set to an angle ϴ.
4. Then sin(ϴ) = H/L, where L is the distance between the roller centers.
5. Thus knowing ϴ, H can be found and any work can be set out at this angle as the top face of the sine bar is inclined at angle ϴ to the surface plate.
6. For better results, both rollers may be placed on slip gauge stacks of heights H1 and H2 respectively, as shown in the figure above.
7. In that case, sin(ϴ) = (H1 − H2) / L
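Going the other way, finding the angle from a measured elevation as in several of the unit test questions below, uses the arcsine. A short Python sketch (the helper names are hypothetical), covering both the single-stack case sin(ϴ) = H/L and the two-stack case:

```python
import math

def sine_bar_angle(height, length):
    """Angle in degrees when one roller is elevated: theta = asin(H / L)."""
    return math.degrees(math.asin(height / length))

def sine_bar_angle_two_stacks(h1, h2, length):
    """Both rollers on stacks of heights H1 and H2: sin(theta) = (H1 - H2) / L."""
    return math.degrees(math.asin((h1 - h2) / length))

# A 5.00 in sine bar elevated 2.50 in:
print(round(sine_bar_angle(2.50, 5.00), 2))  # 30.0
```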
UNIT TEST
1. Describe the use of the sine bar.
2. Calculate the required sine bar elevation for an angle of 37˚.
3. A 5.00” sine bar is elevated 1.50”. Calculate the angle.
4. Determine the elevation for 30˚ using 5.00” sine bar.
5. Determine the elevation for 42˚ using 5.00” sine bar.
6. A 5.00” sine bar is elevated 1.25”. What angle is established?
7. What gauge block stack would establish an angle of 35˚ using a 5.00” sine bar? | textbooks/workforce/Manufacturing/Book%3A_Manufacturing_Processes_4-5_(Virasak)/01%3A_Milling_Machines/01.4%3A_Unit_3%3A_Sine_Bar.txt |
OBJECTIVE
After completing this unit, you should be able to:
• Identify offset Boring head
• Explain how to correct set up for Rotary Table.
Offset Boring Head
The offset boring head is an attachment that fits the milling machine spindle and permits most drilled holes to be finished with a better surface and greater diameter accuracy. Offset boring heads are used to create large holes when the tolerance does not allow for a drill bit, or when a large enough drill or reamer is not available. An offset boring head can also be used to enlarge a hole, or to adjust a hole centerline in certain instances.
Safety:
Be sure all set screws are tight before operation. Be sure the offset boring head has clearance to fit into the hole when boring. Remove the Allen wrench before turning the mill on. Double check the mill speed before operation.
Figure 1. Offset Boring Head
OFFSET BORING HEAD AND TOOLS
Figure 1 shows an offset boring head. Note that the boring bar can be adjusted at a right angle to the spindle axis. This feature makes it possible to position the boring cutter accurately to bore holes of varying diameters.
This adjustment is more convenient than adjusting the cutter in the boring bar holder or changing the boring bar. Another advantage of the offset boring head is that the graduated micrometer collar allows the tool to be moved accurately a specified amount (usually in increments of 0.001″) without the use of a dial indicator or other measuring device.
Offset Boring Head
A boring head has three major components:
• boring head body
• bar holder/insert holder
• dial screw
The boring head body has a black oxide finish for rust prevention. The bar holder or insert holder (#1) has been satin chromed for wear resistance. The dial screw (#3) has been precision ground to give accurate movement of the bar holder/insert holder in the dove tail slide. The gib tension has been preset at the factory. The two gib screws (#5) should not be loosened to make size adjustments. These screws are for adjusting the gib pressure only and are filled with red wax to prevent accidental adjustment. The locking screw (#6) is the only screw used for making size changes to the boring head.
Diameter Adjustment
To adjust the diameter of an Allied Criterion standard boring head:
1. Loosen the locking screw (#6).
2. Turn the dial screw (#3) clockwise to increase the diameter and counterclockwise to decrease the diameter.
3. Tighten the locking screw (#6).
Adjusting Standard Boring Heads
Procedure:
1. Set up and carefully align the work parallel to the table travel.
2. Align the center of the Milling Machine spindle with the reference point on the work.
3. Spot the location of hole with a center drill or spotting tool.
4. Drill the hole to over ½ inch. Be sure the offset boring head has clearance to fit into the hole when boring.
5. Install the boring head into the milling machine.
6. Install the boring bar and tighten the set screw, then loosen the lock screw and adjust the boring bar to the hole edge.
7. Recheck the work alignment, as well as the alignment of the spindle with the reference point, to make sure nothing has shifted. If any error is evident, it will be necessary to repeat step 6 before proceeding.
8. Adjust Milling Machine speed for hole size and material.
9. Engage the worm feed on the mill. Bring the quill to the material. Pull the handle out to engage the power feed. When at the desired depth, push the handle back to disengage the feed, then turn off the mill. Remove the boring head from the hole.
10. Finish bore hole to the required size.
NOTE: Repeat steps 6-9 until the hole is the desired size.
Rotary Table
A rotary table can be used to make arcs and circles. For example, the circular T-slot in the swivel base for a vise can be made using a rotary table. Rotary tables can also be used for indexing, where a workpiece must be rotated an exact amount between operations. You can make gears on a milling machine using a rotary table. Dividing plates make indexing with a rotary table easier.
Rotary tables are most commonly mounted “flat”, with the table rotating around a vertical axis, in the same plane as the cutter of a vertical milling machine. An alternate setup is to mount the rotary table on its end (or mount it “flat” on a 90° angle plate), so that it rotates about a horizontal axis. In this configuration a tailstock can also be used, thus holding the workpiece “between centers.”
With the table mounted on a secondary table, the workpiece is accurately centered on the rotary table’s axis, which in turn is centered on the cutting tool’s axis. All three axes are thus coaxial. From this point, the secondary table can be offset in either the X or Y direction to set the cutter the desired distance from the workpiece’s center. This allows concentric machining operations on the workpiece. Placing the workpiece eccentrically a set distance from the center permits more complex curves to be cut. As with other setups on a vertical mill, the milling operation can be either drilling a series of concentric, and possibly equidistant holes, or face or end milling either circular or semicircular shapes and contours.
A rotary table can be used:
• To machine spanner flats on a bolt
• To drill equidistant holes on a circular flange
• To cut a round piece with a protruding tang
• To create large-diameter holes, via milling in a circular toolpath, on small milling machines that don’t have the power to drive large twist drills (>0.500″/>13 mm)
• To mill helixes
• To cut complex curves (with proper setup)
• To cut straight lines at any angle
• To cut arcs
• With the addition of a compound table on top of the rotary table, the user can move the center of rotation to anywhere on the part being cut. This enables an arc to be cut at any place on the part.
• To cut circular pieces
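One of the uses above, drilling equidistant holes on a circular flange, amounts to stepping the rotary table through equal angles. The equivalent X/Y offsets of such a bolt-circle pattern can be sketched in Python (this helper is illustrative only, not part of any machine control):

```python
import math

def bolt_circle(num_holes, circle_diameter, start_angle_deg=0.0):
    """Return (x, y) offsets from the rotary table center for equally
    spaced holes on a bolt circle of the given diameter."""
    radius = circle_diameter / 2.0
    coords = []
    for k in range(num_holes):
        a = math.radians(start_angle_deg + k * 360.0 / num_holes)
        coords.append((round(radius * math.cos(a), 4),
                       round(radius * math.sin(a), 4)))
    return coords

# Six equidistant holes on a 4.000 in bolt circle:
for x, y in bolt_circle(6, 4.000):
    print(x, y)
```

In practice the rotary table itself does the indexing; the coordinates are only a check of hole spacing.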
Setting Up a Rotary Table
When using a rotary table on a Milling Machine, whether to mill an arc or drill holes in some circular pattern, there are two things that must be done to set up the workpiece. First, the workpiece must be centered on the rotary table. Second, the rotary table must be centered under the spindle. Then the mill table can be moved some appropriate distance and you can start cutting.
You could center the table under the spindle first, by indicating off the hole in the center of the table. Then you could mount the workpiece on the table and indicate off the workpiece. There are two problems with this approach. First, you are assuming that the hole in the table is true and centered, which may or may not be the case. Second, this approach risks an accumulation of errors, since you are measuring from two different features (the rotary table’s hole and some feature on the workpiece). A better approach is to first center the workpiece on the rotary table, and then center the rotary table under the spindle.
To center the workpiece on the rotary table, bring a dial indicator into contact with the workpiece, spin the rotary table, and watch for deflection of the indicator pointer. Tap the workpiece into position as required, until the needle no longer deflects.
You dial in a rotary table by placing a dial test indicator in a chuck or collet in the spindle, which is then rotated by hand with the indicator tip in contact with the hole of the rotary table. If your machine can be taken out of gear, it helps to do so, so the spindle swings freely. It’s obviously easier to use a drill chuck than a collet, too, so you have something that you can turn easily. Make your adjustments using the saddle and table hand wheels.
Once you have located center (the indicator will read the same as you rotate the spindle), it is a very good idea to set both of your dials at “0”, instead of marking some random location. Make sure you have backlash set properly, too. Set the dials so they read in a positive direction, so it is easy to count off any changes and you never have to remember which way you had chosen to set backlash. It also helps to mark the table and saddle with a wax pencil so you know where center is located. That tells you when to stop turning the handle when “0” comes around, if you want to get the table back to center to load another part.
Once you have located the center of the table and have set the dials and locked the table and saddle, you usually have some feature on your part that you desire to be centered. In some cases it may be a hole; in others it may be the outside edge of the circular part. In either case, it is common practice to use the same indicator and sweep it inside the hole or around the perimeter of the part. The perimeter may require you to get around clamps, which can usually be accomplished by using the quill to raise the indicator far enough to clear them. When you dial in parts to a table that has already been located, you tap the part around; you do not make adjustments with the saddle or table handles. Tap the part after you have snugged up the clamps slightly, so it does not move about wildly. You can achieve virtually perfect location that way, certainly as close as the machine is capable of working.
After the workpiece is centered on the rotary table, you now turn the spindle by hand, so the indicator tip sweeps the inside of the hole. Adjust the position of the mill table as required until no needle deflection is noted.
Setting up your Rotary Table
How to center the spindle over the center of the rotary table. Here are some of the methods to use.
To Center the Rotary Table with the Vertical Mill Spindle
Follow the procedure below:
1. Square the vertical head with the machine table.
2. Mount the rotary table on the milling machine table.
3. Place a test plug in the center hole of the rotary table.
4. Mount a dial indicator in the milling machine spindle.
5. With the dial indicator just clearing the top of the test plug, rotate the machine spindle by hand and approximately align the plug with the spindle.
6. Bring the dial indicator into contact with the diameter of the plug, and rotate the spindle by hand.
7. Adjust the machine table by the longitudinal(X) and crossfeed(Y) handles until the dial indicator registers no movement.
8. Lock the milling machine table and saddle, and recheck the alignment.
9. Readjust if necessary.
A way to setup your rotary table
Indicate Jig
Center the jig or workpiece over the center of the rotary table. To do this, rotate the rotary table and adjust the workpiece until you get consistent runout all the way around.
To Center a Workpiece with the Rotary Table
Often it is necessary to perform a rotary table operation on several identical workpieces, each having a machined hole in the center. To quickly align each workpiece, a special plug can be made to fit the center hole of the workpiece and the hole in the rotary table. Once the machine spindle has been aligned with the rotary table, each succeeding piece can be aligned quickly and accurately by placing it over the plug.
If there are only a few pieces, which would not justify the manufacture of a special plug, or if the workpiece does not have a hole through its center, the following method can be used to center the workpiece on the rotary table.
1. Align the rotary table with the vertical mill head spindle.
2. Lightly clamp the workpiece on the rotary table in the center. Do not move the longitudinal(X) or crossfeed(Y) feed handles.
3. Disengage the rotary table worm mechanism.
4. Mount a dial indicator in the milling machine spindle or on the milling machine table, depending upon the workpiece.
5. Bring the dial indicator into contact with the surface to be indicated, and revolve the rotary table by hand.
6. With a soft metal bar, tap the workpiece(away from the indicator movement) until no movement is registered on the indicator in a complete revolution of the rotary table.
7. Clamp the workpiece tightly, and recheck the accuracy of the setup.
Radius Milling
To mill the end of a workpiece to a certain radius, or to machine circular slots having a definite radius, the procedure below should be followed.
1. Align the vertical milling machine head at 90° to the table.
2. Mount a dial indicator in the milling machine spindle.
3. Mount rotary table on the milling machine table.
4. Center the rotary table with the machine spindle using a test plug in the table and a dial indicator on the spindle.
5. Set the longitudinal(X)feed dial and the crossfeed(Y) dial to zero.
6. Mount the workpiece on the rotary table, aligning the center of the radial cuts with the center of the table. A special arbor may be used for this. Another method is to align the center of the radial cut with a wiggler mounted in the machine spindle.
7. Move either the crossfeed or the longitudinal feed(whichever is more convenient) an amount equal to the radius required.
8. Lock both the table and the saddle.
9. Mount the proper end mill.
10. Set the correct speed(RPM).
11. Rotate the workpiece, using the rotary table feed handwheel, to the starting point of the cut.
12. Set the depth of the cut and machine the radius to the size indicated on the drawing, using hand or power feed.
UNIT TEST
1. When is an offset boring head used?
2. Name three major components of Boring Heads.
3. Why is the locking screw tightened after tool slide adjustments have been made?
4. Why does the tool slide have multiple holes to hold boring tools?
5. What determines the cutting speed in boring?
6. For what purpose may a rotary table be used?
7. What is the purpose of the hole in the center of a rotary table?
8. Describe briefly how a rotary table may be centered with a vertical mill spindle.
9. Describe briefly how a single workpiece would be centered on a rotary table.
10. Explain how a large radius may be cut using a rotary table.
Chapter Attribution Information
This chapter was derived from the following sources.
• Tapping Procedures derived from Drilling and Tapping by the University of Idaho, CC:BY-SA 3.0.
• Tramming derived from Tramming Mill Head by the University of Idaho, CC:BY-SA 3.0.
• Dial Indicator (Photo) derived from Dial Gauge by Wikimedia, CC:BY-SA 3.0.
• Milling Machine Procedures derived from Mechanical Engineering Tools by the Massachusetts Institute of Technology, CC:BY-NC-SA 4.0.
• Rotary Table derived from Rotary Table by the University of Idaho, CC:BY-SA 3.0. | textbooks/workforce/Manufacturing/Book%3A_Manufacturing_Processes_4-5_(Virasak)/01%3A_Milling_Machines/01.5%3A_Unit_4%3A_Offset_Boring_Head.txt |
Unit 1: The Engine Lathe
OBJECTIVE
After completing this unit, you should be able to:
• Identify the most important parts of the Lathe and their functions.
• Understand the Lathe safety rules.
• Describe how to set up a cutting tool for machining.
• Describe how to mount a workpiece in the lathe.
• Explain how to install a cutting tool.
• Describe how to position the tool.
• Describe how to center the workpiece and use the tailstock center.
Description
The lathe is a very versatile and important machine to know how to operate. This machine rotates a cylindrical workpiece against a tool that the operator controls. The lathe is the forerunner of all machine tools. The work is held and rotated on its axis while the cutting tool is advanced along the line of a desired cut. The lathe is one of the most versatile machine tools used in industry. With suitable attachments, the lathe may be used for turning, tapering, form turning, screw cutting, facing, drilling, boring, spinning, grinding, and polishing operations. Cutting operations are performed with a cutting tool fed either parallel or at right angles to the axis of the work. The cutting tool may also be fed at an angle, relative to the axis of the work, for machining tapers and angles. On a lathe, the tailstock does not rotate. Instead, the spindle that holds the stock rotates. Collets, centers, three jaw chucks, and other work-holding attachments can all be held in the spindle. The tailstock can hold tools for drilling, threading, reaming, or cutting tapers. Additionally, it can support the end of the workpiece using a center and can be adjusted to adapt to different workpiece lengths.
Figure 1. Parts of a lathe
1. Power On/Off
2. Spindle Forward/Reverse (flip handle up or down)
3. Carriage Handwheel
4. Cross Feed Handwheel
5. Compound Feed Handwheel
6. Carriage/Cross Feed Engage
7. Threading Half Nut
8. Threading Dial
9. Spindle Speed
10. Brake
11. Spindle High/Low Range
12. Thread/Feed Reverse (push in/pull out)
13. Feed Ranges (A, B, C)
14. Feed Ranges (R, S, T)
15. Feed Ranges (V, W, X, Y, Z) – V and Z are settings for threading
16. Gear Box
17. Gear Box Low/High
18. Tailstock
19. Tool Post
20. Toolholder
21. Three – Jaw Chuck
22. DRO (Digital Read Out)
23. Threading/Feed Selector (see item 15)
Lathe Safety
As always we should be aware of safety requirements and attempt to observe safety rules in order to eliminate serious injury to ourselves or others.
Wear safety glasses and short sleeves; do not wear a tie or rings, and never try to stop the work by hand. Stop the machine before trying to check the work. If you don’t know how a machine works, don’t run it. Don’t use rags when the machine is running.
1. Remove the chuck key from the chuck immediately after use. Do not turn the lathe on if the chuck key is still in the chuck.
2. Turn the chuck or faceplate through by hand unless there are binding or clearance issues.
3. It is important that the chuck or faceplate is securely tightened onto the lathe’s spindle.
4. Move the tool bit to a safe distance from the chuck, collet, or face plate when inserting or removing your part.
5. Place the tool post holder to the left of the compound slide. This will ensure that the compound slide will not run into the spindle or chuck attachments.
6. When installing and removing chucks, face plates, and centers, always be sure all mating surfaces are clean and free from burrs.
7. Make sure the tool bit is sharp and has correct clearance angles.
8. Clamp the tool bit as short as possible in the tool holder to prevent it from vibrating or breaking.
9. Apply and maintain cutting fluids evenly. This will help prevent overheating and warping.
10. Do not run a threaded spindle in reverse.
11. Never run the machine faster than the recommended speed for the specific material.
12. If a chuck or faceplate is jammed on the spindle nose, contact an instructor to remove it.
13. If any filing is done on work revolving in the lathe, file left handed to prevent slipping into the chuck.
14. Always stop the machine before taking measurements.
15. Stop the machine when removing long stringy chips. Remove them with a pair of pliers.
16. Make sure that the tailstock is locked in place and that the proper adjustments are made if the work is being turned between centers.
17. When turning between centers, avoid cutting completely through the piece.
18. Do not use rags while the machine is running.
19. Remove tools from the tool post and tailstock before cleaning.
20. Do not use compressed air to clean the lathe.
21. Use care when cleaning the lathe. The cutting tools are sharp, the chips are sharp, and the workpiece may be sharp.
22. Make sure the machine is turned off and clean before leaving the workspace. Always remove the chuck wrench after use, avoid horseplay, and keep the floor area clean.
Here are some questions which are important when running a lathe:
• Why is proper Cutting Speed important?
When the cutting speed is set too high, the tool breaks down quickly and time is lost replacing or reconditioning it. Too low a cutting speed results in low production.
Know:
• Depth of cut for Roughing.
• Depth of cut for Finishing.
Notice that roughing cut depths range from .010 to .030 inch, depending on the material being machined, and finishing cut depths range from .002 to .012 inch for the different materials.
• Feedrate for Roughing cut
• Feedrate for Finishing cut
Notice that feed rates for roughing cuts range from .005 to .020 inch per revolution, depending on the material being machined, and from .002 to .004 inch per revolution for the finish feed.
Cutting Tool Terminology
There are many different tools that can be used for turning, facing, and parting operations on the lathe. Each tool is usually composed of carbide as a base material, but can include other compounds. This section covers the different appearances and uses of lathe cutting tools.
Figure A: depicts a standard turning tool used to create a semi-square shoulder. If there is enough material behind the cutting edge, the tool can also be used for roughing.
Figure B: depicts a standard turning tool with a lead angle. This angle enables heavy roughing cuts. It is also possible to turn the tool to create a semi-square shoulder.
To setup a Cutting Tool for Machining
• Move the toolpost to the left-hand side of the compound rest.
• Mount a toolholder in the toolpost so that the set screw in the toolholder is about 1 inch beyond the toolpost.
• Insert the proper cutting tool into the toolholder, having the tool extend .500 inch beyond the toolholder.
• Set the cutting tool point to center height. Check it with a steel rule or against the tailstock center.
• Tighten the toolpost securely to prevent it from moving during a cut
Figure 2: Toolpost and Toolholder
To Mount Workpiece in Lathe
• Check that the live center is running true. If it is not running true, remove the center, clean all surfaces, and replace the center. Check again for trueness.
• Clean the lathe center points and the center holes in the workpiece.
• Adjust the tailstock spindle until it projects about 3 inches beyond the tailstock.
• Loosen the tailstock clamp nut or lever.
• Place the end of the workpiece in the chuck and slide the tailstock up until it supports the other end of the workpiece.
• Tighten the tailstock clamp nut or lever.
Figure 3: Workpiece in Lathe
Installing a Cutting Tool
• Tool holders are used to hold lathe cutting tools.
• To install, clean the holder and tighten the bolts.
• The lathe’s tool holder is attached to the tool post using a quick release lever.
• The tool post is attached to the machine with a T-bolt.
Figure 4: Installing a Cutting Tool
Positioning the Tool
To reposition the cutting tool, move the cross slide and lathe saddle by hand. Power feeds are also available. Exact procedures are dependent on the machine. The compound provides a third axis of motion, and its angle can be altered to cut tapers at any angle.
1. Loosen the bolts that keep the compound attached to the saddle.
2. Swivel the compound to the correct angle, using the angle graduations located at the compound’s base.
3. Tighten the bolts again.
4. The cutter can be hand fed along the chosen angle. The compound does not have a power feed.
5. If needed, use two hands for a smoother feed rate. This will make a fine finish.
6. Both the compound and cross slide have micrometer dials, but the saddle lacks one.
7. If more accuracy is needed when positioning the saddle, use a dial indicator that is attached to the saddle. Dial indicators press against stops.
Figure 5: Positioning the Tool
Centering the Workpiece
Steel Rule
1. Place the steel rule between the stock and the tool.
2. The tool is centered when the rule is vertical.
3. The tool is high when the rule leans forward.
4. The tool is low when the rule leans backward.
Tailstock Center
1. Reference the center of the tailstock when setting the tool.
2. Position the tip of the tool with the tailstock center.
UNIT TEST
1. Please list the ten most important parts of the Lathe.
2. Please list five Lathe safety guidelines.
3. Why is cutting speed important?
4. What is a Toolholder?
5. Where do you mount a Toolholder?
6. How far do you extend the cutting tool in the Toolholder?
7. Please list three different cutting tools.
8. Please describe the positioning of the tool.
9. Explain how to center the workpiece.
10. What are the two ways to center the workpiece?
OBJECTIVE
After completing this unit, you should be able to:
• Describe the Speed, Feed, and Depth of cut.
• Determine the RPM for different materials and diameters.
• Describe the feed rate for turning.
• Describe setting speeds on a lathe.
• Describe setting feeds on a lathe.
To operate any machine efficiently, the machinist must learn the importance of cutting speeds and feeds. A lot of time can be lost if the machines are not set at the proper speed and feeds for the workpiece.
In order to eliminate this time loss, we can, and should, use recommended metal-removal rates that have been researched and tested by steel and cutting-tool manufacturers. These cutting speeds and metal-removal rates can be found in the appendix or in the Machinery’s Handbook.
We can control the feed on an engine lathe by using the change gears in the quick-change gearbox. Our textbook recommends that, whenever possible, only two cuts should be taken to bring a diameter to size: a roughing cut and a finishing cut.
It has been my experience that at least three cuts are needed: one to remove excess material quickly (the rough cut), one to establish the finish and allow for tool pressure, and one to finish the cut.
If you were cutting threads all day long, day in and day out, you might set the lathe up for only two cuts: one cut to remove all but .002 or .003 of material, and a last cut to hold size and finish. This is done all the time in some shops today.
Have you noticed that when you take a very small cut on the lathe (.001 to .002), the finish is usually poor, while on the rough cut you made just before this very light cut, the finish was good? The reason is that some tool pressure is desirable when making finish cuts.
IPM = Inches Per Minute
RPM = Revolutions Per Minute
Feed = IPM
#T = Number of teeth in cutter
Chip/Tooth = Chip load (feed) per tooth allowed for the material
Feed Rate (IPM) = Chip/Tooth × #T × RPM
Example: Material = Aluminum, 3″ cutter with 5 teeth, chip load = 0.018″ per tooth, RPM = 3000. IPM = 0.018 × 5 × 3000 = 270 inches per minute.
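The same arithmetic as a tiny function, handy for checking chip-load calculations (a sketch; the function name is illustrative):

```python
def feed_rate_ipm(chip_per_tooth, num_teeth, rpm):
    """Feed rate in inches per minute: chip load x number of teeth x RPM."""
    return chip_per_tooth * num_teeth * rpm

# The aluminum example: 0.018 in/tooth, 5 teeth, 3000 RPM
print(round(feed_rate_ipm(0.018, 5, 3000)))  # 270
```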
Speed, Feed, and Depth of Cut
1. Cutting speed is defined as the speed (usually in feet per minute) of a tool when it is cutting the work.
2. Feed rate is defined as tool’s distance travelled during one spindle revolution.
3. Feed rate and cutting speed determine the rate of material removal, power requirements, and surface finish.
4. Feed rate and cutting speed are mostly determined by the material that’s being cut. In addition, the deepness of the cut, size and condition of the lathe, and rigidity of the lathe should still be considered.
5. Roughing cuts (0.01 in. to 0.03 in. depth of cut) for most aluminum alloys run at a feed rate of .005 to .020 inches per revolution (IPR), while finishing cuts (0.002 in. to 0.012 in. depth of cut) run at .002 to .004 IPR.
6. As the hardness of the work material decreases, the cutting speed can be increased. Additionally, as the cutting tool material becomes harder and more wear resistant, the cutting speed can be increased.
7. Remember, for each thousandth depth of cut, the diameter of the stock is reduced by two thousandths.
Figure 1: Cutting speed increases as the work material becomes softer (steel → iron → aluminum → lead)
Figure 2: Cutting speed increases as the cutting tool material becomes harder (carbon steel → high speed steel → carbide)
Cutting Speeds:
A lathe cutting speed may be defined as the rate at which a point on the work circumference travels past the cutting tool. Cutting speed is always expressed in meters per minute (m/min) or in feet per minute (ft/min). Industry demands that machining operations be performed as quickly as possible; therefore correct cutting speeds must be used for the type of material being cut. If a cutting speed is too high, the cutting tool edge breaks down rapidly, resulting in time lost reconditioning the tool. With too slow a cutting speed, time will be lost in the machining operation, resulting in low production rates. Based on research and testing by steel and cutting tool manufacturers, see the lathe cutting speed table below. The cutting speeds for high speed steel listed below are recommended for efficient metal removal rates. These speeds may be varied slightly to suit factors such as the condition of the machine, the type of work material, and sand or hard spots in the metal. The RPM at which the lathe should be set for cutting metals is as follows:
To determine the RPM of the lathe while performing procedures on it:
Formula: RPM = (CuttingSpeed x 4) / Diameter
We first must find what the recommended cutting speed is for the material we are going to machine.
Learn to use the Machinery’s Handbook and other related sources to obtain the information you need.
EXAMPLE: How fast should a 3/8 inch drill be turning when drilling mild steel?
From the recommended cutting speeds in our class handouts, use a cutting speed of 100 for mild steel.
RPM = (100 x 4) / .375 ≈ 1066 RPM
What would the RPM be if we were turning a 1.00 inch diameter workpiece made out of mild steel on the lathe?
RPM = (100 x 4) / 1.00 = 400 RPM
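The rule-of-thumb RPM formula (the 4 is a shop approximation of 12/π ≈ 3.82) is easy to wrap in a small function for checking answers (a sketch; the name is illustrative):

```python
def lathe_rpm(cutting_speed_fpm, diameter_in):
    """Approximate spindle speed: RPM = (cutting speed x 4) / diameter."""
    return cutting_speed_fpm * 4.0 / diameter_in

print(round(lathe_rpm(100, 0.375)))  # 1067 (the text truncates this to 1066)
print(round(lathe_rpm(100, 1.0)))    # 400
```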
Recommended Cutting Speeds for Six Materials in RPM
These charts are for HSS tools. If using carbide, the rates may be increased.
Lathe Feed:
The feed of a lathe is the distance the cutting tool advances along the length of the work for every revolution of the spindle. For example, if the lathe is set for a .020 inch feed, the cutting tool will travel along the length of the work .020 inch for every complete turn that the work makes. The feed of a lathe is dependent upon the speed of the lead screw or feed rod. The speed is controlled by the change gears in the quick-change gearbox.
Whenever possible, only two cuts should be taken to bring a diameter to size. Since the purpose of a rough cut is to remove excess material quickly and surface finish is not too important, a coarse feed should be used. The finishing cut is used to bring the diameter to size and produce a good surface finish, and therefore a fine feed should be used.
The recommended feeds for cutting various materials when using a high speed steel cutting tools listed in table below. For general purpose machining a .005 – .020 inch feed for roughing and a .012 to .004 inch feed for finishing is recommended.
To select the proper feed rate for drilling, you must consider several factors.
1. Depth of hole – chip removal
2. Material type – machinability
3. Coolant – flood, mist, brush
4. Size of drill
5. How strong is the setup?
6. Hole finish and accuracy
Feed Rates for Turning:
For general purpose machining, use a recommended feed rate of .005 – .020 inches per revolution for roughing and a .002 – .004 inches per revolution for finishing.
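Feed in inches per revolution combines with spindle RPM to give the carriage travel rate, and from that the time for one pass. A minimal Python sketch; the 400 RPM, .010 in/rev, and 4.00 in. values are assumed for illustration, not prescribed by the text:

```python
def feed_ipm(feed_per_rev, rpm):
    """Carriage travel in inches per minute: feed (in/rev) x spindle RPM."""
    return feed_per_rev * rpm

def minutes_per_pass(length, feed_per_rev, rpm):
    """Time for one pass along `length` inches of the workpiece."""
    return length / feed_ipm(feed_per_rev, rpm)

# Assumed values: .010 in/rev roughing feed at 400 RPM over a 4.00 in. cut
print(feed_ipm(0.010, 400))               # 4.0 in/min
print(minutes_per_pass(4.0, 0.010, 400))  # 1.0 minute
```

This is why a coarse roughing feed saves time: doubling the feed per revolution halves the time per pass at the same RPM.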
Feeds for Various Materials (using HSS cutting tool)
Setting speeds on a lathe:
Lathes are designed to operate at various spindle speeds for machining different materials. These speeds are measured in RPM (revolutions per minute) and are changed by means of cone pulleys or gear levers. On a belt-driven lathe, various speeds are obtained by changing the flat belt and the back gear drive. On a geared-head lathe, speeds are changed by moving the speed levers into the proper positions according to the RPM chart fastened to the lathe (usually on the headstock). While shifting the lever positions, place one hand on the faceplate or chuck and turn it slowly by hand. This will enable the levers to engage the gear teeth without clashing. Never change speeds while the lathe is running. On lathes equipped with variable-speed drives, the speed is changed by turning a dial or handle while the machine is running.
Setting feeds:
The feed of a lathe, or the distance the carriage will travel in one revolution of the spindle, depends on the speed of the feed rod or lead screw. This is controlled by the change gears in the quick-change gearbox. The quick-change gearbox obtains its drive from the headstock spindle through the end gear train. A feed and thread chart mounted on the front of the quick-change gearbox indicates the various feeds and metric pitches or threads per inch which may be obtained by setting the levers to the positions indicated.
To set the feedrate for the Acura lathe:
Example:
1. Select the desired feedrate on the chart (See Figure 2)
2. Select a feedrate of .007 – LCS8W (See Figure 2)
3. L = Select High/Low lever (See Figure 3)
4. C = Select Feed Ranges and change to C on this lever (See Figure 3)
5. S = Select Feed Ranges and change to S on this lever (See Figure 3)
6. 8 = Select Gear Box and change to 8 on this lever (See Figure 3)
7. W = Select Feed Ranges and change to W on this lever (See Figure 3)
Before turning on the lathe, be sure all levers are fully engaged by turning the headstock spindle by hand, and see that the feed rod turns.
UNIT TEST
1. What are IPM and RPM?
2. What is the formula for Feedrate?
3. What would the RPM be if we were turning a 1.00” diameter workpiece made out of mild steel, using HSS cutting tool?
4. What would the RPM be if we were turning a 1.00” diameter workpiece made out of mild steel, using Carbide cutting tool?
5. Using the cutting speed for carbon steel, the workpiece diameter to be faced is 6.00”. Find the correct RPM.
6. A center drill has a 1/8” drill point. Find the correct RPM to use for carbon steel.
7. If the cutting speed of aluminum is 300 sfm and the workpiece diameter is 4.00”, What is the RPM?
8. What are the roughing and finishing feedrates for aluminum?
9. Please set the roughing cut feedrate from figure 5.
10. Please set the finishing cut feedrate from figure 5.
OBJECTIVE
After completing this unit, you should be able to:
• Describe different types of chucks.
Chucks:
Some workpieces, because of their size and shape, cannot be held and machined between lathe centers. Lathe chucks are used extensively for holding work for machining operations. The most commonly used lathe chucks are the three-jaw universal, the four-jaw independent, and the collet chuck.
Three-jaw universal chuck:
The three-jaw universal chuck is used to hold round and hexagonal work. It grasps the work quickly and within a few hundredths of a millimeter or thousandths of an inch of accuracy, because the three jaws move simultaneously when adjusted by the chuck wrench. This simultaneous motion is caused by a scroll plate into which all three jaws fit. Three-jaw chucks are made in various sizes from 1/8 to 16 inches in diameter. They are usually provided with two sets of jaws, one for outside chucking and the other for inside chucking.
Figure 1: Three-jaw universal chuck
Four-jaw independent chuck:
The four-jaw independent chuck has four jaws, each of which can be adjusted independently by a chuck wrench. They are used to hold round, square, hexagonal, and irregular-shaped workpieces. The jaws can be reversed to hold work by the inside diameter.
Figure 2: Four-jaw independent chuck
Collet chuck:
The collet chuck is the most accurate chuck and is used for high precision work and small tools. Spring collets are available to hold round, square, or hexagonal workpieces. An adaptor is fitted into the taper of the headstock spindle, and a hollow drawbar having an internal thread is inserted in the opposite end of the headstock spindle. As the handwheel and drawbar are revolved, the drawbar pulls the collet into the tapered adaptor, causing the collet to tighten on the workpiece.
Figure 3: Collet chuck
The Jacobs collet chuck has a wider range than the spring collet chuck. Instead of a drawbar, it incorporates an impact-tightening handwheel to close the collet on the workpiece. A set of 11 Rubber-Flex collets, each capable of a range of almost 1/8 in., makes it possible to hold a wide range of work diameters. When the handwheel is turned clockwise, the Rubber-Flex collet is forced into a taper, causing it to tighten on the workpiece. When the handwheel is turned counterclockwise, the collet opens and releases the workpiece.
Magnetic chucks:
Magnetic chucks are used to hold iron or steel parts that are too thin, or that may be damaged if held in a conventional chuck. These chucks are fitted to an adaptor mounted on the headstock spindle. Work is held lightly for aligning purposes by turning the chuck wrench approximately one-quarter turn; after the work has been aligned, the wrench is turned fully to hold the work securely.
Faceplates:
Faceplates are used to hold work that is too large or of such a shape that it cannot be held in a chuck or between centers. Faceplates are equipped with several slots to permit the use of bolts to secure the work so that the axis of the workpiece may be aligned with the lathe centers. When work is mounted off-center, a counterbalance should be fastened to the faceplate to prevent imbalance and the resulting vibration when the lathe is in operation.
UNIT TEST
1. What are the most commonly used lathe chucks? Name three.
2. Describe three-jaw universal chuck.
3. Describe the four-jaw independent chuck.
4. Describe the collet chuck.
5. Describe the Jacobs collet chuck.
6. Describe the magnetic chuck.
7. Describe faceplates.
OBJECTIVE
After completing this unit, you should be able to:
• Describe rough and finish turning.
• Describe turning to a shoulder.
• Describe the facing cut.
• Explain how to set up for center/spot drill.
• Explain how to set up for boring.
• Explain how to set up for knurling.
• Correctly set up a workpiece for parting/grooving.
• Determine the taper calculation.
• Correctly set up workpiece in a 4-jaw chuck.
A workpiece is generally machined on a lathe for two reasons: to cut it to size and to produce a true diameter. Work that must be cut to size and have the same diameter along the entire length of the workpiece involves the operation of parallel turning. Many factors determine the amount of material which can be removed on a lathe. A diameter should be cut to size in two cuts: a roughing cut and a finishing cut.
To have the same diameter at each end of the workpiece, the lathe centers must be in line.
To set an accurate depth of cut
Procedure:
1. Set the compound rest at 30 degrees.
2. Attach a roughing or finishing tool. Use a right-handed turning tool if feeding the saddle in the direction of the headstock.
3. Move the tool post to the left hand side of the compound rest and set the tool bit to the right height, on center.
4. Set the lathe to the correct speed and feed for the diameter and type of material being cut.
5. Start the lathe and take a light cut, about .005 inch deep and .250 inch long, at the right hand end of the workpiece.
6. Stop the lathe, but do not move the crossfeed screw handle.
7. Move the cutting tool to the end of the workpiece (to the right side) by turning the carriage hand wheel.
8. Measure the work and calculate the amount of material to be removed.
9. Turn the graduated collar half the amount of material to be removed. For example, if .060 inch is to be removed, the graduated collar should be turned in .030 inch, since the cut is taken off the circumference of the workpiece.
10. Remember, for each thousandth depth of cut, the diameter of the stock is reduced by two thousandths.
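The half-the-removal rule in steps 9 and 10 can be checked with a short calculation. This Python sketch is illustrative only; the .060 in. figure is the one used in the example above:

```python
def collar_infeed(material_to_remove):
    """Graduated-collar setting: half the material to be removed,
    since the cut comes off the full circumference of the work."""
    return material_to_remove / 2

def diameter_after_cut(diameter, depth_of_cut):
    """Each .001 in. depth of cut reduces the diameter by .002 in."""
    return diameter - 2 * depth_of_cut

print(collar_infeed(0.060))              # 0.03 in. infeed on the collar
print(diameter_after_cut(1.060, 0.030))  # about 1.00 in. after the pass
```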
Rough Turning
The operation of rough turning is used to remove as much metal as possible in the shortest length of time. Accuracy and surface finish are not important in this operation. Therefore, a maximum depth of cut of .030 inch and a .020 to .030 inch feed are recommended. The workpiece is generally rough turned to within about .030 inch of the finished size in as few cuts as possible.
Procedure:
1. Set the lathe to the correct speed and feedrate for the type and size of the material being cut.
2. Adjust the quick change gear box for a .010 to .030 inch feed, depending on the depth of cut and condition of the machine.
3. For Example: .010
4. Move the tool holder to the left hand side of the compound rest and set the tool bit to right height to center.
5. Tighten the tool post securely to prevent the toolholder from moving during the machining operation.
6. Take a light trial cut at the right hand end of the workpiece for about .250 inch length.
7. Measure the workpiece and adjust the tool bit for the proper depth of cut.
8. Cut along for about .250 inch, stop the lathe and check the diameter for size. The diameter should be about .030 inch over the finished size.
9. Re-adjust the depth of cut, if necessary.
Finish Turning
Finish turning on a lathe, which follows rough turning, produces a smooth surface finish and cuts the workpiece to an accurate size. Factors such as the condition of the cutting tool bit, the rigidity of the machine and workpiece, and the lathe speed and feedrate may affect the type of surface finish produced.
Procedure:
1. Check that the cutting edge of the tool bit is free from nicks, burrs, etc. It is good practice to hone the cutting edge before you take a finish cut.
2. Set the lathe to the recommended speed and feedrate. The feed rate used depends upon the surface finish required.
3. Take a light trial cut about .250 inch long at the right hand end of the work to produce a true diameter, set the cutting tool bit to the diameter, and set the graduated collar to the correct diameter.
4. Stop the lathe, measure the diameter.
5. Set the depth of cut for half the amount of material to be removed.
6. Cut along for .250 inch, stop the lathe and check the diameter.
7. Re-adjust the depth of cut, if necessary, and finish turn the diameter. In order to produce the truest diameter possible, finish turn the workpiece to the required size. Should it be necessary to finish a diameter by filing or polishing, never leave more than .002 to .003 inch for this operation.
Turning to a Shoulder
When turning more than one diameter on a workpiece, the change in diameter, or step, is known as a shoulder.
Three common types of shoulder:
1. Square
2. Filleted corner
3. Angular or tapered
Procedure:
1. With the workpiece mounted in the lathe, lay out the shoulder position from the finished end of the workpiece. In the case of filleted shoulders, allow sufficient length to permit the proper radius to be formed on the finished shoulder.
2. Place the point of the tool bit at this mark and cut a small groove around the circumference to mark off the length.
3. With a turning tool bit, rough and finish turn the workpiece to within about .063 inch of the required length.
4. Set up an end facing tool. Chalk the small diameter of the workpiece, and bring the cutting tool up until it just removes the chalk mark.
5. Note the reading on the graduated collar of the cross feed handle.
6. Face square the shoulder, cutting to the line using hand feed.
7. For successive cuts, return the cross feed handle to the same graduated collar setting.
If a filleted corner is required, a tool bit having the same radius is used for finishing the shoulder. Angular or chamfered edges may be obtained by setting the cutting edge of the tool bit to the desired angle of chamfer and feeding it against the shoulder, or by setting the compound rest to the desired angle.
Facing
Workpieces to be machined are generally cut a little longer than required and faced to the correct length. Facing is the operation of machining the end of a workpiece square with its axis. To produce a flat, square surface when facing, the workpiece must run true.
The purposes of facing are:
• To provide a true, flat surface, square with the axis of the workpiece.
• To provide an accurate surface from which to take measurements.
• To cut the workpiece to the required length.
Figure 1. Facing Operation
Procedure:
1. Move the tool post to the left-hand side of the compound rest, and set the right hand facing tool bit to the right height of the lathe center point. The compound rest may be set at 30 degrees for accurate end facing.
2. Mount the workpiece in the chuck to face. Use a live center in the tailstock or a straight ruler, if needed, to check that it runs true.
3. Insert a facing tool.
4. Position the tool slightly off from the part.
5. Set the facing tool bit pointing left at a 15-20 degree angle. The point of the tool bit must be closest to the workpiece and space must be left along the side.
6. Set the lathe to the correct speed and feed for the diameter and type of material being cut.
7. Before turning the machine on, turn the spindle by hand to make sure parts do not interfere with spindle rotation.
8. Start the lathe and bring the tool bit as close to the lathe center as possible.
9. Move the carriage to the left, using the handwheel, until the small cut is started.
10. Feed the cutting tool bit inwards to the center by turning the cross feed handle. If the power feed cross feed is used for feeding the cutting tool, the carriage should be locked in position.
11. Repeat steps 6, 7, and 8 until the workpiece is cut to the correct length.
12. There will be a sharp edge on the workpiece after facing, which should be broken with a file.
To spot a workpiece
A spotting tool bit is used to make a shallow, V-shaped hole in the center of the workpiece, which provides a guide for the drill to follow. A hole can be spotted quickly and fairly accurately by using a center drill. A spotting tool bit should be used for extreme accuracy.
Figure 2: Center/ Spot Tool
Procedure:
1. Mount workpiece true in a chuck.
2. Mount the drill chuck into the tailstock.
3. Ensure that the tang of the drill chuck is properly secured in the tailstock.
4. Move and lock the tailstock to the desired position.
5. Before turning the machine on, turn the spindle by hand to make sure parts do not interfere with spindle rotation.
6. Set the lathe to the proper speed for the type of material to be spot or center drill.
7. Start the hole using a center drill.
8. Spot the hole with a spotting or center drill tool bit.
Drilling
Figure 3. Drill
Procedure:
1. Mount the drill chuck into the tailstock.
2. Mount workpiece true in a chuck.
3. Check the tailstock center and make sure it is in line.
4. Ensure that the tang of the drill chuck is properly secured in the tailstock.
5. Move and lock the tailstock to the desired position.
6. Before turning the machine on, turn the spindle by hand to make sure parts do not interfere with spindle rotation.
7. Start the hole using a spotting or center drill tool bit.
8. When using a center drill, always use cutting fluid along with it.
9. A center drill doesn’t cut as easily as a drill bit would, as it has shallow flutes for added stiffness.
10. Drill past the entirety of the taper to create a funnel to guide the bit in.
11. Mount the drill in the tailstock spindle, in a drill chuck or in a drill holder.
12. Set the lathe to the proper speed the type of material to be drilled.
13. Start the lathe and drill to the desired depth according to the blueprint drawing, applying cutting fluid.
14. To gauge the depth of the hole, use the graduations on the tailstock spindle, or use a steel rule to measure the depth.
15. Use the peck drill operation to remove the chips and measure the depth of the hole.
16. When drilling, take off at most one or two drill bit diameters worth of material before backing off, clearing chips, and reapplying cutting fluid.
17. If the drill bit squeaks against the stock, apply more cutting fluid.
18. To remove the drill chuck from the tailstock, draw it back by around a quarter turn more than it will easily go.
19. Use a pin to press the chuck out of the collet.
Boring
Boring is an operation to enlarge and finish holes accurately, truing a hole by removing material from internal surfaces with a single-point cutting tool. Special diameter holes, for which no drills are available, can be produced by boring.
Boring utilizes a single point cutting tool to enlarge a hole. This operation provides for more accurate and concentric hole, as opposed to drilling.
Since the cutter extends from a boring bar, the tool is not as well supported, which can result in chatter. The deeper the boring operation, the worse the chatter. To correct this:
1. Reduce the spindle speed.
2. Increase the feed.
3. Apply more cutting fluid.
4. Shorten the overhang of the boring bar.
5. Grind a smaller radius on the tool’s nose.
Procedure:
1. Mount the workpiece in a chuck.
2. Face, spot and drill the hole on the workpiece.
3. Check to see if the boring bar has enough clearance.
• If the hole is too small for the boring bar, the chips will jam while machining and move the bar off-center.
4. Make sure that the point of the boring tool is the only part of the cutter that contacts the inner surface of the workpiece.
5. If the angle does not provide sufficient end relief, replace the cutter with one that has a sharper angle.
6. Position the boring bar so the point of the cutter is aligned with the centerline of the stock.
7. A tool that is not placed in line with the center of the workpiece will drag along the surface of stock, even if there is a sufficient end relief angle.
8. Select a boring bar as large as possible and have it extend beyond the holder only enough to clear the depth of the hole to be bored.
9. Mount the holder and boring bar, with the cutting tool bit on the left hand side of the tool post, pointing toward the revolving workpiece.
10. Set the boring tool bit to center.
• Note: Depending on the rigidity of the setup, the boring tool bit will have a tendency to spring downward as pressure is applied to the cutting edge. By setting the boring tool bit slightly above center, compensation has been made for the downward spring and the tool bit will actually be positioned on the exact center of the workpiece during machining operations.
11. Set the lathe to the proper cutting speed and feed. Note: select a medium feedrate.
12. Apply lube to the hole before turning the machine on.
13. Turn the machine on and move the tool into the pre-drilled hole.
14. Start the lathe and slowly bring the boring tool in until it touches the inside diameter of the hole.
15. Take a light cut (about .003 in.) about .375 in. long.
16. Stop the lathe and measure the hole diameter; use a telescoping gauge or inside micrometer.
17. After measuring the hole, determine the amount of material to be removed from the hole. Leave about .020 in. for the finish cut.
18. Start the lathe and take the roughing cut.
19. Feed the boring bar into the workpiece, taking off about .020 on each pass.
20. Bring the boring bar out once the desired depth has been reached.
21. Repeat steps 19 and 20 until the desired diameter of the inside hole has been attained.
22. After the roughing cut is completed, stop the lathe and bring the boring tool bit out of the hole without moving the cross feed handle.
23. Set the depth of the finish cut and bore the hole to size. For a good surface finish, a fine feedrate is recommended.
24. On the last pass, stop at the desired depth and bring the cutter back towards the center of the stock. This will face the back of the hole.
25. Bring the boring bar out of the machine and stop the machine.
Figure 4. Boring on a lathe
Knurling
1. A knurl is a raised impression on the surface of the workpiece produced by two hardened rolls.
2. Knurls are usually one of two patterns: diamond or straight.
3. Common knurl patterns are fine, medium, or coarse.
4. The diamond pattern is formed by a right-hand and a left-hand helix mounted in a self-centering head.
5. Used to improve appearance of a part & provide a good gripping surface for levers and tool handles.
6. The straight pattern, formed by two straight rolls, is used to increase the size of a part for press fits in light-duty applications.
7. Three basic types of knurling toolholders are used: the knuckle-joint holder, the revolving head holder, and the straddle holder.
8. Knurling works best on workpieces mounted between centers.
9. Knurls do not cut, but displace the metal with high pressure.
10. Lubrication is more important than cooling, so a cutting oil or lubricating oil is satisfactory.
11. Low speeds (about the same as for threading) and a feed of about .010 to .020 in. are used for knurling.
12. The knurls should be centered on the workpiece vertically and the knurl toolholder square with the work.
13. A knurl should be started in soft metal at about half depth and the pattern checked.
14. Several passes may be required on a slender workpiece to complete a knurl because the tool tends to push it away from the knurl.
15. Knurls should be cleaned with a wire brush between passes.
Figure 5. Knurling
Procedure:
1. Mount the knurling tool into a tool holder and adjust it to the exact centerline of the lathe spindle.
2. Position and secure the knurling tool 90 degrees to the surface to be knurled.
3. Move the lathe carriage by hand and locate the area on the workpiece to be knurled.
4. Rotate the knurling head to index to the correct set of knurls.
5. Position the knurls to the right edge of work such that half of the knurl contacts the right edge of the workpiece.
6. Apply cutting oil to the work.
7. Turn the spindle to about 100 RPM and use the crossfeed handwheel to move the knurling tool into the work. This should be approximately 0.030 inches, or until knurls track and form a good pattern.
8. Engage the lathe power feed to move the carriage towards the headstock at a feedrate of 0.010 to 0.020 inches per revolution.
9. Apply oil as required and brush knurled area with a stiff brush to clean chips from knurl.
10. When the knurls reach the end of the knurled area, reverse the carriage feed direction and feed the knurls into the work another 0.005 to 0.010 inches.
11. Continue knurling back and forth until a sharp diamond develops.
Parting and Grooving on a Lathe
The purpose of parting and grooving:
There are times when you may want to cut a piece from the end of a workpiece, or you may want to cut a groove into a workpiece.
Grooving, commonly called recessing, undercutting, or necking, is often done at the end of a thread to permit full travel of the nut up to a shoulder or at the edge of a shoulder to ensure a proper fit of mating parts. There are three types of grooves: square, round, and u-shaped.
Rounded grooves are usually used where there is a strain on the part, and where a square corner would lead to fracturing of the metal.
To cut a Groove
Procedure:
1. Select a tool bit of the desired size and shape for the groove required.
2. Lay out the location of the groove.
3. Set the lathe to half the speed for turning.
4. Mount the workpiece in the lathe.
5. Set the tool bit to center height.
6. Slowly feed the tool bit into the workpiece using the cross feed handle.
7. Apply plenty of cutting oil to the point of the cutting tool to ensure that the cutting tool will not bind in the groove. If chatter develops, reduce the spindle speed.
8. Stop the lathe and check the depth of groove.
9. Repeat procedures 6-7 until the work is cut to the correct depth.
Figure 6. Cutting a Groove
Parting
Cut-off tools, often called parting tools, are used for cutting off workpieces. There are three types of parting tools: those with a straight holder, a left-hand offset holder, or a right-hand offset holder; inserted-blade parting tools are the most commonly used.
There are two common problems in parting: chattering and hogging in. Chattering occurs when the tool is not held solidly enough; any looseness in the tool, the holder, or any part of the lathe itself makes cutting off difficult, uneven, and often impossible. Hogging in means the tool tends to dig into the workpiece, or the workpiece tends to climb over the top of the cutting edge. This usually breaks off the tool bit or wrecks the workpiece. Hogging in is usually caused when the parting tool is set too high or too low.
• Parting tools are narrower but deeper than turning tools. Parting tools are used to create narrow grooves and cut off parts of the stock.
• The tool holder should barely clear the workpiece when the parting tool is installed.
• Make sure the parting tool is perpendicular to the axis of rotation.
• Ensure the tip of the tool rests at the same height as the center of the stock. Holding the tool against the face of the part may help with this.
• Set the tool’s height, lay it against the part’s face, and lock the tool in place. Remember to apply cutting fluid, especially when making a deep cut.
Figure 7. Parting
Procedure:
1. Mount the workpiece in the chuck with the part to be cut off as close to the chuck as possible.
2. Mount the parting tool on the left hand side of the compound rest with the cutting edge set on center.
3. Place the holder as close to the tool post as possible to prevent vibration and chatter.
4. Adjust the tool bit so that it extends from the holder a distance equal to a little more than half the diameter of the workpiece. Adjust the revolutions per minute (RPM) to about ⅔ of the speed used for turning.
5. Mark the location of the cut.
6. Move the cutting tool into position.
7. Start the lathe and slowly feed the parting tool into workpiece by hand. Grip the cross feed handle with both hands in order to feed steadily and uniformly. Apply plenty of cutting oil.
8. When the cut is about ¼ in. deep, it is good practice to move the parting tool sideways slightly. This side motion cuts a little wider and prevents the tool from jamming.
9. To avoid chatter, keep the tool cutting and apply cutting oil consistently during the operation. Feed slowly when the part is almost cut off.
10. Keep advancing the tool until it reaches the center of the workpiece. As you get close, the workpiece will be suspended by a thin stalk of metal.
11. The end of the workpiece that you cut off will generally have a rough finish and a little stalk of metal protruding from the end. See Figure 8 below.
12. The final step is to mount this piece in the chuck and make a facing cut to clean up the end. One problem with this step is that the chuck jaws can mar the finished workpiece. If you look carefully at Figure 9 below, you can actually see the imprint of the chuck jaws. To avoid this, you could wrap the workpiece in a thin strip of emery paper, or similar protective material, before clamping it.
Figure 8. Workpiece Cutoff
Figure 9. Finished Workpiece
Alignment of Lathe Centers
To produce a parallel diameter when machining work between centers, the two lathe centers must be in line with each other and running true with the centerline of the lathe. If the centers are not aligned, the work being machined will be tapered.
There are three methods to align lathe centers:
1. By aligning the centerlines on the back of the tailstock with each other. This is only a visual check and therefore not very accurate.
2. The trial cut method, where a small cut is taken from each end of the work and the diameters are measured with a micrometer.
3. Align Centers using a Dial Indicator.
Method 1. To align centers by adjusting the tailstock.
Procedure:
1. Loosen the tailstock clamp nut or lever.
2. Loosen one of the adjusting screws on the left or right side, depending upon the direction the tailstock must be moved. Tighten the other adjusting screw until the line on the top half of the tailstock aligns exactly with the line on the bottom half.
3. Tighten the loosened adjusting screw to lock both halves of the tailstock in place.
4. Lock the tailstock clamp nut or lever.
Method 2. To align centers by the trial cut method.
Procedure:
1. Take a light cut, about .010 in., to a true diameter at Section A at the tailstock end, about .250 inch long.
2. Stop the feed and note the reading on the graduated collar of the cross feed handle.
3. Move the cutting tool close to the headstock end.
4. Return the cutting tool to the same graduated collar setting noted in step 2 (Section A).
5. Cut a .250 inch length at Section B and then stop the lathe.
6. Measure both diameters with a micrometer.
7. If both diameters are not the same size, adjust the tailstock either toward or away from the cutting tool one-half the difference of the two readings.
8. Take another light cut at Sections A and B. Measure these diameters and adjust the tailstock again if required.
Method 3. To Align Centers using a Dial Indicator.
Procedure:
1. Clean the lathe and work centers and mount the dial indicator.
2. Adjust the test bar snugly between centers and tighten the tailstock spindle clamp.
3. Mount a dial indicator on the tool post or lathe carriage. Be sure that the indicator plunger is parallel to the lathe bed and that the contact point is set on center.
4. Adjust the cross slide so that the indicator registers about .025 inch at the tailstock end.
5. Move the carriage by hand so the test indicator registers on the diameter at the headstock end and note the test indicator reading.
6. If both test indicator readings are not the same, adjust the tailstock with the adjusting screws until the indicator registers the same reading at both ends.
Taper Calculations
To calculate the taper per foot (tpf), it is necessary to know the length of the taper and the large and small diameters.
Figure 10. The main part of an inch taper
Formula:
Tpf = ((D-d) / length of taper) x 12
Example:
Tpf = ((1.25 – 1) / 3) x 12 = (.25 / 3) x 12 = 1 in.
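The worked example can be cross-checked with a few lines of code. A minimal Python sketch of the tpf formula, using the dimensions from the example above:

```python
def taper_per_foot(large_d, small_d, taper_length):
    """tpf = ((D - d) / length of taper) x 12, all dimensions in inches."""
    return ((large_d - small_d) / taper_length) * 12

# Example from the text: D = 1.25, d = 1.00, taper length = 3 in.
print(round(taper_per_foot(1.25, 1.00, 3), 4))  # 1.0 in. per foot
```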
Tailstock Offset Calculations
When calculating the tailstock offset, the taper per foot and the total length of the workpiece must be known.
Figure 11. Dimension of a workpiece having a taper
Formula:
Tailstock offset = (tpf x total length of workpiece) / 24
Example:
1. Find tpf:
tpf = ((1.125 – 1) x 12) / 3 = (.125 x 12) / 3 = .50 in.
2. Find the tailstock offset:
Tailstock offset = (.5 x 6) / 24 = 3 / 24 = .125 in.
In some cases where it is not necessary to find the taper per foot, the following simplified formula can be used.
Formula:
Tailstock Offset = (OL / TL) x ((D-d) / 2)
OL = Overall length of workpiece
TL = length of the tapered section
D = large diameter end
d = small diameter end
Example:
Tailstock Offset = (6 / 3) x ((1.125-1) / 2) = .125
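Both offset formulas can be verified numerically. The sketch below (Python, for illustration; the function names are mine) shows that the tpf-based formula and the simplified formula agree on the worked example:

```python
def offset_from_tpf(tpf, overall_length):
    """Tailstock offset = (tpf x total length of workpiece) / 24."""
    return tpf * overall_length / 24

def offset_direct(overall_length, taper_length, large_dia, small_dia):
    """Simplified formula: offset = (OL / TL) x ((D - d) / 2)."""
    return (overall_length / taper_length) * (large_dia - small_dia) / 2

# Worked example from the text: OL = 6, TL = 3, D = 1.125, d = 1
tpf = (1.125 - 1) * 12 / 3             # 0.50 in. per foot
print(offset_from_tpf(tpf, 6))         # 0.125 in.
print(offset_direct(6, 3, 1.125, 1))   # 0.125 in. (both formulas agree)
```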
Taper Turning
The compound rest is used to produce short or steep tapers. The tool bit must be fed in by hand, using the compound rest feed handle.
Procedure to cut a taper with the compound rest
Procedure:
1. Refer to the blueprint drawing for the amount of the taper required in degrees.
2. Loosen the compound rest lock screws.
3. Swivel the compound rest to the angle desired. (See first Picture)
4. Tighten the compound rest lock screws.
5. Adjust the tool bit on center and feed the cutting tool bit, using the compound rest feed screw.
6. Check the taper for size and fit.
Figure 12. Taper Turning Operation
True workpiece in a 4-jaw chuck
A dial or test indicator should be used whenever a machined diameter must be aligned to within a thousandth of an inch.
Procedure:
1. Insert the workpiece in the 4-jaw chuck and true it approximately, using either the chalk or surface gauge method.
2. Mount an indicator in the tool post of the lathe.
3. Set the indicator spindle in a horizontal position with the contact point set to the center height.
4. Bring the indicator point against the workpiece diameter so that it registers about .020 inch and rotate the lathe spindle by hand.
5. As you revolve the lathe spindle, note the highest and lowest readings on the dial indicator.
6. Slightly loosen the chuck jaw at the lowest reading, and tighten the jaw at the highest reading until the work is moved half the difference between the two indicator readings.
Side 1. Left and Right Side
7. Continue to adjust only these two opposite jaws until the indicator registers the same at both jaws. Disregard the indicator readings on the work between these two jaws.
8. Adjust the other set of opposite jaws in the same manner until the indicator registers the same at any point on the workpiece circumference.
Side 2. Left and Right Side
9. Tighten all jaws evenly to secure the workpiece firmly.
10. Rotate the lathe spindle by hand and recheck the indicator reading.
UNIT TEST
1. The compound rest is set at what angle?
2. Explain the difference between rough and finish turning.
3. Should the point of the tool be set above, or at the center of the spindle axis when taking a facing cut?
4. What is the purpose of facing?
5. Why do we spot drill a workpiece?
6. What is the purpose of boring?
7. Name three types of parting tools.
8. Name three methods to align lathe centers.
9. Calculate the offset for the taper if D=2, d=1, OL=6, and TL=3. The formula is:
Offset = (OL x (D-d)) / (2 x TL)
10. Please describe the procedure for cutting a taper.
OBJECTIVE
After completing this unit, you should be able to:
• Describe the tapping procedure.
• Determine the RPM for tapping.
• Describe filing and polishing.
• Describe advanced workholding.
Tapping
Tapping is the process of cutting a thread inside a hole so that a cap screw or bolt can be threaded into the hole. It is also used to cut threads in nuts.
Tapping can be done on the lathe by power feed or by hand. Regardless of the method, the hole must be drilled with the proper size tap drill and chamfered at the end.
Tapping Procedures
Good Practices
Using Tap Guides
Tap guides are an integral part of producing a usable, straight tapped hole. When using the lathe or the mill, the tap is already aligned straight and centered. When manually aligning a tap, be careful: a 90° tap guide is much more accurate than the human eye.
Using Oil
When drilling and tapping, it is crucial to use oil. It keeps the bits from squealing, makes the cut smoother, cleans out the chips, and keeps the drill and stock from overheating.
Pecking
Pecking helps ensure that bits don’t overheat and break when drilling or tapping. Peck drilling involves drilling partway through a part, then retracting the bit to remove chips, simultaneously allowing the piece to cool. Rotating the handle a full turn and then back a half turn is common practice. Whenever the bit or tap is backed out, remove as many chips as possible and add oil to the surface between the drill or tap and the workpiece.
Hand Tapping Procedure
1. Select drill size from chart.
When choosing a tap size, this chart is the first place to look.
Tap & Clearance Drill Sizes
2. If necessary, add chamfer to the hole before tapping. Chamfers and countersinks are additional features that are sometimes desired for screws. For best results, the speed of the spindle should be between 150 and 250 rpm.
3. Get a tap guide. The hole is now ready to tap. To do this, use the taps and guide blocks near the manual mills. The guide blocks will have several holes for different sized taps. Select the one closest to the size of the tap being used and place it over the drilled hole.
4. Tap the block. Peck tap using the tap wrenches. Apply gentle pressure while turning the wrench a complete turn in, then a half-turn out. Peck tap to the desired depth.
5. Complete the tap. If the tap does not go any further or the desired depth has been reached, release pressure on the tap; it has likely bottomed out. Remove the tap from the hole.
Applying any more pressure is likely to break the tap. The smaller the tap, the more likely it is to break.
Figure 1. Tap
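A common rule of thumb approximates the values in the tap drill chart: for roughly 75 percent thread engagement, the tap drill diameter is about the major diameter minus one pitch (1/TPI). The sketch below is an approximation only and is not a substitute for the chart (Python, for illustration; the function name is mine):

```python
def tap_drill_size(major_dia, tpi):
    """Rule-of-thumb tap drill for ~75% thread: major diameter - (1 / TPI).
    An approximation; use the tap & clearance drill chart for real work."""
    return major_dia - 1 / tpi

# 1/4-20 UNC: 0.250 - 0.050 = 0.200 in., close to the #7 drill (0.201 in.)
print(round(tap_drill_size(0.250, 20), 3))
```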
Tapping Procedure for Lathe
Procedure:
1. Mount the workpiece in the chuck.
2. Face and center drill.
3. Select the proper tap drill for the tap to be used.
4. Example: a ¼-20 UNC tap uses a #7 tap drill.
5. Set the lathe to the proper speed and drill with the tap drill to the required depth. Use plenty of cutting fluid.
6. Note: the workpiece will rotate when tapping under lathe power. Use a very slow spindle speed (40 to 60 rpm) and plenty of cutting fluid.
7. Chamfer the edge of the hole.
Filing in a Lathe
A workpiece should be filed in a lathe only to remove a small amount of stock, to remove burrs, or to round off sharp corners. The workpiece should always be turned to within about .002 to .003 inch of size if the surface is to be filed. Hold the file handle in the left hand when filing on the lathe, so that the arms and hands can be kept clear of the revolving chuck.
Procedure:
1. Set the spindle speed to about twice that used for turning.
2. Mount the workpiece in the chuck, lubricate, and adjust the dead center in the workpiece.
3. Move the carriage as far to the right side as possible and remove the tool post (if needed)
4. Disengage the lead screw and feed rod.
5. Select right file to be used.
6. Start the lathe.
7. Grasp the file handle in the left hand and support the file point with the right hand finger.
8. Apply light pressure and push the file forward to its full length. Release pressure on the return stroke.
9. Move the file about half the width of the file for each stroke and continue filing, using 30 to 40 strokes per minute until the surface is finished.
Figure 2. Filing
When filing in a lathe, the following safety should be observed.
• Roll up sleeves.
• Do not use a file without a properly fitted handle.
• Remove watches and rings.
• Do not apply too much pressure to the file.
• Clean the file frequently with a file brush. Rub a little chalk into the file teeth to prevent clogging and facilitate cleaning.
Polishing in a Lathe
After the workpiece has been filed, the finish may be improved by polishing with abrasive cloth.
Procedure:
1. Select the correct type and grade of abrasive cloth for the finish desired. Use a piece about 6 to 8 inches long and 1 inch wide.
2. Set the lathe to run on high speed (about 800-1000 rpm).
3. Disengage feed rod and lead screw.
4. Lubricate and adjust the dead center.
5. Start the lathe.
6. Hold the abrasive cloth on the workpiece.
7. With the right hand, press the cloth firmly on the work while tightly holding the other end of the abrasive cloth with the left hand.
8. Move the cloth slowly back and forth along the workpiece.
Figure 3. Polishing
When polishing in a Lathe, the following safety should be observed:
1. Roll up sleeves.
2. Tuck in any loose clothing
For normal finishes, use 80 to 100 grit abrasive cloth. For better finishes, use a finer grit abrasive cloth.
Advanced Workholding
Some parts may be irregular, calling for specialized tools to hold them properly before being machined.
1. The part cannot be placed into a collet or chuck when cutting on the entire outside diameter of the stock.
2. Parts with holes through them should be pressed onto a lathe arbor (a tapered shaft), and the arbor, rather than the part itself, is then clamped.
3. If the hole is too large, using a lathe arbor will not sufficiently support the piece. Instead, use the outside jaws to grasp the inside diameter of the part.
4. Parts with complex geometries may need to be attached onto a faceplate that will be further installed onto the spindle.
LATHE WORKHOLDING:
The following table provides a quick comparison of the strengths and weaknesses of the different means of holding the workpiece on a lathe:
Method
Precision
Repeatability
Convenience
Notes
Collets
High
High
High
Fast, high precision, high repeatability, grips well, unlikely to mar workpiece, grip spread over a wide area. Expensive chucks and collets. Handles limited lengths. Workpiece must be round and must fit nearly exactly to the collet size.
3-Jaw Chuck With Soft Jaws
High
High
High
For larger workpieces, 3-jaw chucks with soft jaws are the norm in the CNC world.
3-Jaw Self-Centering Chuck with Hard Jaws
Low
Low
High
Common, cheap, simple. Low precision, low repeatability if you remove the workpiece and have to put it back.
4-Jaw Chuck
High
High
Medium
Can be time consuming to individually adjust the jaws, but will result in high precision. Can hold pieces offset for turning cams or eccentrics. Can hold irregular shapes and square or rectangular stock.
6-Jaw Self-Centering Chuck
Medium
Medium
High
Best for thin wall work or to grip finished edges of workpiece. Obviously good for hex stock.
Faceplate Turning
Varies w/ Setup
Medium
Low
Great for irregular shapes. Involves clamps like a milling setup. May need counterweights to keep things balanced.
Turning Between Centers
High
High
Low
Great precision, allows part to be put back between centers with very high repeatability.
Constant Face Turning
High
High
High
The modern alternative to turning between centers. Instead of using lathe dogs, which are kind of a nuisance to set up, the constant face system uses hydraulic or other force to grip and drive the spindle end.
Expanding Arbors
High
High
High
These work from the inside out rather than the outside in but are otherwise much like collets.
Method describes the particular technique or tooling to be used.
Precision describes how precisely the workpiece will be held, or how close to concentrically it will run with the spindle before taking any cuts.
Repeatability describes how easy it is to take the workpiece out and then get it back in precisely again.
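For quick programmatic lookup, the ratings in the table can be transcribed into a small data structure. This is only a sketch (the dictionary and function names are mine; the ratings themselves are copied from the table above):

```python
# (precision, repeatability, convenience) ratings transcribed from the table.
LATHE_WORKHOLDING = {
    "collets":                    ("high", "high", "high"),
    "3-jaw chuck with soft jaws": ("high", "high", "high"),
    "3-jaw chuck with hard jaws": ("low", "low", "high"),
    "4-jaw chuck":                ("high", "high", "medium"),
    "6-jaw self-centering chuck": ("medium", "medium", "high"),
    "faceplate turning":          ("varies", "medium", "low"),
    "turning between centers":    ("high", "high", "low"),
    "constant face turning":      ("high", "high", "high"),
    "expanding arbors":           ("high", "high", "high"),
}

def rating(method):
    """Format the three ratings for one workholding method."""
    precision, repeatability, convenience = LATHE_WORKHOLDING[method]
    return (f"{method}: precision={precision}, "
            f"repeatability={repeatability}, convenience={convenience}")

print(rating("4-jaw chuck"))
```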
UNIT TEST
1. What drill size to be used for ½ -20 tap?
2. What is the purpose of chamfer?
3. What is the best RPM for tapping?
4. What spindle speed do we set for filing?
5. What is the purpose of polishing?
6. What is the best grit abrasive cloth for normal finishes?
7. What type of work is best suited to three-jaw chucks?
8. What are the special characteristics of the three-jaw chuck?
9. Explain the difference between a three-jaw chuck and a four-jaw chuck.
10. What are the advantages and disadvantages of a collet chuck?
OBJECTIVE
After completing this unit, you should be able to:
• Determine the infeed depth.
• Describe how to cut a correct thread.
• Explain how to calculate the pitch, depth, minor diameter, and width of flat.
• Describe how to set the correct rpm.
• Describe how to set the correct quick change gearbox.
• Describe how to set the correct compound rest.
• Describe how to set the correct tool bit.
• Describe how to set both the compound and cross feed dials to zero.
• Describe the threading operation.
• Describe the reaming.
• Describe how to grind a tool bit.
Lathe Threading
Thread cutting on the lathe is a process that produces a helical ridge of uniform section on the workpiece. This is performed by taking successive cuts with a threading toolbit the same shape as the thread form required.
Practice Exercise:
1. For this practice exercise in threading, you will need a piece of round material turned to an outside thread diameter.
2. Using either a parting tool or a specially ground tool, make an undercut for the thread equal to its single depth plus .005 inch.
3. The formula below will give you the single depth for cutting Unified threads:
d = P x 0.750
Where d = Single Depth
P = Pitch
n = Number of threads per inch (TPI)
Infeed Depth = .75 / n
Thread Calculations
To cut a correct thread on the lathe, it is necessary first to make calculations so that the thread will have proper dimensions. The following diagrams and formulas will be helpful when calculating thread dimensions.
Example: Calculate the pitch, depth, minor diameter, and width of flat for a ¾-10 NC thread.
P = 1 / n = 1 / 10 = 0.100 in.
Depth = .7500 x Pitch = .7500 x .100 = .0750 in.
Minor Diameter = Major Diameter – (Depth + Depth) = .750 – (.075 + .075) = 0.600 in.
Width of Flat = P / 8 = (1 / 8) x (1/10) = .0125 in.
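The four formulas above can be collected into one helper to check the worked example (Python, for illustration; the function name is mine):

```python
def thread_dimensions(major_dia, tpi):
    """Unified thread dimensions, using the formulas given above."""
    pitch = 1 / tpi
    depth = 0.750 * pitch              # single depth of thread
    minor_dia = major_dia - 2 * depth  # major diameter - (depth + depth)
    width_of_flat = pitch / 8
    return pitch, depth, minor_dia, width_of_flat

# Worked example from the text: 3/4-10 NC
p, d, minor, flat = thread_dimensions(0.750, 10)
print(round(p, 4), round(d, 4), round(minor, 4), round(flat, 4))
# 0.1 0.075 0.6 0.0125
```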
Procedure for threading:
1. Set the speed to about one quarter of the speed used for turning.
2. Set the quick change gearbox for the required pitch in threads. (Threads per inch)
Figure 1. Thread and Feed Chart
Figure 2. Setting Gearbox
3. Set the compound rest at 29 degrees to the right for right hand threads.
Figure 3. 29 Degrees
4. Install a 60 degree threading tool bit and set the height to the lathe center point.
Figure 4. 60 Degree Threading Tool
5. Set the tool bit at right angles to the work, using a thread gage.
Figure 5. Using the center gage to position the tool for machining threads
6. Using a layout solution, coat the area to be threaded.
Figure 6. Layout
7. Move the threading tool up to the part using both the compound and the cross feed. Set the micrometer to zero on both dials.
Figure 7. Compound Figure 8. Cross Feed
8. Move the cross feed to back the tool off the work, move the carriage to the end of the part, and reset the cross feed to zero.
Figure 9. End of the part and Cross feed to Zero
9. Using only the compound micrometer, feed in .001 to .002 inch.
Figure 10: Compound feed in .002 inch
10. Turn on the lathe and engage the half nut.
Figure 11 : On/Off Lever and Half Nut
11. Take a scratch cut on the part without cutting fluid. Disengage the half nut at the end of the cut, stop the lathe and back out the tool using the cross feed. Return the carriage to the starting position.
Figure 12. Starting Position
12. Using a screw pitch gage or a rule check the thread pitch. (Threads per inch)
Figure 13. Screw Pitch Gage Figure 14. Screw Pitch Gage(10)
13. Feed the compound in .005 to .020 inch for the first pass using cutting oil. As you get near the final size, reduce the depth of cut to .001 to .002 inch.
14. Continue this process until the tool is within .010 inch of the finish depth.
Figure 15. Threading operation
15. Check the size using a screw thread micrometer, thread gage, or using the three wire system.
Figure 16. Three wire measurement
16. Chamfer the end of the thread to protect it from damage.
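The infeed amounts in steps 9, 13, and 14 suggest a simple pass schedule. The sketch below is an illustrative assumption, not a prescribed shop procedure: it treats the total compound infeed as the single thread depth (.75 / n) read directly on the compound dial, and uses the rough and finishing pass sizes quoted in the text.

```python
def pass_schedule(tpi, rough=0.010, finish=0.002):
    """Illustrative compound-infeed schedule: roughing passes until within
    .010 in. of the single depth (.75 / n), then .001-.002 in. finishing
    passes. Assumes infeed is read directly on the compound dial."""
    total = 0.75 / tpi
    passes, fed = [], 0.0
    while total - fed > 1e-6:
        remaining = total - fed
        cut = min(rough if remaining > 0.010 else finish, remaining)
        passes.append(round(cut, 4))
        fed += cut
    return passes

print(pass_schedule(10))  # seven .010 in. roughing passes, then finishing passes
```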
Reaming
Reamers are used to finish drilled holes or bores quickly and accurately to a specified size and to produce a good surface finish. Reaming may be performed after a hole has been drilled or bored to within 0.005 to 0.015 inch of the finished size, since the reamer is not designed to remove much material.
The workpiece is mounted in a chuck at the headstock spindle and the reamer is supported by the tailstock.
The lathe speed for machine reaming should be approximately 1/2 that used for drilling.
Reaming with a Hand Reamer
The hole to be reamed by hand must be within 0.005 inch of the required finished size.
The workpiece is mounted to the headstock spindle in a chuck, and the headstock spindle is locked after the workpiece is accurately set up. The hand reamer is mounted in an adjustable reamer wrench and supported with the tailstock center. As the wrench is revolved by hand, the hand reamer is simultaneously fed into the hole by turning the tailstock handwheel. Use plenty of cutting fluid for reaming.
Reaming with a Machine Reamer
The hole to be reamed with a machine reamer must be drilled or bored to within 0.010 inch of the finished size so that the machine reamer will only have to remove the cutter bit marks. Use plenty of cutting fluid for reaming.
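The allowances and the half-speed guideline above can be expressed as a small check. This helper is hypothetical (the name and structure are mine); the 0.015 in. machine-reaming and 0.005 in. hand-reaming allowance limits come from the paragraphs above.

```python
def reaming_setup(drilling_rpm, allowance, hand_reamer=False):
    """Return (reaming rpm, allowance_ok) per the guidelines above:
    machine reaming runs at about half the drilling speed; the hole is
    left 0.005-0.015 in. undersize for machine reaming, and within
    0.005 in. of finished size for hand reaming."""
    limit = 0.005 if hand_reamer else 0.015
    return drilling_rpm // 2, 0 < allowance <= limit

print(reaming_setup(600, 0.010))  # (300, True)
```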
Grind a Lathe Tool bit
Procedure:
1. Grip the tool bit firmly while supporting the hand on the grinder tool set.
2. Hold the tool bit at the proper angle to grind the cutting edge angle. At the same time, tilt the bottom of the tool bit in toward the wheel and grind a 10-degree side relief or clearance angle on the cutting edge. The cutting edge should be about .5 inch long and should extend over about ¼ the width of the tool bit.
3. While grinding the tool bit, move it back and forth across the face of the grinding wheel. This accelerates grinding and prevents grooving the wheel.
4. The tool bit must be cooled frequently during the grinding operation by dipping it into water. Never overheat a tool bit.
5. Grind the end cutting angle so that it forms an angle a little less than 90 degrees with the side cutting edge. Hold the tool so that the end cutting edge angle and the end relief angle of 15 degrees are ground at the same time.
6. Check the amount of end relief when the tool bit is in the tool holder.
7. Hold the top of the tool bit at about 45 degrees to the axis of the wheel and grind the side rake about 14 degrees.
8. Grind a slight radius on the point of the cutting tool, being sure to maintain the same front and side clearance angle.
Figures. Grind front; grind side; grind radius.
Cutting tool Materials
Lathe tool bits are generally made of four materials:
1. High speed steel
2. Cast alloys
3. Cemented Carbides
4. Ceramics
The properties that each of these materials possess are different and the application of each depends on the material being machined and the condition of the machine.
Lathe tool bits should possess the following properties.
1. They should be hard.
2. They should be wear resistant.
3. They should be capable of standing up to high temperatures developed during the cutting operation.
4. They should be able to withstand shock during the cutting operation.
Cutting tool Nomenclature
Cutting tools used on a lathe are generally single-point cutting tools, and although the shape of the tool is changed for various applications, the same nomenclature applies to all cutting tools.
Terminology:
1. Base: the bottom surface of the tool shank.
2. Cutting Edge: the leading edge of the tool bit that does the cutting.
3. Face: the surface against which the chip bears as it is separated from the work.
4. Flank: The surface of the tool which is adjacent to and below the cutting edge.
5. Nose: the tip of the cutting tool formed by the junction of the cutting edge and the front face.
6. Nose radius: The radius to which the nose is ground. The size of the radius will affect the finish. For rough cuts, a 1/16 inch nose radius is used. For finish cuts, a 1/16 to ⅛ inch nose radius is used.
7. Point: The end of the tool that has been ground for cutting purposes.
8. Shank: the body of the tool bit or the part held in the tool holder.
Lathe Tool Bit Angles and Clearances
Proper performance of a tool bit depends on the clearance and rake angles which must be ground on the tool bit. Although these angles vary for different materials, the nomenclature is the same for all tool bits.
• Side cutting edge angle: The angle which the cutting edge forms with the side of the tool shank. This angle may be from 10 to 20 degrees, depending on the material being cut. If the angle is over 30 degrees, the tool will tend to chatter.
• End cutting edge angle: The angle formed by the end cutting edge and a line at right angles to the centerline of the tool bit. This angle may be from 5 to 30 degrees, depending on the type of cut and finish desired. An angle of 5 to 15 degrees is used for roughing cuts; angles between 15 and 30 degrees are used for general-purpose turning tools. The larger angle permits the cutting tool to be swivelled to the left when taking light cuts close to the dog or chuck, or when turning to a shoulder.
• Side relief (clearance) angle: The angle ground on the flank of the tool below the cutting edge. This angle may be from 6 to 10 degrees. The side clearance on a tool bit permits the cutting tool to advance lengthwise into the rotating work and prevents the flank from rubbing against the workpiece.
• End Relief (clearance) angle: the angle ground below the nose of the tool bit which permits the cutting tool to be fed into the work. This angle may be 10 to 15 degrees for general purpose cut. This angle must be measured when the tool bit is held in the tool holder. The end relief angle varies with the hardness and type of material and type of cut being taken. The end relief angle is smaller for harder materials, to provide support under the cutting edge.
• Side rake angle: The angle at which the face is ground away from the cutting edge. This angle may be 14 degrees for general-purpose tool bits. Side rake creates a keener cutting edge and allows the chip to flow away quickly. For softer materials, the side rake angle is generally increased.
• Back (top) rake: The backward slope of the tool face away from the nose. This angle may be about 20 degrees and is provided for in the tool holder. Back rake permits the chips to flow away from the point of the cutting tool.
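The angle ranges above can be gathered into a quick reference. This is a sketch (the structure and names are mine; the degree values are transcribed from the bullets above, with single recommended values stored as one-point ranges):

```python
# General-purpose angle ranges in degrees, transcribed from the text above.
TOOL_BIT_ANGLES = {
    "side cutting edge": (10, 20),
    "end cutting edge": (5, 30),
    "side relief": (6, 10),
    "end relief": (10, 15),
    "side rake": (14, 14),
    "back rake": (20, 20),
}

def in_range(angle_name, degrees):
    """True if a ground angle falls inside the recommended range."""
    lo, hi = TOOL_BIT_ANGLES[angle_name]
    return lo <= degrees <= hi

print(in_range("side relief", 8))         # True
print(in_range("side cutting edge", 35))  # False: the tool will tend to chatter
```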
UNIT TEST
1. What is the pitch for a ¼-20 tap?
2. To what angle must the compound be turned for Unified Thread?
3. Explain why you swivel the compound in Question 2.
4. What is the depth of thread for UNF ½-20 screw?
5. How would you make a left-hand thread? This is not covered in the reading; think it out.
6. What Tool bit do we use for cutting thread?
7. Please describe Center Gage.
8. What do we use to check the thread pitch (threads per inch)?
9. For the first and final passes, how much do we feed the compound in?
10. Name four materials that are used to make tool bits.
Chapter Attribution Information
This chapter was derived from the following sources.
• Lathe derived from Lathe by the Massachusetts Institute of Technology, CC:BY-NC-SA 4.0.
• Cutting Tool Terminology derived from Lathe Cutting Tools – Cutting Tool Shapes by the Wisconsin Technical College, CC:BY-NC 4.0.
• Cutting Tool Terminology derived from Cutter Types (Lathe) by the University of Idaho, CC:BY-SA 3.0.
• Centering derived from Manual Lathes Document.
OBJECTIVE
After completing this unit, you should be able to:
• Identify Drill Press
• Understand the safety rules.
• Describe Tooling to be use.
• Describe Reaming a hole.
• Describe Drilling a hole procedure.
• Describe power feed and hand feed tapping procedure.
• Describe Dressing the Wheel procedures.
Description
Drilling machines, or drill presses, are primarily used to drill or enlarge a cylindrical hole in a workpiece or part. The chief operation performed on the drill press is drilling, but other possible operations include: reaming, countersinking, counterboring, and tapping.
The floor type drill press used in the Student Shop is a very common machine, found in both home and industrial workshops. This style drill press is composed of four major groups of assemblies: the head, table, column, and base.
The head contains the motor and variable speed mechanism used to drive the spindle. The spindle is housed within the quill, which can be moved up or down by either manual or automatic feed. The table is mounted on the column, and is used to support the workpiece. The table may be raised or lowered on the column, depending upon the machining needs. The column is the backbone of the drill press. The head and base are clamped to it, and it serves as a guide for the table. The cast-iron base is the supporting member of the entire structure.
Safety
1. Be familiar with the location of the start and stop switches.
2. The drill press table should be cleared of miscellaneous tools and materials.
3. Ensure that all drill bits are sharpened and chucks are in working condition. Any dull drill bits, battered tangs, or sockets should not be used.
4. Never attempt to remove scraps from the table by hand. Use brushes or other proper tools.
5. Never attempt to perform maintenance on the machine without the power cord unplugged.
6. Never insert a chuck key into the chuck until the machine has been turned off and stopped completely.
7. Belts and pulleys should be guarded at all times. If any are frayed, immediately report to the instructor for replacement.
8. All workpieces should be secured by a vise or clamp before starting the machining.
9. If the workpiece moves while in the vise or clamp:
• Do not attempt to hold the workpiece in place by hand.
• Do not try to tighten the vise or clamp while the machine is turned on.
• Turn the power off and wait for the machine to stop completely before re-tightening the vise or clamp.
10. Use the proper speed settings and drill type for the material to be machined.
11. When mounting a drill bit, it should be inserted to the full depth of the chuck and centered.
12. Eliminate the possibility of the drill bit hitting the table by using a clearance block and by adjusting the feed stroke.
13. Always feed the bit slowly into the workpiece. If the hole to be drilled is deep, draw the bit back often to remove shavings.
14. Before leaving the drill press for any amount of time, the power should be turned off and the machine should be at a complete stop.
15. If any unsafe condition or movement is observed on the drill press, report it to the instructor immediately.
16. Leave the drill press clean and tidy at all times.
04.1: Chapter 4: Bandsaw
OBJECTIVE
After completing this unit, you should be able to:
• Identify Bandsaw.
• Understand the safety rules.
• Describe operation of the Horizontal Bandsaw.
• Describe operation of the Vertical Bandsaw.
• Describe the Chop Saw.
• Explain saw blades selection.
• Describe the Tooth set.
• Explain Vise loading.
• Describe lubrication.
Bandsaw
There are two types of band saws available on the market: the horizontal band saw and the vertical band saw. Band saws have become fairly common in any machine shop and require no special skills to use. However, considering the nature of the work involved, it is important that you familiarize yourself with the equipment and follow a few simple steps when using a band saw. Here are some simple instructions on how to safely use vertical band saws.
OBJECTIVE
After completing this unit, you should be able to:
• Identify Surface Grinder.
• Identify Procedures.
• Describe Dressing the Wheel procedures.
• Describe the Ring Test.
• Describe replacing the Grinding Wheel.
• Describe procedure select the grinding wheel.
• List principal abrasives with their general areas of best use.
• List principal bonds with the types of applications where they are most used.
• Identify grinding wheels by type number and name from unmarked sketches or from actual wheels.
• Interpret wheel shape and size markings, together with the five basic symbols of a wheel specification, into a description of the grinding wheel.
• Given several standard, common grinding jobs, recommend the appropriate abrasive, approximate grit size, grade, and bond.
The Surface Grinder is mainly used in the finishing process. It is a very precise tool which uses a stationary, abrasive, rotating wheel to shave or finish a metallic surface that is held in place by a vise. This vise, which is part of a table or carriage, is moved back and forth under the abrasive wheel. The surface grinder can cut steel in pieces no bigger than 18” long by 6” high by 8” wide. The table of the grinder is also magnetic, which aids in holding the material still. These magnets can be toggled by means of a lever located on the front side of the grinder. This instrument has a maximum cut of .005 of an inch, and a minimum cut of .005 of an inch. The movement of the grinder can be an automatic, back-and-forth motion, or it can be moved manually as required.
Safety Precautions
Besides regular machine shop safety rules, these are some tips on how to use this machine safely:
• Always wear safety glasses as this machine may send shavings in all directions.
• Always wait for the wheel to reach maximum speed before using it, as there may be unseen faults in the wheel.
• If you have long hair, you should keep it tied back, so that it does not get caught in the machine.
• Never strike the wheel against the material as this could cause faults in the wheel, which may result in a loss of integrity and it may fly apart.
• Always make sure that the guard is in place over the grinding wheel, as this protects the user from the shavings that are removed from the material.
• Always make sure the material is securely fastened in place.
• Always make sure the magnetic table is clean before placing material on it, as shavings may scratch your material or even cause the material to slide while you are using the grinder.
• Ensure that the grinder has a start/stop button within easy reach of the operator.
• Check the grinding wheel before mounting it. Make sure it is properly maintained and in good working order.
• Follow the manufacturer’s instructions for mounting grinding wheels.
• Keep face of the wheel evenly dressed.
• Ensure that the wheel guard covers at least one half of the grinding wheel.
• File off any burrs on the surface of work that is placed on the magnetic chuck.
• Clean the magnetic chuck with a cloth and then wipe with the palm of your hand.
• Place a piece of paper slightly larger than workpiece in the center of chuck.
• Position work on the paper and turn on the power to the magnetic chuck.
• Check that the magnetic chuck has been turned on by trying to remove work from the chuck.
• Check that the wheel clears the work before starting the grinder.
• Run a new grinding wheel for about one minute before engaging the wheel into the work.
• Wait for the wheel to reach maximum speed before using it as there may be unseen faults in the wheel.
• Stand to one side of the wheel before starting the grinder.
• Turn off coolant before stopping the wheel to avoid creating an out-of-balance condition.
• Keep the working surface clear of scraps, tools and materials.
• Keep the floor around the grinder clean and free of oil and grease.
• Use an appropriate ventilation exhaust system to reduce inhalation of dusts, debris, and coolant mists. Exhaust systems must be designed and maintained appropriately.
• Follow lockout procedures when performing maintenance work.
Procedure for Use
• The first step in using the surface grinder is to make sure that the material you wish to shape can be used in the grinder. Soft materials such as aluminum or brass will clog up the abrasive wheel and stop it from performing effectively, and it will then have to be cleaned. This process is explained in the Maintenance section. The maximum size of material that the grinder can machine is 18” long by 8” wide by 6” high.
• The next step is to make sure the material is secured. This is done by use of a vice, and then by engaging the magnetic clamp. Once the material is secure, it must be manually positioned under the abrasive wheel. This is done by turning the longitude and latitude wheels located on the front of the grinder. The abrasive wheel itself can be moved slightly to get the material in the perfect position.
• Then the machine may be started. For safety reasons, it should reach maximum speed before you try to use it. If the wheel is working properly, grinding may begin; manual feed is used when very precise work needs to be done.
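Since the first step is confirming the workpiece fits the machine's stated capacity, a small check like the following can make the constraint concrete. The 18" x 8" x 6" envelope comes from the text above; the function name and structure are our own illustration, not part of any machine's software.

```python
# Hypothetical helper illustrating the capacity check described above.
# The 18" x 8" x 6" limits are taken from the text; everything else
# (names, orientation handling) is an illustrative assumption.

MAX_LENGTH_IN = 18.0
MAX_WIDTH_IN = 8.0
MAX_HEIGHT_IN = 6.0

def fits_grinder(length_in: float, width_in: float, height_in: float) -> bool:
    """Return True if a workpiece fits within the grinder's stated envelope."""
    return (length_in <= MAX_LENGTH_IN
            and width_in <= MAX_WIDTH_IN
            and height_in <= MAX_HEIGHT_IN)

print(fits_grinder(12.0, 6.0, 4.0))   # a part well within the envelope
print(fits_grinder(19.0, 6.0, 4.0))   # too long for the table
```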
Figure 1. Chevalier Surface Grinder
Dressing the Wheel
1. Place the diamond wheel dresser onto the bed.
2. Keep the diamond dresser ¼ of an inch to the left of the center of the wheel.
3. Lock the dresser onto the bed by turning the magnetic chuck on.
4. Turn on the machine power by turning the switch to the “ON” position. Then press the green button to start the spindle.
5. Move the grinding wheel down using the vertical table handwheel until it barely makes contact with the dresser.
6. Turn the machine off after making contact with the dresser.
7. Turn the machine on again. While the wheel is spinning, lower the grinding wheel down in the Z direction until it makes a small plume of dust.
8. Once the small plume of dust has been made, make one pass back and forth along the Y-axis. Stop the machine when the dresser has made one pass back and forth.
9. When stopping the machine, make sure that the dresser is about ½ inch away from the wheel.
10. Check the wheel to see if it is clean. If not, repeat steps 8 and 9.
Figure 2. Dressing the wheel
Ring Test
Grinding wheels must be inspected and “ring-tested” before they are mounted to ensure that they are free from cracks or other defects. Wheels should be tapped gently with a light, nonmetallic instrument. A stable and undamaged wheel will give a clear metallic tone or “ring.”
Performing the ring test:
Make sure the wheel is dry and free of sawdust or other material that could deaden the sound of the ring.
You will need a hard plastic or hard wood object, such as the handle of a screwdriver or other tool, to conduct the test. Use a wood mallet for heavier wheels. Do not use metal objects.
1. Suspend the wheel on a pin or a shaft that fits through the hole so that it will be easy to turn, but do not mount the wheel on the grinder. If the wheel is too large to suspend, stand it on a clean, hard surface.
2. Imagine a vertical plumb line up the center of the wheel.
3. Tap the wheel about 45 degrees on each side of the vertical line, about one or two inches from the wheel’s edge. (Large wheels may be tapped on the edge rather than the side of the wheel.)
4. Turn the wheel 180 degrees so that the bottom of the wheel is now on top.
5. Tap the wheel about 45 degrees on each side of the vertical line again.
6. The wheel passes the test if it gives a clear metallic tone when tapped at all four points. If the wheel sounds dead at any of the four points, it is cracked. Do not use it.
Replacing the Grinding Wheel
1. Open the wheel case. If the wheel case is very tight, this may require a brace wrench, a wishbone-shaped wrench, and a rubber mallet.
2. Remove the metal plate on top by loosening the screws that are holding it to the wheel case.
Figure 3. Remove metal plate and wheel case
3. Behind the wheel, on the spindle, there is a hole. Insert the brace wrench on the right side into the back of the spindle. The brace wrench should be able to fit into the hole.
Figure 4. Brace wrench into hole Figure 5. Remove the grinding wheel
4. Insert the wrench into the two holes in the front of the wheel. When loosening the wheel from the wheel spindle, turning right will loosen and turning left will tighten.
5. Hit the wishbone-shaped wrench with a rubber mallet to loosen the wheel.
6. To put a new grinding wheel on, reverse the procedure. Turning the wishbone-shaped wrench to the left will tighten it. When installing the wheel, make sure that the wrench is on the left side, not on the right side. Turn the wishbone-shaped wrench by hand, and when no longer possible, use the rubber mallet.
7. Remove the wrench from the back of the spindle.
8. Screw the plate back on top of the wheel case.
9. Close the wheel case, and tighten the knob.
OBJECTIVE
After completing this unit, you should be able to:
• Correctly harden a piece of tool steel and evaluate your work.
• Correctly temper the hardened piece of the tool steel and evaluate your work.
• Describe the proper heat treating procedures for other tool steels.
Safety
The following procedures are suggested for a safe heat treating operation.
1. Wear heat-resistant protective clothing, gloves, safety glasses, and a face shield to prevent exposure to hot oils, which can burn skin.
2. Before lighting the furnace, make sure that air switches, exhaust fans, automatic shut-off valves, and other safety precautions are in place.
3. Make sure that there is enough coolant for the job. Coolant will absorb heat given off by the metal as it is cooling, but if there is insufficient coolant, the metal will not cool at the optimal speed.
4. Make sure that there is sufficient ventilation in the quenching areas in order to maintain desired oil mist levels.
5. When lighting the furnace, obey the instructions that have been provided by the manufacturer.
6. During the process of lighting an oil or gas-fired furnace, do NOT stand directly in front of it.
7. Make sure that the quenching oil is not contaminated by water. Explosions can result from moisture coming into contact with the quenching oil.
8. Before taking materials out of the liquid carburizing pot, make sure that the tongs are not wet and that they are the correct tongs for the job.
9. Make sure that an appropriate fungicide or bacterial inhibitor has been mixed into the quenching liquid.
10. When quench tanks are not being used, always cover them.
11. Use a nonflammable absorbent to clean leaks and oil spills. This should be done immediately.
12. If possible, keep tools, baskets, jigs, and work areas free from oil contamination.
13. Before breaks and before moving on to the next task, wash your hands thoroughly.
14. If any skin trouble is shown or suspected, report to your instructor and get medical help.
15. Fumes from the molten carburizing salt bath should not be inhaled, because carbon monoxide is a product of the carburizing process.
16. Make sure there is good ventilation in the work area.
17. Be on the lookout for contamination from pieces of carburized metal.
18. Do not take oil-soaked clothes or equipment to areas where there are food or beverages.
19. Do not take food or beverages where oils are either being used or stored.
Procedure
The first important thing to know when heat treating a steel is its hardening temperature. Many steels, especially the common tool steels, have a well established temperature range for hardening. O-1 happens to have a hardening temperature of 1450 – 1500 degrees Fahrenheit.
To begin the process:
1. Safety first. Heat treating temperatures are very hot. Dress properly for the job and keep the area around the furnace clean so that there is no risk of slipping or stumbling. Also, preheat the tongs before grasping the heated sample part.
2. Preheat the furnace to 1200 degrees Fahrenheit.
3. When the furnace has reached 1200 degrees Fahrenheit, place the sample part into the furnace. Place the sample part into the center of the oven to help ensure even heating. Close and wait.
4. Once the sample part is placed in the furnace, heat it to 1500 degrees Fahrenheit. Upon reaching this temperature, immediately begin timing the soak for 15 minutes to an hour (soak times will vary depending on steel thickness).
Table 1: Approximate Soaking Time for Hardening, Annealing and Normalizing Steel
Thickness of Metal (inches)    Time of Heating to Required Temperature (hr)    Soaking Time (hr)
up to 1/8                      0.06 to 0.12                                    0.12 to 0.25
1/8 to 1/4                     0.12 to 0.25                                    0.12 to 0.25
1/4 to 1/2                     0.25 to 0.50                                    0.25 to 0.50
1/2 to 3/4                     0.50 to 0.75                                    0.25 to 0.50
3/4 to 1                       0.75 to 1.25                                    0.50 to 0.75
1 to 2                         1.25 to 1.75                                    0.50 to 0.75
2 to 3                         1.75 to 2.25                                    0.75 to 1.0
3 to 4                         2.25 to 2.75                                    1 to 1.25
4 to 5                         2.75 to 3.50                                    1 to 1.25
5 to 8                         3.50 to 3.75                                    1 to 1.50
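Table 1 is a simple range lookup, which can be sketched in code. The breakpoints and time ranges below are copied directly from the table; the function itself is an illustration only, not part of any heat-treating standard.

```python
# Illustrative lookup for Table 1. Each row is
# (upper thickness limit in inches, heating time range, soaking time range).
SOAK_TABLE = [
    (0.125, "0.06 to 0.12", "0.12 to 0.25"),
    (0.25,  "0.12 to 0.25", "0.12 to 0.25"),
    (0.5,   "0.25 to 0.50", "0.25 to 0.50"),
    (0.75,  "0.50 to 0.75", "0.25 to 0.50"),
    (1.0,   "0.75 to 1.25", "0.50 to 0.75"),
    (2.0,   "1.25 to 1.75", "0.50 to 0.75"),
    (3.0,   "1.75 to 2.25", "0.75 to 1.0"),
    (4.0,   "2.25 to 2.75", "1 to 1.25"),
    (5.0,   "2.75 to 3.50", "1 to 1.25"),
    (8.0,   "3.50 to 3.75", "1 to 1.50"),
]

def soak_times(thickness_in: float):
    """Return the (heating time, soaking time) ranges in hours for a thickness."""
    for upper, heating, soaking in SOAK_TABLE:
        if thickness_in <= upper:
            return heating, soaking
    raise ValueError("thickness exceeds table range (8 in.)")

print(soak_times(1.5))  # falls in the 1-to-2-inch row
```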
OBJECTIVE
After completing this unit, you should be able to:
• Perform a Rockwell Test
• Perform a Brinell Test
Beyond verifying our in-shop heat treatment, testing hardness is sometimes necessary for production work as well. Even though it’s bad planning, occasionally a job arrives at our machine shop with an unknown alloy, or maybe its composition is known but its hardness isn’t. It is possible to use a file to roughly test the machinability of that metal, but the best way to select cutter types, speeds, and feeds is a true hardness measurement.
Brinell: Testing hardness by reading the diameter of a ball penetrator mark.
Rockwell: Testing hardness by reading a penetrator depth.
The Rockwell Hardness Test
The Rockwell is a widely accepted method for both soft and hard metals. This system gauges hardness by measuring the depth to which a probe of known shape and size penetrates the material under an exact amount of force. Due to its range, the Rockwell is the most popular test in tooling shops, small production shops, and training labs.
Rockwell Numbers:
There are several different scales within the Rockwell system. We’ll use the Rockwell C scale, correctly used on hardened steel. The C scale can be said to start at 0 (annealed steel) and run up to 68, harder than an HSS tool bit and near that of a carbide tool. It is symbolized by a capital R with the scale subscript:
RC
The two step Rockwell test:
Step 1. Calibrate Load
The test object is set upon the lower anvil such that it’s stable and won’t move when pressed down from above. Next, a cone-shaped diamond penetrator is brought into contact and then driven into the metal under a predetermined load of 20 lbs. That causes the conical point to sink into the metal from 0.003 to 0.006 inch. This is the initial calibration load. At that time, a large dial indicator is rotated to read zero.
Step 2. Test Load
Then, with the calibration pressure upon the penetrator and the indicator set to zero, an additional 20 lb test load is added. As the diamond sinks farther, its added depth is translated to the dial, but in an inverse relationship. The deeper the diamond penetrates, the softer the metal tests, and therefore the lower the number that appears on the dial face. Inversely, when the point can’t go very deep, the metal is hard and registers higher on the dial face.
The Rockwell Method
The Rockwell method measures the permanent depth of indentation produced by a force/load on an indenter.
1. Prepare the sample.
2. Place the test sample on the anvil.
3. A preliminary test force (commonly referred to as preload or minor load) is applied to a sample using a diamond indenter.
4. This load represents the zero or reference position that breaks through the surface to reduce the effects of surface finish. After the preload, an additional load, called the major load, is applied to reach the total required test load.
5. This force is held for a predetermined amount of time (dwell time: 10-15 seconds) to allow for elastic recovery.
6. This major load is then released and the final position is measured against the position derived from the preload, the indentation depth variance between the preload value, and the major load value. This distance is converted to a hardness number.
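The tester's dial performs the depth-to-hardness conversion in step 6 automatically, but the underlying relation can be sketched. For diamond-indenter scales such as HRC, the standard definition (as used in ASTM E18-type testers) is hardness = 100 minus the permanent depth increase in mm divided by 0.002 mm. The function below assumes that relation; it is an illustration, not a substitute for a calibrated tester.

```python
# Sketch of how permanent indentation depth maps to a Rockwell C number.
# Standard relation for diamond-indenter scales: HRC = 100 - depth_mm / 0.002.
# Deeper penetration means softer metal, hence a lower number.

def rockwell_c(permanent_depth_mm: float) -> float:
    """Convert permanent indentation depth (mm) to an HRC value."""
    return 100.0 - permanent_depth_mm / 0.002

print(rockwell_c(0.08))  # shallow indent -> harder material, higher number
print(rockwell_c(0.16))  # deeper indent -> softer material, lower number
```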
The Brinell Hardness Test
The Brinell hardness test is very similar to the Rockwell system in that a penetrator is forced into the sample; however, here the measured gauge is the diameter of the dent made by penetration of a hard ball of known size into the workpiece surface. Hardened tool steel balls are used for testing softer material, while a carbide penetrator ball is used to test harder metals.
Due to the upper hardness limit of the Brinell ball, this test is correctly used as a test of soft to medium-hard metals.
The Brinell scale numbers:
The scale runs from 160 for annealed steel up to approximately 700 for very hard steel.
The Brinell hardness test is an alternative way to test the hardness of metals and alloys.
1. Prepare the sample.
2. Place the test sample on the anvil.
3. Move the indenter down into position on the part surface.
4. A minor load is applied and a zero reference position is established.
5. The major load is applied for a specified time period (10 to 15 seconds) beyond zero.
6. The major load is released, leaving the minor load applied.
7. Follow the process to determine the Brinell hardness of an aluminum sample.
1. Press the indenter into the sample using an accurately controlled test force.
2. Maintain the force for a specific dwell time (usually 10 to 15 seconds).
3. After the dwell time is complete, remove the indenter, leaving a round indent in the sample.
4. The size of the indent is determined optically by measuring two diameters of the round indent using either a portable microscope or one that is integrated with the load application device.
5. The Brinell hardness number is a function of the test force divided by the curved surface area of the indent. The indentation is considered to be spherical, with a radius equal to half the diameter of the ball. The average of the two diameters is used in the following formula to calculate the Brinell hardness.
$\text{BHN} = \dfrac{2F}{\pi D\left(D - \sqrt{D^{2} - d^{2}}\right)}$

where F is the test force (kgf), D is the diameter of the ball (mm), and d is the mean diameter of the indentation (mm).
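The formula above can be evaluated directly. Symbols follow the equation: F is the test force in kgf, D the ball diameter in mm, and d the mean indentation diameter in mm; the 3000 kgf load with a 10 mm ball in the example is a common standard test setup.

```python
import math

# Direct implementation of the Brinell formula:
# BHN = 2F / (pi * D * (D - sqrt(D^2 - d^2)))

def brinell_hardness(F_kgf: float, D_mm: float, d_mm: float) -> float:
    """Brinell hardness number from force, ball diameter, and indent diameter."""
    return (2.0 * F_kgf) / (
        math.pi * D_mm * (D_mm - math.sqrt(D_mm**2 - d_mm**2))
    )

# A 3000 kgf load on a 10 mm ball leaving a 4 mm indent:
print(round(brinell_hardness(3000, 10, 4)))
```

Note that a smaller indent diameter yields a higher hardness number, matching the inverse relationship described for the Rockwell dial.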
Unit Test:
1. Please list five heat treating safety practices.
2. What is the first important thing to know when heat treating a steel?
3. What is the soak time for a 1 to 2 in. thickness of metal?
4. Please explain soak time.
5. After the soak time is complete, what is the next step?
6. To temper the sample part, it must be placed into the furnace at what temperature?
7. Please explain austenitize and quench.
8. What is an air-cool?
9. Please explain the Rockwell method.
10. Please explain the Brinell hardness test.
Chapter Attribution Information
This chapter was derived from the following sources.
• Heat Treating derived from Heat Treatment of Plain Carbon and LowAlloy Steels by the Massachusetts Institute of Technology, CC:BY-NC-SA 4.0.
• Brinell Hardness Test Equation derived from Brinell Hardness by CORE-Materials Resource Finder, CC:BY.
OBJECTIVE
After completing this unit, you should be able to:
• Apply 5S in any Machine shop.
• Describe Kaizen Concept.
• Describe Implementing Lean Manufacturing.
Lean 5S:
“5S” is a method of workplace organization that consists of five words: Sort, Set in order, Shine, Standardize, and Sustain. All of these words begin with the letter S. These five components describe how to store items and maintain the new order. When making decisions, employees discuss standardization, which will make the work process clear among the workers. By doing this, each employee will feel ownership of the process.
Phase 0: Safety
It is often assumed that a properly executed 5S program will improve workplace safety, but this is false. Safety is not an option; it’s a priority.
Phase 1: Sort
Review all items in the workplace, keeping only what is needed.
Phase 2: Straighten
Everything should have a place and be in place. Items should be divided and labeled. Everything should be arranged thoughtfully. Employees should not have to bend over repetitively. Place equipment near where it is used. This step is a part of why lean 5s is not considered “standardized cleanup”.
Phase 3: Shine
Make sure that the workplace is clean and neat. By doing this, it will be easier to be aware of where things are and where they should be. After working, clean the workspace and return everything to its former position. Keeping the workplace clean should be integrated into the daily routine.
Phase 4: Standardize
Standardize work procedures and make them consistent. Every worker should be aware of what their responsibilities are when following the first three steps.
Phase 5: Sustain
Assess and maintain the standards. The aforementioned steps should become the new norm in operation. Do not gradually revert to the old ways. When taking part of the new procedure, think of ways to improve. Review the first four steps when new tools or output requirements are presented.
Kaizen
While the lean 5S process focuses on the removal of waste, Kaizen focuses on the practice of continuous improvement. Like lean 5S, Kaizen identifies three main aspects of the workplace: Muda (wastes), Mura (inconsistencies), and Muri (strain on people and machines). However, the Kaizen step-by-step process is more extensive than the lean 5S process.
The Kaizen process overview:
1. Identify a problem.
2. Form a team.
3. Gather information from internal and external customers, and determine goals for the project.
4. Review the current situation or process.
5. Brainstorm and consider seven possible alternatives.
6. Decide the three best alternatives of the seven.
7. Simulate and evaluate these alternatives before implementation.
8. Present the idea and suggestions to managers.
9. Physically implement the Kaizen results and take account of the effects.
Lean manufacturing improves as time goes on, so it is important to continue education about maintaining standards. It is crucial to change the standards and train workers when presented with new equipment or rules.
Lean
Think of a maintenance department as serving internal customers: the various departments and workers in the company.
Lean is different from the traditional Western mass production model, which relies on economies of scale to create profits. The more you make, the cheaper the product becomes and the greater the potential profit margin. Mass production is based on predictions of customer needs, or on creating customer needs, and it has difficulty dealing with unusual changes in demand.
Lean production responds to proven customer demand. Pull processing – the customer pulls production. In a mass system the producer pushes product onto the market, push processing.
Lean also means building a long-term culture that focuses on improvement, and respect for workers: better trained and educated workers are more flexible.
Lean is a philosophy that focuses on the following:
1. Meeting customer needs
2. Continuous, gradual improvement
3. Making continuously better products
4. Valuing the input of workers
5. Taking the long term view
6. Eliminating mistakes
7. Eliminating waste
Wastes: using too many resources (materials, time, energy, space, money, human resources, poor instructions)
Wastes:
1. Overproduction
2. Defects
3. Unnecessary processing
4. Waiting (wasting time)
5. Wasting human time and talent
6. Too many steps or moving around
7. Excessive transportation
8. Excessive inventory
Lean production includes working with suppliers, subcontractors, and sellers to streamline the whole process.
The goal is that production would flow smoothly avoiding costly starts and stops.
The idea is called just in time “produce only what is needed, when it is needed, and only in the quantity needed.” Production process must be flexible and fast.
Inventory = just what you need
In mass production = just in case. Extra supplies and products are stored just in case they are needed.
Terminology:
Process simplification – a process outside of the flow of production
Defects – the mass production system does inspection at the end of production to catch defects before they are shipped. The problem is that the resources have already been “spent” to make the waste product. Try to catch problems immediately, as they happen, and then prevent them. Inspect during production, at each stage of production.
Safety – hurt time is waste time
Information – need the right information at the right time (too much, too little, too late)
Principles:
Poka-yoke – mistake-proofing: determining the cause of problems and then removing the cause to prevent further errors
Judgment errors – finding problems after the process
Informative inspections – analyzing data from inspections during the process
Source inspections – inspection before the process begins to prevent errors.
MEAN LEAN
One of the terms applied to a simple cost-cutting, job-cutting interpretation of Lean is “Mean Lean.” Often modern managers think they are doing lean without understanding the importance of workers and long-term relationships.
Reliability Centered Maintenance
Reliability centered maintenance (RCM) is a system for designing a cost-effective maintenance program. It can be detailed, complex, computerized, and statistically driven, but at its basics it is fairly simple. Its ideas can be applied to designing and operating a PM system, and can also guide your learning as you do maintenance, troubleshooting, repair, and energy work.
These are the core principles of RCM. The fundamental concepts are:
• Failures happen.
• Not all failures have the same probability
• Not all failures have the same consequences
• Simple components wear out, complex systems break down
• Good maintenance provides required functionality for lowest practicable cost
• Maintenance can only achieve inherent design reliability of the equipment
• Unnecessary maintenance takes resources away from necessary maintenance
• Good maintenance programs undergo continuous improvement.
Maintenance consists of all actions taken to ensure that components, equipment, and systems provide their intended functions when required.
An RCM system is based on answering the following questions:
1. What are the functions and desired standards of performance of the equipment?
2. In what ways can it fail to fulfil its functions? (Which are the most likely failures? How likely is each type of failure? Will the failures be obvious? Can it be a partial failure?)
3. What causes each failure?
4. What happens when each failure occurs? (What is the risk, danger etc.?)
5. In what way does each failure matter? What are the consequences of a full or partial failure?
6. What can be done to predict or prevent each failure? What will it cost to predict or prevent each failure?
7. What should be done if a suitable proactive task cannot be found (default actions) (no task might be available, or it might be too costly for the risk)?
Equipment is studied in the context of where when and how it is being used
All maintenance actions can be classified into one of the following categories:
• Corrective Maintenance – Restore lost or degraded function
• Preventive Maintenance – Minimizes opportunity for function to fail
• Alterative Maintenance – Eliminate unsatisfactory condition by changing system design or use
Within the category of preventive maintenance all tasks accomplished can be described as belonging to one of five (5) major task types:
• Condition Directed – Renew life based on measured condition compared to a standard
• Time Directed – Renew life regardless of condition
• Failure Finding – Determine whether failure has occurred
• Servicing – Add/replenish consumables
• Lubrication – Oil, grease or otherwise lubricate
We do maintenance because we believe that hardware reliability degrades with age, but that we can do something to restore or maintain the original reliability that pays for itself.
RCM is reliability-centered. Its objective is to maintain the inherent reliability of the system or equipment design, recognizing that changes in inherent reliability may be achieved only through design changes. We must understand that the equipment or system must be studied in the situation in which it is working.
Implementing Lean Manufacturing
Analyze each step in the original process before making change
Lean manufacturing’s main focus is on cost reduction, increases in turnover, and eliminating activities that do not add value to the manufacturing process. Basically, lean manufacturing helps companies achieve targeted production, among other things, by introducing tools and techniques that are easy to apply and maintain. What these tools and techniques do is reduce and eliminate waste: things that are not needed in the manufacturing process.
Manufacturing engineers set out to use the six-sigma DMAIC (Define, Measure, Analyze, Improve, Control) methodology, in conjunction with lean manufacturing, to meet customer requirements related to the production of tubes.
Manufacturing engineers were charged with designing a new process layout for the tube production line. The objectives for the project included:
• Improved quality
• Decreased scrap
• Delivery to the point of use
• Smaller lot sizes
• Implementation of a pull system
• Better feedback
• Increased production
• Individual Responsibility
• Decreased WIP
• Line flexibility
Before making changes, the team analyzed each step in the original layout of the tube production line process.
1. They tried to understand the original-state process and identify problem areas, unnecessary steps, and non-value-added work.
2. After mapping the process, the lean team collected data from the Material Review Board (MRB) bench to measure and analyze the major types of defects. To better understand the process, the team also did a time study over a 20-day production run.
In the original state, the tube line consisted of one operator and four operations, separated into two stations by a large table using a push system. The table acted as a separator between the second and third operation.
The first problem discovered was that the line was unbalanced. The first station was used about 70% of the time. Operators at the second station were spending a lot of their time waiting between cycle times. By combining stations one and two, room for improvement became evident with respect to individual responsibility, control of inventory by the operator, and immediate feedback when a problem occurred. The time study and the department layout reflect these findings.
A second problem was recognized. Because of the process flow, the production rate did not allow the production schedule to be met with two stations. Because operators lost track of machine cycles, machines were left waiting for operator attention. Operators also tried to push parts through the first station, the bottleneck operation in the process, and then continued to manufacture the parts at the last two operations. Typically, long runs of WIP built up, and quality problems were not caught until a large number of defective pieces had been produced.
The original-state data were taken from the last 20 days before the change. The team analyzed each step in the original process before making changes. The findings of the time study on the original process provided the basis for reducing cycle time, balancing the line, redesigning the process using just-in-time kanbans and scheduling, improving quality, decreasing lot size and WIP, and improving flow. The new process data were collected starting one month after implementation. This delay gave the machine operators an opportunity to train and get accustomed to the new process layout.
With the U-shaped cell design, the parts met all customer requirements. The table used in the original process was removed, almost eliminating WIP, which reduced WIP while increasing production.
Some of the concepts used to improve the process included total employee involvement (TEI), smaller lot sizes, scheduling, point of use inventory, and improved layout. All employees and supervisors in the department were involved in all phases of the project. Their ideas and suggestions were incorporated in the planning and implementation process to gain wider acceptance of the changes to the process. Smaller lot sizes were introduced to minimize the number of parts produced before defects were detected. Kanbans were introduced (in the form of material handling racks) to control WIP and to implement a pull system. And the cell layout decreased travel between operations.
Operators were authorized to stop the line when problems arose. In the original state, operators would continue running parts even when an operation was down. With kanban control, the layout eliminated the ability to store WIP, requiring the operator to shut down the entire line. The cell layout provides excellent opportunities for improving communication between operators about problems and adjustments, to achieve better quality.
In day-to-day operation of the original-state process, the operators spent a lot of time either waiting for the material-handling person or performing material handling themselves. With the U-shaped cell, delivery to the point of use is much better for the operator. The operator places boxes of raw material on six movable roller carts, where it is easy to reach. The six boxes are enough to last a 24-hour period.
To reduce setup times, tools needed for machine repair and adjustments are located in the cell. The screws are not standardized; tools are set up in order of increasing size to quickly identify the proper tool.
For three months the process was monitored to verify that it was in control. Comparison of time studies from the original state and the implemented layout demonstrated an increase in production from 300 to 514 finished products per shift. The new layout eliminated double handling between the second and third operations, as well as at the packing step. It also reduced throughput time by making it easier to cycle all four operations in a pull-system order. Customer demand was met by two shifts, which reduced the labor cost.
The results of the redesign are as follows:
• WIP decreased by 97%
• Production increased 72%
• Scrap was reduced by 43%
• Machine utilization increased by 50%
• Labor utilization increased by 25%
• Labor costs were reduced by 33%
• Sigma level increased from 2.6 to 2.8
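The percentage figures above follow from the time-study numbers. Assuming the production counts reported earlier (300 to 514 parts per shift), a quick calculation confirms the production increase; the helper below is our own illustration, not part of the study.

```python
# Verifying the reported production increase from the time-study counts.

def percent_change(before: float, after: float) -> float:
    """Percentage change from `before` to `after`."""
    return (after - before) / before * 100.0

increase = percent_change(300, 514)
print(f"Production increase: {increase:.1f}%")  # roughly the reported 72%
```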
This project yielded reduced labor and scrap costs, and allowed the organization to do a better job of making deliveries on time, while allowing a smaller finished-goods inventory. Daily production numbers and single-part cycle time served as a benchmark for monitoring progress toward the goal. Although the sigma-level increase was modest, the 43% reduction in defects, 97% reduction in WIP, and 72% increase in production contributed to the project objective.
Implementing lean is a never-ending process; this is what continuous improvement is all about. When you get one aspect of lean implemented, it can always be improved. Don’t get hung up on it, but don’t let things slip back to the starting point. There will always be time to go back and refine some of the processes.
Before lean manufacturing was implemented at Nypro Oregon Inc., we would operate using traditional manufacturing. Traditional manufacturing consists of producing all of a given product for the marketplace so as to never let the equipment idle. These goods then need to be warehoused or shipped out to a customer who may not be ready for them. If more is produced than can be sold, the products will be sold at a deep discount (often a loss) or simply scrapped. This can add up to an enormous amount of waste. After implementing lean manufacturing concepts, our company uses just in time. Just in time refers to producing and delivering goods in the amount required when the customer requires it, and not before. In lean manufacturing, the manufacturer only produces what the customer wants, when they want it. This is often a much more cost-effective way of manufacturing when compared to high-priced, high-volume equipment.
Unit Test:
1. What is 5S?
2. Please explain each “S” of the 5S.
3. Please explain the Kaizen concept.
4. What is pull processing?
5. What is poka-yoke?
6. What is the six-sigma DMAIC methodology?
7. What were the objectives for the new process layout of the tube production line?
8. Before making changes, what did the manufacturing engineering team do first?
9. Please list the results of the redesign.
10. What is the key to implementing a new lean idea or concept?
CHAPTER ATTRIBUTION INFORMATION
This chapter was derived from the following sources.
• Lean 5S derived from Lean Manufacturing by various authors, CC:BY-SA 3.0.
• Kaizen derived from A Kaizen Based Approach for Cellular Manufacturing System Design: A Case Study by VirginiaTech, CC:BY-SA 4.0.
• Kaizen (image) derived from A Kaizen Based Approach for Cellular Manufacturing System Design: A Case Study by VirginiaTech, CC:BY-SA 4.0.
Unit 1: Introduction to CNC
What is CNC? CNC stands for Computer Numerical Control. CNC is the automation of machine tools, operated by precisely programmed commands encoded and executed by a computer, as opposed to being controlled manually via handwheels or levers.
In modern CNC systems, end-to-end component design is highly automated using Computer-Aided Design (CAD) and Computer-Aided Manufacturing (CAM) programs. The series of steps needed to produce any part is highly automated and produces a part that closely matches the original CAD design.
In CNC machines the role of the operator is minimized. The operator merely has to feed the program of instructions into the computer and load the required tools into the machine; the rest of the work is done by the computer automatically. The computer directs the machine tool to perform various machining operations as per the program of instructions fed by the operator.
CNC technology can be applied to a wide variety of operations like drafting, assembly, inspection, sheet metal working, etc. But it is more prominently used for various metal machining processes like turning, drilling, milling, shaping, etc. Because of CNC, all machining operations can be performed at a fast rate, making bulk manufacturing considerably cheaper.
How It Works
The CNC machine comprises a computer into which the program for cutting the metal of the job is fed as per the requirements. All the cutting processes that are to be carried out and all the final dimensions are fed into the computer via the program. The computer thus knows exactly what is to be done and carries out all the cutting processes. A CNC machine works like a robot: it has to be fed with the program, and it follows all your instructions.
You don’t have to worry about the accuracy of the job; all CNC machines are designed to meet very close accuracies. In fact, these days CNC machines are compulsory for most precision jobs. When your job is finished, you don’t even have to remove it; the machine does that for you and picks up the next job on its own. This way your machine can keep on doing fabrication work all 24 hours of the day without much monitoring; of course, you will have to feed it the program initially and supply the required raw material.
Since the earliest days of production manufacturing, ways have been sought to increase dimensional accuracy as well as speed of production. Simply put, numerical control is a method of automatically operating a manufacturing machine based on a code of letters, numbers, and special characters. As digital computers developed, their application to the control of manufacturing equipment was realized. Computers were soon used to provide direct control of machine tools. The integrated circuit led to small computers used to control individual machines, and the computer numerical control (CNC) era was born. Computer numerical control has become so sophisticated that it is the preferred method for almost every phase of precision manufacturing, particularly machining. Precision dimensional requirements, a mainstay of the machining process, are ideal candidates for computer control systems. Computer numerical control now appears in many other types of manufacturing processes. A distinct advantage of computer control of machine tools is rapid, high-precision positioning of workpieces and cutting tools.
Today, manual machine tools have been largely replaced by Computer Numerical Control (CNC) machine tools, which are controlled electronically rather than by hand. CNC machine tools can produce the same part over and over again with very little variation. Modern CNC machines can position cutting tools and workpieces at traverse feed rates of several hundred inches per minute, to an accuracy of .0001”. Once programming is complete and tooling is set up, they can run day or night, week after week, without getting tired, with only routine service and cutting tool maintenance. These are obvious advantages over manual machine tools, which need a great deal of human interaction in order to do anything. Cutting feed rates and spindle speeds may be optimized through program instructions. Modern CNC machine tools have turret or belt toolholders, and some can hold more than 150 tools. Tool changes take less than 15 seconds.
Computer Numerical Control machines are highly productive. They are also expensive to purchase, set up, and maintain. However, the productivity advantage can easily offset this cost if their use is properly managed. A most important advantage of CNC is the ability to program the machine to do different jobs. Tool selection and changing under program control is extremely productive, with little time wasted applying a tool to the job.
A program developed to accomplish a given task may be used for a short production run of one, or a few, parts. The machine may then be set up for a new job and used for long production runs of hundreds or thousands of production units. It can be interrupted, used for the original job or another new job, and quickly returned to the long production run. This makes the CNC machine tool extremely versatile and productive. Computer-aided design (CAD) has become the preferred method of product design and development. The connection between CAD and CNC was logical: a computer part design can go directly to a program used to develop CNC machine control information, and a CNC manufacturing machine can then make the part. The computer is extremely useful for assisting the CNC programmer in developing a program to manufacture a specific part. Computer-aided manufacturing (CAM) systems are now the industry standard for programming. When CAD, CAM, and CNC are blended, the greatest capability emerges, producing parts extremely difficult or impossible to make by manual methods.
CNC motion is based on the Cartesian coordinate system. A CNC machine cannot be successfully operated without an understanding of how coordinate systems are defined in CNC machines and how the systems work together.
To fully understand numerical control programming you must understand axes and coordinates. Think of a part that you have to make. You could describe it to someone else by its geometry; for example, the part you have to make is a 5 inch by 8 inch rectangle. All parts can be described in this fashion. Any point on the machined part, such as a pocket to be cut or a hole to be drilled, can be described in terms of its position. The system that allows us to do this is called the Cartesian coordinate or rectangular coordinate system.
OBJECTIVE
After completing this unit, you should be able to:
• Understand the Cartesian coordinate system.
• Understand the Cartesian coordinates of the plane.
• Understand the Cartesian coordinates of three-dimensional space.
• Understand the four quadrants.
• Explain the difference between polar and rectangular coordinates.
• Identify the programmable axes on a CNC machine.
THE CARTESIAN COORDINATE SYSTEM
Cartesian coordinates allow one to specify the location of a point in the plane, or in three-dimensional space. The Cartesian (or rectangular) coordinates of a point are a pair of numbers (in two dimensions) or a triplet of numbers (in three dimensions) that specify signed distances from the coordinate axes. First we must understand a coordinate system to define our directions and relative position: a system used to define points in space by establishing directions (axes) and a reference position (origin). A coordinate system can be rectangular or polar.
Just as points on a line can be placed in one-to-one correspondence with the real number line, points in the plane can be placed in one-to-one correspondence with pairs of real numbers by using two coordinate lines. To do this, we construct two perpendicular coordinate lines that intersect at their origins. For convenience, assign a set of equally spaced graduations to the x and y axes, starting at the origin and going in both directions: left and right (x axis) and up and down (y axis). Points along each axis may then be established. We make one of the number lines vertical, with its positive direction upward and negative direction downward, and the other horizontal, with its positive direction to the right and negative direction to the left. The two number lines are called the coordinate axes; the horizontal line is the x axis, the vertical line is the y axis, and together they form the Cartesian coordinate system, or rectangular coordinate system. The point of intersection of the coordinate axes is denoted by O and is the origin of the coordinate system. See Figure 1.
Figure 1
It is basically, Two Real Number Lines Put Together, one going left-right, and the other going up-down. The horizontal line is called x-axis and the vertical line is called y-axis.
The Origin
The point (0,0) is given the special name “The Origin”, and is sometimes given the letter “O”.
Real Number Line
The basis of this system is the real number line marked at equal intervals. The axis is labeled (X, Y or Z). One point on the line is designated as the Origin. Numbers on one side of the line are marked as positive and those to the other side marked negative. See Figure 2.
Figure 2. X-axis number line
Cartesian coordinates of the plane
A plane in which a rectangular coordinate system has been introduced is a coordinate plane or an x-y plane. We will now show how to establish a one-to-one correspondence between points in a coordinate plane and pairs of real numbers. If A is a point in a coordinate plane, then we draw two lines through A, one perpendicular to the x-axis and one perpendicular to the y-axis. If the first line intersects the x-axis at the point with coordinate x and the second line intersects the y-axis at the point with coordinate y, then we associate the pair (x,y) with A (see Figure 2). The number x is the x-coordinate or abscissa of A and the number y is the y-coordinate or ordinate of A; we say that A is the point with coordinates (x,y) and denote the point by A(x,y). The point (0,0) is given the special name “The Origin”, and is sometimes given the letter “O”.
Abscissa and Ordinate:
The words “Abscissa” and “Ordinate” … they are just the x and y values:
• Abscissa: the horizontal (“x”) value in a pair of coordinates: how far along the point is.
• Ordinate: the vertical (“y”) value in a pair of coordinates: how far up or down the point is.
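The abscissa/ordinate and quadrant ideas above, along with the polar-to-rectangular conversion mentioned in the objectives, can be sketched in a few lines of Python. The function names here are ours, chosen purely for demonstration:

```python
import math

def quadrant(x, y):
    """Return the quadrant (1-4) of a point, or 0 if it lies on an axis."""
    if x > 0 and y > 0: return 1
    if x < 0 and y > 0: return 2
    if x < 0 and y < 0: return 3
    if x > 0 and y < 0: return 4
    return 0  # on an axis or at the origin

def polar_to_rectangular(r, theta_deg):
    """Convert polar coordinates (radius, angle in degrees) to rectangular (x, y)."""
    theta = math.radians(theta_deg)
    return (r * math.cos(theta), r * math.sin(theta))

# Point A(3, -2): abscissa 3 (to the right of the origin),
# ordinate -2 (below the x axis), so A lies in quadrant 4.
print(quadrant(3, -2))

x, y = polar_to_rectangular(5, 53.13)  # radius 5 at 53.13 degrees
print(round(x, 2), round(y, 2))        # close to (3.0, 4.0)
```

The same conversion underlies polar coordinate inputs on many CNC controls, though the syntax there is machine-specific.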
OBJECTIVE
After completing this unit, you should be able to:
• Understand the Vertical Milling Center Machine Motion.
• Understand the Machine Home Position.
• Understand the CNC Machine Coordinates.
• Understand the Work Coordinate System.
• Understand the Machine and Tool Offsets.
• Set Tool Length Offset for each tool.
VMC Machine Motion
CNC machines use a 3D Cartesian coordinate system. Figure 1 shows a typical Vertical Milling Center (VMC). The part to be machined is fastened to the machine table. This table moves in the XY-plane. As the operator faces the machine, the X-axis moves the table left-right. The Y-axis moves the table forward-backward. The machine column grips and spins the tool. The column controls the Z-axis and moves up-down.
Figure 1. VMC Machine Motion
CNC Machine Coordinates
The CNC Machine Coordinate System is illustrated in Figure 2. The control point for the Machine Coordinate System is defined as the center-face of the machine spindle. The origin point for the machine coordinate system is called Machine Home. This is the position of the center-face of the machine spindle when the Z-axis is fully retracted and the table is moved to its limits near the back-left corner.
Figure 2. VMC Machine Coordinate System (At Home Position)
As shown in Figure 3, when working with a CNC, always think, work, and write CNC programs in terms of tool motion, not table motion. For example, increasing +X coordinate values move the tool right in relation to the table (though the table actually moves left). Likewise, increasing +Y coordinate values move the tool towards the back of the machine (the table moves towards the operator). Increasing +Z commands move the tool up (away from the table).
About Machine Home Position
When a CNC machine is first turned on, it does not know where the axes are positioned in the work space. Home position is found by the Power On Restart sequence initiated by the operator by pushing a button on the machine control after turning on the control power.
The Power On Restart sequence simply drives all three axes slowly towards their extreme limits (-X, +Y, +Z). As each axis reaches its mechanical limit, a microswitch is activated. This signals to the control that the home position for that axis is reached. Once all three axes have stopped moving, the machine is said to be “homed”. Machine coordinates are thereafter in relation to this home position.
Work Coordinate System
Obviously it would be difficult to write a CNC program in relation to Machine Coordinates. The home position is far away from the table, so values in the CNC program would be large and have no easily recognized relation to the part model. To make programming and setting up the CNC easier, a Work Coordinate System (WCS) is established for each CNC program.
The WCS is a point selected by the CNC programmer on the part, stock or fixture. While the WCS can be the same as the part origin in CAD, it does not have to be. While it can be located anywhere in the machine envelope, its selection requires careful consideration.
• The WCS location must be able to be found by mechanical means such as an edge finder, coaxial indicator or part probe.
• It must be located with high precision: typically plus or minus .001 inches or less.
• It must be repeatable: parts must be placed in exactly the same position every time.
• It should take into account how the part will be rotated and moved as different sides of the part are machined.
For example, Figure 4 shows a part gripped in a vise. The outside dimensions of the part have already been milled to size on a manual machine before being set on the CNC machine.
The CNC is used to make the holes, pockets, and slot in this part. The WCS is located in the upper-left corner of the block. This corner is easily found using an Edge Finder or Probe.
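The relationship between work (WCS) and machine coordinates can be sketched numerically. The offset values below are made up purely for illustration; on a real machine the fixture offset (G54) and tool length offset (applied by G43, described later in this chapter) are measured with an edge finder, probe, or presetter, and sign conventions vary between controls:

```python
# Hypothetical values for illustration only; real offsets are measured on the machine.
fixture_offset = {"X": -12.345, "Y": -7.890, "Z": 0.0}  # G54: part datum in machine coords
tool_length_offset = -9.500                              # H register for the active tool

def work_to_machine(x, y, z):
    """Convert a programmed (work) coordinate to machine coordinates.

    Machine position = work coordinate + fixture offset; in Z the tool
    length offset is also applied, which is conceptually what G43 does.
    """
    return (x + fixture_offset["X"],
            y + fixture_offset["Y"],
            z + fixture_offset["Z"] + tool_length_offset)

# The programmed point X1. Y1. Z0.1 in work coordinates:
print(work_to_machine(1.0, 1.0, 0.1))
```

The point of the sketch is that the program can be written entirely around the part datum; the control does this arithmetic on every move.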
OBJECTIVE
After completing this unit, you should be able to:
• Understand how CNC programs list instructions.
• Understand the Program Format.
• Describe Letter Address Command codes.
• Describe Special Character Code Definitions.
• Understand the G & M Codes.
CNC programs list instructions to be performed in the order they are written. They read like a book, left to right and top-down. Each sentence in a CNC program is written on a separate line, called a Block. Blocks are arranged in a specific sequence that promotes safety, predictability and readability, so it is important to adhere to a standard program structure.
The blocks are arranged in the following order:
• Program Start
• Load Tool
• Spindle On
• Coolant On
• Rapid to position above part
• Machining operation
• Coolant Off
• Spindle Off
• Move to safe position
• End program
The steps listed above represent the simplest type of CNC program, where only one tool is used and one operation performed. Programs that use multiple tools repeat steps two through nine for each tool.
Table 3 and Table 4 in section G & M Codes show the most common G and M codes that should be memorized if possible.
Like any language, the G-code language has rules. For example, some codes are modal, meaning they do not have to be repeated if they do not change between blocks. Some codes have different meanings depending on how and where they are used.
While these rules are covered in this chapter, do not concern yourself with learning every nuance of the language. It is the job of the CAD/CAM software post processor to properly format and write the CNC program.
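To make the modal idea concrete, here is a toy Python interpreter that carries the active motion word and the last coordinates forward from block to block. It is only a sketch of the concept, not how a real control or post processor works:

```python
import re

def run(blocks):
    """Toy illustration of modal behavior: the active motion word (G0/G1/...)
    and the last X/Y coordinates carry forward when a block omits them."""
    state = {"motion": None, "X": 0.0, "Y": 0.0, "F": None}
    positions = []
    for block in blocks:
        # Split a block like "G01 X2. Y0.1 F36." into letter-address words.
        for word in re.findall(r"[A-Z][-+]?\d*\.?\d*", block):
            letter, value = word[0], word[1:]
            if letter == "G":
                state["motion"] = "G" + str(int(float(value)))
            elif letter in "XYF":
                state[letter] = float(value)
        positions.append((state["motion"], state["X"], state["Y"]))
    return positions

# G01 stays in effect, and Y carries over when a block only changes X:
print(run(["G01 X2. Y0.1 F36.", "Y2.025", "X2.025"]))
```

This is why the sample program later in this unit can write bare blocks like "Y2.025": the control remembers the motion mode and the other coordinates.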
Program Format
The program in Table 1 below machines a square contour and drills a hole.
Table 1: Sample CNC Program

Block                        Description                                        Purpose
%                            Start of program.                                  Start Program
O1234                        Program number (Program Name).
(T1 0.25 END MILL)           Tool description for operator.
G17 G20 G40 G49 G80 G90      Safety block to ensure machine is in safe mode.
T1 M6                        Load Tool #1.                                      Change Tool
S9200 M3                     Spindle Speed 9200 RPM, On CW.
G54                          Use Fixture Offset #1.                             Move to Position
M8                           Coolant On.
G00 X-0.025 Y-0.275          Rapid above part.
G43 Z1. H1                   Rapid to safe plane, use Tool Length Offset #1.
Z0.1                         Rapid to feed plane.
G01 Z-0.1 F18.               Line move to cutting depth at 18 IPM.
G41 Y0.1 D1 F36.             CDC Left, lead-in line, Dia. Offset #1, 36 IPM.    Machine Contour
Y2.025                       Line move.
X2.025                       Line move.
Y-0.025                      Line move.
X-0.025                      Line move.
G40 X-0.4                    Turn CDC off with lead-out move.
G00 Z1.                      Rapid to safe plane.
M5                           Spindle Off.                                       Change Tool
M9                           Coolant Off.
(T2 0.25 DRILL)              Tool description for operator.
T2 M6                        Load Tool #2.
S3820 M3                     Spindle Speed 3820 RPM, On CW.
M8                           Coolant On.                                        Move to Position
X1. Y1.                      Rapid above hole.
G43 Z1. H2                   Rapid to safe plane, use Tool Length Offset #2.
Z0.25                        Rapid to feed plane.
G98 G81 Z-0.325 R0.1 F12.    Drill hole (canned) cycle, Depth Z-.325, F12.      Drill Hole
G80                          Cancel drill cycle.
Z1.                          Rapid to safe plane.
M5                           Spindle Off.                                       End Program
M9                           Coolant Off.
G91 G28 Z0                   Return to machine Home position in Z.
G91 G28 X0 Y0                Return to machine Home position in XY.
G90                          Reset to absolute positioning mode (for safety).
M30                          Reset program to beginning.
%                            End of program.
Letter Address Command Codes
The command block controls the machine tool through the use of letter address commands. Some are used more than once, and their meaning changes based on which G-code appears in the same block.
Codes are either modal, which means they remain in effect until cancelled or changed, or non-modal, which means they are effective only in the current block. As you can see, many of the letter addresses are chosen in a logical manner (T for tool, S for spindle, F for feed rate, etc.).
The table below lists the most common letter address command codes.
Table 2: Letter Address Command Codes
Variable
Description
Definitions
A
Absolute or incremental position of A axis (rotational axis around X axis)
A, B, C – 4th/5th Axis Rotary Motion
Rotation about the X, Y or Z-axis respectively. The angle is in degrees, with up to three decimal places of precision.
G01 A45.325 B90.
B
Absolute or incremental position of B axis (rotational axis around Y axis)
Same as A
C
Absolute or incremental position of C axis (rotational axis around Z axis)
Same as B
D
Defines diameter or radial offset used for cutter compensation
Used to compensate for tool diameter wear and deflection. D is accompanied by an integer that is the same as the tool number (T5 uses D5, etc). No decimal point is used. It is always used in conjunction with G41 or G42 and an XY move (never an arc). When called, the control reads the register and offsets the tool path left (G41) or right (G42) by the value in the register.
G01 G41 X2. D1
E
Precision feed rate for threading on lathes
F
Defines feed rate
Sets the feed rate when machining lines, arcs or drill cycles. Feed rate can be in Inches per Minute (G94 mode) or Inverse Time (G93 mode). Feed rates can have up to three decimal places of accuracy (for tap cycles) and require a decimal point.
G01 X2. Y0. F30.
G
Address for preparatory commands
G commands often tell the control what kind of motion is wanted (e.g., rapid positioning, linear feed, circular feed, fixed cycle) or what offset value to use.
G02 X2. Y2. I.50 J0.
H
Defines tool length offset;
Incremental axis corresponding to C axis (e.g., on a turn-mill)
This code calls a tool length offset (TLO) register on the control. The control combines the TLO and Fixture Offset Z values to know where the tool is in relation to the part datum. It is always accompanied by an integer (H1, H2, etc), G43, and a Z coordinate.
G43 H1 Z2.
I
Defines arc size in X axis for G02 or G03 arc commands.
Also used as a parameter within some fixed cycles.
For arc moves (G2/G3), this is the incremental X-distance from the arc start point to the arc center. Certain drill cycles also use I as an optional parameter.
G02 X.5 Y2.500 I0. J0.250
J
Defines arc size in Y axis for G02 or G03 arc commands.
Also used as a parameter within some fixed cycles.
For arc moves (G2/G3), this is the incremental Y-distance from the arc start point to the arc center. Certain drill cycles also use J as an optional parameter.
G02 X.5 Y2.500 I0. J0.250
K
Defines arc size in Z axis for G02 or G03 arc commands.
Also used as a parameter within some fixed cycles, equal to L address.
For an arc move (G2/G3), this is the incremental Z-distance from the arc start point to the arc center. In the G17 plane, this is the incremental Z-distance for helical moves. Certain drill cycles also use K as an optional parameter.
G18 G03 X.3 Z2.500 I0. K0.250
L
Fixed cycle loop count;
Specification of what register to edit using G10
Fixed cycle loop count: Defines the number of repetitions (“loops”) of a fixed cycle at each position. Assumed to be 1 unless programmed with another integer. Sometimes the K address is used instead of L. With incremental positioning (G91), a series of equally spaced holes can be programmed as a loop rather than as individual positions. G10 use: Specification of which register to edit (work offsets, tool radius offsets, tool length offsets, etc.).
M
Miscellaneous function
Always accompanied by an integer that determines its meaning. Only one M-code is allowed in each block of code. Expanded definitions of M-codes appear later in this chapter.
M08
N
Line (block) number in program;
System parameter number to be changed using G10
Block numbers can make the CNC program easier to read. They are seldom required for CAD/CAM generated programs with no subprograms. Because they take up control memory, most 3D programs do not use block numbers. Block numbers are integers up to five characters long with no decimal point. They cannot appear before the tape start/end character (%) and usually do not appear before a comment-only block.
N100 T02 M06
O
Program name
Programs are stored on the control by their program number. This is an integer that is preceded by the letter O and has no decimal places.
O1234 (Exercise 1)
P
Serves as parameter address for various G and M codes
Dwell (delay) in seconds. Accompanied by G4 unless used within certain drill cycles.
G4 P.1
Q
Peck increment in canned cycles
The incremental feed distance per pass in a peck drill cycle.
G83 X2.000 Y2.000 Z-.625 F20. R.2 Q.2 P9.
R
Defines size of arc radius or defines retract height in canned cycles
Arcs can be defined using the arc radius R or the I, J, K vectors. IJK vectors are more reliable than R, so it is recommended to use them instead. R is also used by drill cycles as the return plane Z value.
G83 Z-.625 F20. R.2 Q.2 P9.
S
Defines speed, either spindle speed or surface speed depending on mode
Spindle speed in revolutions per minute (RPM). It is an integer value with no decimal, and always used in conjunction with M03 (Spindle on CW) or M04 (Spindle on CCW).
S2500 M03
T
Tool selection
Selects tool. It is an integer value always accompanied by M6 (tool change code).
T01 M06
U
Incremental axis corresponding to X axis (typically only lathe group A controls)
Also defines dwell time on some machines.
In these controls, X and U obviate G90 and G91, respectively. On these lathes, G90 is instead a fixed cycle address for roughing.
V
Incremental axis corresponding to Y axis
Until the 2000s, the V address was very rarely used, because most lathes that used U and W didn’t have a Y-axis, so they didn’t use V. (Green et al. 1996 did not even list V in their table of addresses.) That is still often the case, although the proliferation of live lathe tooling and turn-mill machining has made V address usage less rare than it used to be (Smid 2008 shows an example).
W
Incremental axis corresponding to Z axis (typically only lathe group A controls)
In these controls, Z and W obviate G90 and G91, respectively. On these lathes, G90 is instead a fixed cycle address for roughing.
X
Absolute or incremental position ofX axis.
Coordinate data for the X-axis. Up to four places after the decimal are allowed, and trailing zeros are not used. Coordinates are modal, so there is no need to repeat them in subsequent blocks if they do not change.
G01 X2.250 F20.
Y
Absolute or incremental position of Y axis
Coordinate data for the Y-axis.
G01 Y2.250 F20.
Z
Absolute or incremental position of Z axis
Coordinate data for the Z-axis.
Special Character Code Definitions
The following is a list of commonly used special characters, their meaning, use, and restrictions.
% – Program Start or End
All programs begin and end with % on a block by itself. This code is called the tape rewind character (a holdover from the days when programs were loaded using paper tapes).
( ) – Comments
Comments to the operator must be enclosed in parentheses. The maximum length of a comment is 40 characters, and all characters are capitalized.
(T02: 5/8 END MILL)
/ – Block Delete
Codes after this character are ignored if the Block Delete switch on the control is on.
/ M00
; – End of Block
This character is not visible when the CNC program is read in a text editor (carriage return), but does appear at the end of every block of code when the program is displayed on the machine control.
N8 Z0.750 ;
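The special characters above are easy to experiment with in a small preprocessor. This hypothetical Python sketch strips the % tape-rewind markers, removes parenthesized comments, and honors the Block Delete switch for lines starting with /; it is an illustration of the rules, not code from any real control:

```python
def preprocess(lines, block_delete=False):
    """Strip tape-rewind markers (%), comments in parentheses, and
    optionally block-deleted lines (those starting with '/')."""
    out = []
    for line in lines:
        line = line.strip()
        if line == "%" or not line:
            continue                      # tape rewind marker or blank line
        if line.startswith("/") and block_delete:
            continue                      # Block Delete switch is on
        while "(" in line and ")" in line:
            start, end = line.index("("), line.index(")")
            line = (line[:start] + line[end + 1:]).strip()   # drop (COMMENT)
        if line:
            out.append(line.lstrip("/ ").strip())
    return out

program = ["%", "(T02: 5/8 END MILL)", "T2 M6", "/ M00", "M30", "%"]
print(preprocess(program, block_delete=True))    # the / M00 block is skipped
print(preprocess(program, block_delete=False))   # the M00 block executes
```

Flipping `block_delete` models the physical switch on the control: the same tape runs with or without the optional stop.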
G & M Codes
G & M codes make up most of the contents of a CNC program. The definition of each class of code and the specific meanings of the most important codes are covered next.
G-Codes
Codes that begin with G are called preparatory words because they prepare the machine for a certain type of motion.
Table 3: G-Code
Code
Description
G00
Rapid motion. Used to position the machine for non-milling moves.
G01
Line motion at a specified feed rate.
G02
Clockwise arc.
G03
Counterclockwise arc.
G04
Dwell.
G28
Return to machine home position.
G40
Cutter Diameter Compensation (CDC) off.
G41
Cutter Diameter Compensation (CDC) left.
G42
Cutter Diameter Compensation (CDC) right.
G43
Tool length offset (TLO).
G54
Fixture Offset #1.
G55
Fixture Offset #2.
G56
Fixture Offset #3.
G57
Fixture Offset #4.
G58
Fixture Offset #5.
G59
Fixture Offset #6.
G80
Cancel drill cycle.
G81
Simple drill cycle.
G82
Simple drill cycle with dwell.
G83
Peck drill cycle.
G84
Tap cycle.
G90
Absolute coordinate programming mode.
G91
Incremental coordinate programming mode.
G98
Drill cycle return to Initial point (R).
G99
Drill cycle return to Reference plane (last Z Height)
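The peck drill cycle (G83) listed above can be visualized by computing the depth reached on each peck, using the Q (peck increment) and R (retract plane) addresses from Table 2. This sketch assumes each peck simply advances Q deeper until the final depth is reached; exact retract and chip-break behavior varies by control:

```python
def peck_depths(final_z, r_plane, q):
    """List the Z depth reached at the bottom of each peck in a G83-style
    cycle: the tool advances q per pass from the retract (R) plane until
    it reaches the programmed final depth."""
    depths = []
    z = r_plane
    while z > final_z:
        z = max(z - q, final_z)      # never drill past the final depth
        depths.append(round(z, 4))
    return depths

# Modeled on the Table 2 example: G83 Z-.625 R.2 Q.2
print(peck_depths(final_z=-0.625, r_plane=0.2, q=0.2))
```

Note how the last peck is shortened so the hole bottoms out exactly at Z-.625 rather than overshooting by a full Q increment.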
M-Codes
Codes that begin with M are called miscellaneous words. They control machine auxiliary options like coolant and spindle direction. Only one M-code can appear in each block of code.
Table 4: M-Codes
Code
Description
M00
Program stop. Press Cycle Start button to continue.
M01
Optional stop.
M02
End of program.
M03
Spindle on Clockwise.
M04
Spindle on Counterclockwise.
M05
Spindle stop.
M06
Change tool.
M08
Coolant on.
M09
Coolant off.
M30
End program and press Cycle Start to run it again.
Select G-Code Definitions (Expanded)
G00 – Rapid Move
This code commands the machine to move as fast as it can to a specified point. It is always used with a coordinate position and is modal. Unlike G01, G00 does not coordinate the axes to move in a straight line. Rather, each axis moves at its maximum speed until it is satisfied. This results in motion as shown in Figure 18, below.
G00 X0. Y0.
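The uncoordinated motion of G00 can be illustrated numerically. In this sketch both axes share one hypothetical rapid rate, so a diagonal rapid traces a 45-degree segment until the nearer axis finishes, then a straight segment along the remaining axis — the characteristic "dogleg" the text describes:

```python
def rapid_path(start, end, rate=700.0):
    """Illustrate G00 motion: each axis runs at its full rapid rate until
    it reaches its own target, so the path bends where the nearer axis
    finishes. `rate` is an arbitrary rapid rate used for both axes."""
    times = [abs(e - s) / rate for s, e in zip(start, end)]
    t_mid = min(times)               # moment the nearer axis reaches its target
    mid = []
    for s, e in zip(start, end):
        if e == s:
            mid.append(s)
        else:
            direction = 1.0 if e > s else -1.0
            mid.append(s + direction * rate * t_mid)
    return [tuple(start), tuple(mid), tuple(end)]

# Rapid from (0, 0) to (4, 1): the path bends at roughly (1, 1),
# not a straight line from start to end.
print(rapid_path((0.0, 0.0), (4.0, 1.0)))
```

This is exactly why rapid moves near fixtures deserve caution: the tool does not travel the straight line you might expect from G01.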
OBJECTIVE
After completing this unit, you should be able to:
• Understand the CNC Operation.
• List the steps to set up and operate a CNC mill.
• Identify the location and purpose of the operating controls on the Haas CNC Mill control.
• Start and home a CNC machine.
• Load tools into tool carousel.
• Set Tool Length Offsets.
• Set Part Offsets.
• Load a CNC program into the machine control.
• Dry run
• Safely run a new CNC program.
• Adjust offsets to account for tool wear and deflection.
• Shut down a CNC machine correctly.
Overview of CNC Setup and Operation
CNC machine setup and operation follows the process below:
1. Pre-Start
2. Start/Home
3. Load Tools
4. Mount the Part in the Vise
5. Set Tool Length Offsets Z
6. Set Part Offset XY
7. Load CNC Program
8. Dry Run
9. Run Program
10. Adjust Offsets as Needed
11. Shut Down
1. Pre-Start
Before starting the machine, check to ensure oil and coolant levels are full. Check the machine maintenance manual if you are unsure about how to service it. Ensure the work area is clear of any loose tools or equipment. If the machine requires an air supply, ensure the compressor is on and pressure meets the machine requirements.
2. Start/Home
Turn on power to the machine and control. The main breaker is located at the back of the machine. The machine power button is located in the upper-left corner on the control face.
3. Load Tools
Load all tools into the tool carousel in the order listed in the CNC program tool list.
4. Mount the Part in the Vise
Place the part to be machined in the vise and tighten.
5. Set Tool Length Offsets
For each tool used, in the order listed in the CNC program, jog the tool to the top of the part and then set the Tool Length Offset (TLO).
6. Set Part Offset XY
Once the vise or other workholding is properly installed and aligned on the machine, set the fixture offset to locate the part XY datum.
7. Load CNC Program
Load your CNC program into the CNC machine control using USB flash memory or a floppy disk.
8. Dry Run
Run the program in the air, about 2.00 in. above the part.
9. Run Program
Run the program, using extra caution until the program is proven to be error-free.
10. Adjust Offsets as Required
Check the part features and adjust the CDC or TLO registers as needed to ensure the part is within design specifications.
11. Shut Down
Remove the part from the vise and the tools from the spindle, then properly shut down the machine. Be sure to clean the work area and leave the machine and tools in the location and condition you found them.
UNIT TEST
1. Please list the CNC setup and operation process steps.
2. Describe each step of the process.
08.6: Unit 6: Haas Control
OBJECTIVE
After completing this unit, you should be able to:
• Identify the Haas Control.
• Identify the Keyboard.
• Describe Start/Home Machine procedure.
• Describe Door Override procedure.
• Describe Load Tools procedure.
• Describe Tool Length Offset (TLO) for each tool.
• Verify part zero offset (XY) using MDI.
• Describe setting the tool offset.
• Verify Tool Length Offset using MDI.
• Describe the procedure to load a CNC program.
• Describe the procedure to save a CNC program.
• Explain how to run a CNC program.
• Describe the use of cutter diameter compensation.
• Describe the shutdown procedure.
Haas Control
The Haas control is shown in Figures 18 and 19. Familiarize yourself with the location of buttons and controls. Detailed instructions on the following pages show how to operate the control.
Learning OBJECTIVEs
After completing this unit, you should be able to:
• Describe save image.
• Describe the load image to Mastercam.
• Describe scaling and dragging or translate image.
• Describe Mastercam setup, stock setup, and Tool setting.
• Describe creating Toolpaths.
Mastercam
Raster to Vector Image Conversion:
This worksheet walks you through how to import raster images into Mastercam. Mastercam is vector software, while most images on the internet and from other sources are raster images. Raster images are made up of thousands of pixels of differing color, whereas vector images are made of lines that use mathematical formulas to determine their shape. In this activity you will learn how to convert raster images to vector images that Mastercam can use.
1. Using the internet, search for your desired image. Logos or images with sharp color changes work best; try to avoid pictures taken with a camera. .jpg, .gif, and .bmp are the file extensions currently supported by Mastercam.
2. Try to get the largest image possible. When using Google, examine the image sizing information located under the image. The larger the numbers, the better.
3. Then navigate to the raw picture by clicking on the image.
4. Click on See full size image.
5. RIGHT CLICK on the image and select Save Image As…
6. Save your raster image to a location that you know.
7. Launch Mastercam 2017
8. Once it loads, press ALT-C or select Run User Application (under Settings); a window called Chooks will open. This is a collection of add-on files for Mastercam.
9. Navigate and click on Rast2Vec.dli
10. It may ask if you want to keep the current geometry. Click Yes.
11. Navigate to your saved image and click on it.
12. The Black White conversion window will open. Slide the Threshold slider in the Black White conversion dialog box until the image on right shows the desired amount of detail.
13. When you like what you see, click OK.
14. The Rast2Vec window will pop up; no modification should be needed. Click OK.
15. When the Adjust Geometry window opens, click OK.
16. Your image should now be on the screen.
17. Click Yes to exit Rast2Vec.
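The Threshold slider in step 12 controls a simple per-pixel black/white decision. The sketch below is a hypothetical pure-Python illustration of that idea, not Mastercam's actual code; the function name and pixel values are ours:

```python
# A pure-Python illustration of the black/white threshold conversion
# performed by the Threshold slider. Pixel brightness runs from
# 0 (black) to 255 (white); pixels at or above the threshold become
# white, everything else becomes black.

def threshold_pixels(pixels, threshold):
    """Convert grayscale pixel values to pure black (0) or white (255)."""
    return [255 if p >= threshold else 0 for p in pixels]

# A small row of grayscale pixels from a hypothetical logo edge.
row = [12, 80, 130, 200, 255]

# A low threshold keeps more of the image white.
print(threshold_pixels(row, 100))  # [0, 0, 255, 255, 255]

# Raising the threshold turns more pixels black, reducing detail.
print(threshold_pixels(row, 220))  # [0, 0, 0, 0, 255]
```

Sliding the threshold up or down changes how much of the image survives as outline, which is why you watch the preview on the right until the desired amount of detail shows.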
Scaling and Dragging or Translate your image
After you have imported an image, it is often not in the right location or at the size you need. You may need to scale it up or down and drag it to the position you would like.
Scaling:
1. From the Toolbar > Xform > Scale
2. Select all the lines that are part of your image. The lines you select should turn yellow. You can use a window or select each line individually. If you pick a line you do not want, pick it again and it will change back from yellow.
3. Once you have selected your entire image then click on the Green Ball.
4. The Scale window will open. Change to MOVE and to PERCENTAGE, then adjust the percentage up or down until your image is the size you would like. After typing the new percentage, press ENTER on the keyboard to preview the size change. Once the size is what you are looking for, click OK.
Dragging or Translate
1. From the Toolbar > Xform > Drag or Translate
2. Once again, select all the lines that are part of your image.
3. Once you have selected your entire image then click on the Green Ball.
4. Change from Copy To Move.
5. Click near your Image in the graphics screen. As you move your mouse the image will move.
6. Once the image is in the proper place, LEFT click again to place the image.
MASTERCAM SETUP
1. Turn on the Operation Manager window by pressing Alt-O
2. Click on Machine Type – Mill, then select the HAAS 3X MINI MILL – TOOLROM.MMD-5. You should see the HAAS 3X MINI pop up in the Operation Manager window.
STOCK SETUP
1. Click on the plus sign next to Properties to see the drop-down list and click on Stock Setup. The Machine Group Properties dialog box should appear. Check the box next to Display to activate it, then click on the Bounding Box button. The Bounding Box dialog box should appear. Confirm that X, Y, and Z are all set to zero, then click OK.
2. Change the value of Z to the thickness of the stock you are using, then click OK. In an isometric view, you should see your part go from 2D to 3D with the image on the top surface.
TOOL SETTING
1. Click on Tool Setting. Enter a program number. Feed Calculation should be "From Tool." Under Toolpath Configuration, check "Assign tool number sequentially" and "Warn of duplicate tool numbers." Under Advanced Operation, check the "Override defaults with modal values" box and then check all three selections below it. Under Sequence #, change the start number to 10. Select the material by clicking on the Select button. Click on the drop-down arrow associated with Source and select "Mill – library." From the list, select "ALUMINUM inch – 6061," then click OK. Click OK to exit the Machine Group Properties dialog box.
CREATING TOOLPATHS
1. Choose Toolpath – Contour. The chaining dialog box should appear. Select Window to choose your engraving elements, then click anywhere to establish an approximate start point (selection will change to yellow). Click OK to bring up the 2D Toolpaths – Contour dialog box.
2. Toolpath Type should automatically be set to Contour.
3. Click on Tool (below Toolpath Type), then click on the Select Library Tool button. A Tool Selection dialog box will appear. To limit the list to a particular type of tool (the ball end mills we will use for engraving), click on Filter, then select/de-select tool types so that only Endmill2 Sphere is highlighted. Click OK. Choose the 1/32 Ball Endmill, then click OK. Change the feed rate to 5.0 and the Spindle Speed to 4000, then click OK.
4. Under Cut Parameters, Compensation Type should be off.
5. Under Lead In/Lead Out, uncheck the Lead In/Out box, as we will not be using this feature.
6. Under Linking Parameters, Clearance should be set to 0.5, Retract should be 0.1, Feed Plane should be 0.1, Top of Stock should be 0.0, and Depth should be -0.015. Then click OK.
7. Run Verify Selected Operations in the Operations Manager to see the toolpath.
UNIT TEST
1. What type of software is Mastercam?
2. Please explain the Threshold Slider.
3. List the file types supported by Mastercam.
4. Describe scaling and dragging or translate image.
5. Describe stock setup.
6. Describe the tool setting.
7. Explain how to create Toolpaths.
Interpreting Metal Fab Drawings is a course that introduces the principles of interpretation and application of industrial fabrication drawings. Basic principles and techniques of metal fabrication are introduced through planning and constructing fixtures used in fabrication from drawings. Basic tools and equipment for layout and fitting of welded fabrications are utilized. The course covers the use and application of AWS welding symbols, applying blueprints and welding symbols in the classroom and in the shop as practical assignments.
The primary reason for understanding this information is to communicate between all parties involved. These could include the welder, engineer, and quality control, among many others. This is a universal language that provides clear instructions for producing a quality part.
The American Welding Society has created a detailed publication (Standard Symbols for Welding, Brazing, and Nondestructive Examination, AWS A2.4) that provides an extensive amount of knowledge of this language.
Below is an overview of what the elements of a welding symbol may or may not include. The welding symbol provides a visualization for the welder or those involved to be able to apply the applicable weld to the work piece.
1.02: Blue Print Review
Blue Print Reading Review
Understanding blue prints is a vital skill in the metals industry. Whether it be as a structural welder, pipe fitter, or quality control, this is the language that is universal for all individuals involved in a project.
This chapter is here to recap some line types as well as visualizing blue prints and plans.
Line Types
1. Object Line
An object line is a thick line, without breaks, that indicates all edges and visible surfaces of an object. An object line may also be called a visible line.
2. Hidden Line
A hidden line is a medium weight line, made of short dashes, to show edges, surfaces and corners which cannot be seen. Sometimes they are used to make a drawing easier to understand. Often they are omitted.
3. Section Line
Section lines are used on a drawing to show how an object would look if it were sectioned, or cut apart, to give a better picture of its shape or internal construction. Section lines are very thin and are usually drawn at an angle of 45 degrees. They show the cut surface of an object in a sectional view. Sections and section lines will be explained in BPR 7.
4. Center Line
Center lines are used to indicate the centers of holes, arcs, and symmetrical objects. They are very thin, long-short-long lines.
5. Dimension Line
Dimension lines are thin lines with a break for entering a measurement of some sort. The ends of the lines have arrowheads pointing to extension lines (below).
6. Extension Line
Extension lines are also thin lines, showing the limits of dimensions. Dimension line arrowheads touch extension lines.
7. Leader Line
Leader lines are also thin lines, used to point to an area of a drawing requiring an explanatory note.
Visualizing parts
As a welder, it is imperative that you can visualize the part you are building from the print. This is not always an easy task. Prints may have a variety of parts or may not be detailed as well as they should be, but being able to distinguish sides and faces can help decipher these harder prints.
Surface identification quiz
Terminology of joints may play a large role in communication with supervisors and others working on the same weldment. Understanding the concepts associated with joint design as well as identification of parts is critical for any metal worker.
Joint Types
There are 5 basic joint types used in the metal fabrication field:
Butt Joint: A joint type in which the butting ends of one or more work pieces are aligned in approximately the same plane. A butt joint may have any of several prepared faces; we will discuss these in subsequent chapters. Represented is a square groove butt joint.
Possible Welds for a Butt Joint:
Square Groove
Bevel Groove
V Groove
J Groove
U Groove
Flare Bevel Groove
Flare Vee Groove
Flanged Edge
Scarf (Brazed Joint)
Corner Joint: One of the most popular joints in the sheet metal industry, the corner joint is used on the outer edge of the piece. In this joint type, two metal parts come together at right angles to form an L. These are common in the construction of boxes, box frames, and similar fabrications. There are variations of the corner joint; shown is a closed corner joint.
Possible welds for a corner joint:
Fillet
Edge Flange
Corner Flange
Bevel Groove
V Groove
Flare Bevel Groove
Flare V Groove
J Groove
U Groove
Square Groove
Seam
Spot
Projection
Slot
Plug
Lap Joint: A joint between two overlapping members in parallel planes.
Possible welds for lap joint:
Fillet
Bevel Groove
Square Groove
Flare V groove
J Groove
Plug
Slot
Spot
Projection
Seam
*Braze
Tee Joint: A joint in which two pieces of metal are perpendicular to each other. This is one of the most common joints encountered in the metal fabrication industry.
Possible welds for a Tee joint:
Fillet
Bevel Groove
Square Groove
Flare V groove
J Groove
Plug
Slot
Spot
Projection
Seam
Edge Joint: A joint formed by uniting two edges or two surfaces (as by welding) especially making a corner.
Possible welds for edge joints:
Square Groove
Bevel Groove
V Groove
Edge
J Groove
U Groove
Flare Bevel Groove
Flare V Groove
Corner Flange
Edge Flange
Seam
Joints may be left as cut, but some may have a prepared surface determined by the engineer, designer, or welder. This is commonly seen with a butt weld when the members to be joined are thicker. This will be revisited in subsequent chapters.
Joint terminology
Outside of specific joint types there are some terms that will play a role in deciding the correct procedure for welding or prepping the member(s) of a weldment.
Joint Root- This is the area which is in closest proximity to another member making the joint. This could be viewed as a line, area, or a point depending on the view in front of you.
Groove Face- The surface within the groove the weld may be applied to. This can be measured at an angle from the surface of the part to the root edge.
Root Edge- This is a root face that has no width (land) to it. In GTAW it is commonly referred to as a knife edge preparation.
Root Face- This part of the prepped member is the portion of the groove face that is also within the joint root. This is commonly called a flat or land in the industry. This is usually a predetermined size even though the size is not always called out. If you take the overall thickness of the member and subtract the groove depth you will be left with the root face depth.
Bevel Angle- an angle between the bevel of a member and a perpendicular plane in relation to the surface. This may be only equal to half of a groove angle if the opposite joining member is also prepped. If only a single member is prepped this is also considered the groove angle.
Depth of Bevel- the distance from the surface of the base metal to the root edge or beginning of the root face.
Groove Angle- an included angle of the groove between work pieces. If both members are prepared this angle is from groove face to groove face. This dimension is shown in degrees above or below the welding symbol depending on if it is arrow side or other side designation.
Groove Radius-This pertains specifically to J or U groove welds as these have a radius most commonly specific by machining.
Root Opening- a gap between two joining members.
Joint Root Examples. Some may be shown with hatching.
Groove Face, Root Edge, Root Face examples. Some are shown with or without hatching to better depict a visual.
Shown below are examples of bevel angle, groove angle, depth of bevel, groove radius, and root opening. | textbooks/workforce/Manufacturing/Interpretation_of_Metal_Fab_Drawings_(Moran)/1.03%3A_Joint_types_and_Terminology.txt |
Understanding Weld vs. Welding symbol
A weld symbol is not the same as a welding symbol. The weld symbol specifies the type of weld to be applied to a part. The welding symbol is made of several parts, including the reference line, arrow, and weld symbol when required. The symbols in this book are a representation of what weld and welding symbols look like. There are specific design requirements when they are used in accordance with a blueprint.
Reference Line and Arrow
There are two parts that make up the main body of a welding symbol: the reference line and the arrow. The horizontal line that makes up the main body is called the reference line. This is the anchor to which all the other welding symbols are tied. The information that is pertinent for making the weld is placed on the reference line in specific places. The arrow connects the reference line to the joint in which the weld(s) is (are) to be made. There are several combinations of the reference line and arrow, but the reference line will always be placed in a horizontal position. The symbol is always read from left to right. If you have been around blueprints, the arrow may look a lot like a leader line. They are not the same thing, so be mindful when reviewing welding symbols.
The reference line may include what is called the tail, which looks like the letter V turned sideways, like this: >. The tail gives an area to write specifics about what weld process, welding procedure, and even material specifications are required for that specific welding symbol.
Arrow vs Other Side
Placement of the weld will depend on placement of the symbol above or below the reference line. If the symbol is placed above the reference line, it is calling for the other side. If it is placed below the reference line, it is calling for the arrow side. Other and arrow side mean exactly what they say: if the arrow is pointing to the right side of the joint and an arrow side weld is called out, the weld will be placed on the right side. If the arrow is pointing at the right side of the joint and an other side weld is called out, the weld will be applied to the left side of the part.
If a weld is to be placed on the other side of a joint a symbol will be placed above the reference line.
If a weld is to be placed on the arrow side of a joint the symbol will be placed below the reference line.
It is very important to understand the difference between these two sides, as welding the correct side finishes a product, while welding the wrong side sends it back to be reworked in order to get the correct outcome. The concept may seem very simple at the moment, but as we work through this book and start adding more elements to the welding symbol, it may become more taxing on the thought process. The most important thing to do is break the symbol down piece by piece and truly understand what it is asking or requiring for the weld.
At times there may be multiple reference lines. If this is the case, it is important to remember the order in which the welding is to occur: the reference line nearest the arrow is the first operation, followed by the second, and so on until all operations are complete.
Symbol Fundamental Quiz
Label the image below with the appropriate letter (arrow/other) designation.
Letter ______ designates the Arrow side of the joint.
Letter ______ designates the Other side of the joint.
Letter ______ designates the Arrow side of the joint.
Letter ______ designates the Other side of the joint. | textbooks/workforce/Manufacturing/Interpretation_of_Metal_Fab_Drawings_(Moran)/1.04%3A_Basics_of_Welding_Symbols.txt |
These are symbols that are added onto a weld symbol to give further instruction or knowledge of the final product.
Weld all around
This symbol is simply a circle at the junction of the arrow and the reference line. It indicates that a weld is to be completed around the entire joint. A common place you could see this is the junction of a square or round tube and a plate.
Field Weld
The symbol is a flag at the junction of the arrow and reference line. It indicates that while the part may be assembled in a shop setting, its final welding procedures will be completed in the field during installation. The flag itself should point in the direction of the tail (the end of the reference line). It can also be placed above or below the reference line for an other or arrow side specification.
Melt Through
Melt through is most commonly associated with a groove weld. It indicates that you are achieving 100% penetration with root reinforcement. This symbol can also be seen when sheet metal is being welded and there is an implied melt through in seams and joints.
Melt through may include a finishing contour as well as finishing method as parts are often cleaned up before they are sent to be painted, powder coated, or put into service.
Consumable Insert
A consumable insert symbol is used when an insert that becomes part of the weld is used within a welded joint. Inserts are commonly specified by shape, size, and material. The symbol is placed on the opposite side of the reference line from the groove weld symbol. The consumable insert class, defined by the American Welding Society, must be placed in the tail of the welding symbol.
Backing
A backing strip is specified by placing a rectangle on the opposite side of the reference line from the groove weld symbol. If the backing must be removed after the welding operation has been finished, the letter R must be placed inside the backing symbol. The material to be used as well as its dimensions must be placed in the tail of the symbol or somewhere on the drawing.
Spacer
A spacer may be used on a double groove weld. In this case both the top and bottom are prepped and a spacer is added to the middle of the groove. The symbol is a rectangle that breaks the reference line. A note on the print or in the tail can specify the material and dimensions.
Weld contour
There are 3 symbols used for specifying the contour finish of a weld. This is the final appearance of a weld as it goes into service. These are commonly associated with fillet welds but sometimes will present themselves with groove welds. If a “flat” symbol is used with a groove weld it will then be called a flush contour rather than flat.
Contour symbols may also include yet another element which will show to the right of the fillet weld symbol above the contour. These are letter designations which will be for finishing methods.
C- Chipping
G- Grinding
H- Hammering
M- Machining
P- Planishing
R- Rolling
U- Unspecified
Shown is a fillet weld to be applied to the other side with a flat contour by grinding. | textbooks/workforce/Manufacturing/Interpretation_of_Metal_Fab_Drawings_(Moran)/1.05%3A_Supplementary_Welding_Symbols.txt |
Fillet welds are one of the most common weld types in the industry. This weld is used when the joint has two members coming together to form an intersection of commonly 90 degrees. These welds can be applied on varying angles but this would be the most prominent.
A fillet weld symbol can be used with arrow side (below the reference line) or other side (above the reference line) significance, or on both sides (on both sides of the reference line). When a fillet weld is required on both sides of the reference line it is called a double fillet weld. The vertical leg of the symbol will always be placed to the left regardless of which way the arrow is pointing.
Fillet welds may have a size associated with them. This size is called out on the left side of the symbol, before the vertical leg. The size indicates the leg length of the weld. If a single size is called out, the weld should have equal leg sizes. An equal leg fillet weld is not commonly dimensioned on the print; it is shown below for demonstration purposes.
If a double fillet weld is called out, the size will be shown for both sides of the joint. Depending on the part, these welds could vary in size, so it is necessary to provide this information.
There are times when an unequal leg fillet weld is called out. In this situation the part must be dimensioned in order to apply the correct leg size to the right member being welded. This may include only one of the two leg lengths. If there were no indication of which leg is which the part could be welded incorrectly.
Sometimes fillet welds will not be shown with a size but will rather have a note in the tail of the symbol that gives required information for size. This is common when fillet welds will all be the same size.
In the case of the length of a weld, this may or may not have a dimension associated with it. If the weld does not have a dimension, the weld will run the continuous length of the joint: whether the part is 2" or 60" long, with no dimension the weld runs the full length of the joint. A weld may also be applied to only a specific length of a joint. This must be shown in the weld symbol to communicate the information between individuals. The weld length is provided on the right side of the fillet weld symbol.
This shows a 6” fillet weld to be applied to the arrow side.
There may be times where a length is given on a part and the location of the weld will be given with a dimension in order to achieve the correct location.
Hatching lines may be used to indicate the length of a weld instead of using a dimension on the weld symbol itself.
There are instances when a weld may change direction because of part geometry. If this happens it will be called out using multiple arrows off of one reference line.
When a weld is not required to be continuous, it is common to apply an intermittent weld. This means there are gaps between the termination of one weld and the start of the next. These are called weld segments; they are commonly referred to as skip welds in the industry.
When using an intermittent weld there is a call-out specified for the length of the weld and also the pitch to be applied. These are shown on the right side of the symbol as the length of the segment, a hyphen, and then the pitch of the welds. The pitch is measured from the center of one segment to the center of the next. (Ex. 1-2)
When there are intermittent welds on both sides of the joint, this becomes chain intermittent welding. This can be seen on long sections of a tee joint that is not under a large amount of stress.
When an intermittent fillet is not a chain weld, it is called a staggered intermittent fillet weld. The welds are placed on both sides of the joint but offset from one another. This offset shows on the reference line as well; it could be staggered in either direction on the reference line. Dimensions of these welds must be specified on both sides of the reference line.
If the weld is only to be intermittent on one side and a continuous weld on the other the symbol must be dimensioned individually.
Not all pitches will be the same or evenly spaced. You must be able to calculate the spacing between weld stops and weld starts in order to apply the correct welds to the specification. An easy way to find this distance is to simply subtract the length of segment from the pitch.
7 inches (pitch) – 3 inches (length of segment) = 4 inches (spacing in between welds)
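The arithmetic above can be captured in a small helper. This is an illustrative sketch (the function name is ours, not from any standard), assuming all lengths are in inches:

```python
# Spacing between intermittent weld segments:
# gap = pitch (center-to-center distance) - length of segment.

def weld_gap(pitch, segment_length):
    """Distance from the end of one weld segment to the start of the next."""
    if segment_length > pitch:
        raise ValueError("segment length cannot exceed the pitch")
    return pitch - segment_length

# The worked example from the text: 3" segments on a 7" pitch.
print(weld_gap(7, 3))  # 4 inches between welds
```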
At times there will be a mixture of continuous welding as well as intermittent welding. If this occurs, the spacing between the segments will all be the same.
If this combination occurs, welding symbols should specify continuous and intermittent on the same side of the joint. These are also often dimensioned.
Fillet Weld Quiz
Write down the corresponding information with each letter and specify what it is. | textbooks/workforce/Manufacturing/Interpretation_of_Metal_Fab_Drawings_(Moran)/1.06%3A_Fillet_Weld_Symbols.txt |
A groove weld is used when two parts come together in the same plane. These welds are applied in a butt joint and may or may not have a preparation before welding. This is the reason there are several types of groove welding symbols.
The symbols for these grooves are nearly identical to the symbols that represent them.
When a weld is to be applied to only one side of a joint it will be called a single groove weld. For example below is a welding symbol of a single V-Groove weld on the other side. All single groove welds should be considered complete joint penetration (CJP) unless otherwise specified.
If a weld is to be applied to both sides of the joint this is called a double groove weld. For example below is a welding symbol of a double bevel groove weld.
The theory behind the single groove weld and double groove weld translate to all of the groove weld symbols. It would be redundant to recreate all of these images.
In some cases you will see a jog in the arrow. This is called a break in the arrow, and it designates which side of the joint is required to have the preparation done to it. For example, if a single bevel is to be applied to the left side of the joint, a broken arrow will point specifically at that side of the joint.
If there were no break in the arrow, the welder or fitter would choose which side to prepare according to their knowledge. This could be an issue if the engineer has specific requirements for the part or weld.
Quiz
Draw the symbol representing the below groove and name it (don’t forget to specify which side of the joint is prepared):
Draw the symbol for a V groove on the other side below:
Groove weld dimensioning
There are several dimensions that may be added to a groove weld if needed. These can include a groove angle, root opening, groove radius, depth of the groove preparation, and groove weld size. At times this information may not be included at all; in that case it is the welder's discretion how the part will be prepared and welded.
Groove angle is shown in degrees and includes all of the groove; for a V groove it is the dimension from one groove face to the other. This can be confused with bevel angle, which is only one half of a V groove. This dimension is shown within the weld symbol itself. There is a possibility for two different angles on a double groove weld; the arrow and other side do not necessarily have to match in angle.
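The groove-angle/bevel-angle relationship can also be shown numerically. This is an illustrative sketch (the function name is ours), assuming that when both members of a V groove are prepared they are prepared equally:

```python
# Bevel angle per member: on a symmetric V groove each prepared member
# carries half of the included groove angle; on a single-bevel groove
# the bevel angle and the groove angle are the same.

def bevel_angle(groove_angle, members_prepared=2):
    """Bevel angle per prepared member, in degrees."""
    return groove_angle / members_prepared

# A 90-degree included V groove gives a 45-degree bevel on each member.
print(bevel_angle(90))  # 45.0

# A single-bevel groove: only one member is prepared.
print(bevel_angle(45, members_prepared=1))  # 45.0
```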
A groove weld is the most common weld to have a root opening. This is a predetermined gap between the two members to be welded. There is not always a root opening, and this dimension can be omitted from the welding symbol. It is common to put a root opening on a part to ensure complete penetration or even melt through. The melt through symbol is included in the supplementary welding symbols.
Grooves with U and J preparations are rather special welds. If done to correct standards, these welds are machined with a specific groove radius as well as root face. These dimensions must be shown in a detail or section view that is noted in the tail of the welding symbol.
The preparation of the groove may be called out for how deep you are to prepare the part. This is called the depth of groove. V- Grooves, j- grooves, and u- grooves are the most commonly sized welds for depth. Although this does not mean it cannot be applied to others. The dimension will be shown to the left of the weld symbol.
As we start adding more elements the symbols get fairly complicated looking. The easiest thing to do is slow down and look at each individual piece and apply it to what we have learned. For example the below weld is a single V-groove weld on the other side. This weld has a ½ inch groove depth, 1/16inch root opening, and 90 degree groove angle.
When using a groove depth that is not the full depth of the part, we leave a flat area in the root. This area is called the root face; a more common term you will hear is the land. In the diagram above we have a ½ inch groove depth on a ¾ inch part, which leaves us a ¼ inch root face.
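The root face arithmetic described above (member thickness minus groove depth) can be sketched as a small helper; the function name is ours and all dimensions are assumed to be in inches:

```python
# Root face (land) left at the bottom of a prepared groove:
# root face = member thickness - depth of groove preparation.

def root_face(thickness, groove_depth):
    """Remaining flat (land) at the root of a prepared member."""
    if groove_depth > thickness:
        raise ValueError("groove depth cannot exceed member thickness")
    return thickness - groove_depth

# The example from the text: 3/4" plate with a 1/2" deep groove prep.
print(root_face(0.75, 0.5))  # 0.25" root face
```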
Often associated with a groove weld is the weld size. This is the depth of penetration you will get when applying the weld. When a weld is applied, we should be melting into the root of the part, so our weld should be larger in dimension than the preparation of the joint. This dimension shows to the left of the weld symbol. When paired with a groove depth, the weld size will be within parentheses. If no weld size is shown, the weld should be complete joint penetration.
In the case of a groove that shows a depth of groove preparation but no weld size, the weld shall not be less than the depth of preparation. If the weld were smaller than this, it would not achieve adequate fusion or fill the groove.
There are times when dimensions will not be shown on grooves. If the joint is symmetrical, the weld shall be complete joint penetration. This is easily pictured with a double V-groove.
The image shown above is a double V-groove weld. There is no groove depth shown, so at the welder's discretion the parts are prepped to ¼ inch on both sides to create a symmetrical joint.
When working with a double groove that has the same dimensions on both sides, it is required that dimensions are shown on both sides of the reference line. This is important because if one dimension is left off, there will be an unknown size, and this may compromise the weld.
There are also times that a weld is not required to penetrate the depth of the groove. The easiest way to accomplish this would be to place a weld size dimension to the left of the weld symbol that is of smaller size than the material thickness.
A weld may be applied to both sides in order to get penetration through the full thickness without preparation of the part. This is limited to thinner material, depending on the welding process used.
The two flare-type grooves, the flare-bevel and the flare-V, are very common when working with sheet metal and when welding tubing that has a large radius on the corners. This is fairly common in tubing ¼" and above in thickness. When working with sheet metal it is common to use this joint type to fuse the parts together. Instead of adding filler, the material forming the flare bevel may provide a leg of 1/8" or so and make up for the filler.
When using either of these symbols it is important to know the difference between the depth of groove preparation and the weld size. As with a regular bevel or V, the depth of groove preparation goes to the left of the weld symbol and to the left of the weld size, which is shown in parentheses. A length can be added as a dimension to the right of the weld symbol.
Back, Backing weld, Surfacing weld
The symbol for a back weld and a backing weld is the same; you must look in the tail for further information to distinguish between them.
A back weld is a weld applied to the root side of a joint after the weld in the groove has been made. This is most commonly used to ensure complete penetration on CJP grooves. The back weld is usually applied after the root has been ground or gouged out, to make sure the weld is made to sufficient material. To remember the difference between a back and a backing weld: you must always go back in order to do a back weld.
A backing weld is made on the root side of a groove to ensure that the weld that will be made in the groove does not melt through the back side. This may also help ensure CJP.
Below is a representation of a backing weld.
Below is a representation of a back weld.
At times the drawing will include a note in the tail stating the order in which the welds are to be made. It may be as simple as "other side weld made first," or it may use more precise terms such as "other side bevel groove welded before back weld on arrow side."
Surfacing welds
Surfacing welds are made with single or multiple passes for a variety of reasons, including buildup of worn material, hard facing a part, or increasing part dimensions. This symbol may be placed on the arrow side of a joint only, and it is important that the arrow points specifically to where the surfacing shall be added.
These welds may include a thickness of weld, located to the left of the weld symbol, and may also show a length to the right of the symbol. This type of weld will more than likely have a detail view with dimensions for the welding.
When a surfacing weld needs multiple layers, this may be shown in a note on the blueprint or determined from the reference lines. There are times when more than one reference line is used, giving an order of operations: for example, a backing weld would be listed on the reference line closest to the arrow, and the groove weld would be placed on the second reference line.
To show this in surfacing welds, the symbol may call for a specific size for the first layer of buildup and a different size for the second or subsequent layers. If there is a change in direction, this may be shown in the tail of a multiple-reference-line welding symbol.
A surfacing weld will run the entire length of the part unless a dimension, note, or other designation indicates it is not full length. This also plays a part when welding a shaft or other round object. With a round object, rather than longitudinal (long dimension) or lateral (short dimension), you may see axial (along the length of the shaft) or circumferential (around the shaft). When a weld is to be made on a shaft or other round part, this must be called out, or an incorrect procedure may be applied.
A plug weld is a round weld made inside an existing hole, most commonly in one piece of metal, welding that piece to another member. The plug weld symbol is a rectangle with a diameter symbol placed to the left of it, along with the number for that diameter.
Some drawings will not indicate the hole in the print, so dimensions come into play when locating where a plug weld will be executed. The location will be indicated by a centerline through the part.
The symbol above indicates a ½" plug weld offset 1" from the edge to the center of the weld.
Some plug welds may include a countersink of the hole; the angle is called the included angle of countersink. It is shown below the rectangle of the symbol, or above the weld symbol if the plug weld is on the other side. When figuring the sizing of the hole, remember that the diameter given is the narrowest point of the hole, at the base of the weld.
Without a countersink included, it will be necessary to follow shop standards and procedures to dictate what angle, if any, is needed. Most shops have a procedure in place for tasks that are done often; if required, it may be listed on a welding procedure for the plug welds being completed.
If a number of plug welds are needed, there will be yet another element added to the symbol. This will be a number surrounded by parentheses, such as (6).
When applying a plug weld it is important to know the depth of fill that is required. If the plug weld should fill the hole completely, the symbol is left empty, meaning there is no dimension inside the rectangle. If the hole should be filled only partially, that dimension is placed inside the rectangle. It is given as a fraction and indicates the amount of fill in inches, not necessarily the full depth of the hole.
Another element that can be added to this weld symbol is the pitch (spacing) for multiple welds. It is located to the right of the symbol and is a number representing the center-to-center spacing of the welds.
Plug welds may have a contour symbol, added below the symbol or countersink angle on the arrow side, and above the symbol on the other side of the reference line. There are many types of contours and finishing designations; these are covered in supplementary welding symbols.
This symbol represents:
Plug Weld
Arrow Side
½ inch in diameter
1/8” amount of fill
45 degree included angle of countersink
Flat contour
Finished by Machining
Slot Weld Symbol
The slot weld symbol is the same as that used for plug welds. However, the symbol will not show a diameter symbol before the size; the size of the weld is the slot width instead. It is shown to the left of the symbol, just as in plug welds.
The length of the slot weld is presented to the right of the symbol. This may also include a pitch showing the center-to-center spacing of the slot welds. If there is a pitch, the number of slot welds will be provided in parentheses under the symbol on the arrow side, or above the symbol for an other side weld.
The drawing must show the orientation of the slot welds so as not to confuse their direction along the part. The above image shows the slots with a vertical orientation to the part, versus the horizontal layout shown below.
A slot weld can include any number of elements; these are very similar to those of the plug weld symbol just explained.
These can include:
Arrow or other side
Size (width)
Length of slot
Pitch
Depth of fill
Number of welds required
Contour
Finish
Make no mistake about the fill of a plug or slot weld. It is possible to mistakenly place a fillet weld inside the hole rather than actually filling the hole as a plug weld requires, and the same mistake can be made on a slot weld.
Plug and Slot Quiz
Write down all information regarding the welding symbols below.
Spot Weld
The spot weld symbol is simply a circle that may be placed above, below, or centered on the reference line. When the symbol is centered on the reference line, this indicates that there is no side significance. A weld with no side significance is commonly applied using a resistance spot welder, which is used widely in sheet metal work.
A spot weld is simply a weld applied to the surface of one member with enough heat input to melt into the material at the faying surface. This is done with no prior preparation of the parts.
Below are examples of an arrow side spot weld and a resistance spot weld with no side significance.
The size of a spot weld is placed to the left side of the welding symbol. This number indicates the diameter of the spot weld at the faying surface. The faying surface is where two parts are placed on top of each other in close contact.
The number of spot welds required is added in parentheses above or below the symbol, depending on the location of the symbol. If the symbol is centered on the reference line, the number of required welds may be placed above or below it.
Pitch can be added to the spot weld symbol as well. This will be presented to the right of the symbol.
When a pitch is used, the welds continue across the full length of the part. For example, if the part is 20 inches long, you would apply welds every 2 inches for the length of that 20 inch part using the symbol above. If the spot welding will not cover the full length of the part, this must be shown with dimension lines on the print in order to communicate the information properly.
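As an illustrative aside, a full-length pitch layout like the one described above can be worked out with a small helper. The symmetric 2 inch starting offset used here is an assumption for the example; on a real job the print or shop procedure fixes the actual start point.

```python
def weld_centers(part_length, pitch, first_offset):
    """Center locations (inches) of evenly pitched welds along a part.

    Assumes the first weld center sits at `first_offset` from one edge
    and the layout stops the same distance from the far edge.
    """
    centers = []
    pos = first_offset
    while pos <= part_length - first_offset:
        centers.append(pos)
        pos += pitch
    return centers

# 20 inch part on a 2 inch pitch, first center 2 inches from the edge
print(weld_centers(20, 2, 2))  # 9 weld centers: 2, 4, ..., 18
```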
Full length call out:
There are times when, instead of a diameter dimension, the callout will be for shear strength, which is how resistant something is to shearing. This can be called out in pound-force (lbf), or if the blueprint is metric it will call for newtons (N).
A callout of 500 lbf specifies that the weld will be able to resist shearing to a minimum of 500 lbf.
The process to be used to achieve the weld may be specified in the tail. Common processes for this are resistance spot welding and gas tungsten arc welding; the reasoning is that no added filler need be used with the weld, so there is less chance of lack of fusion. Many other processes may be used as long as the effects of the weld are known and still acceptable for the outcome of the weld.
A contour may be added to the spot symbol in order to ensure that the surface is flush, as if no weld had taken place. This is covered in further detail in supplementary welding symbols.
As an example, below is an arrow side weld with a flush contour by grinding.
Seam Weld
The seam weld uses a similar process to a spot weld, but in an elongated fashion. There is no preparation as with a plug or slot weld; rather, the weld projects through the top surface and melts into the other member by means of heat input. The symbol is similar but carries two parallel lines through it.
An example of a seam weld:
Seam welds will commonly have a size or shear strength associated with the welding symbol. This number goes to the left of the welding symbol. A size indicates the width of the bead; shear strength works the same as for a spot weld and is the minimum pound-force the weld can withstand per 1 inch of weld.
Length can be added to the right side of the symbol to indicate how long the weld is to be.
An additional element can be a pitch if it is needed for applying several welds. This will be added to the right side of the weld symbol after the length with a hyphen.
Seam welds can also carry the same elements as spot welds, such as a process noted in the tail as well as a contour. The contour is shown above or below the symbol, depending on which side of the reference line the symbol is on.
The next image shows a callout for a seam weld on the arrow side, ½ inch in width, with 2.5" segments and a 5.5" pitch. All intermittent welds (pitch) are made in a lengthwise pattern unless a detail on the print says otherwise.
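As an illustrative aside, an intermittent seam layout like this can be sketched as below. The 19 inch part length and the assumption that the first segment starts at the part edge are hypothetical choices for the example; the print would normally control both.

```python
def seam_segments(part_length, segment_length, pitch):
    """(start, end) spans in inches of intermittent seam weld segments.

    Assumes the first segment starts at the part edge; for equal-length
    segments, center-to-center pitch equals start-to-start spacing.
    """
    spans = []
    start = 0.0
    while start + segment_length <= part_length:
        spans.append((start, start + segment_length))
        start += pitch
    return spans

# 2.5 inch segments on a 5.5 inch pitch along a 19 inch part
print(seam_segments(19, 2.5, 5.5))  # 4 segments, the last ending at 19.0
```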
Stud Welds
Stud welds are a common practice in many shops. This process often uses a stud welder, which may be a standalone or handheld unit. These welds require the symbol to be on the arrow side of a joint only. The elements of size, pitch, and number of stud welds are placed in the same locations as for spot and seam welds.
Symbol
Added elements
The above weld is calling for six ½" diameter stud welds placed at a 4" center-to-center spacing.
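A quick check on a callout like this: with center-to-center pitch, the distance from the first stud center to the last is (count minus 1) times the pitch. A minimal sketch of that arithmetic, purely as an illustration:

```python
def span_covered(num_welds, pitch):
    """Distance (inches) from the first weld center to the last."""
    return (num_welds - 1) * pitch

# Six studs on a 4 inch pitch: first and last centers are 20 inches apart.
print(span_covered(6, 4))  # → 20
```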
Studs come in all sorts of sizes, shapes, and varieties. For example, there are studs for concrete anchors, threaded bolt patterns, tapped studs to use as a bolt, insulation hangers, and even hard-faced studs to replace hard facing a part.
Spot, Stud, Seam Quiz
In the space below draw a symbol for the following:
3/16” spot weld on the arrow side, ground flush, a pitch of 2”, and 8 total welds.
1” Stud welds on the arrow side, 2” pitch, 20 total studs.
Resistance seam weld with no side significance, 8” pitch, 16” length.
1/4" stud welds on the arrow side with a pitch of 2". If the part is 20" long and the first stud is placed 1" from the edge, how many studs are required?