Saturday, 31 May 2014

10 Worst Computer Viruses of All Time


There's nothing quite like finding out your computer has a serious virus.

Computer viruses can be a nightmare. Some can wipe out the information on a hard drive, tie up traffic on a computer network for hours, turn an innocent machine into a zombie and replicate and send themselves to other computers. If you've never had a machine fall victim to a computer virus, you may wonder what the fuss is about. But the concern is understandable -- according to Consumer Reports, computer viruses helped contribute to $8.5 billion in consumer losses in 2008 [source: MarketWatch]. Computer viruses are just one kind of online threat, but they're arguably the best known of the bunch.
Computer viruses have been around for many years. In fact, in 1949, a scientist named John von Neumann theorized that a self-replicating program was possible [source: Krebs]. The computer industry wasn't even a decade old, and already someone had figured out how to throw a monkey wrench into the figurative gears. But it took a few decades before programmers known as hackers began to build computer viruses.
While some pranksters created virus-like programs for large computer systems, it was really the introduction of the personal computer that brought computer viruses to the public's attention. A doctoral student named Fred Cohen was the first to describe self-replicating programs designed to modify computers as viruses. The name has stuck ever since.
In the good old days (i.e., the early 1980s), viruses depended on humans to do the hard work of spreading the virus to other computers. A hacker would save the virus to disks and then distribute the disks to other people. It wasn't until modems became common that virus transmission became a real problem. Today when we think of a computer virus, we usually imagine something that transmits itself via the Internet. It might infect computers through e-mail messages or corrupted Web links. Programs like these can spread much faster than the earliest computer viruses.

We're going to take a look at 10 of the worst computer viruses to cripple a computer system. Let's start with the Melissa virus.
In the spring of 1999, a man named David L. Smith created a computer virus based on a Microsoft Word macro. He built the virus so that it could spread through e-mail messages. Smith named the virus "Melissa," saying that he named it after an exotic dancer from Florida [source: CNN].
Rather than shaking its moneymaker, the Melissa computer virus tempts recipients into opening a document with an e-mail message like "Here is that document you asked for, don't show it to anybody else." Once activated, the virus replicates itself and sends itself out to the top 50 people in the recipient's e-mail address book.
The virus spread rapidly after Smith unleashed it on the world. The United States federal government became very interested in Smith's work -- according to statements made by FBI officials to Congress, the Melissa virus "wreaked havoc on government and private sector networks" [source: FBI]. The increase in e-mail traffic forced some companies to discontinue e-mail programs until the virus was contained.
After a lengthy trial process, Smith lost his case and received a 20-month jail sentence. The court also fined Smith $5,000 and forbade him from accessing computer networks without court authorization [source: BBC]. Ultimately, the Melissa virus didn't cripple the Internet, but it was one of the first computer viruses to get the public's attention.

Flavors of Viruses

In this article, we'll look at several different kinds of computer viruses. Here's a quick guide to what we'll see:
  • The general term computer virus usually covers programs that modify how a computer works (including damaging the computer) and can self-replicate. A true computer virus requires a host program to run properly -- Melissa used a Word document.
  • A worm, on the other hand, doesn't require a host program. It's an application that can replicate itself and send itself through computer networks.
  • Trojan horses are programs that claim to do one thing but really do another. Some might damage a victim's hard drive. Others can create a backdoor, allowing a remote user to access the victim's computer system.
Next, we'll look at a virus that had a sweet name but a nasty effect on its victims.

A year after the Melissa virus hit the Internet, a digital menace emerged from the Philippines. Unlike the Melissa virus, this threat came in the form of a worm -- it was a standalone program capable of replicating itself. It bore the name ILOVEYOU.
The ILOVEYOU virus initially traveled the Internet by e-mail, just like the Melissa virus. The subject of the e-mail said that the message was a love letter from a secret admirer. An attachment in the e-mail was what caused all the trouble. The original worm had the file name of LOVE-LETTER-FOR-YOU.TXT.vbs. The .vbs extension pointed to the language the hacker used to create the worm: Visual Basic Scripting [source: McAfee].
According to anti-virus software producer McAfee, the ILOVEYOU virus had a wide range of attacks:
  • It copied itself several times and hid the copies in several folders on the victim's hard drive.
  • It added new files to the victim's registry keys.
  • It replaced several different kinds of files with copies of itself.
  • It sent itself through Internet Relay Chat clients as well as e-mail.
  • It downloaded a file called WIN-BUGSFIX.EXE from the Internet and executed it. Rather than fix bugs, this program was a password-stealing application that e-mailed secret information to the hacker's e-mail address.
Who created the ILOVEYOU virus? Some think it was Onel de Guzman of the Philippines. Filipino authorities investigated de Guzman on charges of theft -- at the time the Philippines had no computer espionage or sabotage laws. Citing a lack of evidence, the Filipino authorities dropped the charges against de Guzman, who would neither confirm nor deny his responsibility for the virus. According to some estimates, the ILOVEYOU virus caused $10 billion in damages [source: Landler].

Now that the love fest is over, let's take a look at one of the most widespread viruses to hit the Web.
The Klez virus marked a new direction for computer viruses, setting the bar high for those that would follow. It debuted in late 2001, and variations of the virus plagued the Internet for several months. The basic Klez worm infected a victim's computer through an e-mail message, replicated itself and then sent itself to people in the victim's address book. Some variations of the Klez virus carried other harmful programs that could render a victim's computer inoperable. Depending on the version, the Klez virus could act like a normal computer virus, a worm or a Trojan horse. It could even disable virus-scanning software and pose as a virus-removal tool [source: Symantec].
Shortly after it appeared on the Internet, hackers modified the Klez virus in a way that made it far more effective. Like other viruses, it could comb through a victim's address book and send itself to contacts. But it could also take another name from the contact list and place that address in the "From" field in the e-mail client. It's called spoofing -- the e-mail appears to come from one source when it's really coming from somewhere else.
Spoofing an e-mail address accomplishes a couple of goals. For one thing, it doesn't do the recipient of the e-mail any good to block the person in the "From" field, since the e-mails are really coming from someone else. A Klez worm programmed to spam people with multiple e-mails could clog an inbox in short order, because the recipients would be unable to tell what the real source of the problem was. Also, the e-mail's recipient might recognize the name in the "From" field and therefore be more receptive to opening it.
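Klez's trick works because the "From" field is just text the sender fills in -- nothing checks it against the message's real origin. A minimal sketch using Python's standard email module (all addresses here are invented for illustration):

```python
# A forged "From" header in action: the header is ordinary text chosen
# by the sender and says nothing about where the message really came
# from. All addresses below are made up for illustration.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "trusted.friend@example.com"   # forged -- any string is accepted
msg["To"] = "victim@example.com"
msg["Subject"] = "Here is that document you asked for"
msg.set_content("See attachment.")

# Only the "Received" headers stamped by each relay record the true path,
# which is why blocking the visible sender accomplishes nothing.
print(msg["From"])
```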

Antivirus Software

It's important to have an antivirus program on your computer, and to keep it up to date. But you shouldn't use more than one suite, as multiple antivirus programs can interfere with one another. Here's a list of some antivirus software suites:
  • Avast Antivirus
  • AVG Anti-Virus
  • Kaspersky Anti-Virus
  • McAfee VirusScan
  • Norton AntiVirus
Several major computer viruses debuted in 2001. In the next section, we'll take a look at Code Red.
The Code Red and Code Red II worms popped up in the summer of 2001. Both worms exploited an operating system vulnerability that was found in machines running Windows 2000 and Windows NT. The vulnerability was a buffer overflow problem, which means when a machine running on these operating systems receives more information than its buffers can handle, it starts to overwrite adjacent memory.
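The overwrite described above can be pictured with a toy simulation -- this is a model of adjacent memory, not a real exploit. An unchecked copy (the way C's strcpy behaves) spills past an 8-byte buffer into whatever sits next to it:

```python
# Toy model of a buffer overflow -- a simulation, not a real exploit.
# An 8-byte buffer sits next to other data in "memory"; a copy routine
# with no bounds check spills past the buffer and clobbers its neighbor.
BUFFER_SIZE = 8
memory = bytearray(b"\x00" * BUFFER_SIZE + b"NEIGHBOR")

def unchecked_copy(data):
    # Writes however many bytes it is given -- no length check.
    memory[0:len(data)] = data

unchecked_copy(b"A" * 12)        # 12 bytes into an 8-byte buffer
print(memory[BUFFER_SIZE:])     # the first 4 bytes of "NEIGHBOR" are overwritten
```

In a real attack, the "neighbor" that gets overwritten can be a return address, which is how the worm redirects the server into running its own code.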
The original Code Red worm initiated a distributed denial of service (DDoS) attack on the White House. That means all the computers infected with Code Red tried to contact the Web servers at the White House at the same time, overloading the machines.

A Windows 2000 machine infected by the Code Red II worm no longer obeys the owner. That's because the worm creates a backdoor into the computer's operating system, allowing a remote user to access and control the machine. In computing terms, this is a system-level compromise, and it's bad news for the computer's owner. The person behind the virus can access information from the victim's computer or even use the infected computer to commit crimes. That means the victim not only has to deal with an infected computer, but also may fall under suspicion for crimes he or she didn't commit.

While Windows NT machines were vulnerable to the Code Red worms, the viruses' effect on these machines wasn't as extreme. Web servers running Windows NT might crash more often than normal, but that was about as bad as it got. Compared to the woes experienced by Windows 2000 users, that's not so bad.
Microsoft released software patches that addressed the security vulnerability in Windows 2000 and Windows NT. Once patched, the original worms could no longer infect a Windows 2000 machine; however, the patch didn't remove viruses from infected computers -- victims had to do that themselves.


Another virus to hit the Internet in 2001 was the Nimda (which is admin spelled backwards) worm. Nimda spread through the Internet rapidly, becoming the fastest propagating computer virus at that time. In fact, according to TruSecure CTO Peter Tippett, it only took 22 minutes from the moment Nimda hit the Internet to reach the top of the list of reported attacks [source: Anthes].
The Nimda worm's primary targets were Internet servers. While it could infect a home PC, its real purpose was to bring Internet traffic to a crawl. It could travel through the Internet using multiple methods, including e-mail. This helped spread the virus across multiple servers in record time.
The Nimda worm created a backdoor into the victim's operating system. It allowed the person behind the attack to access the same level of functions as whatever account was logged into the machine currently. In other words, if a user with limited privileges activated the worm on a computer, the attacker would also have limited access to the computer's functions. On the other hand, if the victim was the administrator for the machine, the attacker would have full control.
The spread of the Nimda virus caused some network systems to crash as more of the system's resources became fodder for the worm. In effect, the Nimda worm became a distributed denial of service (DDoS) attack.

Phoning it In

Not all computer viruses focus on computers. Some target other electronic devices. Here's just a small sample of some highly portable viruses:
  • CommWarrior attacked smartphones running the Symbian operating system (OS).
  • The Skulls Virus also attacked Symbian phones and displayed screens of skulls instead of a home page on the victims' phones.
  • RavMonE.exe is a virus that could infect iPod MP3 devices made between Sept. 12, 2006, and Oct. 18, 2006.
  • Fox News reported in March 2008 that some electronic gadgets leave the factory with viruses pre-installed -- these viruses attack your computer when you sync the device with your machine [source: Fox News].
Next, we'll take a look at a virus that affected major networks, including airline computers and bank ATMs.

In late January 2003, a new Web server virus spread across the Internet. Many computer networks were unprepared for the attack, and as a result the virus brought down several important systems. The Bank of America's ATM service crashed, the city of Seattle suffered outages in 911 service and Continental Airlines had to cancel several flights due to electronic ticketing and check-in errors.
The culprit was the SQL Slammer virus, also known as Sapphire. By some estimates, the virus caused more than $1 billion in damages before patches and antivirus software caught up to the problem [source: Lemos]. The progress of Slammer's attack is well documented. Only a few minutes after infecting its first Internet server, the Slammer virus was doubling its number of victims every few seconds. Fifteen minutes after its first attack, the Slammer virus infected nearly half of the servers that act as the pillars of the Internet [source: Boutin].
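"Doubling every few seconds" is exponential growth, and a back-of-the-envelope calculation shows why minutes were enough. The 8.5-second doubling time and the roughly 75,000-host vulnerable pool used below are commonly cited estimates, not figures from this article:

```python
# Idealized model of Slammer's spread, using commonly cited estimates:
# a doubling time of about 8.5 seconds and ~75,000 vulnerable servers.
DOUBLING_TIME = 8.5      # seconds (assumption)
vulnerable = 75_000      # approximate pool size (assumption)

infected, elapsed = 1, 0.0
while infected < vulnerable:
    infected *= 2
    elapsed += DOUBLING_TIME

print(f"pool overrun after roughly {elapsed / 60:.1f} minutes")
```

The idealized model saturates the pool in under three minutes; in practice, network congestion slowed the later stages, which is consistent with the fifteen-minute figure above.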
The Slammer virus taught a valuable lesson: It's not enough to make sure you have the latest patches and antivirus software. Hackers will always look for a way to exploit any weakness, particularly if the vulnerability isn't widely known. While it's still important to try and head off viruses before they hit you, it's also important to have a worst-case-scenario plan to fall back on should disaster strike.

A Matter of Timing

Some hackers program viruses to sit dormant on a victim's computer only to unleash an attack on a specific date. Here's a quick sample of some famous viruses that had time triggers:
  • The Jerusalem virus activated every Friday the 13th to destroy data on the victim computer's hard drive
  • The Michelangelo virus activated on March 6, 1992 -- Michelangelo was born on March 6, 1475
  • The Chernobyl virus activated on April 26, 1999 -- the 13th anniversary of the Chernobyl meltdown disaster
  • The Nyxem virus delivered its payload on the third of every month, wiping out files on the victim's computer
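The trigger logic in the list above boils down to a simple date comparison. A sketch of a Friday-the-13th check like the one the Jerusalem virus used (the function name is mine):

```python
from datetime import date

def jerusalem_trigger(d):
    """True on any Friday the 13th; weekday() == 4 means Friday."""
    return d.day == 13 and d.weekday() == 4

print(jerusalem_trigger(date(2014, 6, 13)))   # True: a Friday the 13th
print(jerusalem_trigger(date(2014, 6, 14)))   # False: a Saturday
```

A dormant virus simply runs this sort of check on every boot or every scheduled wake-up, and does nothing until the comparison comes back true.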
Computer viruses can make a victim feel helpless, vulnerable and despondent. Next, we'll look at a virus with a name that evokes all three of those feelings.

The MyDoom (or Novarg) virus is another worm that can create a backdoor in the victim computer's operating system. The original MyDoom virus -- there have been several variants -- had two triggers. One trigger caused the virus to begin a denial of service (DoS) attack starting Feb. 1, 2004. The second trigger commanded the virus to stop distributing itself on Feb. 12, 2004. Even after the virus stopped spreading, the backdoors created during the initial infections remained active [source: Symantec].

Later that year, a second outbreak of the MyDoom virus gave several search engine companies grief. Like other viruses, MyDoom searched victim computers for e-mail addresses as part of its replication process. But it would also send a search request to a search engine and use e-mail addresses found in the search results. Eventually, search engines like Google began to receive millions of search requests from corrupted computers. These attacks slowed down search engine services and even caused some to crash [source: Sullivan].
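Harvesting addresses, whether from files on the disk or from search results, amounts to simple pattern matching. A toy sketch with an invented sample text:

```python
import re

# Toy illustration of address harvesting: a simple pattern pulls
# e-mail addresses out of any text the worm can read.
# The sample text is invented for illustration.
text = "Contact alice@example.com or bob@example.org for details."
addresses = re.findall(r"[\w.+-]+@[\w-]+\.[\w.-]+", text)
print(addresses)   # ['alice@example.com', 'bob@example.org']
```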

MyDoom spread through e-mail and peer-to-peer networks. According to the security firm MessageLabs, one in every 12 e-mail messages carried the virus at one time [source: BBC]. Like the Klez virus, MyDoom could spoof e-mails so that it became very difficult to track the source of the infection.

Oddball Viruses

Not all viruses cause severe damage to computers or destroy networks. Some just cause computers to act in odd ways. An early virus called Ping-Pong created a bouncing ball graphic, but didn't seriously damage the infected computer. There are several joke programs that might make a computer owner think his or her computer is infected, but they're really harmless applications that don't self-replicate. When in doubt, it's best to let an antivirus program remove the application.
Next, we'll take a look at a pair of viruses created by the same hacker: the Sasser and Netsky viruses.

Sometimes computer virus programmers escape detection. But once in a while, authorities find a way to track a virus back to its origin. Such was the case with the Sasser and Netsky viruses. A 17-year-old German named Sven Jaschan created the two programs and unleashed them onto the Internet. While the two worms behaved in different ways, similarities in the code led security experts to believe they both were the work of the same person.

The Sasser worm attacked computers through a Microsoft Windows vulnerability. Unlike other worms, it didn't spread through e-mail. Instead, once the virus infected a computer, it looked for other vulnerable systems. It contacted those systems and instructed them to download the virus. The virus would scan random IP addresses to find potential victims. The virus also altered the victim's operating system in a way that made it difficult to shut down the computer without cutting off power to the system.
The Netsky virus moves through e-mails and Windows networks. It spoofs e-mail addresses and propagates through a 22,016-byte file attachment [source: CERT]. As it spreads, it can cause a denial of service (DoS) attack as systems collapse while trying to handle all the Internet traffic. At one time, security experts at Sophos believed Netsky and its variants accounted for 25 percent of all computer viruses on the Internet [source: Wagner].

Sven Jaschan spent no time in jail; he received a sentence of one year and nine months of probation. Because he was under 18 at the time of his arrest, he avoided being tried as an adult in German courts.
So far, most of the viruses we've looked at target PCs running Windows. But Macintosh computers aren't immune to computer virus attacks. In the next section, we'll take a look at the first virus to commit a Mac attack.

Maybe you've seen the ad in Apple's Mac computer marketing campaign where Justin "I'm a Mac" Long consoles John "I'm a PC" Hodgman. Hodgman comes down with a virus and points out that there are more than 100,000 viruses that can strike a computer. Long says that those viruses target PCs, not Mac computers.
For the most part, that's true. Mac computers are partially protected from virus attacks because of a concept called security through obscurity. Apple has a reputation for keeping its operating system (OS) and hardware a closed system -- Apple produces both the hardware and the software. This keeps the OS obscure. Traditionally, Macs have been a distant second to PCs in the home computer market. A hacker who creates a virus for the Mac won't hit as many victims as he or she would with a virus for PCs.

But that hasn't stopped at least one Mac hacker. In 2006, the Leap-A virus, also known as Oompa-A, debuted. It uses the iChat instant messaging program to propagate across vulnerable Mac computers. After the virus infects a Mac, it searches through the iChat contacts and sends a message to each person on the list. The message contains a corrupted file that appears to be an innocent JPEG image.

The Leap-A virus doesn't cause much harm to computers, but it does show that even a Mac computer can fall prey to malicious software. As Mac computers become more popular, we'll probably see more hackers create customized viruses that could damage files on the computer or snarl network traffic. Hodgman's character may yet have his revenge.
We're down to the end of the list. What computer virus has landed the number one spot?

The last virus on our list is the dreaded Storm Worm. It was late 2006 when computer security experts first identified the worm. The public began to call the virus the Storm Worm because one of the e-mail messages carrying the virus had as its subject "230 dead as storm batters Europe." Antivirus companies call the worm other names. For example, Symantec calls it Peacomm while McAfee refers to it as Nuwar. This might sound confusing, but there's already a 2001 virus called the W32.Storm.Worm. The 2001 virus and the 2006 worm are completely different programs.
The Storm Worm is a Trojan horse program. Its payload is another program, though not always the same one. Some versions of the Storm Worm turn computers into zombies or bots. As computers become infected, they become vulnerable to remote control by the person behind the attack. Some hackers use the Storm Worm to create a botnet and use it to send spam mail across the Internet.
Many versions of the Storm Worm fool the victim into downloading the application through fake links to news stories or videos. The people behind the attacks will often change the subject of the e-mail to reflect current events. For example, just before the 2008 Olympics in Beijing, a new version of the worm appeared in e-mails with subjects like "a new deadly catastrophe in China" or "China's most deadly earthquake." The e-mail claimed to link to video and news stories related to the subject, but in reality clicking on the link activated a download of the worm to the victim's computer [source: McAfee].
Several news agencies and blogs named the Storm Worm one of the worst virus attacks in years. By July 2007, an official with the security company Postini claimed that the firm detected more than 200 million e-mails carrying links to the Storm Worm during an attack that spanned several days [source: Gaudin]. Fortunately, not every e-mail led to someone downloading the worm.
Although the Storm Worm is widespread, it's not the most difficult virus to detect or remove from a computer system. If you keep your antivirus software up to date and remember to use caution when you receive e-mails from unfamiliar people or see strange links, you'll save yourself some major headaches.




Master Windows 8.1

The only thing you need to know to master Windows 8.1

by: Odubanjo Bolarinwa

Windows 8 / 8.1 is a dramatic departure from the traditional Windows interface, and it can be overwhelming to find what you're looking for unless you know this one trick.
Windows
Windows has been pretty much the same for decades... until Windows 8. With Windows 8 / 8.1, there was a very dramatic overhaul of the Windows user interface. Most of the familiar tools and features still exist, but they're buried in places that are hard to find, which can make Windows 8 a very frustrating experience. However, you only need to know one tip -- a simple trick that you can learn in about five seconds -- to master getting around in Windows 8.

The Holy Grail of Windows 8 navigation is the Search function. Search is king. You don't need to know where anything is, and you don't have to struggle to figure out where Microsoft hid it. A simple search will find anything you need as fast as you can type the query.

I'm old enough to remember wondering about questions like, "Who was President of the United States in 1845?" and having to find a friend whose family had invested in a set of encyclopedias so I could look it up. Even with the reference material available, finding information was often tedious and difficult, because you had to know where to look for it without any way of knowing how the publisher chose to organize and categorize the information.
That was before the internet, and -- more importantly -- the evolution of search. When my children want to know the capital of Romania, or whether or not the Chicago Cubs have ever won a World Series, that information is accessible almost instantly through a web search using tools like Google or Bing. The best part is that search has matured to the point where you don't need to phrase the query based on a specific syntax. You can simply type the question in natural language, and your results will magically appear.

That same powerful capability is built into Windows 8 / 8.1. The Charms bar appears if you swipe from the right on a touchscreen device or hover the mouse pointer at the upper right corner of the display. At the top of the Charms bar is the Search charm. With Windows 8.1 Update 1, Microsoft added a Search icon directly on the Start Screen at the upper right corner so you don't even have to open the Charms bar. What many users don't realize is that you don't need either of those. When you're on the Windows 8 Start Screen, you can simply start typing, and it will automatically initiate the Search function.

For example, if you want to uninstall an application from Windows, you can right-click the Windows icon to find the Control Panel, and locate the option the old-fashioned way. The easier way, however, is to simply type "remove program" from the Start Screen (or in a Search query if you choose to open Search first). The first two results that appear will take you directly to where you need to be to either remove Windows 8 apps or uninstall traditional Windows software from the system.

What makes the Windows 8 Search functionality even better is that it's a universal search -- you aren't limited to Windows tools and features. You can type "What is the capital of Romania?" and Windows 8 Search will provide results that direct you to that information on the web. You can type the name of an Excel file or keywords from a Word document you know you saved, and those items will appear at the top of the results.
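Under the hood, this kind of type-to-search is just an incremental filter over an index of names, narrowing the result list as each character arrives. A toy version (the item list is invented for illustration):

```python
# Toy version of type-to-search: filter an index of names against
# whatever has been typed so far. The item list is invented.
index = ["Remove programs", "Region and language", "Notepad", "Reminders.docx"]

def search(query):
    q = query.lower()
    return [item for item in index if q in item.lower()]

print(search("rem"))   # ['Remove programs', 'Reminders.docx']
```

The real Windows index covers settings, installed apps, documents, and web suggestions, but the principle is the same: no hierarchy to memorize, just match against everything.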

Search is the only tool you need to know to master Windows 8. Not only will it assist you when navigating Windows 8, but once you get comfortable using it as your default way of finding things, it will help you be more productive and work more efficiently.
How do you use Windows' built-in Search function? What, if anything, do you think is more important to know about Windows 8 / 8.1? Share your experience and thoughts in the discussion thread below.

Wednesday, 28 May 2014

How to fix a computer

How to fix a computer that won't start

A computer that won't start is frustrating, but the problem is often easy to fix. The steps you take to troubleshoot the problem depend on your symptoms. Click the statement that best describes your problem to find a possible solution:

You log on by clicking your user account, but then you can't open any programs

After you click your user account or type your password, immediately press the Shift key and hold it until your desktop and taskbar are visible. Holding down the Shift key stops programs from loading automatically, and it is probably one of these programs that is causing your problem. Once you are able to log on successfully, you can change the programs that run automatically and remove the program that is causing the problem.

Your computer displays the Windows logo, but fails before you can log on

Sometimes Windows begins to load but then stops responding during the startup process. In most cases, the problem is a new piece of hardware, a new program, or a corrupted system file. Follow the instructions below to troubleshoot the problem. Try to start your computer after each step. Continue to the next step only if Windows continues to fail during startup.

To troubleshoot startup problems

  1. Restart your computer. Immediately after the screen goes blank for the first time, press the F8 key repeatedly. The Windows Advanced Options menu appears. If the menu does not appear, restart your computer and try again. Use the cursor keys on your keyboard (your mouse will not work) to select Last Known Good Configuration, and then press Enter. Windows XP attempts to start.
  2. If you recently installed new hardware, shut down your computer and disconnect the hardware. Then, restart Windows XP and troubleshoot your hardware to get it working properly.
  3. Restart your computer and press F8 again. This time, choose Safe Mode and press Enter. Windows XP attempts to start in Safe Mode, which does not automatically start programs and hardware, and displays very primitive graphics. If Windows XP starts successfully in Safe Mode, you can remove any programs or updates you have recently installed. Then, restart your computer normally.

You see "Non-system disk or disk error," or a similar message

The "Non-system disk or disk error" message means that your computer could not find Windows. Follow the steps below and try starting your computer after each step. Continue to the next step only if Windows continues to fail during the startup process.

To troubleshoot disk errors

  1. Your computer might be trying to load Windows from removable media rather than from the hard disk inside your computer. Remove any floppy disks, CDs, DVDs, USB flash drives, digital cameras, and memory cards.
  2. A portion of your hard disk may be corrupted. You might be able to fix the problem by performing a repair installation of Windows XP.
  3. Your hard disk may have failed. If your hard disk has failed, it will need to be replaced. After you have replaced your hard disk, you should restore your files from a backup.

Your computer stops immediately after you turn it on or displays nothing on your monitor

If your computer displays an error message within a few seconds of starting, you probably have a hardware configuration problem. If you see the Windows logo, you need to troubleshoot startup problems. If you see a "Non-system disk or disk error" message, you need to troubleshoot disk errors. If you don't even see the startup screen, you likely have a hardware problem. Follow these steps to troubleshoot a hardware problem that prevents your computer from starting to load Windows. After each step, restart your computer and attempt to load Windows. Continue to the next step only if Windows continues to fail to load.

To troubleshoot hardware problems

  1. If your computer beeps when you start it but does not display anything on your monitor:
    • Disconnect and reconnect your monitor from your computer.
    • Verify that your monitor's power cord is connected and that your monitor is turned on.
    • If possible, connect your monitor to a different computer to make sure that your monitor works properly.
    • If your monitor works but your computer beeps and displays nothing, your video adapter has probably failed. If your computer is under warranty, contact your computer manufacturer for support. If your computer is not under warranty, and you are comfortable opening your computer's case and replacing internal hardware, purchase and install a compatible replacement video adapter. Otherwise, contact a service center for assistance. While replacing a part is a nuisance and may be costly, your documents, pictures, and e-mail should be safe and will be available when your computer is fixed.
  2. If you see an error message that indicates that a keyboard is not present or a key is stuck, turn off your computer and reconnect your keyboard. If the problem continues, replace your keyboard.
  3. Sometimes your computer won't start because your computer is not compatible with a hardware accessory. If you have recently added a new hardware accessory, turn your computer off, remove the accessory, and restart your computer.
  4. Remove all hardware accessories except your keyboard, mouse, and monitor. If your computer starts successfully, shut down Windows, turn off your computer, and add one hardware accessory. Then, restart your computer. If your computer fails to start, the hardware accessory you most recently added is causing a problem. Remove the hardware and contact the hardware vendor for support. You can reconnect other hardware accessories.
  5. You may have a loose connector. Turn off your computer, remove all connectors from the outside of your computer, and then carefully push the connectors back in. Look for stray wires, bent pins, and loosely fitting connectors.
  6. If you are comfortable opening your computer's case, shut down your computer, unplug the power, and open your computer’s case. Remove and reconnect all cables. Remove and reconnect all cards inside your computer, including your computer’s memory chips. Reassemble your computer before attempting to start it again.
  7. If your computer still doesn't start, your motherboard, processor, memory, or graphics card may have developed a problem. While failed hardware can be frustrating, your documents, pictures, and email should be safe and will be there when your computer is fixed.

Your computer does not turn on

If your computer does not turn on—you press the power button and no lights appear, and there are no beeps or other sounds—you should:
  • Verify that your computer's power cord is connected.
  • Unplug your computer and connect a different electrical device (such as a lamp, a fan, or a radio) into the same electrical outlet. If the device does not work, the problem is the electrical outlet, not the computer.

You need to perform a repair installation of Windows XP

Performing a repair installation of Windows XP can fix many serious startup problems. While you should not lose any of your important documents, you might lose settings, and you will need to reinstall many updates.
Before performing a repair installation of Windows XP, you should have both your Windows XP CD and your product key available.

To perform a repair installation of Windows XP

  1. Insert your Windows XP CD into your computer.
  2. Restart your computer. If prompted, press a key to start from the CD-ROM.
  3. When the Welcome to Setup page appears, press Enter on your keyboard.
  4. On the Windows XP Licensing Agreement page, read the licensing agreement. Press the Page Down key to scroll to the bottom of the agreement. Then, press F8.
  5. When prompted, press R to have Windows XP attempt to repair Windows by reinstalling important Windows components.
    The repair and reinstallation process might take more than an hour. Eventually, Setup prompts you to answer questions just as if you were installing Windows XP for the first time.

COMPUTER

Odubanjo Bolarinwa 
 
Culled from Wikipedia, the free encyclopedia
"Computer technology" and "Computer system" redirect here. For the company, see Computer Technology Limited. For other uses, see Computer (disambiguation) and Computer system (disambiguation).
Computer
Acer Aspire 8920 Gemstone.jpgColumbia Supercomputer - NASA Advanced Supercomputing Facility.jpgIntertec Superbrain.jpg
2010-01-26-technikkrempel-by-RalfR-05.jpgThinking Machines Connection Machine CM-5 Frostburg 2.jpgG5 supplying Wikipedia via Gigabit at the Lange Nacht der Wissenschaften 2006 in Dresden.JPG
DM IBM S360.jpgAcorn BBC Master Series Microcomputer.jpgDell PowerEdge Servers.jpg
A computer is a general purpose device that can be programmed to carry out a set of arithmetic or logical operations automatically. Since a sequence of operations can be readily changed, the computer can solve more than one kind of problem.
Conventionally, a computer consists of at least one processing element, typically a central processing unit (CPU), and some form of memory. The processing element carries out arithmetic and logic operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices allow information to be retrieved from an external source, and the result of operations saved and retrieved.
In World War II, mechanical analog computers were used for specialized military applications. During this time the first electronic digital computers were developed. Originally they were the size of a large room, consuming as much power as several hundred modern personal computers (PCs).[1]
Modern computers based on integrated circuits are millions to billions of times more capable than the early machines, and occupy a fraction of the space.[2] Simple computers are small enough to fit into mobile devices, and mobile computers can be powered by small batteries. Personal computers in their various forms are icons of the Information Age and are what most people think of as “computers.” However, the embedded computers found in many devices from MP3 players to fighter aircraft and from toys to industrial robots are the most numerous.

Etymology

The first use of the word “computer” was recorded in 1613 in a book called “The yong mans gleanings” by English writer Richard Braithwait: “I haue read the truest computer of Times, and the best Arithmetician that euer breathed, and he reduceth thy dayes into a short number.” It referred to a person who carried out calculations, or computations, and the word continued with the same meaning until the middle of the 20th century. From the end of the 19th century the word began to take on its more familiar meaning, a machine that carries out computations.[3]

History

Rudimentary calculating devices first appeared in antiquity and mechanical calculating aids were invented in the 17th century. The first recorded use of the word "computer" is also from the 17th century, applied to human computers, people who performed calculations, often as employment. The first computer devices were conceived of in the 19th century, and only emerged in their modern form in the 1940s.

First general-purpose computing device

Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer",[4] he conceptualized and invented the first mechanical computer in the early 19th century. After working on his revolutionary difference engine, designed to aid in navigational calculations, in 1833 he realized that a much more general design, an Analytical Engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The Engine incorporated an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.[5][6]
The machine was about a century ahead of its time. All the parts for his machine had to be made by hand - a major problem for a device with thousands of parts. Eventually, the project was dissolved when the British Government decided to cease funding. Babbage's failure to complete the Analytical Engine can be chiefly attributed not only to difficulties of politics and financing, but also to his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the Analytical Engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906.

Early analog computers

Sir William Thomson's third tide-predicting machine design, 1879-81
During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.[7]
The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the brother of the more famous Lord Kelvin.[8]
The art of mechanical analog computing reached its zenith with the differential analyzer, built by H. L. Hazen and Vannevar Bush at MIT starting in 1927. This built on the mechanical integrators of James Thomson and the torque amplifiers invented by H. W. Nieman. A dozen of these devices were built before their obsolescence became obvious.

The modern computer age begins

The principle of the modern computer was first described by computer scientist Alan Turing, who set out the idea in his seminal 1936 paper,[9] On Computable Numbers. Turing reformulated Kurt Gödel's 1931 results on the limits of proof and computation, replacing Gödel's universal arithmetic-based formal language with the formal and simple hypothetical devices that became known as Turing machines. He proved that some such machine would be capable of performing any conceivable mathematical computation if it were representable as an algorithm. He went on to prove that there was no solution to the Entscheidungsproblem by first showing that the halting problem for Turing machines is undecidable: in general, it is not possible to decide algorithmically whether a given Turing machine will ever halt.
He also introduced the notion of a 'Universal Machine' (now known as a Universal Turing machine), with the idea that such a machine could perform the tasks of any other machine, or in other words, it is provably capable of computing anything that is computable by executing a program stored on tape, allowing the machine to be programmable. Von Neumann acknowledged that the central concept of the modern computer was due to this paper.[10] Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine.

The first electromechanical computers

Replica of Zuse's Z3, the first fully automatic, digital (electromechanical) computer.
Early digital computers were electromechanical - electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes. The Z2, created by German engineer Konrad Zuse in 1939, was one of the earliest examples of an electromechanical relay computer.[11]
In 1941, Zuse followed his earlier machine up with the Z3, the world's first working electromechanical programmable, fully automatic digital computer.[12][13] The Z3 was built with 2000 relays, implementing a 22 bit word length that operated at a clock frequency of about 5–10 Hz.[14] Program code and data were stored on punched film. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating point numbers. Replacement of the hard-to-implement decimal system (used in Charles Babbage's earlier design) by the simpler binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time.[15] The Z3 was probably a complete Turing machine.

The introduction of electronic programmable computers with vacuum tubes

Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation 5 years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes.[7] In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942,[16] the first "automatic electronic digital computer".[17] This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory.[18]
Colossus was the first electronic digital programmable computing device, and was used to break German ciphers during World War II.
During World War II, the British at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus.[18] He spent eleven months from early February 1943 designing and building the first Colossus.[19] After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944[20] and attacked its first message on 5 February.[18]
Colossus was the world's first electronic digital programmable computer.[7] It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (the Mk I was converted to a Mk II, making ten machines in total). The Colossus Mark I contained 1,500 thermionic valves (tubes), but the Mark II, with 2,400 valves, was both five times faster and simpler to operate than the Mark I, greatly speeding the decoding process.[21][22]
ENIAC was the first Turing-complete device, and performed ballistics trajectory calculations for the United States Army.
The US-built ENIAC[23] (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the US. Although the ENIAC was similar to the Colossus it was much faster and more flexible. It was unambiguously a Turing-complete device and could compute any problem that would fit into its memory. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches.
It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors.[24]

Stored program computers eliminate the need for re-wiring

Three tall racks containing electronic circuit boards
A section of the Manchester Small-Scale Experimental Machine, the first stored-program computer.
Early computing machines had fixed programs. Changing a machine's function required re-wiring and re-structuring it.[18] With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid by Alan Turing in his 1936 paper. In 1945 Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report ‘Proposed Electronic Calculator’ was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945.[7]
Ferranti Mark 1, c. 1951.
The Manchester Small-Scale Experimental Machine, nicknamed Baby, was the world's first stored-program computer. It was built at the Victoria University of Manchester by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948.[25] It was designed as a testbed for the Williams tube, the first random-access digital storage device.[26] Although the computer was considered "small and primitive" by the standards of its time, it was the first working machine to contain all of the elements essential to a modern electronic computer.[27] As soon as the SSEM had demonstrated the feasibility of its design, a project was initiated at the university to develop it into a more usable computer, the Manchester Mark 1.
The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer.[28] Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam.[29] In October 1947, the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. The LEO I computer became operational in April 1951 [30] and ran the world's first regular routine office computer job.

Transistors replace vacuum tubes in computers

The bipolar transistor was invented in 1947. From 1955 onwards transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power, so they give off less heat. Silicon junction transistors were much more reliable than vacuum tubes and had a longer, effectively indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space.
At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves.[31] Their first transistorised computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955,[32] built by the electronics division of the Atomic Energy Research Establishment at Harwell.[33][34]

Integrated circuits replace transistors

The next great advance in computing power came with the advent of the integrated circuit. The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C. on 7 May 1952.[35]

The first practical ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor.[36] Kilby recorded his initial ideas concerning the integrated circuit in July 1958, and successfully demonstrated the first working integrated circuit on 12 September 1958.[37] In his patent application of 6 February 1959, Kilby described his new device as “a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated.”[38][39] Noyce came up with his own idea of an integrated circuit half a year after Kilby.[40] His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium.
This new development heralded an explosion in the commercial and personal use of computers and led to the invention of the microprocessor. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004,[41] designed and realized by Ted Hoff, Federico Faggin, and Stanley Mazor at Intel.[42]

Mobility and the growth of smartphone computers

With the continued miniaturization of computing resources, and advancements in portable battery life, portable computers grew in popularity in the 1990s.[citation needed] The same developments that spurred the growth of laptop computers and other portable computers allowed manufacturers to integrate computing resources into cellular phones. These so-called smartphones run on a variety of operating systems and are rapidly becoming the dominant computing device on the market, with manufacturers reporting having shipped an estimated 237 million devices in 2Q 2013.[43]

Programs

The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language.
In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors.

Stored program architecture

Replica of the Small-Scale Experimental Machine (SSEM), the world's first stored-program computer, at the Museum of Science and Industry in Manchester, England
This section applies to most common RAM machine-based computers.
In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called “jump” instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that “remembers” the location it jumped from and another instruction to return to the instruction following that jump instruction.
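The behaviour of jumps, conditional branches, and a subroutine's "remembered" return location can be sketched with a toy interpreter. This is an illustrative model in Python, not any real instruction set; the instruction names and the loop condition are invented for the example:

```python
# Toy interpreter: a program counter steps through instructions in order,
# and jump/call/return instructions change where it points next.
def run(program):
    pc = 0                # program counter: index of the next instruction
    return_stack = []     # remembers where each "call" jumped from
    acc = 0               # a single accumulator register
    while pc < len(program):
        op, arg = program[pc]
        if op == "add":            # ordinary instruction: falls through
            acc += arg
            pc += 1
        elif op == "jump_if_lt":   # conditional branch on acc < 10
            pc = arg if acc < 10 else pc + 1
        elif op == "call":         # jump, remembering the instruction after it
            return_stack.append(pc + 1)
            pc = arg
        elif op == "ret":          # return to the remembered location
            pc = return_stack.pop()
        elif op == "halt":
            break
    return acc

# A loop: keep calling the subroutine at index 4 until acc reaches 10.
program = [
    ("call", 4),        # 0: call the subroutine
    ("jump_if_lt", 0),  # 1: loop back to 0 while acc < 10
    ("halt", None),     # 2: done
    ("halt", None),     # 3: (padding)
    ("add", 2),         # 4: subroutine body: add 2 to acc
    ("ret", None),      # 5: return to caller
]
print(run(program))  # 10
```

The `call`/`ret` pair is the "type of jump that remembers the location it jumped from" described above, here modelled with an explicit return stack.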
Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention.
Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. For example:
      mov #0, sum     ; set sum to 0
      mov #1, num     ; set num to 1
loop: add num, sum    ; add num to sum
      add #1, num     ; add 1 to num
      cmp num, #1000  ; compare num to 1000
      ble loop        ; if num <= 1000, go back to 'loop'
      halt            ; end of program. stop running

Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in about a millionth of a second.[44]
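For comparison, the same summation expressed in a high-level language (Python is used here as one example) shrinks to a few lines, with the compare-and-branch pair becoming a loop condition:

```python
# Sum the integers 1 through 1,000, mirroring the assembly loop above.
total = 0
num = 1
while num <= 1000:   # the "cmp"/"ble" pair becomes the while condition
    total += num     # "add num, sum"
    num += 1         # "add #1, num"
print(total)  # 500500
```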

Bugs

Main article: Software bug
The actual first computer bug, a moth found trapped on a relay of the Harvard Mark II computer
Errors in computer programs are called “bugs.” They may be benign and not affect the usefulness of the program, or have only subtle effects. But in some cases, they may cause the program or the entire system to “hang,” becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[45]
Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited for having first used the term “bugs” in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947.[46]

Machine code

In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program[citation needed], architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches.
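The idea that a program is itself just numbers held in the same memory as its data can be sketched with a toy stored-program machine. This is a Python model for illustration; the opcodes and their numeric values are invented:

```python
# A toy stored-program machine: one memory array holds both the program
# (as numbers) and the data it operates on, side by side.
LOAD, ADD, STORE, HALT = 1, 2, 3, 0   # invented numeric opcodes

memory = [
    1, 8,     # address 0: LOAD  the value at address 8 into the accumulator
    2, 9,     # address 2: ADD   the value at address 9
    3, 10,    # address 4: STORE the accumulator at address 10
    0, 0,     # address 6: HALT
    5, 7, 0,  # addresses 8-10: data region: two operands and a result slot
]

pc, acc = 0, 0
while memory[pc] != HALT:
    opcode, operand = memory[pc], memory[pc + 1]
    if opcode == LOAD:
        acc = memory[operand]
    elif opcode == ADD:
        acc += memory[operand]
    elif opcode == STORE:
        memory[operand] = acc
    pc += 2   # each instruction occupies two memory cells

print(memory[10])  # 12
```

Because the program is just the numbers at addresses 0-7, it could itself be read, copied, or rewritten by another program, which is the crux of the stored-program idea.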
While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers,[47] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler.
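An assembler's core job, translating mnemonics into numeric opcodes, can be sketched in a few lines. The mnemonics, numeric codes, and single-operand format below are invented for illustration; real assemblers also handle labels, addressing modes, and much more:

```python
# A minimal "assembler": translate mnemonic source lines into numeric
# machine code by looking each mnemonic up in an opcode table.
OPCODES = {"LOAD": 1, "ADD": 2, "STORE": 3, "HALT": 0}

def assemble(source):
    machine_code = []
    for line in source.strip().splitlines():
        parts = line.split(";")[0].split()   # strip comments and whitespace
        if not parts:
            continue
        mnemonic, *operand = parts
        machine_code.append(OPCODES[mnemonic])
        machine_code.append(int(operand[0]) if operand else 0)
    return machine_code

source = """
    LOAD  8    ; load the value at address 8
    ADD   9    ; add the value at address 9
    STORE 10   ; store the result at address 10
    HALT
"""
print(assemble(source))  # [1, 8, 2, 9, 3, 10, 0, 0]
```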
A 1970s punched card containing one line from a FORTRAN program. The card reads: “Z(1) = Y + W(1)” and is labeled “PROJ039” for identification purposes.

Programming language

Main article: Programming language
Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques.

Low-level languages

Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) tend to be unique to a particular type of computer. For instance, an ARM architecture computer (such as may be found in a PDA or a hand-held videogame) cannot understand the machine language of an Intel Pentium or the AMD Athlon 64 computer that might be in a PC.[48]

Higher-level languages

Though considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High level languages are usually “compiled” into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[49] High level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles.
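The compile step can be glimpsed in Python itself: its reference implementation translates source code into bytecode for a virtual machine (bytecode rather than native machine language, but the translate-then-execute principle is the same):

```python
import dis

# A one-line high-level program: add two numbers and store the result.
source = "result = 2 + 3"

# The compiler translates the source into lower-level instructions.
code = compile(source, "<example>", "exec")
dis.dis(code)   # prints the bytecode instructions the source compiled to

# The translated program can then be executed.
namespace = {}
exec(code, namespace)
print(namespace["result"])  # 5
```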

Program design

Program design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, using the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge.

Components

Video demonstrating the standard components of a "slimline" computer
A general purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires.
Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a “1”, and when off it represents a “0” (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits.
The control unit, ALU, registers, and basic I/O (and often other hardware closely linked with these) are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components but since the mid-1970s CPUs have typically been constructed on a single integrated circuit called a microprocessor.

Control unit

Main articles: CPU design and Control unit
Diagram showing how a particular MIPS architecture instruction would be decoded by the control system
The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into a series of control signals which activate other parts of the computer.[50] Control systems in advanced computers may change the order of some instructions so as to improve performance.
A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[51]
The control system's function is as follows—note that this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU:
  1. Read the code for the next instruction from the cell indicated by the program counter.
  2. Decode the numerical code for the instruction into a set of commands or signals for each of the other systems.
  3. Increment the program counter so it points to the next instruction.
  4. Read whatever data the instruction requires from cells in memory (or perhaps from an input device). The location of this required data is typically stored within the instruction code.
  5. Provide the necessary data to an ALU or register.
  6. If the instruction requires an ALU or specialized hardware to complete, instruct the hardware to perform the requested operation.
  7. Write the result from the ALU back to a memory location or to a register or perhaps an output device.
  8. Jump back to step (1).
Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as “jumps” and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow).
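The fetch-decode-execute cycle described above can be sketched as a small simulator. This is a toy machine with a made-up instruction set (the opcode names are invented for illustration), but it shows the program counter selecting each instruction in turn, and a jump simply overwriting that counter:

```python
# A minimal fetch-decode-execute loop for a made-up instruction set.
# Memory holds both the program and its data; the program counter (pc)
# selects the next instruction, and a jump overwrites pc directly,
# which is what makes loops and conditional execution possible.
def run(memory):
    pc, acc = 0, 0  # program counter and a single accumulator register
    while True:
        op, arg = memory[pc]       # 1-2. fetch and decode
        pc += 1                    # 3. increment the program counter
        if op == "LOAD":           # 4-5. read a memory cell into the register
            acc = memory[arg]
        elif op == "ADD":          # 6. ask the "ALU" to add a cell to it
            acc = acc + memory[arg]
        elif op == "STORE":        # 7. write the result back to memory
            memory[arg] = acc
        elif op == "JUMP_IF_POS":  # conditional control flow: change pc
            if acc > 0:
                pc = arg
        elif op == "HALT":
            return memory

# Cells 0-3 hold the program; cells 8 and 9 hold its data (40 and 2).
memory = [("LOAD", 8), ("ADD", 9), ("STORE", 8), ("HALT", None),
          0, 0, 0, 0, 40, 2]
print(run(memory)[8])  # 42
```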
The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen.

Arithmetic logic unit (ALU)

Main article: Arithmetic logic unit
The ALU is capable of performing two classes of operations: arithmetic and logic.[52]
The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometric functions such as sine, cosine, etc., and square roots. Some can only operate on whole numbers (integers) whilst others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation—although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other (“is 64 greater than 65?”).
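The point that complex operations can be broken down into simple ones can be made concrete with a short sketch: a machine whose ALU could only add and compare could still multiply, just more slowly than dedicated hardware would.

```python
# Any computer that can add, negate, and compare can be programmed to
# multiply, even if its ALU has no multiply instruction: repeated
# addition does the job, at the cost of extra steps.
def multiply(a, b):
    negative = (a < 0) != (b < 0)   # result is negative if signs differ
    a, b = abs(a), abs(b)
    result = 0
    for _ in range(b):              # add `a` to the total, `b` times
        result += a
    return -result if negative else result

print(multiply(7, -6))  # -42
```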
Logic operations involve Boolean logic: AND, OR, XOR and NOT. These can be useful for creating complicated conditional statements and processing boolean logic.
Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously.[53] Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices.

Memory

Main article: Computer data storage
Magnetic core memory was the computer memory of choice throughout the 1960s, until it was replaced by semiconductor memory.
A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered “address” and can store a single number. The computer can be instructed to “put the number 123 into the cell numbered 1357” or to “add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595.” The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers.
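The memory-cell picture above can be modeled directly, with a list standing in for memory: each index is an "address" and each element a stored number. (The cell numbers are the ones quoted in the text; the value 877 is an arbitrary choice for illustration.)

```python
# Memory as a numbered list of cells, as described in the text.
memory = [0] * 4096

memory[1357] = 123                          # "put 123 into cell 1357"
memory[2468] = 877
memory[1595] = memory[1357] + memory[2468]  # "add cell 1357 to cell 2468,
                                            #  put the answer in cell 1595"
print(memory[1595])  # 1000
```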
In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256); either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory.
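Two's complement can be illustrated with a couple of small helper functions (hypothetical names, written here for demonstration): a negative number is stored in n bits by adding 2^n, so in a single byte −1 is stored as 255 and −128 as 128.

```python
# Convert between a signed integer and its n-bit two's complement
# representation. Masking with (2**bits - 1) keeps only the low bits,
# which is exactly what fixed-width hardware storage does.
def to_twos_complement(value, bits=8):
    if not -(1 << (bits - 1)) <= value < (1 << (bits - 1)):
        raise ValueError("value does not fit in that many bits")
    return value & ((1 << bits) - 1)

def from_twos_complement(raw, bits=8):
    if raw >= 1 << (bits - 1):   # high bit set: the number is negative
        raw -= 1 << bits
    return raw

print(to_twos_complement(-1))     # 255
print(from_twos_complement(130))  # -126
```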
The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed.
Computer main memory comes in two principal varieties: random-access memory or RAM and read-only memory or ROM. RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, therefore the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM however, so its use is restricted to applications where high speed is unnecessary.[54]
In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part.
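The cache idea can be sketched in software (a deliberately simplified, read-only toy, not how hardware caches are actually built): a small, fast store sits in front of a large, slow main memory, serving repeated reads from the fast copy and evicting the least recently used entry when it fills up.

```python
from collections import OrderedDict

class CachedMemory:
    """Toy main memory fronted by a small least-recently-used cache."""
    def __init__(self, size, cache_slots=4):
        self.main = [0] * size
        self.cache = OrderedDict()          # insertion order = usage order
        self.cache_slots = cache_slots
        self.hits = self.misses = 0

    def read(self, address):
        if address in self.cache:
            self.hits += 1
            self.cache.move_to_end(address)  # mark as recently used
        else:
            self.misses += 1
            self.cache[address] = self.main[address]   # fetch and remember
            if len(self.cache) > self.cache_slots:
                self.cache.popitem(last=False)  # evict least recently used
        return self.cache[address]

mem = CachedMemory(1024)
for address in [5, 5, 5, 7]:   # repeated reads of cell 5 hit the cache
    mem.read(address)
print(mem.hits, mem.misses)  # 2 2
```

As in hardware, the program doing the reads never has to manage the cache itself; the `read` method moves data in and out automatically.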

Input/output (I/O)

Main article: Input/output
Hard disk drives are common storage devices used with computers.
I/O is the means by which a computer exchanges information with the outside world.[55] Devices that provide input or output to the computer are called peripherals.[56] On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O.
I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics.[citation needed] Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O.

Multitasking

Main article: Computer multitasking
While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e., having the computer switch rapidly between running each program in turn.[57]
One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running “at the same time,” then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time even though only one is ever executing in any given instant. This method of multitasking is sometimes termed “time-sharing” since each program is allocated a “slice” of time in turn.[58]
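The round-robin "time slice" idea can be sketched in a few lines. Here each "program" is a Python generator, and yielding plays the role of the interrupt that hands the processor to the next program in the queue (a cooperative simplification; real interrupts are forced on the program by hardware):

```python
# Cooperative sketch of time-sharing: each program runs for one "slice"
# (one step), then goes to the back of the queue until every program
# has finished.
def program(name, steps):
    for i in range(steps):
        yield f"{name} step {i}"

def time_share(programs):
    trace = []
    while programs:
        prog = programs.pop(0)       # give the next program its time slice
        try:
            trace.append(next(prog))
            programs.append(prog)    # not finished: back of the queue
        except StopIteration:
            pass                     # finished: drop it from the queue
    return trace

print(time_share([program("A", 2), program("B", 2)]))
# ['A step 0', 'B step 0', 'A step 1', 'B step 1']
```

The interleaved trace is the point: neither program ever runs two steps in a row, yet both make steady progress, which at hardware speeds looks simultaneous.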
Before the era of cheap computers, the principal use for multitasking was to allow many people to share the same computer.
It might seem that multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a “time slice” until the event it is waiting for has occurred. This frees up time for other programs to execute, so that many programs may be run simultaneously without unacceptable speed loss.

Multiprocessing

Main article: Multiprocessing
Cray designed many supercomputers that used multiprocessing heavily.
Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed only in large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result.
Supercomputers in particular often have highly specialized architectures that differ significantly from the basic stored-program architecture and from general purpose computers.[59] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful only for specialized tasks due to the large scale of program organization required to successfully utilize most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called “embarrassingly parallel” tasks.

Networking and the Internet

Main articles: Computer networking and Internet
Visualization of a portion of the routes on the Internet
Computers have been used to coordinate information between multiple locations since the 1950s. The U.S. military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre.[60]
In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET.[61] The technologies that made the ARPANET possible spread and evolved.
In time, the network spread beyond academic and military institutions and became known as the Internet. The emergence of networking involved a redefinition of the nature and boundaries of the computer. Computer operating systems and applications were modified to include the ability to define and access the resources of other computers on the network, such as peripheral devices, stored information, and the like, as extensions of the resources of an individual computer. Initially these facilities were available primarily to people working in high-tech environments, but in the 1990s the spread of applications like e-mail and the World Wide Web, combined with the development of cheap, fast networking technologies like Ethernet and ADSL, saw computer networking become almost ubiquitous. In fact, the number of computers that are networked is growing phenomenally. A very large proportion of personal computers regularly connect to the Internet to communicate and receive information. “Wireless” networking, often utilizing mobile phone networks, has meant networking is becoming increasingly common even in mobile computing environments.

Computer architecture paradigms

There are many types of computer architectures: quantum computer vs. chemical computer, scalar processor vs. vector processor, non-uniform memory access (NUMA) computers, register machine vs. stack machine, Harvard architecture vs. von Neumann architecture, and cellular architecture.
Of all these abstract machines, a quantum computer holds the most promise for revolutionizing computing.[62]
Logic gates are a common abstraction which can apply to most of the above digital or analog paradigms.
The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity.

Misconceptions

Main articles: Human computer and Harvard Computers
Women as computers in NACA High Speed Flight Station "Computer Room"
A computer does not need to be electronic, nor even to have a processor, RAM, or a hard disk. While popular usage of the word “computer” is synonymous with a personal electronic computer, the modern[63] definition of a computer is literally “A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information.”[64] Any device which processes information qualifies as a computer, especially if the processing is purposeful.

Required technology

Historically, computers evolved from mechanical computers and eventually from vacuum tubes to transistors. However, conceptually, computational systems as flexible as a personal computer can be built out of almost anything. For example, a computer can be made out of billiard balls (a billiard ball computer), an often-quoted example.[citation needed] More realistically, modern computers are made out of transistors made of photolithographed semiconductors.
There is active research to make computers out of many promising new types of technology, such as optical computers, DNA computers, neural computers, and quantum computers. Most computers are universal: they are able to calculate any computable function, limited only by their memory capacity and operating speed. However, different designs of computers can give very different performance for particular problems; for example, quantum computers can potentially break some modern encryption algorithms (by quantum factoring) very quickly.