Windows XP shortcut keys

General keyboard shortcuts
• CTRL+C (Copy)
• CTRL+X (Cut)
• CTRL+V (Paste)
• CTRL+Z (Undo)
• DELETE (Delete)
• SHIFT+DELETE (Delete the selected item permanently without placing the item in the Recycle Bin)
• CTRL while dragging an item (Copy the selected item)
• CTRL+SHIFT while dragging an item (Create a shortcut to the selected item)
• F2 key (Rename the selected item)
• CTRL+RIGHT ARROW (Move the insertion point to the beginning of the next word)
• CTRL+LEFT ARROW (Move the insertion point to the beginning of the previous word)
• CTRL+DOWN ARROW (Move the insertion point to the beginning of the next paragraph)
• CTRL+UP ARROW (Move the insertion point to the beginning of the previous paragraph)
• CTRL+SHIFT with any of the arrow keys (Highlight a block of text)
• SHIFT with any of the arrow keys (Select more than one item in a window or on the desktop, or select text in a document)
• CTRL+A (Select all)
• F3 key (Search for a file or a folder)
• ALT+ENTER (View the properties for the selected item)
• ALT+F4 (Close the active item, or quit the active program)
• ALT+SPACEBAR (Open the shortcut menu for the active window)
• CTRL+F4 (Close the active document in programs that enable you to have multiple documents open simultaneously)
• ALT+TAB (Switch between the open items)
• ALT+ESC (Cycle through items in the order that they had been opened)
• F6 key (Cycle through the screen elements in a window or on the desktop)
• F4 key (Display the Address bar list in My Computer or Windows Explorer)
• SHIFT+F10 (Display the shortcut menu for the selected item)
• ALT+SPACEBAR (Display the System menu for the active window)
• CTRL+ESC (Display the Start menu)
• ALT+Underlined letter in a menu name (Display the corresponding menu)
• Underlined letter in a command name on an open menu (Perform the corresponding command)
• F10 key (Activate the menu bar in the active program)
• RIGHT ARROW (Open the next menu to the right, or open a submenu)
• LEFT ARROW (Open the next menu to the left, or close a submenu)
• F5 key (Update the active window)
• BACKSPACE (View the folder one level up in My Computer or Windows Explorer)
• ESC (Cancel the current task)
• SHIFT when you insert a CD-ROM into the CD-ROM drive (Prevent the CD-ROM from automatically playing)
• CTRL+SHIFT+ESC (Open Task Manager)
Dialog box keyboard shortcuts
If you press SHIFT+F8 in extended selection list boxes, you enable extended selection mode. In this mode, you can use an arrow key to move a cursor without changing the selection. You can press CTRL+SPACEBAR or SHIFT+SPACEBAR to adjust the selection. To cancel extended selection mode, press SHIFT+F8 again. Extended selection mode cancels itself when you move the focus to another control.
• CTRL+TAB (Move forward through the tabs)
• CTRL+SHIFT+TAB (Move backward through the tabs)
• TAB (Move forward through the options)
• SHIFT+TAB (Move backward through the options)
• ALT+Underlined letter (Perform the corresponding command or select the corresponding option)
• ENTER (Perform the command for the active option or button)
• SPACEBAR (Select or clear the check box if the active option is a check box)
• Arrow keys (Select a button if the active option is a group of option buttons)
• F1 key (Display Help)
• F4 key (Display the items in the active list)
• BACKSPACE (Open a folder one level up if a folder is selected in the Save As or Open dialog box)
Microsoft natural keyboard shortcuts
• Windows Logo (Display or hide the Start menu)
• Windows Logo+BREAK (Display the System Properties dialog box)
• Windows Logo+D (Display the desktop)
• Windows Logo+M (Minimize all of the windows)
• Windows Logo+SHIFT+M (Restore the minimized windows)
• Windows Logo+E (Open My Computer)
• Windows Logo+F (Search for a file or a folder)
• CTRL+Windows Logo+F (Search for computers)
• Windows Logo+F1 (Display Windows Help)
• Windows Logo+L (Lock the keyboard)
• Windows Logo+R (Open the Run dialog box)
• Windows Logo+U (Open Utility Manager)

Accessibility keyboard shortcuts
• Right SHIFT for eight seconds (Switch FilterKeys either on or off)
• Left ALT+left SHIFT+PRINT SCREEN (Switch High Contrast either on or off)
• Left ALT+left SHIFT+NUM LOCK (Switch the MouseKeys either on or off)
• SHIFT five times (Switch the StickyKeys either on or off)
• NUM LOCK for five seconds (Switch the ToggleKeys either on or off)
• Windows Logo +U (Open Utility Manager)
Windows Explorer keyboard shortcuts
• END (Display the bottom of the active window)
• HOME (Display the top of the active window)
• NUM LOCK+Asterisk sign (*) (Display all of the subfolders that are under the selected folder)
• NUM LOCK+Plus sign (+) (Display the contents of the selected folder)
• NUM LOCK+Minus sign (-) (Collapse the selected folder)
• LEFT ARROW (Collapse the current selection if it is expanded, or select the parent folder)
• RIGHT ARROW (Display the current selection if it is collapsed, or select the first subfolder)
Shortcut keys for Character Map
After you double-click a character on the grid of characters, you can move through the grid by using the keyboard shortcuts:
• RIGHT ARROW (Move to the right or to the beginning of the next line)
• LEFT ARROW (Move to the left or to the end of the previous line)
• UP ARROW (Move up one row)
• DOWN ARROW (Move down one row)
• PAGE UP (Move up one screen at a time)
• PAGE DOWN (Move down one screen at a time)
• HOME (Move to the beginning of the line)
• END (Move to the end of the line)
• CTRL+HOME (Move to the first character)
• CTRL+END (Move to the last character)
• SPACEBAR (Switch between Enlarged and Normal mode when a character is selected)
Microsoft Management Console (MMC) main window keyboard shortcuts
• CTRL+O (Open a saved console)
• CTRL+N (Open a new console)
• CTRL+S (Save the open console)
• CTRL+M (Add or remove a console item)
• CTRL+W (Open a new window)
• F5 key (Update the content of all console windows)
• ALT+SPACEBAR (Display the MMC window menu)
• ALT+F4 (Close the console)
• ALT+A (Display the Action menu)
• ALT+V (Display the View menu)
• ALT+F (Display the File menu)
• ALT+O (Display the Favorites menu)
MMC console window keyboard shortcuts
• CTRL+P (Print the current page or active pane)
• ALT+Minus sign (-) (Display the window menu for the active console window)
• SHIFT+F10 (Display the Action shortcut menu for the selected item)
• F1 key (Open the Help topic, if any, for the selected item)
• F5 key (Update the content of all console windows)
• CTRL+F10 (Maximize the active console window)
• CTRL+F5 (Restore the active console window)
• ALT+ENTER (Display the Properties dialog box, if any, for the selected item)
• F2 key (Rename the selected item)
• CTRL+F4 (Close the active console window. When a console has only one console window, this shortcut closes the console)
Remote desktop connection navigation
• CTRL+ALT+END (Open the Microsoft Windows NT Security dialog box)
• ALT+PAGE UP (Switch between programs from left to right)
• ALT+PAGE DOWN (Switch between programs from right to left)
• ALT+INSERT (Cycle through the programs in most recently used order)
• ALT+HOME (Display the Start menu)
• CTRL+ALT+BREAK (Switch the client computer between a window and a full screen)
• ALT+DELETE (Display the Windows menu)
• CTRL+ALT+Minus sign (-) (Place a snapshot of the entire client window area on the Terminal server clipboard and provide the same functionality as pressing ALT+PRINT SCREEN on a local computer.)
• CTRL+ALT+Plus sign (+) (Place a snapshot of the active window in the client on the Terminal server clipboard and provide the same functionality as pressing PRINT SCREEN on a local computer.)


Microsoft Internet Explorer navigation
• CTRL+B (Open the Organize Favorites dialog box)
• CTRL+E (Open the Search bar)
• CTRL+F (Start the Find utility)
• CTRL+H (Open the History bar)
• CTRL+I (Open the Favorites bar)
• CTRL+L (Open the Open dialog box)
• CTRL+N (Start another instance of the browser with the same Web address)
• CTRL+O (Open the Open dialog box, the same as CTRL+L)
• CTRL+P (Open the Print dialog box)
• CTRL+R (Update the current Web page)
• CTRL+W (Close the current window)

Computer Fundamentals

Introduction To Computers

• Definition:
• It is an electronic device that is used for information processing.
• The word "computer" derives from the Latin "computare" (to compute).
• Originally, it meant a calculating machine.
• A computer system includes a computer, peripheral devices, and software
• Accepts input, processes data, stores data, and produces output
• Input refers to whatever is sent to a Computer system
• Data refers to the symbols that represent facts, objects, and ideas
• Processing is the way that a computer manipulates data
• A computer processes data in a device called the central processing unit (CPU)
• Memory is an area of a computer that holds data that is waiting to be processed, stored, or output
• Storage is the area where data can be left on a permanent basis
• Computer output is the result produced by the computer
• An output device displays, prints or transmits the results of processing

Computer
Performs computations and makes logical decisions
Millions or billions of times faster than human beings
Computer programs
Sets of instructions with which a computer processes data
Hardware
The physical devices of a computer system
Software
The programs that run on computers

• Capabilities of Computers
• Huge Data Storage
• Input and Output
• Processing

• Characteristics of Computers
• High Processing Speed
• Accuracy
• Reliability
• Versatility
• Diligence
History Of Computers
• Before the 1500s, in Europe, calculations were made with an abacus
Invented around 500BC, available in many cultures (China, Mesopotamia, Japan, Greece, Rome, etc.)

• In 1642, Blaise Pascal (French mathematician, physicist, philosopher) invented a mechanical calculator called the Pascaline

• In 1671, Gottfried von Leibniz (German mathematician, philosopher) extended the Pascaline to do multiplications, divisions, square roots: the Stepped Reckoner

None of these machines had memory, and they required human intervention at each step

• In 1822, Charles Babbage (English mathematician, philosopher), sometimes called the "father of computing", began building the Difference Engine

• Machine designed to automate the computation (tabulation) of polynomial functions (which are known to be good approximations of many useful functions)
– Based on the “method of finite difference”
– Implements some storage

• In 1833 Babbage designed the Analytical Engine, a general-purpose machine intended to be powered by steam, but he died before he could build it
– Only portions of it were constructed after his death

Generations of Computers

• Generation of Computers
• First Generation (1946-59)
• Second Generation (1957-64)
• Third Generation (1965-70)
• Fourth Generation (1970-90)
• Fifth Generation (1990 till date)
Generation 0: Mechanical Calculators
Generation 1: Vacuum Tube Computers
Generation 2: Transistor Computers
Generation 3: Integrated Circuits
Generation 4: Microprocessors

Generation 1 : ENIAC
The ENIAC (Electronic Numerical Integrator and Computer) was unveiled in 1946: the first all-electronic, general-purpose digital computer

The use of binary
In the 1930s, Claude Shannon (the father of “information theory”) proposed that binary arithmetic and Boolean logic be used with electronic circuits

The Von-Neumann architecture

Generation 2: IBM7094

Generation 3: Integrated Circuits

Seymour Cray created the Cray Research Corporation
Cray-1: $8.8 million, 160 million instructions per second and 8 Mbytes of memory

Generation 4: VLSI
Improvements to IC technology made it possible to integrate more and more transistors in a single chip
SSI (Small Scale Integration): 10-100
MSI (Medium Scale Integration): 100-1,000
LSI (Large Scale Integration): 1,000-10,000
VLSI (Very Large Scale Integration): >10,000

Microprocessors

Generation 5?
The term “Generation 5” is sometimes used to refer to more or less “sci-fi” future developments:
Voice recognition
Artificial intelligence
Quantum computing
Bio computing
Nano technology
Learning
Natural languages

Generation 5 Computers

Master list of Java interview questions

Master list of Java interview questions - 115 questions
By admin | July 18, 2005
115 questions total, not for the weak. Covers everything from basics to JDBC connectivity, AWT and JSP.

What is the difference between procedural and object-oriented programs?- a) In a procedural program, the programming logic follows certain procedures and the instructions are executed one after another. In an OOP program, the unit of the program is the object, which is a combination of data and code. b) In a procedural program, data is exposed to the whole program, whereas in an OOP program it is accessible only within the object, which in turn assures the security of the code.
What are Encapsulation, Inheritance and Polymorphism?- Encapsulation is the mechanism that binds together code and data it manipulates and keeps both safe from outside interference and misuse. Inheritance is the process by which one object acquires the properties of another object. Polymorphism is the feature that allows one interface to be used for general class actions.
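A minimal Java sketch of all three ideas together (the class names Animal, Dog and OopDemo are illustrative, not from the questions above):

    // Encapsulation: the data is private; access goes through methods.
    class Animal {
        private String name;
        Animal(String name) { this.name = name; }
        public String getName() { return name; }
        public String speak() { return "..."; }
    }

    // Inheritance: Dog acquires the properties of Animal.
    class Dog extends Animal {
        Dog(String name) { super(name); }
        // Polymorphism: the same speak() interface behaves differently here.
        public String speak() { return "Woof"; }
    }

    public class OopDemo {
        public static void main(String[] args) {
            Animal a = new Dog("Rex");   // superclass reference, subclass object
            System.out.println(a.getName() + " says " + a.speak());   // prints: Rex says Woof
        }
    }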
What is the difference between Assignment and Initialization?- Assignment can be done as many times as desired whereas initialization can be done only once.

What is OOPs?
- Object-oriented programming organizes a program around its data, i.e., objects, and a set of well-defined interfaces to that data. An object-oriented program can be characterized as data controlling access to code.
What are Class, Constructor and Primitive data types?
- Class is a template for multiple objects with similar features and it is a blue print for objects. It defines a type of object according to the data the object can hold and the operations the object can perform. Constructor is a special kind of method that determines how an object is initialized when created. Primitive data types are 8 types and they are: byte, short, int, long, float, double, boolean, char.
What is an Object and how do you allocate memory to it?
- Object is an instance of a class and it is a software unit that combines a structured set of data with a set of operations for inspecting and manipulating that data. When an object is created using new operator, memory is allocated to it.
What is the difference between constructor and method?
- Constructor will be automatically invoked when an object is created whereas method has to be called explicitly.
What are methods and how are they defined?
- Methods are functions that operate on instances of classes in which they are defined. Objects can communicate with each other using methods and can call methods in other classes. Method definition has four parts. They are name of the method, type of object or primitive type the method returns, a list of parameters and the body of the method. A method’s signature is a combination of the first three parts mentioned above.
What is the use of bin and lib in JDK?
- Bin contains all tools such as javac, appletviewer, awt tool, etc., whereas lib contains API and all packages.
What is casting?
- Casting is used to convert the value of one type to another.
How many ways can an argument be passed to a subroutine and explain them?
- An argument can be passed in two ways. They are passing by value and passing by reference. Passing by value: This method copies the value of an argument into the formal parameter of the subroutine. Passing by reference: In this method, a reference to an argument (not the value of the argument) is passed to the parameter.
What is the difference between an argument and a parameter?
- While defining method, variables passed in the method are called parameters. While using those methods, values passed to those variables are called arguments.
What are different types of access modifiers?
- public: Anything declared as public can be accessed from anywhere. private: Anything declared as private can't be seen outside of its class. protected: Anything declared as protected can be accessed by classes in the same package and by subclasses in other packages. default (no modifier): Can be accessed only by classes in the same package.
What is final, finalize() and finally?
- final: the final keyword can be used for classes, methods and variables. A final class cannot be subclassed, and it prevents other programmers from subclassing a secure class to invoke insecure methods. A final method can't be overridden. A final variable can't change from its initialized value. finalize(): the finalize() method is used just before an object is destroyed and can be called just prior to garbage collection. finally: finally, a keyword used in exception handling, creates a block of code that will be executed after a try/catch block has completed and before the code following the try/catch block. The finally block will execute whether or not an exception is thrown. For example, if a method opens a file upon entry and closes it upon exit, you will not want the code that closes the file to be bypassed by the exception-handling mechanism. The finally keyword is designed to address this contingency.
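A small sketch of a finally block (the file name data.txt and the surrounding class are made up for illustration):

    import java.io.FileReader;
    import java.io.IOException;

    public class FinallyDemo {
        public static void main(String[] args) {
            FileReader in = null;
            try {
                in = new FileReader("data.txt");          // may throw IOException
                System.out.println(in.read());            // first character, as an int
            } catch (IOException e) {
                System.out.println("Could not read file: " + e.getMessage());
            } finally {
                // Runs whether or not an exception was thrown, so the file is never left open.
                try { if (in != null) in.close(); } catch (IOException ignored) { }
            }
        }
    }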
What is UNICODE?
- Unicode is used for the internal representation of characters and strings and it uses 16 bits to represent each character.
What is Garbage Collection and how to call it explicitly?
- When an object is no longer referred to by any variable, Java automatically reclaims the memory used by that object. This is known as garbage collection. The System.gc() method may be used to call it explicitly.
What is finalize() method?
- finalize () method is used just before an object is destroyed and can be called just prior to garbage collection.
What are Transient and Volatile Modifiers?
- Transient: The transient modifier applies to variables only and it is not stored as part of its object’s Persistent state. Transient variables are not serialized. Volatile: Volatile modifier applies to variables only and it tells the compiler that the variable modified by volatile can be changed unexpectedly by other parts of the program.
What is method overloading and method overriding?
- Method overloading: when a class has methods with the same name but different arguments, this is said to be method overloading. Method overriding: when a subclass defines a method with the same name and the same arguments as a method in its superclass, this is said to be method overriding.
What is difference between overloading and overriding?
- a) In overloading, there is a relationship between methods available in the same class whereas in overriding, there is a relationship between a superclass method and a subclass method. b) Overloading does not block inheritance from the superclass whereas overriding blocks inheritance from the superclass. c) In overloading, separate methods share the same name whereas in overriding, the subclass method replaces the superclass method. d) Overloading must have different method signatures whereas overriding must have the same signature.
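A short sketch contrasting the two (Shape, Circle and the demo class are illustrative names):

    class Shape {
        double area() { return 0.0; }    // to be overridden
        // Overloading: same name, different parameter lists, same class.
        void describe() { System.out.println("a shape"); }
        void describe(String label) { System.out.println("a shape called " + label); }
    }

    class Circle extends Shape {
        double r;
        Circle(double r) { this.r = r; }
        // Overriding: same name and same signature as the superclass method.
        double area() { return Math.PI * r * r; }
    }

    public class OverloadOverrideDemo {
        public static void main(String[] args) {
            Shape s = new Circle(2.0);
            s.describe("circle");                        // overloaded version chosen by its arguments
            System.out.println("area = " + s.area());    // overridden version chosen at run time
        }
    }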
What is meant by Inheritance and what are its advantages?
- Inheritance is the process of inheriting all the features from a class. The advantages of inheritance are reusability of code and accessibility of variables and methods of the super class by subclasses.
What is the difference between this() and super()?
- this() can be used to invoke a constructor of the same class whereas super() can be used to invoke a super class constructor.
What is the difference between superclass and subclass?
- A super class is a class that is inherited whereas sub class is a class that does the inheriting.
What modifiers may be used with top-level class?
- public, abstract and final can be used for top-level class.
What are inner class and anonymous class?
- Inner class : classes defined in other classes, including those defined in methods are called inner classes. An inner class can have any accessibility including private. Anonymous class : Anonymous class is a class defined inside a method without a name and is instantiated and declared in the same place and cannot have explicit constructors.
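A compact sketch of both kinds of class (the Outer class and the Runnable body are invented for the example):

    public class Outer {
        private int value = 42;

        // Inner class: defined inside another class, may be private,
        // and can read the enclosing instance's private fields.
        private class Inner {
            void show() { System.out.println("value = " + value); }
        }

        void run() {
            new Inner().show();

            // Anonymous class: defined, instantiated and declared in one place,
            // with no name and no explicit constructor.
            Runnable task = new Runnable() {
                public void run() { System.out.println("anonymous class running"); }
            };
            task.run();
        }

        public static void main(String[] args) { new Outer().run(); }
    }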
What is a package?
- A package is a collection of classes and interfaces that provides a high-level layer of access protection and name space management.
What is a reflection package?
- The java.lang.reflect package provides the ability to analyze a class at runtime (reflection).
What is interface and its use?
- Interface is similar to a class which may contain method’s signature only but not bodies and it is a formal set of method and constant declarations that must be defined by the class that implements it. Interfaces are useful for: a)Declaring methods that one or more classes are expected to implement b)Capturing similarities between unrelated classes without forcing a class relationship. c)Determining an object’s programming interface without revealing the actual body of the class.
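A sketch of an interface capturing a similarity between otherwise unrelated classes (Playable, Radio and Guitar are invented for the example):

    interface Playable {
        int MAX_VOLUME = 10;   // implicitly public static final
        void play();           // implicitly public abstract; no body
    }

    class Radio implements Playable {
        public void play() { System.out.println("radio playing"); }
    }

    class Guitar implements Playable {
        public void play() { System.out.println("guitar playing"); }
    }

    public class InterfaceDemo {
        public static void main(String[] args) {
            Playable[] items = { new Radio(), new Guitar() };
            for (int i = 0; i < items.length; i++) items[i].play();   // same interface, different behavior
        }
    }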
What is an abstract class?
- An abstract class is a class designed with implementation gaps for subclasses to fill in and is deliberately incomplete.
What is the difference between Integer and int?
- a) Integer is a class defined in the java.lang package, whereas int is a primitive data type defined in the Java language itself. Java does not automatically convert from one to the other. b) Integer can be used as an argument for a method that requires an object, whereas int can be used for calculations.
What is a cloneable interface and how many methods does it contain?
- It does not have any methods because it is a tagged or marker interface.
What is the difference between abstract class and interface?
- a) All the methods declared inside an interface are abstract, whereas an abstract class may contain both abstract and concrete methods. b) In an abstract class, the keyword abstract must be used for the abstract methods, whereas in an interface that keyword is not needed. c) An abstract class must be subclassed to be of use, whereas an interface is implemented by classes rather than subclassed.
Can you have an inner class inside a method and what variables can you access?
- Yes, we can have an inner class inside a method and final variables can be accessed.
What is the difference between String and String Buffer?
- a) String objects are constants and immutable whereas StringBuffer objects are not. b) String class supports constant strings whereas StringBuffer class supports growable and modifiable strings.
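A quick illustration of the difference (the expected output is shown in the comments):

    public class StringVsBuffer {
        public static void main(String[] args) {
            String s = "Hello";
            s = s + " World";                // builds a new String object; "Hello" itself never changes

            StringBuffer sb = new StringBuffer("Hello");
            sb.append(" World");             // modifies the same buffer in place

            System.out.println(s);           // Hello World
            System.out.println(sb);          // Hello World
        }
    }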
What is the difference between Array and vector?
- An array is a static collection of elements of the same data type, whereas a Vector is a growable, dynamic array of objects.
What is the difference between exception and error?
- The Exception class defines mild error conditions that your program may encounter. Exceptions can occur when trying to open a file that does not exist, when the network connection is disrupted, when operands being manipulated are out of prescribed ranges, or when the class file you are interested in loading is missing. The Error class defines serious error conditions that you should not attempt to recover from. In most cases it is advisable to let the program terminate when such an error is encountered.
What is the difference between process and thread?
- Process is a program in execution whereas thread is a separate path of execution in a program.
What is multithreading and what are the methods for inter-thread communication and what is the class in which these methods are defined?
- Multithreading is the mechanism in which more than one thread runs independently of the others within a process. The wait(), notify() and notifyAll() methods can be used for inter-thread communication and these methods are defined in the Object class. wait(): When a thread executes a call to the wait() method, it surrenders the object lock and enters a waiting state. notify() or notifyAll(): To remove a thread from the waiting state, some other thread must make a call to notify() or notifyAll() on the same object.
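A minimal wait()/notify() sketch: a one-slot holder that producer and consumer threads could share (the class and method names are illustrative):

    public class Holder {
        private Integer slot = null;        // shared data guarded by this object's lock

        public synchronized void put(int v) throws InterruptedException {
            while (slot != null) wait();    // give up the lock until the slot is empty
            slot = new Integer(v);
            notifyAll();                    // wake any threads waiting in take()
        }

        public synchronized int take() throws InterruptedException {
            while (slot == null) wait();    // give up the lock until a value arrives
            int v = slot.intValue();
            slot = null;
            notifyAll();                    // wake any threads waiting in put()
            return v;
        }
    }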
What is the class and interface in java to create thread and which is the most advantageous method?
- The Thread class and the Runnable interface can be used to create threads, and using the Runnable interface is the more advantageous method because we need not extend the Thread class.
What are the states associated in the thread?
- Thread contains ready, running, waiting and dead states.
What is synchronization?
- Synchronization is the mechanism that ensures that only one thread accesses a shared resource at a time.
When you will synchronize a piece of your code?
- When you expect your code to be accessed by different threads and these threads may change particular data, causing data corruption.
What is deadlock?
- When two threads are waiting for each other and neither can proceed, the program is said to be in deadlock.
What is daemon thread and which method is used to create the daemon thread?
- A daemon thread is a low-priority thread that runs intermittently in the background, for example performing the garbage collection operation for the Java runtime system. The setDaemon() method is used to mark a thread as a daemon thread.
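A small sketch of a daemon thread, which also shows creating a thread from a Runnable as described above (the housekeeping loop is a placeholder):

    public class DaemonDemo {
        public static void main(String[] args) {
            Thread cleaner = new Thread(new Runnable() {
                public void run() {
                    while (true) { /* background housekeeping would go here */ }
                }
            });
            cleaner.setDaemon(true);   // must be called before start(); a daemon does not keep the JVM alive
            cleaner.start();
            System.out.println("main is done; the JVM can exit even though the daemon still runs");
        }
    }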
Are there any global variables in Java, which can be accessed by other parts of your program?- No. Variables must be declared within a class, so there are no global variables; global variables would also defeat the concept of encapsulation.
What is an applet?
- Applet is a dynamic and interactive program that runs inside a web page displayed by a java capable browser.
What is the difference between applications and applets?
- a) An application must be run on the local machine whereas an applet needs no explicit installation on the local machine. b) An application must be run explicitly within a Java-compatible virtual machine whereas an applet loads and runs itself automatically in a Java-enabled browser. c) An application starts execution with its main method whereas an applet starts execution with its init method. d) An application can run with or without a graphical user interface whereas an applet must run within a graphical user interface.
How does applet recognize the height and width?
- The width and height are specified as attributes of the APPLET tag; within the applet they can be read with the getSize() method.
When do you use codebase in applet?
- When the applet class file is not in the same directory, codebase is used.
What is the lifecycle of an applet?
- init() method - Called when an applet is first loaded. start() method - Called each time an applet is started. paint() method - Called whenever the applet's output needs to be redrawn. stop() method - Called when the browser moves off the applet's page. destroy() method - Called when the browser is finished with the applet.
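A skeleton applet showing where each lifecycle method fits (assumes a Java-capable browser or the appletviewer tool; the class name is illustrative):

    import java.applet.Applet;
    import java.awt.Graphics;

    public class LifecycleApplet extends Applet {
        public void init()    { /* one-time setup when the applet is first loaded */ }
        public void start()   { /* called each time the applet's page is visited */ }
        public void paint(Graphics g) { g.drawString("Hello from the applet", 20, 20); }
        public void stop()    { /* called when the browser moves off the applet's page */ }
        public void destroy() { /* called when the browser is finished with the applet */ }
    }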
How do you set security in applets?- using setSecurityManager() method
What is an event and what are the models available for event handling?- An event is an event object that describes a change of state in a source. In other words, an event occurs when an action is generated, like pressing a button, clicking the mouse, selecting a list item, etc. There are two models for handling events: a) the event-inheritance model and b) the event-delegation model.
What are the advantages of the event-delegation model over the event-inheritance model?- The event-delegation model has two advantages over the event-inheritance model. They are: a) It enables event handling by objects other than the ones that generate the events. This allows a clean separation between a component's design and its use. b) It performs much better in applications where many events are generated. This performance improvement is due to the fact that the event-delegation model does not have to repeatedly process unhandled events, as is the case with the event-inheritance model.
What is source and listener?- source : A source is an object that generates an event. This occurs when the internal state of that object changes in some way. listener : A listener is an object that is notified when an event occurs. It has two major requirements. First, it must have been registered with one or more sources to receive notifications about specific types of events. Second, it must implement methods to receive and process these notifications.
What is adapter class?- An adapter class provides an empty implementation of all methods in an event listener interface. Adapter classes are useful when you want to receive and process only some of the events that are handled by a particular event listener interface. You can define a new class to act as a listener by extending one of the adapter classes and implementing only those events in which you are interested. For example, the MouseMotionAdapter class has two methods, mouseDragged() and mouseMoved(). The signatures of these empty methods are exactly as defined in the MouseMotionListener interface. If you are interested in only mouse drag events, then you could simply extend MouseMotionAdapter and implement mouseDragged().
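A sketch using MouseMotionAdapter so that only mouseDragged() has to be written (the Frame subclass is invented for the example):

    import java.awt.Frame;
    import java.awt.event.MouseEvent;
    import java.awt.event.MouseMotionAdapter;

    public class DragOnly extends Frame {
        DragOnly() {
            // Extend the adapter and override only the event we care about;
            // mouseMoved() keeps its empty implementation from the adapter.
            addMouseMotionListener(new MouseMotionAdapter() {
                public void mouseDragged(MouseEvent e) {
                    System.out.println("dragged to " + e.getX() + "," + e.getY());
                }
            });
        }

        public static void main(String[] args) {
            DragOnly f = new DragOnly();
            f.setSize(300, 200);
            f.setVisible(true);
        }
    }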
What is meant by controls and what are different types of controls in AWT?- Controls are components that allow a user to interact with your application and the AWT supports the following types of controls: Labels, Push Buttons, Check Boxes, Choice Lists, Lists, Scrollbars, Text Components. These controls are subclasses of Component.
What is the difference between choice and list?- A Choice is displayed in a compact form that requires you to pull it down to see the list of available choices and only one item may be selected from a choice. A List may be displayed in such a way that several list items are visible and it supports the selection of one or more list items.
What is the difference between scrollbar and scrollpane?- A Scrollbar is a Component, but not a Container, whereas a ScrollPane is a Container that handles its own events and performs its own scrolling.
What is a layout manager and what are different types of layout managers available in java AWT?- A layout manager is an object that is used to organize components in a container. The different layouts available are FlowLayout, BorderLayout, CardLayout, GridLayout and GridBagLayout.
How are the elements of different layouts organized?- FlowLayout: The elements of a FlowLayout are organized in a top to bottom, left to right fashion. BorderLayout: The elements of a BorderLayout are organized at the borders (North, South, East and West) and the center of a container. CardLayout: The elements of a CardLayout are stacked one on top of the other, like a deck of cards. GridLayout: The elements of a GridLayout are of equal size and are laid out in the squares of a grid. GridBagLayout: The elements of a GridBagLayout are organized according to a grid. However, the elements may be of different sizes and may occupy more than one row or column of the grid. In addition, the rows and columns may have different sizes.
Which containers use a Border layout as their default layout?- Window, Frame and Dialog classes use a BorderLayout as their layout.
Which containers use a Flow layout as their default layout?- Panel and Applet classes use the FlowLayout as their default layout.
What are wrapper classes?- Wrapper classes are classes that allow primitive types to be accessed as objects.
What are Vector, Hashtable, LinkedList and Enumeration?- Vector: The Vector class provides the capability to implement a growable array of objects. Hashtable: The Hashtable class implements a hashtable data structure. A Hashtable indexes and stores objects in a dictionary using hash codes as the objects' keys. Hash codes are integer values that identify objects. LinkedList: Removing or inserting elements in the middle of a sequence can be done efficiently using a LinkedList. A LinkedList stores each object in a separate link, whereas an array stores object references in consecutive locations. Enumeration: An object that implements the Enumeration interface generates a series of elements, one at a time. It has two methods, namely hasMoreElements() and nextElement(). hasMoreElements() tests if this enumeration has more elements and nextElement() returns successive elements of the series.
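A short example of these collection classes and Enumeration together (pre-generics style, matching the era of these questions; the data is made up):

    import java.util.Enumeration;
    import java.util.Hashtable;
    import java.util.Vector;

    public class LegacyCollections {
        public static void main(String[] args) {
            Vector names = new Vector();          // growable array of objects
            names.addElement("Ann");
            names.addElement("Bob");

            Hashtable ages = new Hashtable();     // key/value lookup using hash codes
            ages.put("Ann", new Integer(30));

            Enumeration e = names.elements();     // hands out one element at a time
            while (e.hasMoreElements()) {
                System.out.println(e.nextElement());
            }
            System.out.println("Ann is " + ages.get("Ann"));
        }
    }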
What is the difference between set and list?- Set stores elements in an unordered way but does not contain duplicate elements, whereas list stores elements in an ordered way but may contain duplicate elements.
What is a stream and what are the types of Streams and classes of the Streams?- A Stream is an abstraction that either produces or consumes information. There are two types of Streams and they are: Byte Streams: Provide a convenient means for handling input and output of bytes. Character Streams: Provide a convenient means for handling input & output of characters. Byte Streams classes: Are defined by using two abstract classes, namely InputStream and OutputStream. Character Streams classes: Are defined by using two abstract classes, namely Reader and Writer.
What is the difference between Reader/Writer and InputStream/Output Stream?- The Reader/Writer class is character-oriented and the InputStream/OutputStream class is byte-oriented.
What is an I/O filter?- An I/O filter is an object that reads from one stream and writes to another, usually altering the data in some way as it is passed from one stream to another.
What is serialization and deserialization?- Serialization is the process of writing the state of an object to a byte stream. Deserialization is the process of restoring these objects.
What is JDBC?
- JDBC is a set of Java API for executing SQL statements. This API consists of a set of classes and interfaces to enable programs to write pure Java Database applications.
What are drivers available?
- a) JDBC-ODBC Bridge driver b) Native API Partly-Java driver c) JDBC-Net Pure Java driver d) Native-Protocol Pure Java driver
What is the difference between JDBC and ODBC?
- a) ODBC is from Microsoft and JDBC is for Java applications. b) ODBC can't be directly used with Java because it uses a C interface. c) ODBC makes use of pointers, which have been removed totally from Java. d) ODBC mixes simple and advanced features together and has complex options for simple queries. But JDBC is designed to keep things simple while allowing advanced capabilities when required. e) ODBC requires manual installation of the ODBC driver manager and driver on all client machines. JDBC drivers are written in Java and JDBC code is automatically installable, secure, and portable on all platforms. f) The JDBC API is a natural Java interface and is built on ODBC. JDBC retains some of the basic features of ODBC.
What are the types of JDBC Driver Models and explain them?
- There are two types of JDBC Driver Models and they are: a) Two tier model and b) Three tier model Two tier model: In this model, Java applications interact directly with the database. A JDBC driver is required to communicate with the particular database management system that is being accessed. SQL statements are sent to the database and the results are given to user. This model is referred to as client/server configuration where user is the client and the machine that has the database is called as the server. Three tier model: A middle tier is introduced in this model. The functions of this model are: a) Collection of SQL statements from the client and handing it over to the database, b) Receiving results from database to the client and c) Maintaining control over accessing and updating of the above.
What are the steps involved for making a connection with a database or how do you connect to a database?- a) Loading the driver: To load the driver, the Class.forName() method is used: Class.forName("sun.jdbc.odbc.JdbcOdbcDriver"); When the driver is loaded, it registers itself with the java.sql.DriverManager class as an available database driver. b) Making a connection with the database: To open a connection to a given database, the DriverManager.getConnection() method is used: Connection con = DriverManager.getConnection("jdbc:odbc:somedb", "user", "password"); c) Executing SQL statements: To execute a SQL query, the java.sql.Statement class is used. The createStatement() method of Connection is called to obtain a new Statement object: Statement stmt = con.createStatement(); A query that returns data can be executed using the executeQuery() method of Statement. This method executes the statement and returns a java.sql.ResultSet that encapsulates the retrieved data: ResultSet rs = stmt.executeQuery("SELECT * FROM sometable"); d) Processing the results: ResultSet returns one row at a time. The next() method of the ResultSet object can be called to move to the next row. The getString() and getObject() methods are used for retrieving column values: while(rs.next()) { String event = rs.getString("event"); Object count = (Integer) rs.getObject("count"); }
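Pulled together, the four steps look roughly like this; the driver class, DSN, table and column names are the same placeholders used in the answer above:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class JdbcSteps {
        public static void main(String[] args) throws Exception {
            Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");              // a) load and register the driver
            Connection con = DriverManager.getConnection(
                    "jdbc:odbc:somedb", "user", "password");            // b) open the connection
            Statement stmt = con.createStatement();                     // c) create a statement
            ResultSet rs = stmt.executeQuery("SELECT event, count FROM sometable");
            while (rs.next()) {                                         // d) walk the results row by row
                System.out.println(rs.getString("event") + " = " + rs.getObject("count"));
            }
            con.close();
        }
    }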
What type of driver did you use in project?- The JDBC-ODBC Bridge driver (a driver that uses native (C language) libraries and makes calls to an existing ODBC driver to access a database engine).
What are the types of statements in JDBC?- Statement - created with the createStatement() method, used for executing a single SQL statement. PreparedStatement - created with the prepareStatement() method, used for executing the same SQL statement over and over. CallableStatement - created with the prepareCall() method, used for executing stored procedures.
What is stored procedure?- Stored procedure is a group of SQL statements that forms a logical unit and performs a particular task. Stored Procedures are used to encapsulate a set of operations or queries to execute on database. Stored procedures can be compiled and executed with different parameters and results and may have any combination of input/output parameters.
How to create and call stored procedures?- To create a stored procedure: CREATE PROCEDURE procedurename (specify in, out and in out parameters) BEGIN any multiple SQL statements; END; To call a stored procedure: CallableStatement csmt = con.prepareCall("{call procedurename(?,?)}"); csmt.registerOutParameter(column no., data type); csmt.setInt(column no., value); csmt.execute();
What is servlet?- Servlets are modules that extend request/response-oriented servers, such as java-enabled web servers. For example, a servlet might be responsible for taking data in an HTML order-entry form and applying the business logic used to update a company’s order database.
What are the classes and interfaces for servlets?- There are two packages for servlets and they are javax.servlet and javax.servlet.http.
What is the difference between an applet and a servlet?- a) Servlets are to servers what applets are to browsers. b) Applets must have graphical user interfaces whereas servlets have no graphical user interfaces.
What is the difference between doPost and doGet methods?- a) The doGet() method is used to get information, while the doPost() method is used for posting information. b) doGet() requests can't send a large amount of information and are limited to 240-255 characters. However, doPost() requests pass all of their data, of unlimited length. c) A doGet() request is appended to the request URL in a query string, so the exchange is visible to the client, whereas a doPost() request passes directly over the socket connection as part of its HTTP request body and the exchange is invisible to the client.
What is the life cycle of a servlet?- Each Servlet has the same life cycle: a) A server loads and initializes the servlet by init () method. b) The servlet handles zero or more client’s requests through service() method. c) The server removes the servlet through destroy() method.
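A skeleton servlet showing where the lifecycle methods sit (assumes the javax.servlet API on the classpath; the class name and output are illustrative):

    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class HelloServlet extends HttpServlet {
        public void init() throws ServletException { /* one-time setup by the server */ }

        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            resp.setContentType("text/html");          // called (via service()) for each GET request
            PrintWriter out = resp.getWriter();
            out.println("<html><body>Hello from a servlet</body></html>");
        }

        public void destroy() { /* release resources before the server removes the servlet */ }
    }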
Who loads the init() method of a servlet?- The Web server.
What are the different servers available for developing and deploying Servlets?- a) Java Web Server b) JRun c) Apache Server d) Netscape Information Server e) WebLogic
How many ways can we track client and what are they?- The servlet API provides two ways to track client state and they are: a) Using Session tracking and b) Using Cookies.
What is session tracking and how do you track a user session in servlets?- Session tracking is a mechanism that servlets use to maintain state about a series of requests from the same user across some period of time. The methods used for session tracking are: a) User Authentication - occurs when a web server restricts access to some of its resources to only those clients that log in using a recognized username and password. b) Hidden form fields - fields are added to an HTML form that are not displayed in the client's browser. When the form containing the fields is submitted, the fields are sent back to the server. c) URL rewriting - every URL that the user clicks on is dynamically modified or rewritten to include extra information. The extra information can be in the form of extra path information, added parameters or some custom, server-specific URL change. d) Cookies - a bit of information that is sent by a web server to a browser and which can later be read back from that browser. e) HttpSession - places a limit on the number of sessions that can exist in memory. This limit is set in the session.maxresidents property.
What is Server-Side Includes (SSI)?- Server-Side Includes allows embedding servlets within HTML pages using a special servlet tag. In many servers that support servlets, a page can be processed by the server to include output from servlets at certain points inside the HTML page. This is accomplished using a special internal SSINCLUDE servlet, which processes the servlet tags. The SSINCLUDE servlet is invoked whenever a file with an .shtml extension is requested. So HTML files that include server-side includes must be stored with an .shtml extension.
What are cookies and how will you use them?- Cookies are a mechanism that a servlet uses to have a client hold a small amount of state information associated with the user. a) Create a cookie with the Cookie constructor: public Cookie(String name, String value) b) A servlet can send a cookie to the client by passing a Cookie object to the addCookie() method of HttpServletResponse: public void HttpServletResponse.addCookie(Cookie cookie) c) A servlet retrieves cookies by calling the getCookies() method of HttpServletRequest: public Cookie[] HttpServletRequest.getCookies().
Is it possible to communicate from an applet to servlet and how many ways and how?- Yes, there are three ways to communicate from an applet to servlet and they are: a) HTTP Communication(Text-based and object-based) b) Socket Communication c) RMI Communication
What is connection pooling?- With servlets, opening a database connection is a major bottleneck because we are creating and tearing down a new connection for every page request, and the time taken to create a connection can be significant. Creating a connection pool is an ideal approach for a complicated servlet. With a connection pool, we can duplicate only the resources we need to duplicate rather than the entire servlet. A connection pool can also intelligently manage the size of the pool and make sure each connection remains valid. A number of connection pool packages are currently available; some, like DbConnectionBroker, are freely available from Java Exchange. A pool works by creating an object that dispenses connections and connection IDs on request. The ConnectionPool class maintains a Hashtable, using Connection objects as keys and Boolean values as stored values. The Boolean value indicates whether a connection is in use or not. A program calls the getConnection() method of the ConnectionPool to get a Connection object it can use; it calls returnConnection() to give the connection back to the pool.
Why should we go for interservlet communication?- Servlets running together in the same server communicate with each other in several ways. The three major reasons to use interservlet communication are: a) Direct servlet manipulation - allows gaining access to the other currently loaded servlets and performing certain tasks (through the ServletContext object). b) Servlet reuse - allows the servlet to reuse the public methods of another servlet. c) Servlet collaboration - requires servlets to communicate with each other by sharing specific information (through method invocation).
Is it possible to call servlet with parameters in the URL?- Yes. You can call a servlet with parameters in the URL query string, for example ?param1=xxx&param2=yyy.
What is Servlet chaining?- Servlet chaining is a technique in which two or more servlets can cooperate in servicing a single request. In servlet chaining, one servlet’s output is piped to the next servlet’s input. This process continues until the last servlet is reached. Its output is then sent back to the client.
How do servlets handle multiple simultaneous requests?- The server has multiple threads that are available to handle requests. When a request comes in, it is assigned to a thread, which calls a service method (for example: doGet(), doPost() and service()) of the servlet. For this reason, a single servlet object can have its service methods called by many threads at once.
What is the difference between TCP/IP and UDP?- TCP/IP is a two-way communication between the client and the server; it is reliable and there is confirmation that the message has reached the destination. It is like a phone call. UDP is a one-way communication between the client and the server; it is not reliable and there is no confirmation that the message has reached the destination. It is like postal mail.
What is Inet address?- Every computer connected to a network has an IP address. An IP address is a number that uniquely identifies each computer on the Net. An IP address is a 32-bit number.
What is Domain Naming Service(DNS)?- It is very difficult to remember a set of numbers (IP address) to connect to the Internet. The Domain Naming Service (DNS) is used to overcome this problem. It maps one particular IP address to a string of characters. For example, in www.mascom.com, com is the domain name reserved for US commercial sites, mascom is the name of the company and www is the name of the specific computer, which is mascom's server.
What is URL?- URL stands for Uniform Resource Locator and it points to resource files on the Internet. A URL has four components: http://www.address.com:80/index.html, where http is the protocol name, address is the IP address or host name, 80 is the port number and index.html is the file path.
What is RMI and steps involved in developing an RMI object?- Remote Method Invocation (RMI) allows a Java object that executes on one machine to invoke a method of a Java object that executes on another machine. The steps involved in developing an RMI object are: a) Define the interfaces b) Implement these interfaces c) Compile the interfaces and their implementations with the Java compiler d) Compile the server implementation with the RMI compiler e) Run the RMI registry f) Run the application
What is RMI architecture?- RMI architecture consists of four layers and each layer performs specific functions: a) Application layer - contains the actual object definition. b) Proxy layer - consists of stub and skeleton. c) Remote Reference layer - gets the stream of bytes from the transport layer and sends it to the proxy layer. d) Transportation layer - responsible for handling the actual machine-to-machine communication.
What is UnicastRemoteObject?- All remote objects must extend UnicastRemoteObject, which provides functionality that is needed to make objects available from remote machines.
Explain the methods, rebind() and lookup() in Naming class?- rebind() of the Naming class (found in java.rmi) is used to update the RMI registry on the server machine: Naming.rebind("AddServer", AddServerImpl); lookup() of the Naming class accepts one argument, the RMI URL, and returns a reference to an object of type AddServerImpl.
What is a Java Bean?- A Java Bean is a software component that has been designed to be reusable in a variety of different environments.
What is a Jar file?- A Jar file allows a set of classes and their associated resources to be deployed efficiently. The elements in a Jar file are compressed, which makes downloading a Jar file much faster than separately downloading several uncompressed files. The package java.util.zip contains classes that read and write Jar files.
What is BDK?- BDK, the Bean Development Kit, is a tool that enables you to create, configure and connect a set of Beans, and it can be used to test Beans without writing code.
What is JSP?- JSP is a dynamic scripting capability for web pages that allows Java as well as a few special tags to be embedded into a web file (HTML/XML, etc.). The suffix traditionally ends with .jsp to indicate to the web server that the file is a JSP file. JSP is a server-side technology - you can't do any client-side validation with it. The advantages are: a) JSP assists in making the HTML more functional. Servlets, on the other hand, allow outputting of HTML but it is a tedious process. b) It is easy to make a change and then let the JSP capability of the web server you are using deal with compiling it into a servlet and running it.
What are JSP scripting elements?- JSP scripting elements let you insert Java code into the servlet that will be generated from the current JSP page. There are three forms: a) Expressions of the form <%= expression %> that are evaluated and inserted into the output, b) Scriptlets of the form <% code %> that are inserted into the servlet's service method, and c) Declarations of the form <%! code %> that are inserted into the body of the servlet class, outside of any existing methods.
What are JSP Directives?- A JSP directive affects the overall structure of the servlet class. It usually has the following form: <%@ directive attribute="value" %> However, you can also combine multiple attribute settings for a single directive, as follows: <%@ directive attribute1="value1" attribute2="value2" ... attributeN="valueN" %> There are two main types of directive: page, which lets you do things like import classes, customize the servlet superclass, and the like; and include, which lets you insert a file into the servlet class at the time the JSP file is translated into a servlet.
What are Predefined variables or implicit objects?- To simplify code in JSP expressions and scriptlets, we can use eight automatically defined variables, sometimes called implicit objects. They are request, response, out, session, application, config, pageContext, and page.
What are JSP ACTIONS?- JSP actions use constructs in XML syntax to control the behavior of the servlet engine. You can dynamically insert a file, reuse JavaBeans components, forward the user to another page, or generate HTML for the Java plugin. Available actions include: jsp:include - Include a file at the time the page is requested. jsp:useBean - Find or instantiate a JavaBean. jsp:setProperty - Set the property of a JavaBean. jsp:getProperty - Insert the property of a JavaBean into the output. jsp:forward - Forward the requester to a new page. jsp:plugin - Generate browser-specific code that makes an OBJECT or EMBED tag for the Java plugin.
How do you pass data (including JavaBeans) to a JSP from a servlet?- (1) Request Lifetime: Using this technique to pass beans, a request dispatcher (using either "include" or "forward") can be called. The bean will disappear after processing of the request has been completed. Servlet: request.setAttribute("theBean", myBean); RequestDispatcher rd = getServletContext().getRequestDispatcher("thepage.jsp"); rd.forward(request, response); (2) Session Lifetime: Using this technique to pass beans that are relevant to a particular session (such as an individual user login) over a number of requests. The bean will disappear when the session is invalidated or it times out, or when you remove it. Servlet: HttpSession session = request.getSession(true); session.putValue("theBean", myBean); /* You can do a request dispatcher here, or just let the bean be visible on the next request */ (3) Application Lifetime: Using this technique to pass beans that are relevant to all servlets and JSP pages in a particular app, for all users. For example, I use this to make a JDBC connection pool object available to the various servlets and JSP pages in my apps. The bean will disappear when the servlet engine is shut down, or when you remove it. Servlet: getServletContext().setAttribute("theBean", myBean);
How can I set a cookie in JSP?- response.setHeader("Set-Cookie", "cookie string"); To give the response object to a bean, write a method setResponse(HttpServletResponse response) in the bean, and in the JSP file: <% bean.setResponse(response); %>
How can I delete a cookie with JSP?- Say that I have a cookie called "foo" that I set a while ago and I want it to go away. I simply: <% Cookie killCookie = new Cookie("foo", null); killCookie.setPath("/"); killCookie.setMaxAge(0); response.addCookie(killCookie); %>
How are Servlets and JSP Pages related?- JSP pages are focused around HTML (or XML) with Java codes and JSP tags inside them. When a web server that has JSP support is asked for a JSP page, it checks to see if it has already compiled the page into a servlet. Thus, JSP pages become servlets and are transformed into pure Java and then compiled, loaded into the server and executed.

what types of jobs available in the computer industry

What types of jobs are available in the computer industry?
Question:
What types of jobs are available in the computer industry?

Answer:
Below is a short listing of different types of computer-related jobs in the industry. This list was created for users who enjoy computers but are uncertain about what field to enter. In the list below we describe each of the jobs, the type of requirements, and recommendations on what to do if you're interested in the job.

If you're looking for the average pay or the highest paying jobs in the computer industry, this document does not contain that information because of the wide variety of salaries depending on the company and its location. However, it's safe to assume that the greater the difficulty and experience required for a job, the higher the pay. If you're looking for a pay range, refer to your local listings (newspaper) and/or job listings for pay grades.

If you're looking for your first job in the computer industry or just want to get your foot in the door, we suggest looking at Data Entry, Sales, Quality Assurance (QA) / Tester, or Technical Support (Technician / Help Desk) jobs. The qualifications and requirements for these jobs vary, so it's best to refer to your local listings (newspaper) and/or job listing for available positions and the requirements.

Job quick links

3D Animation / Graphic design
Customer service
Data Entry
Database
Engineer
Hardware
Networking
Programmer / Software developer
Quality Assurance (QA) / System analyst / Tester
Sales
Technical Support (Technician / Help Desk)
Technical Writing
Security expert
WebMaster / Web Designer
3D Animation / Graphic design

Description: A position where you design and create graphics or 3D animations for software programs, games, movies, web pages, etc. The position may also require that you work on existing graphics, animations, movies, etc. created by other people.

Requirements: An individual applying for this job would need to be talented in designing and creating visuals; for most people, this is not something that can simply be trained for. In addition to being talented in design and art, you must have a good understanding of the software programs being used to create the visual designs or 3D animations.

Recommendations: If you wish to get into graphic design / arts, learn major graphics programs such as Adobe Photoshop. In addition to this program, there are numerous other programs used to create your own pictures or edit photos; see document CH000760 for a listing of these programs. See our animation dictionary definition for additional information about this term as well as a listing of some of the more popular animation programs.

Difficulty: (MEDIUM - HIGH) Many of the programs used to create a graphic, edit a photo, or create a 3D render are complex programs and often require a lot of learning and experience; and in some cases, training or schooling.

Customer service

Description: Helping customers with general questions relating to the company, ordering, status on orders, account information or status, etc.

Requirements: Good communication skills and a general understanding of the company and its products.

Recommendations: Great starting position for anyone who is looking to get their foot in the door at the company and/or who are not yet that familiar with computers.

Difficulty: (LOW) Customer service requires that the employee be familiar with computers and be able to navigate through the company's systems. However, it will seldom require the employee to be highly skilled with computers.

Data Entry

Description: A job that commonly requires the employee to take information from a hard copy or other source and enter it into an electronic format. The position may also involve taking electronic data and entering it into a database for easy sorting and locating.

Requirements: Generally requires someone capable of typing 40-50 or more WPM, familiarity with computers, and usually familiarity with a word processor.

Recommendations: Practice your typing and take typing tests to determine your overall speed. Additional information about improving your typing can be found on document CH000752. See document CH000751 for additional information about how to test your typing skills.

Difficulty: (LOW) Most data entry jobs are beginner level jobs and don't require much or any prior experience or formal education.

Database

Description: A job that requires creating, testing, and/or maintaining one or more databases.

Requirements: Commonly requires that the applicant be familiar with and/or have extensive knowledge of the database used at the place of employment. For example: Access, FoxPro, MySQL, SQL, Sybase, etc.

Recommendations: Become familiar with the database being used at the business. If the job is for developing or continuing the development of a database, you will need to have a great understanding of the database as well as how to program it. Often this knowledge requires past experience or formal education.

Difficulty: (MEDIUM - HIGH) Developing or maintaining a database can be a difficult and sometimes very complex job. As mentioned above, you will need past experience or a formal education in maintaining or developing a database before most companies will even consider you.

Engineer

Description: An engineer is someone who is at the top of their class and almost always someone who has or is working on a college degree or several certifications. Although used broadly in this document, the type of engineer is usually specified in the job requirement. For example, a software development engineer may be a highly skilled computer programmer.

Requirements: The requirements for this type of job change depending on the type of engineer you plan on being. However, as mentioned above, any engineer job will require an extensive understanding of the job. Usually, this understanding is obtained from a school, certifications, training, and/or years of past experience.

Recommendations: Get training and/or education in the subject of interest from a school or other location. Learn as much about the subject as possible from books, the Internet, and other sources. Often, before you can qualify for many engineer positions, you will need past experience; therefore, it's a good idea to get an entry-level job in the same field. For example, if you want to be an engineer in software development, get a job in programming and/or create your own software programs. If you want to become a network engineer, get a job that requires you to set up, maintain, or otherwise work with networks, and set up your own home network.

Difficulty: (HIGH) This is a job / position that requires a lot of work to obtain and is not likely something you will be able to get as your first job.

Hardware

Description: A position in hardware design, circuit design, embedded systems, firmware, etc. requires you to design and create a complete hardware package or portions of a hardware device.

Requirements: Jobs that design and/or create hardware devices often require that the person has a good understanding of electronics, circuits, firmware, and/or design. For this type of position the person will often need to have several years of prior experience and/or a degree in the field.

Recommendations: If you're interested in this type of field we suggest you get a degree in the field.

Difficulty: (HIGH) Hardware design is a difficult position to learn and understand unless you get training or a degree.

Networking

Description: Computer networking jobs involve designing, setting up, and/or maintaining a network.

Requirements: Although most users today have their own home networks, setting up, troubleshooting, and maintaining a corporate network can be a much more complicated task. Often, networking jobs also require a good understanding of how a network works and, in some cases, of the underlying protocols and structure of networks.

Recommendations: There are numerous network and network-related certifications available today, such as the CCNA, MCSE, etc. Depending on the level of certification and the job you're applying for, these certifications will often be more than enough to qualify you for most network jobs. Some of the higher networking positions, especially on the network hardware development or programming side, may also require past experience in networking and/or a degree.

Difficulty: (MEDIUM - HIGH) The job specifications and the complexity of the network usually determine the difficulty of this job.

Programmer / Software developer

Description: A job that requires the development and/or continued development and maintenance of a software program.

Requirements: A basic to extensive understanding of a programming language. Because most positions require a person to develop sections of a program or the whole program, employers often require several years of past experience and/or a degree before they will even consider you.

Recommendations: Learn one or more programming languages. The type of programs or scripts you wish to create may determine which language you should learn. See our dictionary programming languages definition for a listing of popular programming languages and the types of programs they are used to create. If you need experience, creating your own software programs is a great way to learn a language and demonstrate your abilities at a job interview.

Difficulty: (HIGH) Learning a programming language can be as difficult as learning a second language and takes a lot of experience and practice to become a skilled programmer.

Quality Assurance (QA) / System analyst / Tester

Description: This job requires that the employee test out all features of a product for any problems or usability issues.

Requirements: Requires that the person have a good understanding of computer software, hardware, and the product being tested.

Recommendations: Become familiar with computers, software, hardware, and/or the products the company makes.

Difficulty: (LOW - MEDIUM) What is being tested and how much needs to be tested usually determine the difficulty of this job. However, users familiar with the product or similar products should not have much difficulty locating and reporting issues.

Sales

Description: Selling a product or service to another person or company.

Requirements: Good communication skills and a general understanding of computers and/or the product that is being sold.

Recommendations: If you're selling computers, computer hardware, or computer software, become familiar with all aspects of the product. Sites like Computer Hope are a great resource to learn about computers. If you're selling a specialized product developed by the company you will be selling for, visit their web page and become as familiar with the product as possible.

Difficulty: (LOW) Sales for computer software, hardware, electronics, or related products is a good first job and can be a good way to learn more about computers.

Technical Support (Technician / Help Desk)

Description: Helping an end-user or company employee with their computers, software program, and/or hardware device. A technical support position is a great first step for people interested in working in the computer industry.

Requirements: A basic understanding of computers, computer software, and/or hardware.

Recommendations: Become as familiar as possible with computers, computer software, and/or computer hardware, depending on what you will be supporting. Almost all technical support centers that help end-users with their computers, computer software, or computer hardware products have training that all employees go through before actually starting work, but they will still often require that the employee be familiar with computers.

Help desks for corporations do not usually have any type of training; these positions require that the person being hired already have a very good understanding of computers and troubleshooting computer problems.

Difficulty: (LOW - MEDIUM) The difficulty of this job really depends on the type of training you get. However, someone who is familiar with computers or works with computers often will generally have an easy time with these positions after a few days of working at them.

Technical Writing

Description: This position often involves creating or editing technical papers or manuals.

Requirements: This position often requires that the individual have a basic understanding of the subject being written about and good writing skills.

Recommendations: Many of these positions will require that the person have a degree and will often test an applicant before hiring them. In addition to having good writing skills, you should also be familiar with a major word processor.

Difficulty: (LOW - MEDIUM) For someone who has good writing skills and familiarity with the subject, this can be an easy job.

Security expert

Description: Test and find vulnerabilities in a system, hardware device, or software program.

Requirements: This position is for someone who has a strong familiarity with how software, hardware, and/or networks work and how to exploit them. Often, you will need to have a good understanding of how the overall system works as well as good programming skills.

Recommendations: Keep up-to-date with all security news, advisories, and other related news. The majority of security vulnerabilities are found in software, and in order to understand these vulnerabilities or find new ones, you'll need to know how to program and have a good understanding of how software works and interacts with computers.

Difficulty: (MEDIUM - HIGH) The difficulty of this job really depends on what you're testing or trying to find vulnerabilities in.

WebMaster / Web Designer

Description: A job where a person creates, maintains, or completely designs a web page.

Requirements: For basic web designing positions you should have a good understanding of HTML, the Internet, and web servers. More advanced positions where you will be working with more advanced web pages and not just static web pages may also require that you be familiar with such things as CGI, CSS, Flash, FTP, Linux, Perl, PHP, RSS, SSI, Unix, and/or XHTML.

In addition to having a good understanding of the technologies and code used to create a web page, you're also often required to know the software programs used to create them.

Recommendations: One of the best learning experiences for people who are interested in this type of job is to create your own web page. Keep in mind that simply designing and posting a web page using Microsoft FrontPage without understanding HTML or how the underlying code works may not be sufficient for most jobs.

Difficulty: (MEDIUM - HIGH) The complexity of this job really depends on how difficult a project you're working on; simply creating and posting a simple web site with no interaction is not that hard. However, creating an interactive site with forms, databases, and more interaction between the user and the server can increase the difficulty of the job significantly.

Software Engineering

Software engineering is a profession and field of study dedicated to designing, implementing, and modifying software so that it is of higher quality, more affordable, maintainable, and faster to build. The term software engineering first appeared in the 1968 NATO Software Engineering Conference, and was meant to provoke thought regarding the perceived "software crisis" at the time.[1][2] Since the field is still relatively young compared to its sister fields of engineering, there is still much debate around what software engineering actually is, and if it conforms to the classical definition of engineering. Some people argue that development of computer software is more art than science [3], and that attempting to impose engineering disciplines over a type of art is an exercise in futility because what represents good practice in the creation of software is not even defined.[4] Others, such as Steve McConnell, argue that engineering's blend of art and science to achieve practical ends provides a useful model for software development.[5] The IEEE Computer Society's Software Engineering Body of Knowledge defines "software engineering" as the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software, and the study of these approaches; that is, the application of engineering to software.[6]
Software development, a much-used and more generic term, does not necessarily subsume the engineering paradigm. Although it is questionable what impact it has had on actual software development over more than 40 years,[7][8] the field's future looks bright according to Money Magazine and Salary.com, which rated "software engineering" as the best job in the United States in 2006.[9]
History
Main article: History of software engineering
When the first modern digital computers appeared in the early 1940s,[10] the instructions to make them operate were wired into the machine. Practitioners quickly realized that this design was not flexible and came up with the "stored program architecture" or von Neumann architecture. Thus the first division between "hardware" and "software" began with abstraction being used to deal with the complexity of computing.
Programming languages started to appear in the 1950s and this was also another major step in abstraction. Major languages such as Fortran, ALGOL, and Cobol were released in the late 1950s to deal with scientific, algorithmic, and business problems respectively. E.W. Dijkstra wrote his seminal paper, "Go To Statement Considered Harmful",[11] in 1968 and David Parnas introduced the key concept of modularity and information hiding in 1972[12] to help programmers deal with the ever increasing complexity of software systems. A software system for managing the hardware called an operating system was also introduced, most notably by Unix in 1969. In 1967, the Simula language introduced the object-oriented programming paradigm.
These advances in software were met with more advances in computer hardware. In the mid 1970s, the microcomputer was introduced, making it economical for hobbyists to obtain a computer and write software for it. This in turn led to the now famous Personal Computer (PC) and Microsoft Windows. The Software Development Life Cycle or SDLC was also starting to appear as a consensus for centralized construction of software in the mid 1980s. The late 1970s and early 1980s saw the introduction of several new Simula-inspired object-oriented programming languages, including C++, Smalltalk, and Objective C.
Open-source software started to appear in the early 90s in the form of Linux and other software introducing the "bazaar" or decentralized style of constructing software.[13] Then the Internet and World Wide Web hit in the mid 90s, changing the engineering of software once again. Distributed systems gained sway as a way to design systems, and the Java programming language was introduced with its own virtual machine as another step in abstraction. Programmers collaborated and wrote the Agile Manifesto, which favored more lightweight processes to create cheaper and more timely software.
The current definition of software engineering is still being debated by practitioners today as they struggle to come up with ways to produce software that is "cheaper, bigger, quicker".
Profession
Main article: Software engineer
Legal requirements for the licensing or certification of professional software engineers vary around the world. Many states of the United States license software engineers[citation needed]. In the UK, the British Computer Society licenses software engineers and members of the society can also become Chartered Engineers (CEng), while in some areas of Canada, such as Alberta, Ontario,[14] and Quebec, software engineers can hold the Professional Engineer (P.Eng) designation and/or the Information Systems Professional (I.S.P.) designation; however, there is no legal requirement to have these qualifications.
The IEEE Computer Society and the ACM, the two main professional organizations of software engineering, publish guides to the profession of software engineering. The IEEE's Guide to the Software Engineering Body of Knowledge - 2004 Version, or SWEBOK, defines the field and describes the knowledge the IEEE expects a practicing software engineer to have. The IEEE also promulgates a "Software Engineering Code of Ethics".[15]
Employment
In 2004, the U. S. Bureau of Labor Statistics counted 760,840 software engineers holding jobs in the U.S.; in the same time period there were some 1.4 million practitioners employed in the U.S. in all other engineering disciplines combined.[16] Due to its relative newness as a field of study, formal education in software engineering is often taught as part of a computer science curriculum, and as a result most software engineers hold computer science degrees.[17]
Most software engineers work as employees or contractors. Software engineers work with businesses, government agencies (civilian or military), and non-profit organizations. Some software engineers work for themselves as freelancers. Some organizations have specialists to perform each of the tasks in the software development process. Other organizations require software engineers to do many or all of them. In large projects, people may specialize in only one role. In small projects, people may fill several or all roles at the same time. Specializations include: in industry (analysts, architects, developers, testers, technical support, managers) and in academia (educators, researchers).
There is considerable debate over the future employment prospects for software engineers and other IT professionals. For example, an online futures market called the "ITJOBS Future of IT Jobs in America"[18] attempts to answer whether there will be more IT jobs, including software engineers, in 2012 than there were in 2002.
Certification
Professional certification of software engineers is a contentious issue, with some professional organizations supporting it,[19] and others claiming that it is inappropriate given the current level of maturity in the profession.[20] Some see it as a tool to improve professional practice; "The only purpose of licensing software engineers is to protect the public".[21]
The ACM had a professional certification program in the early 1980s,[citation needed] which was discontinued due to lack of interest. The ACM examined the possibility of professional certification of software engineers in the late 1990s, but eventually decided that such certification was inappropriate for the professional industrial practice of software engineering.[20] As of 2006, the IEEE had certified over 575 software professionals.[19] In the U.K. the British Computer Society has developed a legally recognized professional certification called Chartered IT Professional (CITP), available to fully qualified Members (MBCS). In Canada the Canadian Information Processing Society has developed a legally recognized professional certification called Information Systems Professional (ISP)[22]. The Software Engineering Institute offers certification on specific topics such as security, process improvement, and software architecture[23].
Most certification programs in the IT industry are oriented toward specific technologies, and are managed by the vendors of these technologies.[24] These certification programs are tailored to the institutions that would employ people who use these technologies.
In some countries, software engineering is an actual engineering degree (Bachelor of Science or Bachelor of Engineering). In Israel, for example, a software engineer has the right to be entered in the engineering registry, and it is a criminal offense for a person to describe himself as an engineer without the proper license or registration (the engineering law provides for a sentence of up to six months in jail).
Impact of globalization
Many students in the developed world have avoided degrees related to software engineering because of the fear of offshore outsourcing (importing software products or services from other countries) and of being displaced by foreign visa workers.[25] Although statistics do not currently show a threat to software engineering itself, a related career, computer programming, does appear to have been affected.[26][27] Often one is expected to start out as a computer programmer before being promoted to software engineer. Thus, the career path to software engineering may be rough, especially during recessions.
Some career counselors suggest a student also focus on "people skills" and business skills rather than purely technical skills because such "soft skills" are allegedly more difficult to offshore.[28] It is the quasi-management aspects of software engineering that appear to be what has kept it from being impacted by globalization.[29]
Education
A knowledge of programming is the main prerequisite to becoming a software engineer, but it is not sufficient. Many software engineers have degrees in Computer Science due to the lack of software engineering programs in higher education. However, this has started to change with the introduction of new software engineering degrees, especially in post-graduate education. A standard international curriculum for undergraduate software engineering degrees was defined by the CCSE.
Steve McConnell opines that because most universities teach computer science rather than software engineering, there is a shortage of true software engineers.[30] In 2004 the IEEE Computer Society produced the SWEBOK, which has become an ISO standard describing the body of knowledge covered by a software engineer[citation needed].
The European Commission, within the Erasmus Mundus Programme, offers a European master's degree called European Master on Software Engineering for students from Europe and also outside Europe[31]. This is a joint program (double degree) involving four universities in Europe.
Sub-disciplines
Software engineering can be divided into ten subdisciplines. They are:[6]
• Software requirements: The elicitation, analysis, specification, and validation of requirements for software.
• Software design: The design of software is usually done with Computer-Aided Software Engineering (CASE) tools and uses standards for the format, such as the Unified Modeling Language (UML).
• Software development: The construction of software through the use of programming languages.
• Software testing
• Software maintenance: Software systems often have problems and need enhancements for a long time after they are first completed. This subfield deals with those problems.
• Software configuration management: Since software systems are very complex, their configuration (such as versioning and source control) has to be managed in a standardized and structured method.
• Software engineering management: The management of software systems borrows heavily from project management, but there are nuances encountered in software not seen in other management disciplines.
• Software development process: The process of building software is hotly debated among practitioners with the main paradigms being agile or waterfall.
• Software engineering tools, see Computer Aided Software Engineering
• Software quality

Operating System


In computing, an operating system (OS) is software (programs and data) that provides an interface between the hardware and other software. The OS is responsible for management and coordination of processes and allocation and sharing of hardware resources such as RAM and disk space, and acts as a host for computing applications running on the OS. An operating system may also provide orderly access to the hardware by competing software routines. This relieves application programmers from having to manage these details.
Operating systems offer a number of services to application programs. Applications access these services through application programming interfaces (APIs) or system calls. By invoking these interfaces, the application can request a service from the operating system, pass parameters, and receive the results of the operation. On large systems such as Unix-like systems, the user interface is always implemented as software that runs outside the operating system. In some other systems like Windows, the Window manager can be part of the operating system itself.
While servers generally run Unix or some Unix-like operating system, embedded system markets are split amongst several operating systems,[1][2] although the Microsoft Windows line of operating systems has almost 90% of the client PC market.
History
Main article: History of operating systems
Mainframe
Through the 1950s, many major features were pioneered in the field of operating systems, including input/output interrupts, buffering, multitasking, spooling, and runtime libraries. These features were included or not included in application software at the option of application programmers, rather than in a separate operating system used by all applications. In 1959 the SHARE Operating System was released as an integrated utility for the IBM 704 and IBM 709 mainframes. In 1964, IBM produced the System/360 family of mainframe computers, available in widely differing capacities and price points, for which a single operating system, OS/360, was provided, eliminating costly, incompatible, ad-hoc programs for every individual model. This concept of a single OS spanning an entire product line was crucial for the success of System/360 and, in fact, IBM's current mainframe operating systems are distant descendants of this original system; applications written for OS/360 can still be run on modern machines. In the mid-'70s, MVS, the descendant of OS/360, offered the first[citation needed] implementation of using RAM as a transparent cache for data.
OS/360 also pioneered a number of concepts that, in some cases, are still not seen outside of the mainframe arena. For instance, in OS/360, when a program is started, the operating system keeps track of all of the system resources that are used including storage, locks, data files, and so on. When the process is terminated for any reason, all of these resources are re-claimed by the operating system. An alternative CP-67 system started a whole line of operating systems focused on the concept of virtual machines.
Control Data Corporation developed the SCOPE operating system in the 1960s, for batch processing. In cooperation with the University of Minnesota, the KRONOS and later the NOS operating systems were developed during the 1970s, which supported simultaneous batch and timesharing use. Like many commercial timesharing systems, its interface was an extension of the Dartmouth BASIC operating systems, one of the pioneering efforts in timesharing and programming languages. In the late 1970s, Control Data and the University of Illinois developed the PLATO operating system, which used plasma panel displays and long-distance time sharing networks. PLATO was remarkably innovative for its time, featuring real-time chat and multi-user graphical games. Burroughs Corporation introduced the B5000 in 1961 with the MCP (Master Control Program) operating system. The B5000 was a stack machine designed to exclusively support high-level languages, with no machine language or assembler, and indeed the MCP was the first OS to be written exclusively in a high-level language – ESPOL, a dialect of ALGOL. MCP also introduced many other ground-breaking innovations, such as being the first commercial implementation of virtual memory. During development of the AS/400, IBM approached Burroughs to license MCP to run on the AS/400 hardware. This proposal was declined by Burroughs management to protect its existing hardware production. MCP is still in use today in the Unisys ClearPath/MCP line of computers.
UNIVAC, the first commercial computer manufacturer, produced a series of EXEC operating systems. Like all early main-frame systems, this was a batch-oriented system that managed magnetic drums, disks, card readers and line printers. In the 1970s, UNIVAC produced the Real-Time Basic (RTB) system to support large-scale time sharing, also patterned after the Dartmouth BASIC system.
General Electric and MIT developed General Electric Comprehensive Operating Supervisor (GECOS), which introduced the concept of ringed security privilege levels. After acquisition by Honeywell it was renamed General Comprehensive Operating System (GCOS).
Digital Equipment Corporation developed many operating systems for its various computer lines, including TOPS-10 and TOPS-20 time sharing systems for the 36-bit PDP-10 class systems. Prior to the widespread use of UNIX, TOPS-10 was a particularly popular system in universities, and in the early ARPANET community.
In the late 1960s through the late 1970s, several hardware capabilities evolved that allowed similar or ported software to run on more than one system. Early systems had utilized microprogramming to implement features on their systems in order to permit different underlying architectures to appear to be the same as others in a series. In fact most 360s after the 360/40 (except the 360/165 and 360/168) were microprogrammed implementations. But soon other means of achieving application compatibility proved to be more significant.
The enormous investment in software for these systems made since 1960s caused most of the original computer manufacturers to continue to develop compatible operating systems along with the hardware. The notable supported mainframe operating systems include:
• Burroughs MCP – B5000, 1961 to Unisys Clearpath/MCP, present.
• IBM OS/360 – IBM System/360, 1966 to IBM z/OS, present.
• IBM CP-67 – IBM System/360, 1967 to IBM z/VM, present.
• UNIVAC EXEC 8 – UNIVAC 1108, 1967, to OS 2200 Unisys Clearpath Dorado, present.
Microcomputers
The first microcomputers did not have the capacity or need for the elaborate operating systems that had been developed for mainframes and minis; minimalistic operating systems were developed, often loaded from ROM and known as monitors. One notable early disk-based operating system was CP/M, which was supported on many early microcomputers and was closely imitated in MS-DOS, which became wildly popular as the operating system chosen for the IBM PC (IBM's version of it was called IBM DOS or PC DOS), its successors making Microsoft one of the world's most profitable companies. In the 1980s, Apple Computer Inc. (now Apple Inc.) abandoned its popular Apple II series of microcomputers to introduce the Apple Macintosh computer with an innovative Graphical User Interface (GUI) to the Mac OS.
The introduction of the Intel 80386 CPU chip, with 32-bit architecture and paging capabilities, provided personal computers with the ability to run multitasking operating systems like those of earlier minicomputers and mainframes. Microsoft responded to this progress by hiring Dave Cutler, who had developed the VMS operating system for Digital Equipment Corporation. He would lead the development of the Windows NT operating system, which continues to serve as the basis for Microsoft's operating systems line. Steve Jobs, a co-founder of Apple Inc., started NeXT Computer Inc., which developed the Unix-like NEXTSTEP operating system. NEXTSTEP would later be acquired by Apple Inc. and used, along with code from FreeBSD, as the core of Mac OS X.
The GNU project was started by activist and programmer Richard Stallman with the goal of a complete free software replacement to the proprietary UNIX operating system. While the project was highly successful in duplicating the functionality of various parts of UNIX, development of the GNU Hurd kernel proved to be unproductive. In 1991 Finnish computer science student Linus Torvalds, with cooperation from volunteers over the Internet, released the first version of the Linux kernel. It was soon merged with the GNU userland and system software to form a complete operating system. Since then, the combination of the two major components has usually been referred to as simply "Linux" by the software industry, a naming convention which Stallman and the Free Software Foundation remain opposed to, preferring the name "GNU/Linux" instead. The Berkeley Software Distribution, known as BSD, is the UNIX derivative distributed by the University of California, Berkeley, starting in the 1970s. Freely distributed and ported to many minicomputers, it eventually also gained a following for use on PCs, mainly as FreeBSD, NetBSD and OpenBSD.
Features
Program execution
Main article: Process (computing)
The operating system acts as an interface between an application and the hardware. The user interacts with the hardware from "the other side". The operating system is a set of services which simplifies development of applications. Executing a program involves the creation of a process by the operating system. The kernel creates a process by assigning memory and other resources, establishing a priority for the process (in multi-tasking systems), loading program code into memory, and executing the program. The program then interacts with the user and/or other devices and performs its intended function.
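As a concrete sketch of the process-creation step described above (assuming a POSIX system; the program launched, ls, is just an example), the C program below asks the kernel to create a new process with fork(), loads a different program into it with execvp(), and waits for it to finish:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();               /* ask the kernel to create a new process */

        if (pid == -1) {
            perror("fork");
            return EXIT_FAILURE;
        }
        if (pid == 0) {
            /* Child: replace this process's memory image with a new program. */
            char *argv[] = { "ls", "-l", NULL };
            execvp("ls", argv);
            perror("execvp");             /* only reached if exec failed */
            _exit(127);
        }

        /* Parent: wait until the child terminates, then report its status. */
        int status;
        waitpid(pid, &status, 0);
        printf("child %d exited with status %d\n", (int)pid, WEXITSTATUS(status));
        return EXIT_SUCCESS;
    }
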
Interrupts
Main article: interrupt
Interrupts are central to operating systems, as they provide an efficient way for the operating system to interact with and react to its environment. The alternative—having the operating system "watch" the various sources of input for events (polling) that require action—can be found in older systems with very small stacks (50 or 60 bytes) but is fairly unusual in modern systems with fairly large stacks. Interrupt-based programming is directly supported by most modern CPUs. Interrupts provide a computer with a way of automatically saving local register contexts, and running specific code in response to events. Even very basic computers support hardware interrupts, and allow the programmer to specify code which may be run when that event takes place.
When an interrupt is received, the computer's hardware automatically suspends whatever program is currently running, saves its status, and runs computer code previously associated with the interrupt; this is analogous to placing a bookmark in a book in response to a phone call. In modern operating systems, interrupts are handled by the operating system's kernel. Interrupts may come from either the computer's hardware or from the running program.
When a hardware device triggers an interrupt, the operating system's kernel decides how to deal with this event, generally by running some processing code. The amount of code being run depends on the priority of the interrupt (for example: a person usually responds to a smoke detector alarm before answering the phone). The processing of hardware interrupts is a task that is usually delegated to software called device driver, which may be either part of the operating system's kernel, part of another program, or both. Device drivers may then relay information to a running program by various means.
A program may also trigger an interrupt to the operating system. If a program wishes to access hardware for example, it may interrupt the operating system's kernel, which causes control to be passed back to the kernel. The kernel will then process the request. If a program wishes additional resources (or wishes to shed resources) such as memory, it will trigger an interrupt to get the kernel's attention.
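User programs do not handle hardware interrupts directly, but POSIX signals give a rough user-space analogue of the mechanism described above: the kernel suspends whatever the program was doing, runs a handler that was registered earlier, and then lets the program resume. A minimal sketch (POSIX only; pressing Ctrl+C stands in for the external event):

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static volatile sig_atomic_t got_signal = 0;

    /* Runs asynchronously when the kernel delivers SIGINT (e.g. Ctrl+C). */
    static void on_interrupt(int signo)
    {
        (void)signo;
        got_signal = 1;
    }

    int main(void)
    {
        struct sigaction sa;
        sa.sa_handler = on_interrupt;     /* code to run when the event occurs */
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;
        sigaction(SIGINT, &sa, NULL);     /* register the handler with the kernel */

        puts("Working... press Ctrl+C to interrupt this program.");
        while (!got_signal)
            pause();                      /* sleep until any signal arrives */

        puts("Handler ran; resuming normal flow and exiting.");
        return 0;
    }
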
Protected mode and supervisor mode
Main article: Protected mode
Main article: Supervisor mode
Modern CPUs support something called dual mode operation. CPUs with this capability use two modes: protected mode and supervisor mode, which allow certain CPU functions to be controlled and affected only by the operating system kernel. Here, protected mode does not refer specifically to the 80286 (Intel's x86 16-bit microprocessor) CPU feature, although its protected mode is very similar to it. CPUs might have other modes similar to 80286 protected mode as well, such as the virtual 8086 mode of the 80386 (Intel's x86 32-bit microprocessor or i386).
However, the term is used here more generally in operating system theory to refer to all modes which limit the capabilities of programs running in that mode, providing things like virtual memory addressing and limiting access to hardware in a manner determined by a program running in supervisor mode. Similar modes have existed in supercomputers, minicomputers, and mainframes as they are essential to fully supporting UNIX-like multi-user operating systems.
When a computer first starts up, it is automatically running in supervisor mode. The first few programs to run on the computer, being the BIOS, bootloader, and the operating system, have unlimited access to hardware - and this is required because, by definition, initializing a protected environment can only be done outside of one. However, when the operating system passes control to another program, it can place the CPU into protected mode.
In protected mode, programs may have access to a more limited set of the CPU's instructions. A user program may leave protected mode only by triggering an interrupt, causing control to be passed back to the kernel. In this way the operating system can maintain exclusive control over things like access to hardware and memory.
The term "protected mode resource" generally refers to one or more CPU registers, which contain information that the running program isn't allowed to alter. Attempts to alter these resources generally causes a switch to supervisor mode, where the operating system can deal with the illegal operation the program was attempting (for example, by killing the program).
Memory management
Main article: memory management
Among other things, a multiprogramming operating system kernel must be responsible for managing all system memory which is currently in use by programs. This ensures that a program does not interfere with memory already used by another program. Since programs time share, each program must have independent access to memory.
Cooperative memory management, used by many early operating systems, assumes that all programs make voluntary use of the kernel's memory manager, and do not exceed their allocated memory. This system of memory management is almost never seen anymore, since programs often contain bugs which can cause them to exceed their allocated memory. If a program fails, it may cause memory used by one or more other programs to be affected or overwritten. Malicious programs or viruses may purposely alter another program's memory or may affect the operation of the operating system itself. With cooperative memory management, it takes only one misbehaving program to crash the system.
Memory protection enables the kernel to limit a process' access to the computer's memory. Various methods of memory protection exist, including memory segmentation and paging. All methods require some level of hardware support (such as the 80286 MMU) which doesn't exist in all computers.
In both segmentation and paging, certain protected mode registers specify to the CPU what memory address it should allow a running program to access. Attempts to access other addresses will trigger an interrupt which will cause the CPU to re-enter supervisor mode, placing the kernel in charge. This is called a segmentation violation, or Seg-V for short, and since it is both difficult to assign a meaningful result to such an operation and usually a sign of a misbehaving program, the kernel will generally terminate the offending program and report the error.
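The following sketch (Linux/POSIX assumed; error checking omitted for brevity) makes this visible from user space: it maps one page of memory, asks the kernel to make it read-only with mprotect(), installs a SIGSEGV handler so the violation can be reported, and then deliberately writes to the protected page.

    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Called when the kernel reports the segmentation violation. */
    static void on_segv(int signo)
    {
        (void)signo;
        const char msg[] = "segmentation violation caught, terminating\n";
        write(2, msg, sizeof msg - 1);    /* async-signal-safe reporting */
        _exit(EXIT_FAILURE);
    }

    int main(void)
    {
        struct sigaction sa;
        sa.sa_handler = on_segv;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;
        sigaction(SIGSEGV, &sa, NULL);

        long pagesize = sysconf(_SC_PAGESIZE);
        char *page = mmap(NULL, pagesize, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        page[0] = 'x';                        /* allowed: the page is writable */
        mprotect(page, pagesize, PROT_READ);  /* now mark it read-only         */
        page[0] = 'y';                        /* violation: the CPU traps to the
                                                 kernel, which delivers SIGSEGV */
        puts("never reached");
        return 0;
    }
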
Windows 3.1-Me had some level of memory protection, but programs could easily circumvent the need to use it. Under Windows 9x all MS-DOS applications ran in supervisor mode, giving them almost unlimited control over the computer. A general protection fault would be produced, indicating a segmentation violation had occurred; however, the system would often crash anyway.
In most GNU/Linux systems, part of the hard disk is reserved for virtual memory when the Operating system is being installed on the system. This part is known as swap space. Windows systems use a swap file instead of a partition.
Virtual memory
Main article: Virtual memory
The use of virtual memory addressing (such as paging or segmentation) means that the kernel can choose what memory each program may use at any given time, allowing the operating system to use the same memory locations for multiple tasks.
If a program tries to access memory that isn't in its current range of accessible memory, but nonetheless has been allocated to it, the kernel will be interrupted in the same way as it would if the program were to exceed its allocated memory. (See section on memory management.) Under UNIX this kind of interrupt is referred to as a page fault.
When the kernel detects a page fault it will generally adjust the virtual memory range of the program which triggered it, granting it access to the memory requested. This gives the kernel discretionary power over where a particular application's memory is stored, or even whether or not it has actually been allocated yet.
In modern operating systems, memory which is accessed less frequently can be temporarily stored on disk or other media to make that space available for use by other programs. This is called swapping, as an area of memory can be used by multiple programs, and what that memory area contains can be swapped or exchanged on demand.
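A small Linux-specific sketch of this on-demand behaviour: mmap() reserves a range of virtual addresses, but mincore() shows that no physical pages back the range until individual pages are actually touched (and therefore page-faulted in).

    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Count how many pages of the range are currently resident in RAM. */
    static size_t resident_pages(void *addr, size_t len, size_t pagesize)
    {
        unsigned char vec[64];
        size_t count = 0, npages = len / pagesize;
        mincore(addr, len, vec);
        for (size_t i = 0; i < npages; i++)
            count += vec[i] & 1;
        return count;
    }

    int main(void)
    {
        size_t pagesize = (size_t)sysconf(_SC_PAGESIZE);
        size_t len = 16 * pagesize;           /* reserve 16 pages of address space */
        char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        printf("resident after mmap:  %zu pages\n", resident_pages(buf, len, pagesize));

        buf[0] = 1;                           /* touching a page causes a page fault; */
        buf[5 * pagesize] = 1;                /* the kernel then allocates real memory */

        printf("resident after touch: %zu pages\n", resident_pages(buf, len, pagesize));
        munmap(buf, len);
        return 0;
    }
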
Further information: Page fault
Multitasking
Main article: Computer multitasking
Main article: Process management (computing)
Multitasking refers to the running of multiple independent computer programs on the same computer; giving the appearance that it is performing the tasks at the same time. Since most computers can do at most one or two things at one time, this is generally done via time-sharing, which means that each program uses a share of the computer's time to execute.
An operating system kernel contains a piece of software called a scheduler which determines how much time each program will spend executing, and in which order execution control should be passed to programs. Control is passed to a process by the kernel, which allows the program access to the CPU and memory. Later, control is returned to the kernel through some mechanism, so that another program may be allowed to use the CPU. This so-called passing of control between the kernel and applications is called a context switch.
An early model which governed the allocation of time to programs was called cooperative multitasking. In this model, when control is passed to a program by the kernel, it may execute for as long as it wants before explicitly returning control to the kernel. This means that a malicious or malfunctioning program may not only prevent any other programs from using the CPU, but it can hang the entire system if it enters an infinite loop.
The philosophy governing preemptive multitasking is that of ensuring that all programs are given regular time on the CPU. This implies that all programs must be limited in how much time they are allowed to spend on the CPU without being interrupted. To accomplish this, modern operating system kernels make use of a timed interrupt. A protected mode timer is set by the kernel which triggers a return to supervisor mode after the specified time has elapsed. (See above sections on Interrupts and Dual Mode Operation.)
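The timer that drives preemption lives inside the kernel, but the same pattern can be sketched in user space with an interval timer: the busy loop below plays the role of a running program that never yields, and the periodic SIGALRM plays the role of the timer interrupt that takes control away from it at fixed intervals. (A hedged, POSIX-only illustration; the 10 ms period is arbitrary and this is not how a real scheduler is implemented.)

    #include <signal.h>
    #include <stdio.h>
    #include <sys/time.h>
    #include <unistd.h>

    static volatile sig_atomic_t ticks = 0;

    /* Stands in for the timer interrupt: fires every 10 ms no matter what
       the main loop happens to be doing. */
    static void on_tick(int signo)
    {
        (void)signo;
        ticks++;
    }

    int main(void)
    {
        struct sigaction sa;
        sa.sa_handler = on_tick;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = SA_RESTART;
        sigaction(SIGALRM, &sa, NULL);

        struct itimerval tv = {
            .it_interval = { .tv_sec = 0, .tv_usec = 10000 },   /* every 10 ms */
            .it_value    = { .tv_sec = 0, .tv_usec = 10000 },
        };
        setitimer(ITIMER_REAL, &tv, NULL);

        /* This "program" never yields voluntarily, yet it is still interrupted;
           after about 100 ticks (roughly one second) we stop it. */
        unsigned long work = 0;
        while (ticks < 100)
            work++;

        printf("did %lu units of work while being interrupted %d times\n",
               work, (int)ticks);
        return 0;
    }
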
On many single user operating systems cooperative multitasking is perfectly adequate, as home computers generally run a small number of well tested programs. Windows NT was the first version of Microsoft Windows which enforced preemptive multitasking, but it didn't reach the home user market until Windows XP (since Windows NT was targeted at professionals).
Further information: Context switch
Further information: Preemptive multitasking
Further information: Cooperative multitasking
Kernel preemption
Modern operating systems extend the concepts of application preemption to device drivers and kernel code, so that the operating system has preemptive control over internal run-times as well.
Oracle/Sun Solaris has had most kernel thread processing pre-emptive since Solaris 8[3] in February 2000. In November 2001, concerns arose because of long latencies associated with kernel run-times in Linux kernel 2.4, sometimes on the order of 100 ms or more in systems with monolithic kernels. These latencies often produce noticeable slowness in desktop systems, and can prevent operating systems from performing time-sensitive operations such as audio recording and some communications.[4] In December 2003, a preemptible kernel model was introduced in GNU/Linux version 2.6, allowing all device drivers and some other parts of kernel code to take advantage of preemptive multi-tasking. In January 2007, with Windows Vista, the introduction of the Windows Display Driver Model (WDDM) accomplished this for display drivers.
Under Windows versions prior to Windows Vista and Linux prior to version 2.6 all driver execution was co-operative, meaning that if a driver entered an infinite loop it would freeze the system.
Disk access and file systems
Main article: Virtual file system
Access to data stored on disks is a central feature of all operating systems. Computers store data on disks using files, which are structured in specific ways in order to allow for faster access, higher reliability, and to make better use out of the drive's available space. The specific way in which files are stored on a disk is called a file system, and enables files to have names and attributes. It also allows them to be stored in a hierarchy of directories or folders arranged in a directory tree.
Early operating systems generally supported a single type of disk drive and only one kind of file system. Early file systems were limited in their capacity, speed, and in the kinds of file names and directory structures they could use. These limitations often reflected limitations in the operating systems they were designed for, making it very difficult for an operating system to support more than one file system.
While many simpler operating systems support a limited range of options for accessing storage systems, operating systems like UNIX and GNU/Linux support a technology known as a virtual file system or VFS. An operating system like UNIX allows a wide array of storage devices, regardless of their design or file systems, to be accessed through a common application programming interface (API). This makes it unnecessary for programs to have any knowledge about the device they are accessing. A VFS allows the operating system to provide programs with access to an unlimited number of devices with an infinite variety of file systems installed on them through the use of specific device drivers and file system drivers.
A connected storage device such as a hard drive is accessed through a device driver. The device driver understands the specific language of the drive and is able to translate that language into a standard language used by the operating system to access all disk drives. On UNIX, this is the language of block devices.
When the kernel has an appropriate device driver in place, it can then access the contents of the disk drive in raw format, which may contain one or more file systems. A file system driver is used to translate the commands used to access each specific file system into a standard set of commands that the operating system can use to talk to all file systems. Programs can then deal with these file systems on the basis of filenames, and directories/folders, contained within a hierarchical structure. They can create, delete, open, and close files, as well as gather various information about them, including access permissions, size, free space, and creation and modification dates.
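Because the kernel presents every file system through the same interface, a program can gather the attributes mentioned above with the same calls whether the file lives on ext3, NTFS, a CD, or a network share. A minimal POSIX sketch using stat() and statvfs() (the default path, /etc/hostname, is only an example):

    #include <stdio.h>
    #include <sys/stat.h>
    #include <sys/statvfs.h>
    #include <time.h>

    int main(int argc, char **argv)
    {
        const char *path = argc > 1 ? argv[1] : "/etc/hostname";  /* example path */

        struct stat st;
        if (stat(path, &st) != 0) {          /* size, permissions, timestamps */
            perror("stat");
            return 1;
        }
        printf("%s: %lld bytes, mode %o, modified %s",
               path, (long long)st.st_size, (unsigned)(st.st_mode & 07777),
               ctime(&st.st_mtime));

        struct statvfs vfs;                  /* free space on the containing file system */
        if (statvfs(path, &vfs) == 0)
            printf("file system has %llu MB free\n",
                   (unsigned long long)vfs.f_bavail * vfs.f_frsize / (1024 * 1024));
        return 0;
    }
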
Various differences between file systems make supporting all file systems difficult. Allowed characters in file names, case sensitivity, and the presence of various kinds of file attributes make the implementation of a single interface for every file system a daunting task. Operating systems tend to recommend using (and so support natively) file systems specifically designed for them; for example, NTFS in Windows and ext3 and ReiserFS in GNU/Linux. However, in practice, third party drivers are usually available to give support for the most widely used file systems in most general-purpose operating systems (for example, NTFS is available in GNU/Linux through NTFS-3g, and ext2/3 and ReiserFS are available in Windows through FS-driver and rfstool).
Device drivers
Main article: Device driver
A device driver is a specific type of computer software developed to allow interaction with hardware devices. Typically this constitutes an interface for communicating with the device, through the specific computer bus or communications subsystem that the hardware is connected to, providing commands to and/or receiving data from the device, and on the other end, the requisite interfaces to the operating system and software applications. It is a specialized hardware-dependent computer program, also operating system specific, that enables another program, typically an operating system or applications software package or computer program running under the operating system kernel, to interact transparently with a hardware device, and it usually provides the requisite interrupt handling necessary for any asynchronous time-dependent hardware interfacing needs.
The key design goal of device drivers is abstraction. Every model of hardware (even within the same class of device) is different. Newer models also are released by manufacturers that provide more reliable or better performance and these newer models are often controlled differently. Computers and their operating systems cannot be expected to know how to control every device, both now and in the future. To solve this problem, operating systems essentially dictate how every type of device should be controlled. The function of the device driver is then to translate these operating-system-mandated function calls into device-specific calls. In theory a new device, which is controlled in a new manner, should function correctly if a suitable driver is available. This new driver will ensure that the device appears to operate as usual from the operating system's point of view.
Networking
Main article: Computer network
Currently most operating systems support a variety of networking protocols, hardware, and applications for using them. This means that computers running dissimilar operating systems can participate in a common network for sharing resources such as computing, files, printers, and scanners using either wired or wireless connections. Networks can essentially allow a computer's operating system to access the resources of a remote computer to support the same functions as it could if those resources were connected directly to the local computer. This includes everything from simple communication, to using networked file systems or even sharing another computer's graphics or sound hardware. Some network services allow the resources of a computer to be accessed transparently, such as SSH which allows networked users direct access to a computer's command line interface.
Client/server networking involves a program on a computer somewhere which connects via a network to another computer, called a server. Servers offer (or host) various services to other network computers and users. These services are usually provided through ports or numbered access points beyond the server's network address. Each port number is usually associated with a maximum of one running program, which is responsible for handling requests to that port. A daemon, being a user program, can in turn access the local hardware resources of that computer by passing requests to the operating system kernel.
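A minimal sketch of the server side of this model (POSIX sockets, IPv4; port 5000 is an arbitrary example and error checking is omitted): the program binds to a port, and the kernel then hands it every connection that arrives on that port so it can provide its trivial "service".

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int listener = socket(AF_INET, SOCK_STREAM, 0);

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);   /* any local interface    */
        addr.sin_port = htons(5000);                /* arbitrary example port */

        bind(listener, (struct sockaddr *)&addr, sizeof addr);
        listen(listener, 8);                        /* this program now handles port 5000 */

        for (;;) {
            int client = accept(listener, NULL, NULL);  /* wait for a connection     */
            const char reply[] = "hello from the server\n";
            write(client, reply, sizeof reply - 1);     /* offer a trivial "service" */
            close(client);
        }
    }
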
Many operating systems support one or more vendor-specific or open networking protocols as well, for example, SNA on IBM systems, DECnet on systems from Digital Equipment Corporation, and Microsoft-specific protocols (SMB) on Windows. Specific protocols for specific tasks may also be supported such as NFS for file access. Protocols like ESound, or esd can be easily extended over the network to provide sound from local applications, on a remote system's sound hardware.
Security
Main article: Computer security
A computer being secure depends on a number of technologies working properly. A modern operating system provides access to a number of resources, which are available to software running on the system, and to external devices like networks via the kernel.
The operating system must be capable of distinguishing between requests which should be allowed to be processed, and others which should not be processed. While some systems may simply distinguish between "privileged" and "non-privileged", systems commonly have a form of requester identity, such as a user name. To establish identity there may be a process of authentication. Often a username must be quoted, and each username may have a password. Other methods of authentication, such as magnetic cards or biometric data, might be used instead. In some cases, especially connections from the network, resources may be accessed with no authentication at all (such as reading files over a network share). Also covered by the concept of requester identity is authorization; the particular services and resources accessible by the requester once logged into a system are tied to either the requester's user account or to the variously configured groups of users to which the requester belongs.
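As a hedged illustration of password-based authentication (the password and salt here are hypothetical), the sketch below checks a typed password against a stored salted one-way hash, roughly the way /etc/shadow entries are verified on Unix-like systems; it uses the POSIX crypt() function (on glibc, link with -lcrypt).

    #include <crypt.h>        /* declares crypt(); link with -lcrypt on glibc */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* Hypothetical stored credential: only a salted one-way hash is kept,
           never the plain-text password. */
        char stored[128];
        const char *h = crypt("letmein", "$1$example$");
        if (!h) { puts("crypt() not available"); return 1; }
        snprintf(stored, sizeof stored, "%s", h);

        const char *attempt = "letmein";      /* what the user typed at login */

        /* Re-hash the attempt with the same salt (embedded in the stored hash)
           and compare; a match authenticates the user. */
        const char *rehashed = crypt(attempt, stored);
        if (rehashed && strcmp(rehashed, stored) == 0)
            puts("authentication succeeded");
        else
            puts("authentication failed");
        return 0;
    }
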
In addition to the allow/disallow model of security, a system with a high level of security will also offer auditing options. These would allow tracking of requests for access to resources (such as, "who has been reading this file?"). Internal security, or security from an already running program, is only possible if all possibly harmful requests must be carried out through interrupts to the operating system kernel. If programs can directly access hardware and resources, they cannot be secured.
External security involves a request from outside the computer, such as a login at a connected console or some kind of network connection. External requests are often passed through device drivers to the operating system's kernel, where they can be passed onto applications, or carried out directly. Security of operating systems has long been a concern because of highly sensitive data held on computers, both of a commercial and military nature. The United States Government Department of Defense (DoD) created the Trusted Computer System Evaluation Criteria (TCSEC) which is a standard that sets basic requirements for assessing the effectiveness of security. This became of vital importance to operating system makers, because the TCSEC was used to evaluate, classify and select computer systems being considered for the processing, storage and retrieval of sensitive or classified information.
Network services include offerings such as file sharing, print services, email, web sites, and file transfer protocols (FTP), most of which can have compromised security. At the front line of security are hardware devices known as firewalls or intrusion detection/prevention systems. At the operating system level, there are a number of software firewalls available, as well as intrusion detection/prevention systems. Most modern operating systems include a software firewall, which is enabled by default. A software firewall can be configured to allow or deny network traffic to or from a service or application running on the operating system. Therefore, one can install and be running an insecure service, such as Telnet or FTP, and not have to be threatened by a security breach because the firewall would deny all traffic trying to connect to the service on that port.
An alternative strategy, and the only sandbox strategy available in systems that do not meet the Popek and Goldberg virtualization requirements, is for the operating system not to run user programs as native code, but instead to emulate a processor or provide a host for a p-code based system such as Java.
Internal security is especially relevant for multi-user systems; it allows each user of the system to have private files that the other users cannot tamper with or read. Internal security is also vital if auditing is to be of any use, since a program can potentially bypass the operating system, including bypassing auditing.
File system support in modern operating systems
Support for file systems varies widely among modern operating systems, although there are several common file systems for which almost all operating systems include support and drivers. Operating systems also differ in the disk formats they may be installed on.
GNU/Linux
Many GNU/Linux distributions support some or all of ext2, ext3, ext4, ReiserFS, Reiser4, JFS, XFS, GFS, GFS2, OCFS, OCFS2, and NILFS. The ext file systems, namely ext2, ext3 and ext4, are based on the original GNU/Linux file system. Others have been developed by companies to meet their specific needs, by hobbyists, or adapted from UNIX, Microsoft Windows, and other operating systems. GNU/Linux has full support for XFS and JFS, along with FAT (the MS-DOS file system) and HFS, which was the primary file system for the Macintosh.
In recent years support for Microsoft Windows NT's NTFS file system has appeared in GNU/Linux, and is now comparable to the support available for other native UNIX file systems. ISO 9660 and Universal Disk Format (UDF), the standard file systems used on CDs, DVDs, and Blu-ray discs, are also supported. It is possible to install GNU/Linux on the majority of these file systems. Unlike other operating systems, GNU/Linux and UNIX allow any file system to be used regardless of the media it is stored on, whether it is a hard drive, a disc (CD, DVD, etc.), a USB key, or even a file located on another file system.
Mac OS X
Mac OS X supports HFS+ with journaling as its primary file system. It is derived from the Hierarchical File System of the earlier Mac OS. Mac OS X has facilities to read and write FAT, UDF, and other file systems, but cannot be installed on them. Due to its UNIX heritage, Mac OS X now supports virtually all the file systems supported by the UNIX VFS.
Microsoft Windows
Microsoft Windows currently supports the NTFS and FAT file systems (including FAT16 and FAT32), along with network file systems shared from other computers, and the ISO 9660 and UDF file systems used for CDs, DVDs, and other optical discs such as Blu-ray. Under Windows each file system is usually limited in application to certain media; for example, CDs must use ISO 9660 or UDF, and as of Windows Vista, NTFS is the only file system which the operating system can be installed on. Windows Embedded CE 6.0, Windows Vista Service Pack 1, and Windows Server 2008 support exFAT, a file system (available only on recent versions of Windows) that is more suitable for flash drives.
Solaris
The Solaris Operating System uses UFS as its primary file system. Prior to 1998, Solaris UFS did not have logging/journaling capabilities, but over time the OS has gained this and other new data management capabilities.
Additional features include the Veritas journaling file system VxFS, QFS from Sun Microsystems, enhancements to UFS including multiterabyte support and UFS volume management included as part of the OS, and ZFS (free software, poolable, 128-bit, compressible, and error-correcting).
Kernel extensions were added to Solaris to allow for bootable Veritas VxFS operation. Logging or journaling was added to UFS in Solaris 7. Releases of Solaris 10, Solaris Express, OpenSolaris, and other open source variants of Solaris later supported bootable ZFS.
Logical volume management allows a file system to span multiple devices for the purpose of adding redundancy, capacity, and/or throughput. Solaris includes Solaris Volume Manager (formerly known as Solstice DiskSuite). Solaris is one of many operating systems supported by Veritas Volume Manager. Modern Solaris-based operating systems reduce the need for separate volume management by leveraging virtual storage pools in ZFS.
Special-purpose file systems
FAT file systems are commonly found on floppy disks, flash memory cards, digital cameras, and many other portable devices because of their relative simplicity. Performance of FAT compares poorly to most other file systems as it uses overly simplistic data structures, making file operations time-consuming, and makes poor use of disk space in situations where many small files are present. ISO 9660 and Universal Disk Format are two common formats that target Compact Discs and DVDs. Mount Rainier is a newer extension to UDF supported by GNU/Linux 2.6 series and Windows Vista that facilitates rewriting to DVDs in the same fashion as has been possible with floppy disks.
Journalized file systems
File systems may provide journaling, which provides safe recovery in the event of a system crash. A journaled file system writes some information twice: first to the journal, which is a log of file system operations, then to its proper place in the ordinary file system. Journaling is handled by the file system driver, and keeps track of each operation taking place that changes the contents of the disk. In the event of a crash, the system can recover to a consistent state by replaying a portion of the journal. Many UNIX file systems provide journaling including ReiserFS, JFS, and Ext3.
In contrast, non-journaled file systems typically need to be examined in their entirety by a utility such as fsck or chkdsk for any inconsistencies after an unclean shutdown. Soft updates is an alternative to journaling that avoids the redundant writes by carefully ordering the update operations. Log-structured file systems and ZFS also differ from traditional journaled file systems in that they avoid inconsistencies by always writing new copies of the data, eschewing in-place updates.
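As a rough illustration of the write-twice idea described above, here is a toy Java sketch (not taken from any real file system; the file names and record format are invented for illustration). It appends an intent record to a journal file and forces it to disk before applying the same change to the data file, so that a crash between the two writes could be repaired by replaying the journal.

import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

// Toy illustration of write-ahead journaling: log the intended change,
// force it to stable storage, then perform the change itself.
public class ToyJournal {
    public static void main(String[] args) throws IOException {
        String record = "SET key=value\n";           // hypothetical operation

        // 1. Append the intent record to the journal and sync it to disk.
        try (FileOutputStream journal = new FileOutputStream("journal.log", true)) {
            journal.write(record.getBytes(StandardCharsets.UTF_8));
            journal.getFD().sync();                   // the journal entry is now durable
        }

        // 2. Only now apply the operation to the ordinary data file.
        try (FileOutputStream data = new FileOutputStream("data.db", true)) {
            data.write(record.getBytes(StandardCharsets.UTF_8));
            data.getFD().sync();
        }

        // 3. A real system would then mark the journal entry as committed;
        //    after a crash, uncommitted entries would be replayed against data.db.
    }
}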
Graphical user interfaces
Most modern computer systems support graphical user interfaces (GUIs), and often include them. In some computer systems, such as the original implementations of Microsoft Windows and the Mac OS, the GUI is integrated into the kernel.
While technically a graphical user interface is not an operating system service, incorporating support for one into the operating system kernel can allow the GUI to be more responsive by reducing the number of context switches required for the GUI to perform its output functions. Other operating systems are modular, separating the graphics subsystem from the kernel and the rest of the operating system. In the 1980s, UNIX, VMS and many other operating systems were built this way, and GNU/Linux and Mac OS X are also built this way. Modern releases of Microsoft Windows such as Windows Vista implement a graphics subsystem that is mostly in user space, whereas in the versions from Windows NT 4.0 through Windows Server 2003 the graphics drawing routines exist mostly in kernel space. Windows 9x had very little distinction between the interface and the kernel.
Many computer operating systems allow the user to install or create any user interface they desire. The X Window System in conjunction with GNOME or KDE is a commonly found setup on most Unix and Unix-like (BSD, GNU/Linux, Solaris) systems. A number of Windows shell replacements have been released for Microsoft Windows, which offer alternatives to the included Windows shell, but the shell itself cannot be separated from Windows.
Numerous Unix-based GUIs have existed over time, most derived from X11. Competition among the various vendors of Unix (HP, IBM, Sun) led to much fragmentation, and an effort to standardize in the 1990s on COSE and CDE largely failed, eventually being eclipsed by the widespread adoption of GNOME and KDE. Prior to free software-based toolkits and desktop environments, Motif was the prevalent toolkit/desktop combination (and was the basis upon which CDE was developed).
Graphical user interfaces evolve over time. For example, Windows has modified its user interface almost every time a new major version of Windows is released, and the Mac OS GUI changed dramatically with the introduction of Mac OS X in 1999.[5]
Examples of operating systems
GNU/Linux and Unix-like operating systems
Main articles: Linux and Unix


Ubuntu desktop
Ken Thompson wrote B, mainly based on BCPL, which he used to write Unix, based on his experience in the MULTICS project. B was replaced by C, and Unix developed into a large, complex family of inter-related operating systems which have been influential in every modern operating system (see History). The Unix-like family is a diverse group of operating systems, with several major sub-categories including System V, BSD, and GNU/Linux. The name "UNIX" is a trademark of The Open Group which licenses it for use with any operating system that has been shown to conform to their definitions. "Unix-like" is commonly used to refer to the large set of operating systems which resemble the original Unix.
Unix-like systems run on a wide variety of machine architectures. They are used heavily for servers in business, as well as workstations in academic and engineering environments. Free Unix variants, such as GNU/Linux and BSD, are popular in these areas.
Some Unix variants like HP's HP-UX and IBM's AIX are designed to run only on that vendor's hardware. Others, such as Solaris, can run on multiple types of hardware, including x86 servers and PCs. Apple's Mac OS X, a hybrid kernel-based BSD variant derived from NeXTSTEP, Mach, and FreeBSD, has replaced Apple's earlier (non-Unix) Mac OS.
Unix interoperability was sought by establishing the POSIX standard. The POSIX standard can be applied to any operating system, although it was originally created for various Unix variants.
Mac OS X
Mac OS X is a line of partially proprietary, graphical operating systems developed, marketed, and sold by Apple Inc., the latest of which is pre-loaded on all currently shipping Macintosh computers. Mac OS X is the successor to the original Mac OS, which had been Apple's primary operating system since 1984. Unlike its predecessor, Mac OS X is a UNIX operating system built on technology that had been developed at NeXT through the second half of the 1980s and up until Apple purchased the company in early 1997.
The operating system was first released in 1999 as Mac OS X Server 1.0, with a desktop-oriented version (Mac OS X v10.0) following in March 2001. Since then, six more distinct "client" and "server" editions of Mac OS X have been released, the most recent being Mac OS X v10.6, which was first made available on August 28, 2009. Releases of Mac OS X are named after big cats; the current version of Mac OS X is nicknamed "Snow Leopard".
The server edition, Mac OS X Server, is architecturally identical to its desktop counterpart but usually runs on Apple's line of Macintosh server hardware. Mac OS X Server includes work group management and administration software tools that provide simplified access to key network services, including a mail transfer agent, a Samba server, an LDAP server, a domain name server, and others.
Microsoft Windows
Microsoft Windows is a family of proprietary operating systems that originated as an add-on to the older MS-DOS operating system for the IBM PC. Modern versions are based on the newer Windows NT kernel that was originally intended for OS/2. Windows runs on x86, x86-64 and Itanium processors. Earlier versions also ran on the Alpha, MIPS, Fairchild (later Intergraph) Clipper and PowerPC architectures (some work was done to port it to the SPARC architecture).
As of 2009, Microsoft Windows still holds a large amount of the worldwide desktop market share. Windows is also used on servers, supporting applications such as web servers and database servers. In recent years, Microsoft has spent significant marketing and research & development money to demonstrate that Windows is capable of running any enterprise application, which has resulted in consistent price/performance records (see the TPC) and significant acceptance in the enterprise market.
Currently, the most widely used version of the Microsoft Windows family is Windows XP, released on October 25, 2001.
In November 2006, after more than five years of development work, Microsoft released Windows Vista, a major new operating system version of Microsoft Windows family which contains a large number of new features and architectural changes. Chief amongst these are a new user interface and visual style called Windows Aero, a number of new security features such as User Account Control, and a few new multimedia applications such as Windows DVD Maker. A server variant based on the same kernel, Windows Server 2008, was released in early 2008.
On October 22, 2009, Microsoft released Windows 7, the successor to Windows Vista, nearly three years after Vista's release. While Vista was about introducing new features, Windows 7 aims to streamline them and provide a faster overall working environment. Windows Server 2008 R2, the server variant, was released at the same time.
Google Chrome OS
On July 7, 2009, Google announced that it would release an operating system by the second half of 2010. Google Chrome OS will be designed to work exclusively with web applications, and will be open source.


This is what Google Chrome OS is expected to look like.
Plan 9


Plan 9
Ken Thompson, Dennis Ritchie and Douglas McIlroy at Bell Labs designed and developed the C programming language to build the operating system Unix. Programmers at Bell Labs went on to develop Plan 9 and Inferno, which were engineered for modern distributed environments. Plan 9 was designed from the start to be a networked operating system, and had graphics built-in, unlike Unix, which added these features to the design later. Plan 9 has yet to become as popular as Unix derivatives, but it has an expanding community of developers. It is currently released under the Lucent Public License. Inferno was sold to Vita Nuova Holdings and has been released under a GPL/MIT license.
Real-time operating systems
Main article: real-time operating system
A real-time operating system (RTOS) is a multitasking operating system intended for applications with fixed deadlines (real-time computing). Such applications include some small embedded systems, automobile engine controllers, industrial robots, spacecraft, industrial control, and some large-scale computing systems.
An early example of a large-scale real-time operating system was Transaction Processing Facility developed by American Airlines and IBM for the Sabre Airline Reservations System.
Embedded systems that have fixed deadlines use a real-time operating system such as VxWorks, eCos, QNX, MontaVista Linux and RTLinux. Windows CE is a real-time operating system that shares similar APIs to desktop Windows but shares none of desktop Windows' codebase[citation needed].
Some embedded systems use operating systems such as Symbian OS, Palm OS, BSD, and GNU/Linux, although such operating systems do not support real-time computing.
Hobby development
Operating system development is one of the more involved and technical options for the computing hobbyist. A hobby operating system is classified as one with little or no support from maintenance developers.[6] Development usually begins with an existing operating system. The hobbyist is his or her own developer, or interacts with a relatively small and unstructured group of similarly situated individuals working on the same code base. Examples of hobby operating systems include Syllable and ReactOS; Minix is a classical example.
Commodore
Commodore International designed a series of 8-bit platforms that were all, to one degree or another, separately intelligent and yet interconnectable. For instance, one computer always powered up as a host, and the others powered up in a generally cooperative state, according to a complex coordination of signals (the TALK/LISTEN protocol), so they could work separately or in tandem, depending on whatever tasks were at hand.[citation needed] Although the TALK/LISTEN protocol logically supported up to 30 devices daisy-chained together on the serial bus, signal attenuation required some kind of device in the middle for voltage maintenance through a buffer, amplifier, and propagator. For the state of the art in the late 1980s, the machine was at a roadblock. The TALK/LISTEN protocol was quite similar to SCSI bus management, but there was no arbitration phase, and only one computer powered up as host, which could then command one or more of the other devices to enter into a TALKing or LISTENing state, until some other computer in the daisy chain was willing to be the host. In some cases, one or more computers could drop off the daisy chain for a period of time until they voluntarily came back, which was called "reentrance",[citation needed] but there was still no arbitration phase like that enjoyed by SCSI-compliant computers. One of the limitations was the small number of physical devices (close to 32, depending on the way the signal was amplified prior to propagation) that could be connected, preventing it from being useful in a multi-user environment.
Other
Older operating systems which are still used in niche markets include OS/2 from IBM and Microsoft; Mac OS, the non-Unix precursor to Apple's Mac OS X; BeOS; and XTS-300. Some, most notably AmigaOS 4 and RISC OS, continue to be developed as minority platforms for enthusiast communities and specialist applications. OpenVMS, formerly from DEC, is still under active development by Hewlett-Packard.
There were a number of operating systems for 8-bit computers, including Apple's DOS (Disk Operating System) 3.2 and 3.3 and ProDOS for the Apple II; UCSD Pascal and CP/M, available for various 8- and 16-bit environments; and FutureOS for the Amstrad CPC6128 and 6128Plus.
Research and development of new operating systems continues. GNU Hurd is designed to be backwards compatible with Unix, but with enhanced functionality and a microkernel architecture. Singularity is a project at Microsoft Research to develop an operating system with better memory protection based on the .Net managed code model. Systems development follows the same model used by other software development, which involves maintainers, version control "trees",[citation needed] forks, "patches", and specifications. Following the AT&T-Berkeley lawsuit, new unencumbered systems were based on 4.4BSD, with the FreeBSD and NetBSD forks emerging as efforts to replace code removed after the Unix wars. More recent forks include DragonFly BSD and Darwin, both derived from BSD Unix.[citation needed]
Diversity of operating systems and portability
Application software is generally written for use on a specific operating system, and sometimes even for specific hardware. When porting the application to run on another OS, the functionality required by that application may be implemented differently by that OS (the names of functions, meaning of arguments, etc.) requiring the application to be adapted, changed, or otherwise maintained.
This cost in supporting operating systems diversity can be avoided by instead writing applications against software platforms like Java, Qt or for web browsers. These abstractions have already borne the cost of adaptation to specific operating systems and their system libraries.
Another approach is for operating system vendors to adopt standards. For example, POSIX and OS abstraction layers provide commonalities that reduce porting costs.

Tuesday 15 June 2010

Computer Fundamentals

Introduction To Computers

• Definition:
• It is an electronic device that is used for information processing.
• The word "computer" derives from the Latin "computare", meaning to compute or calculate.
• Essentially, a calculating machine.
• A computer system includes a computer, peripheral devices, and software
• Accepts input, processes data, stores data, and produces output
• Input refers to whatever is sent to a Computer system
• Data refers to the symbols that represent facts, objects, and ideas
• Processing is the way that a computer manipulates data
• A computer processes data in a device called the central processing unit (CPU)
• Memory is an area of a computer that holds data that is waiting to be processed, stored, or output
• Storage is the area where data can be left on a permanent basis
• Computer output is the result produced by the computer
• An output device displays, prints or transmits the results of processing

Computer: performs computations and makes logical decisions, millions or billions of times faster than human beings.
Computer programs: sets of instructions by which a computer processes data.
Hardware: the physical devices of a computer system.
Software: the programs that run on computers.

• Capabilities of Computers
• Huge Data Storage
• Input and Output
• Processing

• Characteristics of Computers
• High Processing Speed
• Accuracy
• Reliability
• Versatility
• Diligence
History Of Computers
• Before the 1500s, in Europe, calculations were made with an abacus
Invented around 500BC, available in many cultures (China, Mesopotamia, Japan, Greece, Rome, etc.)

• In 1642, Blaise Pascal (French mathematician, physicist, philosopher) invented a mechanical calculator called the Pascaline

• In 1671, Gottfried von Leibniz (German mathematician, philosopher) extended the Pascaline to do multiplications, divisions, square roots: the Stepped Reckoner

None of these machines had memory, and they required human intervention at each step

• In 1822 Charles Babbage (English mathematician, philosopher), sometimes called the “father of computing”, designed the Difference Engine and demonstrated a small working model of it

• Machine designed to automate the computation (tabulation) of polynomial functions (which are known to be good approximations of many useful functions)
– Based on the “method of finite difference”
– Implements some storage

• In 1833 Babbage designed the Analytical Engine, but he died before he could build it
– It was designed to be powered by steam; it was never completed in his lifetime, though his son Henry Babbage later built part of it

Generations of Computers

• Generation of Computers
• First Generation (1946-59)
• Second Generation (1957-64)
• Third Generation (1965-70)
• Fourth Generation (1970-90)
• Fifth Generation (1990 till date)
Generation 0: Mechanical Calculators
Generation 1: Vacuum Tube Computers
Generation 2: Transistor Computers
Generation 3: Integrated Circuits
Generation 4: Microprocessors

Generation 1 : ENIAC
The ENIAC (Electronic Numerical Integrator and Computer) was unveiled in 1946: the first all-electronic, general-purpose digital computer

The use of binary
In the 1930s Claude Shannon (the father of “information theory”) proposed that binary arithmetic and Boolean logic be used with electronic circuits

The Von-Neumann architecture

Generation 2: IBM7094

Generation 3: Integrated Circuits

Seymour Cray created the Cray Research Corporation
Cray-1: $8.8 million, 160 million instructions per second and 8 Mbytes of memory

Generation 4: VLSI
Improvements to IC technology made it possible to integrate more and more transistors in a single chip
SSI (Small Scale Integration): 10-100
MSI (Medium Scale Integration): 100-1,000
LSI (Large Scale Integration): 1,000-10,000
VLSI (Very Large Scale Integration): >10,000

Microprocessors

Generation 5?
The term “Generation 5” is sometimes used to refer to more or less “sci-fi” future developments, such as:
Voice recognition
Artificial intelligence
Quantum computing
Bio computing
Nano technology
Learning
Natural languages

Generation 5 Computers

Tuesday 18 May 2010

Master list of Java interview questions

Master list of Java interview questions - 115 questions
By admin | July 18, 2005
115 questions total, not for the weak. Covers everything from basics to JDBC connectivity, AWT and JSP.

What is the difference between procedural and object-oriented programs?- a) In a procedural program, the programming logic follows certain procedures and the instructions are executed one after another. In an OOP program, the unit of the program is the object, which is nothing but a combination of data and code. b) In a procedural program, data is exposed to the whole program, whereas in an OOP program it is accessible only within the object, which in turn helps secure the code.
What are Encapsulation, Inheritance and Polymorphism?- Encapsulation is the mechanism that binds together code and data it manipulates and keeps both safe from outside interference and misuse. Inheritance is the process by which one object acquires the properties of another object. Polymorphism is the feature that allows one interface to be used for general class actions.
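A minimal Java sketch (the class names are invented for illustration) showing all three ideas at once: the field is encapsulated behind methods, Dog inherits from Animal, and the overridden speak() call is resolved polymorphically.

class Animal {
    private String name;                 // encapsulation: state hidden behind methods

    Animal(String name) { this.name = name; }

    String speak() { return name + " makes a sound"; }
}

class Dog extends Animal {               // inheritance: Dog acquires Animal's members
    Dog(String name) { super(name); }

    @Override
    String speak() { return "Woof!"; }   // polymorphism: same interface, different behavior
}

public class OopDemo {
    public static void main(String[] args) {
        Animal a = new Dog("Rex");       // an Animal reference to a Dog object
        System.out.println(a.speak());   // prints "Woof!" - resolved at run time
    }
}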
What is the difference between Assignment and Initialization?- Assignment can be done as many times as desired whereas initialization can be done only once.

What is OOPs?
- Object-oriented programming organizes a program around its data, i.e., objects, and a set of well defined interfaces to that data. An object-oriented program can be characterized as data controlling access to code.
What are Class, Constructor and Primitive data types?
- A class is a template for multiple objects with similar features and it is a blueprint for objects. It defines a type of object according to the data the object can hold and the operations the object can perform. A constructor is a special kind of method that determines how an object is initialized when created. There are eight primitive data types: byte, short, int, long, float, double, boolean, char.
What is an Object and how do you allocate memory to it?
- Object is an instance of a class and it is a software unit that combines a structured set of data with a set of operations for inspecting and manipulating that data. When an object is created using new operator, memory is allocated to it.
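A hypothetical class that pulls the last few answers together: a class acting as a blueprint, a constructor that runs when new allocates the object, and a couple of primitive fields.

public class Point {
    private int x;          // primitive data types
    private int y;

    // Constructor: runs automatically when the object is created with new
    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public static void main(String[] args) {
        Point p = new Point(3, 4);   // new allocates memory and invokes the constructor
        System.out.println(p.x + ", " + p.y);
    }
}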
What is the difference between constructor and method?
- Constructor will be automatically invoked when an object is created whereas method has to be called explicitly.
What are methods and how are they defined?
- Methods are functions that operate on instances of classes in which they are defined. Objects can communicate with each other using methods and can call methods in other classes. Method definition has four parts. They are name of the method, type of object or primitive type the method returns, a list of parameters and the body of the method. A method’s signature is a combination of the first three parts mentioned above.
What is the use of bin and lib in JDK?
- Bin contains all tools such as javac, appletviewer, awt tool, etc., whereas lib contains API and all packages.
What is casting?
- Casting is used to convert the value of one type to another.
How many ways can an argument be passed to a subroutine and explain them?
- An argument can be passed in two ways: passing by value and passing by reference. Passing by value: this method copies the value of an argument into the formal parameter of the subroutine. Passing by reference: in this method, a reference to an argument (not the value of the argument) is passed to the parameter. Note that Java itself always passes arguments by value; for objects, the value that is copied is the reference to the object.
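A small sketch of the point made above (method and variable names are invented): the copy of a primitive does not affect the caller's variable, while a copied object reference still points at the same underlying object.

public class PassingDemo {
    static void bump(int n) { n++; }                          // works on a copy of the primitive value

    static void shout(StringBuilder sb) { sb.append("!"); }   // works on a copy of the reference,
                                                              // but the same underlying object
    public static void main(String[] args) {
        int x = 1;
        bump(x);
        System.out.println(x);            // still 1: only the copy was incremented

        StringBuilder s = new StringBuilder("hi");
        shout(s);
        System.out.println(s);            // "hi!": the object was modified through the copied reference
    }
}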
What is the difference between an argument and a parameter?
- While defining method, variables passed in the method are called parameters. While using those methods, values passed to those variables are called arguments.
What are different types of access modifiers?
- public: anything declared as public can be accessed from anywhere. private: anything declared as private can't be seen outside of its class. protected: anything declared as protected can be accessed by classes in the same package and subclasses in other packages. default (no modifier): accessible only to classes in the same package.
What is final, finalize() and finally?
- final: the final keyword can be used for classes, methods and variables. A final class cannot be subclassed, and it prevents other programmers from subclassing a secure class to invoke insecure methods. A final method can't be overridden. A final variable can't change from its initialized value. finalize(): the finalize() method is used just before an object is destroyed and can be called just prior to garbage collection. finally: finally, a keyword used in exception handling, creates a block of code that will be executed after a try/catch block has completed and before the code following the try/catch block. The finally block will execute whether or not an exception is thrown. For example, if a method opens a file upon entry and closes it upon exit, you will not want the code that closes the file to be bypassed by the exception-handling mechanism. The finally keyword is designed to address this contingency.
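A common use of finally, echoing the file example above (the file name is just a placeholder):

import java.io.FileReader;
import java.io.IOException;

public class FinallyDemo {
    public static void main(String[] args) {
        FileReader reader = null;
        try {
            reader = new FileReader("data.txt");   // may throw an exception
            System.out.println(reader.read());
        } catch (IOException e) {
            System.out.println("Problem reading the file: " + e.getMessage());
        } finally {
            // Runs whether or not an exception was thrown, so the file is always closed.
            if (reader != null) {
                try { reader.close(); } catch (IOException ignored) { }
            }
        }
    }
}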
What is UNICODE?
- Unicode is used for the internal representation of characters and strings, and it uses 16 bits to represent each character.
What is Garbage Collection and how to call it explicitly?
- When an object is no longer referred to by any variable, Java automatically reclaims the memory used by that object. This is known as garbage collection. The System.gc() method may be used to request it explicitly.
What is finalize() method?
- finalize () method is used just before an object is destroyed and can be called just prior to garbage collection.
What are Transient and Volatile Modifiers?
- Transient: The transient modifier applies to variables only and it is not stored as part of its object’s Persistent state. Transient variables are not serialized. Volatile: Volatile modifier applies to variables only and it tells the compiler that the variable modified by volatile can be changed unexpectedly by other parts of the program.
What is method overloading and method overriding?
- Method overloading: when methods in a class have the same name but different argument lists, this is called method overloading. Method overriding: when a subclass defines a method with the same name and the same argument list (signature) as a method in its superclass, this is called method overriding.
What is difference between overloading and overriding?
- a) In overloading, there is a relationship between methods available in the same class, whereas in overriding, there is a relationship between a superclass method and a subclass method. b) Overloading does not block inheritance from the superclass, whereas overriding blocks inheritance from the superclass. c) In overloading, separate methods share the same name, whereas in overriding, the subclass method replaces the superclass method. d) Overloading must use different method signatures, whereas overriding must use the same signature.
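A short sketch of both ideas in one place (the class names are invented): Circle overrides area() with the same signature, and also overloads it with a different parameter list.

class Shape {
    double area() { return 0; }                          // to be overridden
}

class Circle extends Shape {
    private final double r;
    Circle(double r) { this.r = r; }

    @Override
    double area() { return Math.PI * r * r; }            // overriding: same name, same signature

    double area(double scale) {                           // overloading: same name, different parameters
        return area() * scale * scale;
    }
}

public class OverloadOverrideDemo {
    public static void main(String[] args) {
        Shape s = new Circle(1.0);
        System.out.println(s.area());                     // calls the subclass version
        System.out.println(new Circle(1.0).area(2));      // calls the overloaded version
    }
}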
What is meant by Inheritance and what are its advantages?
- Inheritance is the process of inheriting all the features from a class. The advantages of inheritance are reusability of code and accessibility of variables and methods of the super class by subclasses.
What is the difference between this() and super()?
- this() can be used to invoke a constructor of the same class whereas super() can be used to invoke a super class constructor.
What is the difference between superclass and subclass?
- A super class is a class that is inherited whereas sub class is a class that does the inheriting.
What modifiers may be used with top-level class?
- public, abstract and final can be used for top-level class.
What are inner class and anonymous class?
- Inner class : classes defined in other classes, including those defined in methods are called inner classes. An inner class can have any accessibility including private. Anonymous class : Anonymous class is a class defined inside a method without a name and is instantiated and declared in the same place and cannot have explicit constructors.
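A minimal sketch of both kinds of class described above (names invented for illustration): a member inner class that reads the outer object's private field, and an anonymous class created in place.

public class InnerDemo {
    private String greeting = "hello";

    // Inner class: defined inside another class, can use its members (even private ones)
    class Greeter {
        void greet() { System.out.println(greeting); }
    }

    public static void main(String[] args) {
        InnerDemo outer = new InnerDemo();
        outer.new Greeter().greet();

        // Anonymous class: declared and instantiated in one place, with no name and no explicit constructor
        Runnable r = new Runnable() {
            public void run() { System.out.println("from an anonymous class"); }
        };
        r.run();
    }
}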
What is a package?
- A package is a collection of classes and interfaces that provides a high-level layer of access protection and name space management.
What is a reflection package?
- The java.lang.reflect package gives a program the ability to analyze its own classes and objects at runtime.
What is interface and its use?
- An interface is similar to a class; it may contain method signatures only, not bodies, and it is a formal set of method and constant declarations that must be defined by the class that implements it. Interfaces are useful for: a) declaring methods that one or more classes are expected to implement; b) capturing similarities between unrelated classes without forcing a class relationship; c) determining an object's programming interface without revealing the actual body of the class.
What is an abstract class?
- An abstract class is a class designed with implementation gaps for subclasses to fill in and is deliberately incomplete.
What is the difference between Integer and int?
- a) Integer is a class defined in the java.lang package, whereas int is a primitive data type defined in the Java language itself. Historically Java did not automatically convert from one to the other (since Java 5, autoboxing performs this conversion automatically). b) Integer can be used as an argument for a method that requires an object, whereas int can be used for calculations.
What is a cloneable interface and how many methods does it contain?
- It does not have any methods; it is a tagged or marker interface.
What is the difference between abstract class and interface?
- a) All the methods declared inside an interface are abstract, whereas an abstract class may contain both abstract and concrete methods (any class containing an abstract method must itself be declared abstract). b) In an abstract class the keyword abstract must be used for the abstract methods, whereas in an interface we need not use that keyword for the methods. c) An abstract class must be subclassed before it can be instantiated, whereas an interface must be implemented by a class before its methods can be used.
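A sketch of the contrast above (the type names are invented): the interface declares behavior only, while the abstract class mixes an abstract method with a concrete one that subclasses share.

interface Drawable {
    void draw();                               // interface methods are implicitly abstract
}

abstract class Figure implements Drawable {    // may mix abstract and concrete methods
    abstract double area();

    void describe() {                          // concrete method shared by subclasses
        System.out.println("area = " + area());
    }
}

class Square extends Figure {
    private final double side;
    Square(double side) { this.side = side; }

    double area() { return side * side; }
    public void draw() { System.out.println("drawing a square"); }
}

public class AbstractVsInterface {
    public static void main(String[] args) {
        Square sq = new Square(2);
        sq.draw();
        sq.describe();
    }
}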
Can you have an inner class inside a method and what variables can you access?
- Yes, we can have an inner class inside a method and final variables can be accessed.
What is the difference between String and String Buffer?
- a) String objects are constants and immutable whereas StringBuffer objects are not. b) String class supports constant strings whereas StringBuffer class supports growable and modifiable strings.
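A quick sketch of the difference (variable names invented): modifying a String produces a new object, while a StringBuffer changes in place.

public class StringVsBuffer {
    public static void main(String[] args) {
        String s = "abc";
        String t = s.concat("def");        // does not change s: a new String is created
        System.out.println(s);             // "abc" - Strings are immutable
        System.out.println(t);             // "abcdef"

        StringBuffer sb = new StringBuffer("abc");
        sb.append("def");                  // modifies the same object in place
        System.out.println(sb);            // "abcdef" - StringBuffer is mutable and growable
    }
}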
What is the difference between Array and vector?
- An array is a static, fixed-size collection of elements of the same type, whereas a Vector is a growable, dynamic array of objects.
What is the difference between exception and error?
- The Exception class defines mild error conditions that your program may encounter. Exceptions can occur when trying to open a file that does not exist, when the network connection is disrupted, when operands being manipulated are out of prescribed ranges, or when the class file you are interested in loading is missing. The Error class defines serious error conditions that you should not attempt to recover from. In most cases it is advisable to let the program terminate when such an error is encountered.
What is the difference between process and thread?
- Process is a program in execution whereas thread is a separate path of execution in a program.
What is multithreading and what are the methods for inter-thread communication and what is the class in which these methods are defined?
- Multithreading is the mechanism in which more than one thread runs independently of the others within a process. The wait(), notify() and notifyAll() methods can be used for inter-thread communication, and these methods are defined in the Object class. wait(): when a thread executes a call to the wait() method, it surrenders the object lock and enters a waiting state. notify() or notifyAll(): to remove a thread from the waiting state, some other thread must call notify() or notifyAll() on the same object.
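A minimal producer/consumer sketch of the wait()/notifyAll() mechanism described above (class and field names are invented; in a real program the waiting condition would usually guard a shared queue rather than a flag):

public class WaitNotifyDemo {
    private final Object lock = new Object();
    private boolean ready = false;

    void produce() {
        synchronized (lock) {
            ready = true;
            lock.notifyAll();              // wake up any thread waiting on this lock
        }
    }

    void consume() throws InterruptedException {
        synchronized (lock) {
            while (!ready) {               // re-check the condition after every wake-up
                lock.wait();               // releases the lock and waits to be notified
            }
            System.out.println("data is ready");
        }
    }

    public static void main(String[] args) throws InterruptedException {
        final WaitNotifyDemo demo = new WaitNotifyDemo();
        Thread consumer = new Thread(new Runnable() {
            public void run() {
                try {
                    demo.consume();
                } catch (InterruptedException ignored) {
                }
            }
        });
        consumer.start();
        Thread.sleep(100);                 // give the consumer a moment to start waiting
        demo.produce();
        consumer.join();
    }
}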
What is the class and interface in java to create thread and which is the most advantageous method?
- Thread class and Runnable interface can be used to create threads and using Runnable interface is the most advantageous method to create threads because we need not extend thread class here.
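Both approaches side by side, as a small sketch (the printed messages are placeholders):

public class ThreadCreationDemo {
    public static void main(String[] args) {
        // Preferred: implement Runnable, so the class is still free to extend something else
        Runnable task = new Runnable() {
            public void run() { System.out.println("running via Runnable"); }
        };
        new Thread(task).start();

        // Alternative: extend Thread directly and override run()
        Thread t = new Thread() {
            public void run() { System.out.println("running via Thread subclass"); }
        };
        t.start();
    }
}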
What are the states associated in the thread?
- Thread contains ready, running, waiting and dead states.
What is synchronization?
- Synchronization is the mechanism that ensures that only one thread accesses a shared resource at a time.
When you will synchronize a piece of your code?
- When you expect your code will be accessed by different threads and these threads may change a particular data causing data corruption.
What is deadlock?
- When two threads are each waiting for the other and neither can proceed, the program is said to be deadlocked.
What is daemon thread and which method is used to create the daemon thread?
- A daemon thread is a low-priority thread which runs intermittently in the background, for example doing garbage collection for the Java runtime system. The setDaemon() method is used to mark a thread as a daemon thread.
Are there any global variables in Java, which can be accessed by other parts of your program?- No. Java has no global variables, because they would break the concept of encapsulation; the closest equivalents are public static fields defined inside a class.
What is an applet?
- Applet is a dynamic and interactive program that runs inside a web page displayed by a java capable browser.
What is the difference between applications and applets?
- a) An application must be installed and run on the local machine, whereas an applet needs no explicit installation on the local machine. b) An application must be run explicitly within a Java-compatible virtual machine, whereas an applet loads and runs itself automatically in a Java-enabled browser. c) An application starts execution with its main method, whereas an applet starts execution with its init method. d) An application can run with or without a graphical user interface, whereas an applet must run within a graphical user interface.
How does applet recognize the height and width?
- Using the getParameter() method, for example getParameter("height") and getParameter("width"), to read the values supplied in the applet tag.
When do you use codebase in applet?
- When the applet class file is not in the same directory, codebase is used.
What is the lifecycle of an applet?
- init() method - Can be called when an applet is first loaded start() method - Can be called each time an applet is started. paint() method - Can be called when the applet is minimized or maximized. stop() method - Can be used when the browser moves off the applet’s page. destroy() method - Can be called when the browser is finished with the applet.
How do you set security in applets?- using the System.setSecurityManager() method
What is an event and what are the models available for event handling?- An event is an object that describes a change of state in a source. In other words, an event occurs when an action is generated, like pressing a button, clicking the mouse, or selecting a list item. There are two models for handling events: a) the event-inheritance model and b) the event-delegation model.
What are the advantages of the event-delegation model over the event-inheritance model?- The event-delegation model has two advantages over the event-inheritance model. They are: a) It enables event handling by objects other than the ones that generate the events. This allows a clean separation between a component's design and its use. b) It performs much better in applications where many events are generated. This performance improvement is due to the fact that the event-delegation model does not have to repeatedly process unhandled events, as is the case with the event-inheritance model.
What is source and listener?- source : A source is an object that generates an event. This occurs when the internal state of that object changes in some way. listener : A listener is an object that is notified when an event occurs. It has two major requirements. First, it must have been registered with one or more sources to receive notifications about specific types of events. Second, it must implement methods to receive and process these notifications.
What is adapter class?- An adapter class provides an empty implementation of all methods in an event listener interface. Adapter classes are useful when you want to receive and process only some of the events that are handled by a particular event listener interface. You can define a new class to act as a listener by extending one of the adapter classes and implementing only those events in which you are interested. For example, the MouseMotionAdapter class has two methods, mouseDragged() and mouseMoved(). The signatures of these empty methods are exactly as defined in the MouseMotionListener interface. If you are interested only in mouse drag events, you could simply extend MouseMotionAdapter and implement mouseDragged().
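A minimal sketch along those lines (the frame title and printed text are placeholders): only mouseDragged() is overridden, while the empty mouseMoved() is inherited from the adapter.

import java.awt.Frame;
import java.awt.event.MouseEvent;
import java.awt.event.MouseMotionAdapter;

public class AdapterDemo {
    public static void main(String[] args) {
        Frame frame = new Frame("Adapter demo");

        // Extend the adapter and override only the event we care about;
        // the empty mouseMoved() implementation is inherited.
        frame.addMouseMotionListener(new MouseMotionAdapter() {
            @Override
            public void mouseDragged(MouseEvent e) {
                System.out.println("dragged to " + e.getX() + "," + e.getY());
            }
        });

        frame.setSize(300, 200);
        frame.setVisible(true);
    }
}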
What is meant by controls and what are different types of controls in AWT?- Controls are components that allow a user to interact with your application and the AWT supports the following types of controls: Labels, Push Buttons, Check Boxes, Choice Lists, Lists, Scrollbars, Text Components. These controls are subclasses of Component.
What is the difference between choice and list?- A Choice is displayed in a compact form that requires you to pull it down to see the list of available choices and only one item may be selected from a choice. A List may be displayed in such a way that several list items are visible and it supports the selection of one or more list items.
What is the difference between scrollbar and scrollpane?- A Scrollbar is a Component, but not a Container, whereas a ScrollPane is a Container that handles its own events and performs its own scrolling.
What is a layout manager and what are different types of layout managers available in java AWT?- A layout manager is an object that is used to organize components in a container. The different layouts are available are FlowLayout, BorderLayout, CardLayout, GridLayout and GridBagLayout.
How are the elements of different layouts organized?- FlowLayout: the elements of a FlowLayout are organized in a top-to-bottom, left-to-right fashion. BorderLayout: the elements of a BorderLayout are organized at the borders (North, South, East and West) and the center of a container. CardLayout: the elements of a CardLayout are stacked, one on top of the other, like a deck of cards. GridLayout: the elements of a GridLayout are of equal size and are laid out in the squares of a grid. GridBagLayout: the elements of a GridBagLayout are organized according to a grid; however, the elements may be of different sizes and may occupy more than one row or column of the grid, and the rows and columns may have different sizes.
Which containers use a Border layout as their default layout?- Window, Frame and Dialog classes use a BorderLayout as their layout.
Which containers use a Flow layout as their default layout?- Panel and Applet classes use the FlowLayout as their default layout.
What are wrapper classes?- Wrapper classes are classes that allow primitive types to be accessed as objects.
What are Vector, Hashtable, LinkedList and Enumeration?- Vector: the Vector class provides the capability to implement a growable array of objects. Hashtable: the Hashtable class implements a hash table data structure; a Hashtable indexes and stores objects in a dictionary using hash codes as the objects' keys. Hash codes are integer values that identify objects. LinkedList: removing or inserting elements in the middle of a sequence can be done efficiently using a LinkedList; a LinkedList stores each object in a separate link, whereas an array stores object references in consecutive locations. Enumeration: an object that implements the Enumeration interface generates a series of elements, one at a time. It has two methods, namely hasMoreElements() and nextElement(). hasMoreElements() tests whether this enumeration has more elements, and nextElement() returns successive elements of the series.
What is the difference between set and list?- Set stores elements in an unordered way but does not contain duplicate elements, whereas list stores elements in an ordered way but may contain duplicate elements.
What is a stream and what are the types of Streams and classes of the Streams?- A Stream is an abstraction that either produces or consumes information. There are two types of Streams and they are: Byte Streams: Provide a convenient means for handling input and output of bytes. Character Streams: Provide a convenient means for handling input & output of characters. Byte Streams classes: Are defined by using two abstract classes, namely InputStream and OutputStream. Character Streams classes: Are defined by using two abstract classes, namely Reader and Writer.
What is the difference between Reader/Writer and InputStream/Output Stream?- The Reader/Writer class is character-oriented and the InputStream/OutputStream class is byte-oriented.
What is an I/O filter?- An I/O filter is an object that reads from one stream and writes to another, usually altering the data in some way as it is passed from one stream to another.
What is serialization and deserialization?- Serialization is the process of writing the state of an object to a byte stream. Deserialization is the process of restoring these objects.
What is JDBC?
- JDBC is a set of Java API for executing SQL statements. This API consists of a set of classes and interfaces to enable programs to write pure Java Database applications.
What are drivers available?
- a) JDBC-ODBC Bridge driver b) Native API Partly-Java driver c) JDBC-Net Pure Java driver d) Native-Protocol Pure Java driver
What is the difference between JDBC and ODBC?
- a) OBDC is for Microsoft and JDBC is for Java applications. b) ODBC can’t be directly used with Java because it uses a C interface. c) ODBC makes use of pointers which have been removed totally from Java. d) ODBC mixes simple and advanced features together and has complex options for simple queries. But JDBC is designed to keep things simple while allowing advanced capabilities when required. e) ODBC requires manual installation of the ODBC driver manager and driver on all client machines. JDBC drivers are written in Java and JDBC code is automatically installable, secure, and portable on all platforms. f) JDBC API is a natural Java interface and is built on ODBC. JDBC retains some of the basic features of ODBC.
What are the types of JDBC Driver Models and explain them?
- There are two types of JDBC Driver Models and they are: a) Two tier model and b) Three tier model Two tier model: In this model, Java applications interact directly with the database. A JDBC driver is required to communicate with the particular database management system that is being accessed. SQL statements are sent to the database and the results are given to user. This model is referred to as client/server configuration where user is the client and the machine that has the database is called as the server. Three tier model: A middle tier is introduced in this model. The functions of this model are: a) Collection of SQL statements from the client and handing it over to the database, b) Receiving results from database to the client and c) Maintaining control over accessing and updating of the above.
What are the steps involved for making a connection with a database or how do you connect to a database? a) Loading the driver: to load the driver, the Class.forName() method is used: Class.forName("sun.jdbc.odbc.JdbcOdbcDriver"); When the driver is loaded, it registers itself with the java.sql.DriverManager class as an available database driver. b) Making a connection with the database: to open a connection to a given database, the DriverManager.getConnection() method is used: Connection con = DriverManager.getConnection("jdbc:odbc:somedb", "user", "password"); c) Executing SQL statements: the createStatement() method of Connection is used to obtain a new Statement object: Statement stmt = con.createStatement(); A query that returns data can be executed using the executeQuery() method of Statement. This method executes the statement and returns a java.sql.ResultSet that encapsulates the retrieved data: ResultSet rs = stmt.executeQuery("SELECT * FROM sometable"); d) Processing the results: ResultSet returns one row at a time. The next() method of the ResultSet object can be called to move to the next row. The getString() and getObject() methods are used for retrieving column values: while (rs.next()) { String event = rs.getString("event"); Object count = (Integer) rs.getObject("count"); }
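The same steps put together as a runnable sketch (the driver class, URL, credentials, and the "events" table are placeholders; substitute your own JDBC driver and database):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcDemo {
    public static void main(String[] args) throws Exception {
        // 1. Load the driver (placeholder class name; modern drivers register themselves automatically)
        Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");

        // 2. Open a connection (placeholder URL, user and password)
        Connection con = DriverManager.getConnection("jdbc:odbc:somedb", "user", "password");
        try {
            // 3. Execute a query
            Statement stmt = con.createStatement();
            ResultSet rs = stmt.executeQuery("SELECT event, count FROM events");

            // 4. Process the results one row at a time
            while (rs.next()) {
                String event = rs.getString("event");
                int count = rs.getInt("count");
                System.out.println(event + ": " + count);
            }
            rs.close();
            stmt.close();
        } finally {
            con.close();   // always release the connection
        }
    }
}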
What type of driver did you use in project?- JDBC-ODBC Bridge driver (is a driver that uses native(C language) libraries and makes calls to an existing ODBC driver to access a database engine).
What are the types of statements in JDBC?- Statement: created with the createStatement() method, used for executing a single SQL statement. PreparedStatement: created with the prepareStatement() method, used for executing the same SQL statement over and over (with different parameters). CallableStatement: created with the prepareCall() method, used for executing stored procedures.
What is stored procedure?- Stored procedure is a group of SQL statements that forms a logical unit and performs a particular task. Stored Procedures are used to encapsulate a set of operations or queries to execute on database. Stored procedures can be compiled and executed with different parameters and results and may have any combination of input/output parameters.
How do you create and call stored procedures?- To create a stored procedure: CREATE PROCEDURE procedure_name (specify IN, OUT and IN OUT parameters) BEGIN any SQL statements; END; To call a stored procedure from JDBC: CallableStatement csmt = con.prepareCall("{call procedure_name(?,?)}"); csmt.registerOutParameter(parameterIndex, dataType); csmt.setInt(parameterIndex, value); csmt.execute();
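Cleaned up into compilable form, calling a stored procedure with one IN and one OUT parameter might look like this sketch (the procedure name GET_EMP_NAME and its parameters are assumptions, and con is an open Connection):

    // Assumes the database defines a procedure
    // GET_EMP_NAME(IN empId INT, OUT empName VARCHAR)
    CallableStatement cstmt = con.prepareCall("{call GET_EMP_NAME(?, ?)}");
    cstmt.setInt(1, 101);                                   // IN parameter
    cstmt.registerOutParameter(2, java.sql.Types.VARCHAR);  // OUT parameter
    cstmt.execute();
    String name = cstmt.getString(2);                       // read the OUT value
    cstmt.close();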
What is servlet?- Servlets are modules that extend request/response-oriented servers, such as java-enabled web servers. For example, a servlet might be responsible for taking data in an HTML order-entry form and applying the business logic used to update a company’s order database.
What are the classes and interfaces for servlets?- The servlet classes and interfaces live in two packages: javax.servlet and javax.servlet.http.
What is the difference between an applet and a servlet?- a) Servlets are to servers what applets are to browsers. b) Applets must have graphical user interfaces whereas servlets have no graphical user interfaces.
What is the difference between doPost and doGet methods?- a) The doGet() method is used to get information, while the doPost() method is used for posting information. b) doGet() requests can't send a large amount of information; the query string is limited (roughly 240-255 characters in older clients and servers). doPost() requests, however, can pass data of essentially unlimited length. c) A doGet() request appends its data to the request URL as a query string, so the exchange is visible to the client, whereas a doPost() request passes its data directly over the socket connection as part of the HTTP request body, so the exchange is invisible to the client.
What is the life cycle of a servlet?- Each servlet has the same life cycle: a) The server loads and initializes the servlet by calling its init() method. b) The servlet handles zero or more client requests through its service() method. c) The server removes the servlet by calling its destroy() method.
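A minimal servlet touching all three life-cycle phases might look like the sketch below (the class name and the output text are illustrative only):

    import java.io.*;
    import javax.servlet.*;
    import javax.servlet.http.*;

    public class LifeCycleServlet extends HttpServlet {
        public void init() throws ServletException {
            // called once by the container when the servlet is loaded
        }

        public void doGet(HttpServletRequest req, HttpServletResponse res)
                throws ServletException, IOException {
            // called (via service()) for each GET request
            res.setContentType("text/html");
            PrintWriter out = res.getWriter();
            out.println("<html><body>Hello from the servlet</body></html>");
        }

        public void destroy() {
            // called once by the container before the servlet is removed
        }
    }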
Who calls the init() method of a servlet?- The web server (servlet container) calls it when it loads the servlet.
What are the different servers available for developing and deploying Servlets?- a) Java Web Server b) JRun c) Apache Server d) Netscape Information Server e) WebLogic
How many ways can we track client and what are they?- The servlet API provides two ways to track client state and they are: a) Using Session tracking and b) Using Cookies.
What is session tracking and how do you track a user session in servlets?- Session tracking is a mechanism that servlets use to maintain state about a series of requests from the same user across some period of time. The methods used for session tracking are: a) User authentication - occurs when a web server restricts access to some of its resources to only those clients that log in using a recognized username and password. b) Hidden form fields - fields added to an HTML form that are not displayed in the client's browser. When the form containing the fields is submitted, the fields are sent back to the server. c) URL rewriting - every URL that the user clicks on is dynamically modified or rewritten to include extra information. The extra information can be in the form of extra path information, added parameters, or some custom, server-specific URL change. d) Cookies - a bit of information sent by a web server to a browser, which can later be read back from that browser. e) HttpSession - the servlet API's built-in session object; the server places a limit on the number of sessions that can exist in memory, and this limit is set in the session.maxresidents property.
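A small sketch of HttpSession-based tracking inside a servlet's doGet() method (the attribute name "visitCount" is invented for the example):

    public void doGet(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        // getSession(true) returns the existing session or creates a new one
        HttpSession session = req.getSession(true);

        Integer count = (Integer) session.getAttribute("visitCount");
        if (count == null) {
            count = new Integer(0);
        }
        count = new Integer(count.intValue() + 1);
        session.setAttribute("visitCount", count);

        res.setContentType("text/plain");
        res.getWriter().println("Visits in this session: " + count);
    }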
What is Server-Side Includes (SSI)?- Server-Side Includes allows embedding servlets within HTML pages using a special servlet tag. In many servers that support servlets, a page can be processed by the server to include output from servlets at certain points inside the HTML page. This is accomplished using a special internal SSINCLUDE servlet, which processes the servlet tags. The SSINCLUDE servlet is invoked whenever a file with an .shtml extension is requested, so HTML files that include server-side includes must be stored with an .shtml extension.
What are cookies and how will you use them?- Cookies are a mechanism that a servlet uses to have the client hold a small amount of state information associated with the user. a) Create a cookie with the Cookie constructor: public Cookie(String name, String value) b) A servlet can send a cookie to the client by passing a Cookie object to the addCookie() method of HttpServletResponse: public void HttpServletResponse.addCookie(Cookie cookie) c) A servlet retrieves cookies by calling the getCookies() method of HttpServletRequest: public Cookie[] HttpServletRequest.getCookies()
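For example, a servlet might set a cookie on one response and read it back on a later request roughly as follows (the cookie name and value are illustrative; request and response are the usual servlet parameters):

    // Sending a cookie to the client
    Cookie userCookie = new Cookie("username", "alice");
    userCookie.setMaxAge(60 * 60 * 24);   // keep it for one day
    response.addCookie(userCookie);

    // Reading cookies back on a later request
    Cookie[] cookies = request.getCookies();
    if (cookies != null) {
        for (int i = 0; i < cookies.length; i++) {
            if ("username".equals(cookies[i].getName())) {
                String user = cookies[i].getValue();
            }
        }
    }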
Is it possible to communicate from an applet to servlet and how many ways and how?- Yes, there are three ways to communicate from an applet to servlet and they are: a) HTTP Communication(Text-based and object-based) b) Socket Communication c) RMI Communication
What is connection pooling?- With servlets, opening a database connection is a major bottleneck because we create and tear down a new connection for every page request, and the time taken to create a connection can be significant. Creating a connection pool is an ideal approach for a complicated servlet: with a connection pool we duplicate only the resources we need to duplicate rather than the entire servlet, and the pool can also intelligently manage its size and make sure each connection remains valid. A number of connection pool packages are currently available; some, like DbConnectionBroker, are freely available from Java Exchange. A pool works by creating an object that dispenses connections and connection IDs on request. The ConnectionPool class maintains a Hashtable, using Connection objects as keys and Boolean values as stored values; the Boolean value indicates whether a connection is in use or not. A program calls the getConnection() method of the ConnectionPool to obtain a Connection object it can use, and calls returnConnection() to give the connection back to the pool.
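A stripped-down sketch of the Hashtable-based pool described above (the constructor arguments are placeholders; a production pool would also handle waiting, validation and time-outs):

    import java.sql.*;
    import java.util.*;

    public class ConnectionPool {
        // Connection -> Boolean.TRUE when in use, Boolean.FALSE when free
        private Hashtable connections = new Hashtable();

        public ConnectionPool(String url, String user, String pwd, int size)
                throws SQLException {
            for (int i = 0; i < size; i++) {
                connections.put(DriverManager.getConnection(url, user, pwd),
                                Boolean.FALSE);
            }
        }

        public synchronized Connection getConnection() {
            for (Enumeration e = connections.keys(); e.hasMoreElements(); ) {
                Connection con = (Connection) e.nextElement();
                if (Boolean.FALSE.equals(connections.get(con))) {
                    connections.put(con, Boolean.TRUE);   // mark as in use
                    return con;
                }
            }
            return null; // pool exhausted; a real pool would wait or grow
        }

        public synchronized void returnConnection(Connection con) {
            connections.put(con, Boolean.FALSE);          // mark as free
        }
    }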
Why should we go for interservlet communication?- Servlets running together in the same server can communicate with each other in several ways. The three major reasons to use interservlet communication are: a) Direct servlet manipulation - allows a servlet to gain access to the other currently loaded servlets and perform certain tasks on them (through the ServletContext object). b) Servlet reuse - allows a servlet to reuse the public methods of another servlet. c) Servlet collaboration - servlets communicate with each other by sharing specific information (through method invocation).
Is it possible to call a servlet with parameters in the URL?- Yes. You can call a servlet with parameters appended to the URL as a query string, for example ?param1=xxx&param2=yyy.
What is Servlet chaining?- Servlet chaining is a technique in which two or more servlets can cooperate in servicing a single request. In servlet chaining, one servlet’s output is piped to the next servlet’s input. This process continues until the last servlet is reached. Its output is then sent back to the client.
How do servlets handle multiple simultaneous requests?- The server has multiple threads that are available to handle requests. When a request comes in, it is assigned to a thread, which calls a service method (for example: doGet(), doPost() and service()) of the servlet. For this reason, a single servlet object can have its service methods called by many threads at once.
What is the difference between TCP/IP and UDP?- TCP/IP provides two-way, connection-oriented communication between the client and the server; it is reliable, and there is confirmation that the message reached the destination. It is like a phone call. UDP is connectionless communication between the client and the server; it is not reliable, and there is no confirmation that the message reached the destination. It is like postal mail.
What is an Inet address?- Every computer connected to a network has an IP address, a number that uniquely identifies each computer on the network. An IPv4 address is a 32-bit number; in Java it is represented by the java.net.InetAddress class.
What is the Domain Naming Service (DNS)?- It is very difficult to remember a set of numbers (an IP address) in order to connect to the Internet. The Domain Naming Service (DNS) is used to overcome this problem: it maps a particular IP address to a string of characters. For example, in www.mascom.com, com is the domain name reserved for US commercial sites, mascom is the name of the company, and www is the name of the specific computer, which is mascom's server.
What is a URL?- URL stands for Uniform Resource Locator and it points to resource files on the Internet. A URL has four components, for example http://www.address.com:80/index.html, where http is the protocol name, www.address.com is the IP address or host name, 80 is the port number and index.html is the file path.
What is RMI and what are the steps involved in developing an RMI object?- Remote Method Invocation (RMI) allows a Java object executing on one machine to invoke methods of a Java object executing on another machine. The steps involved in developing an RMI object are: a) define the remote interface, b) implement the interface, c) compile the interface and its implementation with the Java compiler, d) compile the server implementation with the RMI compiler (rmic), e) run the RMI registry, and f) run the application.
What is RMI architecture?- RMI architecture consists of four layers and each layer performs specific functions: a) Application layer - contains the actual object definition. b) Proxy layer - consists of stub and skeleton. c) Remote Reference layer - gets the stream of bytes from the transport layer and sends it to the proxy layer. d) Transportation layer - responsible for handling the actual machine-to-machine communication.
What is UnicastRemoteObject?- All remote objects must extend UnicastRemoteObject, which provides the functionality needed to make objects available to remote machines.
Explain the methods rebind() and lookup() in the Naming class.- rebind() of the Naming class (found in java.rmi) is used to update the RMI registry on the server machine: Naming.rebind("AddServer", AddServerImpl); lookup() of the Naming class accepts one argument, the rmi URL, and returns a reference to an object of type AddServerImpl.
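Putting rebind() and lookup() in context, a minimal RMI sketch might look like this (the add() method and the localhost registry are assumptions; on older JDKs the stub class must also be generated with rmic, and the rmiregistry must be running):

    import java.rmi.*;
    import java.rmi.server.*;

    // 1. Define the remote interface
    interface AddServer extends Remote {
        int add(int a, int b) throws RemoteException;
    }

    // 2. Implement it; extending UnicastRemoteObject makes it remotely callable
    class AddServerImpl extends UnicastRemoteObject implements AddServer {
        AddServerImpl() throws RemoteException { }
        public int add(int a, int b) { return a + b; }
    }

    // 3. Server: bind the object in the RMI registry under a name
    class Server {
        public static void main(String[] args) throws Exception {
            Naming.rebind("AddServer", new AddServerImpl());
        }
    }

    // 4. Client: look the object up by its rmi URL and invoke it
    class Client {
        public static void main(String[] args) throws Exception {
            AddServer server = (AddServer) Naming.lookup("rmi://localhost/AddServer");
            System.out.println(server.add(2, 3));
        }
    }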
What is a Java Bean?- A Java Bean is a software component that has been designed to be reusable in a variety of different environments.
What is a Jar file?- A Jar file allows you to efficiently deploy a set of classes and their associated resources. The elements in a Jar file are compressed, which makes downloading a Jar file much faster than separately downloading several uncompressed files. The package java.util.zip contains classes that read and write Jar files.
What is BDK?- BDK, the Bean Development Kit, is a tool that enables you to create, configure and connect a set of Beans, and it can be used to test Beans without writing any code.
What is JSP?- JSP is a dynamic scripting capability for web pages that allows Java, as well as a few special tags, to be embedded into a web file (HTML/XML, etc.). The file name traditionally ends with .jsp to indicate to the web server that the file is a JSP file. JSP is a server-side technology - you can't do any client-side validation with it. The advantages are: a) JSP assists in making the HTML more functional; servlets, on the other hand, allow outputting of HTML but it is a tedious process. b) It is easy to make a change and then let the JSP capability of the web server you are using deal with compiling it into a servlet and running it.
What are JSP scripting elements?- JSP scripting elements let you insert Java code into the servlet that will be generated from the current JSP page. There are three forms: a) expressions of the form <%= expression %> that are evaluated and inserted into the output, b) scriptlets of the form <% code %> that are inserted into the servlet's service method, and c) declarations of the form <%! code %> that are inserted into the body of the servlet class, outside of any existing methods.
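In a JSP page the three forms might appear together as in this small sketch (the variable and parameter names are invented; note that an instance field declared with <%! %> is shared across requests):

    <%! private int hitCount = 0; %>                 <%-- declaration: becomes a field of the servlet class --%>

    <% hitCount++;                                    // scriptlet: goes into the service method
       String visitor = request.getParameter("name"); %>

    <p>Hello <%= visitor %>, you are visitor number <%= hitCount %>.</p>   <%-- expressions --%>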
What are JSP directives?- A JSP directive affects the overall structure of the servlet class. It usually has the following form: <%@ directive attribute="value" %> However, you can also combine multiple attribute settings for a single directive, as follows: <%@ directive attribute1="value1" attribute2="value2" ... attributeN="valueN" %> There are two main types of directive: page, which lets you do things like import classes, customize the servlet superclass, and the like; and include, which lets you insert a file into the servlet class at the time the JSP file is translated into a servlet.
What are Predefined variables or implicit objects?- To simplify code in JSP expressions and scriptlets, we can use eight automatically defined variables, sometimes called implicit objects. They are request, response, out, session, application, config, pageContext, and page.
What are JSP actions?- JSP actions use constructs in XML syntax to control the behavior of the servlet engine. You can dynamically insert a file, reuse JavaBeans components, forward the user to another page, or generate HTML for the Java plugin. Available actions include: jsp:include - include a file at the time the page is requested. jsp:useBean - find or instantiate a JavaBean. jsp:setProperty - set the property of a JavaBean. jsp:getProperty - insert the property of a JavaBean into the output. jsp:forward - forward the requester to a new page. jsp:plugin - generate browser-specific code that makes an OBJECT or EMBED tag for the Java plugin.
How do you pass data (including JavaBeans) to a JSP from a servlet?- (1) Request lifetime: use this technique to pass beans for the duration of a single request; a request dispatcher (using either include or forward) is then called. The bean disappears after the request has been processed. Servlet: request.setAttribute("theBean", myBean); RequestDispatcher rd = getServletContext().getRequestDispatcher("thepage.jsp"); rd.forward(request, response); JSP page: retrieve the bean with a <jsp:useBean> tag whose scope is "request" (see the sketch below). (2) Session lifetime: use this technique to pass beans that are relevant to a particular session (such as an individual user login) over a number of requests. The bean disappears when the session is invalidated, when it times out, or when you remove it. Servlet: HttpSession session = request.getSession(true); session.putValue("theBean", myBean); /* You can do a request dispatch here, or just let the bean be visible on the next request */ JSP page: use <jsp:useBean> with scope="session". (3) Application lifetime: use this technique to pass beans that are relevant to all servlets and JSP pages in a particular application, for all users. For example, a JDBC connection pool object can be made available to the various servlets and JSP pages in an application this way. The bean disappears when the servlet engine is shut down or when you remove it. Servlet: getServletContext().setAttribute("theBean", myBean); JSP page: use <jsp:useBean> with scope="application".
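Filling in the JSP side that the original answer lost, the request-lifetime case might look like the following sketch (the bean class com.example.MyBean, its property, and the page name are placeholders):

    // Servlet side (request lifetime); MyBean is a placeholder bean class
    com.example.MyBean myBean = new com.example.MyBean();
    request.setAttribute("theBean", myBean);
    RequestDispatcher rd =
            getServletContext().getRequestDispatcher("/thepage.jsp");
    rd.forward(request, response);

    <%-- JSP side: bind the request-scoped bean and read a property --%>
    <jsp:useBean id="theBean" scope="request" class="com.example.MyBean" />
    <jsp:getProperty name="theBean" property="someProperty" />

For session or application lifetime, only the scope attribute of jsp:useBean changes ("session" or "application").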
How can I set a cookie in JSP?- response.setHeader("Set-Cookie", "cookie string"); To give the response object to a bean, write a method setResponse(HttpServletResponse response) in the bean, and in the JSP file: <% bean.setResponse(response); %>
How can I delete a cookie with JSP?- Say that I have a cookie called "foo" that I set a while ago and I want it to go away. I simply: <% Cookie killCookie = new Cookie("foo", null); killCookie.setPath("/"); killCookie.setMaxAge(0); response.addCookie(killCookie); %>
How are servlets and JSP pages related?- JSP pages are focused around HTML (or XML) with Java code and JSP tags inside them. When a web server that has JSP support is asked for a JSP page, it checks whether it has already compiled the page into a servlet. Thus, JSP pages become servlets: they are transformed into pure Java, compiled, loaded into the server and executed.

Saturday 3 April 2010

what types of jobs are available in the computer industry

What types of jobs are available in the computer industry?
Question:
What types of jobs are available in the computer industry?

Answer:
Below is a short listing of different types of computer-related jobs in the industry. This list was created for users who enjoy computers but are uncertain about what field to enter. In the list below we have described each of the jobs, the type of requirements, and recommendations for what to do if you're interested in the job.

If you're looking for the average pay or the highest-paying jobs in the computer industry, this document does not contain that information because salaries vary widely depending on the company and its location. However, it's safe to assume that the greater the difficulty and experience required for a job, the higher the pay. If you're looking for a pay range, refer to your local listings (newspaper) and/or job listings for pay grades.

If you're looking for your first job in the computer industry or just want to get your foot in the door, we suggest looking at Data Entry, Sales, Quality Assurance (QA) / Tester, or Technical Support (Technician / Help Desk) jobs. The qualifications and requirements for these jobs vary, so it's best to refer to your local listings (newspaper) and/or job listings for available positions and their requirements.

Job quick links

3D Animation / Graphic design
Customer service
Data Entry
Database
Engineer
Hardware
Networking
Programmer / Software developer
Quality Assurance (QA) / System analyst / Tester
Sales
Technical Support (Technician / Help Desk)
Technical Writing
Security expert
WebMaster / Web Designer
3D Animation / Graphic design

Description: A position where you design and create either a graphic or 3D animations for software programs, games, movies, web pages, etc. Position may also require that you work on existing graphics, animations, movies, etc. done by other people.

Requirements: An individual applying for this job needs to be talented at designing and creating visuals; for most people this is not something you can simply train for. In addition to being talented in design and art, you must have a good understanding of the software programs being used to create the visual designs or 3D animations.

Recommendations: If you wish to get into graphic design / arts, learn major graphics programs such as Adobe Photoshop. In addition to this program, there are numerous other programs used to create your own pictures or edit photos; see document CH000760 for a listing of these programs. See our animation dictionary definition for additional information about this term as well as a listing of some of the more popular animation programs.

Difficulty: (MEDIUM - HIGH) Many of the programs used to create a graphic, edit a photo, or create a 3D render are complex programs and often require a lot of learning and experience and, in some cases, training or schooling.

Customer service

Description: Helping customers with general questions relating to the company, ordering, status on orders, account information or status, etc.

Requirements: Good communication skills and a general understanding of the company and its products.

Recommendations: A great starting position for anyone who is looking to get their foot in the door at a company and/or who is not yet that familiar with computers.

Difficulty: (LOW) Customer service requires that the employee be familiar with computers and be able to navigate the company's system. However, it seldom requires the employee to be highly skilled with computers.

Data Entry

Description: A job that commonly requires the employee to take information from a hard copy or other source and enter it into an electronic format. The position may also involve taking electronic data and entering it into a database for easy sorting and locating.

Requirements: Generally requires someone capable of typing 40-50 or more WPM, familiarity with computers, and usually familiarity with a word processor.

Recommendations: Practice your typing and take typing tests to determine your overall speed. Additional information about improving your typing can be found on document CH000752. See document CH000751 for additional information about how to test your typing skills.

Difficulty: (LOW) Most data entry jobs are beginner level jobs and don't require much or any prior experience or formal education.

Database

Description: A job that requires creating, testing, and/or maintaining one or more databases.

Requirements: Commonly requires that the user be familiar with and/or have extensive knowledge of the database used at the place of employment. For example: Access, FoxPro, MySQL, SQL, Sybase, etc.

Recommendations: Become familiar with the database being used at the business. If the job is for developing or continuing the development of a database, you will need to have a great understanding of the database as well as how to program it. Often this knowledge requires past experience or formal education.

Difficulty: (MEDIUM - HIGH) Developing or maintaining a database can be a difficult and sometimes very complex job. As mentioned above you will need to have past experience or formal education with maintaining or developing a database before most companies will even consider you.

Engineer

Description: An engineer is someone who is at the top of their class and almost always someone who has or is working on a college degree or several certifications. Although used broadly in this document, the type of engineer is usually specified in the job requirement. For example, a software development engineer may be a highly skilled computer programmer.

Requirements: The requirements for this type of job change depending on the type of engineer you plan on being. However, as mentioned above, any engineer job will require an extensive understanding of the job. Usually, this understanding is obtained from a school, certifications, training, and/or years of past experience.

Recommendations: Get training and/or education in the subject of interest from a school or other location. Learn as much about the subject as possible from books, the Internet, and other sources. Often before you can qualify for many engineer positions you will need past experience; therefore, it's a good idea to get an entry-level job in the same field. For example, if you want to be an engineer in software development, get a job in programming and/or create your own software programs. If you want to become a network engineer, get a job that requires you to setup, maintain, or otherwise work with networks and setup your own home network.

Difficulty: (HIGH) This is a job / position that requires a lot of work to obtain and is not likely something you will be able to get as your first job.

Hardware

Description: A position such as hardware designer, circuit designer, embedded systems or firmware developer requires you to design and create a complete hardware package or portions of a hardware device.

Requirements: Jobs that design and/or create hardware devices often require that the person has a good understanding of electronics, circuits, firmware, and/or design. For this type of position the person will often need to have several years of prior experience and/or a degree in the field.

Recommendations: If you're interested in this type of field we suggest you get a degree in the field.

Difficulty: (HIGH) Hardware design is a difficult position to learn and understand unless you get training or a degree.

Networking

Description: Computer networking jobs involve designing, setting up, and/or maintaining a network.

Requirements: Although most users today have their own home networks, setting up, troubleshooting, and maintaining a corporate network can be a much more complicated task. Networking jobs often require a good understanding of how a network works and, in some cases, of the underlying protocols and structure that make networks work.

Recommendations: There are numerous network and network-related certifications available today, such as the CCNA, MCSE, etc. Depending on the level of certification and the job you're applying for, the certifications are often enough to qualify you for most network jobs. Some of the higher networking positions, especially on the network hardware development or programming side, may also require past experience in networking and/or a degree.

Difficulty: (MEDIUM - HIGH) The job specifications and the complexity of the network usually determine the difficulty of this job.

Programmer / Software developer

Description: A job that requires the development and/or continued development and maintenance of a software program.

Requirements: A basic to extensive understanding of a programming language. Because most positions require a person to develop sections of a program or the whole program, employers often require several years of past experience and/or a degree before even considering you.

Recommendations: Learn one or more programming languages. The type of programs or scripts you wish to create may determine which language you should learn. See our dictionary programming languages definition for a listing of popular programming languages and what type of programs they are used to create. If you need experience, creating your own software programs is a great way to learn a language and demonstrate your abilities at a job interview.

Difficulty: (HIGH) Learning a programming language can be as difficult as learning a second language and takes a lot of experience and practice to become a skilled programmer.

Quality Assurance (QA) / System analyst / Tester

Description: This job requires that the employee test out all features of a product for any problems or usability issues.

Requirements: Requires that the person have a good understanding of computer software, hardware, and the product being tested.

Recommendations: Become familiar with computers, software, hardware, and/or the products the company makes.

Difficulty: (LOW - MEDIUM) What is being tested and how much needs to be tested usually determine the difficulty of this job. However, users familiar with the product or similar products should not have much difficulty locating and reporting issues.

Sales

Description: Selling a product or service to another person or company.

Requirements: Good communication skills and a general understanding of computers and/or the product that is being sold.

Recommendations: If you're selling computers, computer hardware, or computer software, become familiar with all aspects of the product. Sites like Computer Hope are a great resource to learn about computers. If you're selling a specialized product developed by the company you will be selling for, visit their web page and become as familiar with the product as possible.

Difficulty: (LOW) Sales for computer software, hardware, electronics, or related products is a good first job and can be a good way to learn more about computers.

Technical Support (Technician / Help Desk)

Description: Helping an end-user or company employee with their computers, software program, and/or hardware device. A technical support position is a great first step for people interested in working in the computer industry.

Requirements: A basic understanding of computers, computer software, and/or hardware.

Recommendations: Become as familiar as possible with computers, computer software, and/or computer hardware, depending on what you will be supporting. Almost all technical support centers that help end users with their computers, computer software, or computer hardware products have training that all employees go through before they actually start work, but they will still often require that new hires be familiar with computers.

Help desks for corporations do not usually have any type of training; these positions require that the person being hired already have a very good understanding of computers and troubleshooting computer problems.

Difficulty: (LOW - MEDIUM) The difficulty of this job really depends on the type of training you get. However, someone who is familiar with computers or works with computers often will generally have an easy time with these positions after a few days of working at them.

Technical Writing

Description: This position often involves creating or editing technical papers or manuals.

Requirements: This position often requires that the individual have a basic understanding of the subject being written about and good writing skills.

Recommendations: Many of these positions require that the person have a degree, and employers will often test an applicant before hiring them. In addition to having good writing skills, you should also be familiar with a major word processor.

Difficulty: (LOW - MEDIUM) For someone who has good writing skills and familiarity with the subject, this job can be an easy job.

Security expert

Description: Test and find vulnerabilities in a system, hardware device, or software program.

Requirements: This position is for someone who has a strong familiarity with how software, hardware, and/or networks work and how to exploit them. Often, you will need to have a good understanding of how the overall system works as well as good programming skills.

Recommendations: Keep up to date with all security news, advisories, and other related news. The majority of security vulnerabilities are in software, and in order to understand these vulnerabilities or find new ones, you'll need to know how to program and have a good understanding of how software works and interacts with computers.

Difficulty: (MEDIUM - HIGH) The difficulty of this job really depends on what you're testing or trying to find vulnerabilities in.

WebMaster / Web Designer

Description: A job where a person creates, maintains, or completely designs a web page.

Requirements: For basic web designing positions you should have a good understanding of HTML, the Internet, and web servers. More advanced positions where you will be working with more advanced web pages and not just static web pages may also require that you be familiar with such things as CGI, CSS, Flash, FTP, Linux, Perl, PHP, RSS, SSI, Unix, and/or XHTML.

In addition to having a good understanding of the technologies and code used to create a web page, you're also often required to know the software programs they are created in.

Recommendations: One of the best learning experiences for people interested in this type of job is to create your own web page. Keep in mind that simply designing and posting a web page using Microsoft FrontPage, without understanding HTML or the code behind how it works, may not be sufficient for most jobs.

Difficulty: (MEDIUM - HIGH) The complexity of this job really depends on how difficult a project you're working on. Simply creating and posting a simple web site with no interaction is not that hard; however, creating an interactive site with forms, databases, and more interaction between the user and the server can increase the difficulty of the job significantly.

Thursday 1 April 2010

software engineering

Software engineering
Software engineering is a profession and field of study dedicated to designing, implementing, and modifying software so that it is of higher quality, more affordable, maintainable, and faster to build. The term software engineering first appeared in the 1968 NATO Software Engineering Conference, and was meant to provoke thought regarding the perceived "software crisis" at the time.[1][2] Since the field is still relatively young compared to its sister fields of engineering, there is still much debate around what software engineering actually is, and if it conforms to the classical definition of engineering. Some people argue that development of computer software is more art than science [3], and that attempting to impose engineering disciplines over a type of art is an exercise in futility because what represents good practice in the creation of software is not even defined.[4] Others, such as Steve McConnell, argue that engineering's blend of art and science to achieve practical ends provides a useful model for software development.[5] The IEEE Computer Society's Software Engineering Body of Knowledge defines "software engineering" as the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software, and the study of these approaches; that is, the application of engineering to software.[6]
Software development, a much used and more generic term, does not necessarily subsume the engineering paradigm. Although it is questionable what impact it has had on actual software development over the more than 40 years of its existence,[7][8] the field's future looks bright according to Money Magazine and Salary.com, who rated "software engineering" as the best job in the United States in 2006.[9]
History
Main article: History of software engineering
When the first modern digital computers appeared in the early 1940s,[10] the instructions to make them operate were wired into the machine. Practitioners quickly realized that this design was not flexible and came up with the "stored program architecture" or von Neumann architecture. Thus the first division between "hardware" and "software" began with abstraction being used to deal with the complexity of computing.
Programming languages started to appear in the 1950s and this was also another major step in abstraction. Major languages such as Fortran, ALGOL, and Cobol were released in the late 1950s to deal with scientific, algorithmic, and business problems respectively. E.W. Dijkstra wrote his seminal paper, "Go To Statement Considered Harmful",[11] in 1968 and David Parnas introduced the key concept of modularity and information hiding in 1972[12] to help programmers deal with the ever increasing complexity of software systems. A software system for managing the hardware called an operating system was also introduced, most notably by Unix in 1969. In 1967, the Simula language introduced the object-oriented programming paradigm.
These advances in software were met with more advances in computer hardware. In the mid 1970s, the microcomputer was introduced, making it economical for hobbyists to obtain a computer and write software for it. This in turn led to the now famous Personal Computer (PC) and Microsoft Windows. The Software Development Life Cycle or SDLC was also starting to appear as a consensus for centralized construction of software in the mid 1980s. The late 1970s and early 1980s saw the introduction of several new Simula-inspired object-oriented programming languages, including C++, Smalltalk, and Objective C.
Open-source software started to appear in the early 90s in the form of Linux and other software introducing the "bazaar" or decentralized style of constructing software.[13] Then the Internet and World Wide Web hit in the mid 90s, changing the engineering of software once again. Distributed systems gained sway as a way to design systems, and the Java programming language was introduced with its own virtual machine as another step in abstraction. Programmers collaborated and wrote the Agile Manifesto, which favored more lightweight processes to create cheaper and more timely software.
The current definition of software engineering is still being debated by practitioners today as they struggle to come up with ways to produce software that is "cheaper, bigger, quicker".
Profession
Main article: Software engineer
Legal requirements for the licensing or certification of professional software engineers vary around the world. Many states of the United States license software engineers[citation needed]. In the UK, the British Computer Society licenses software engineers and members of the society can also become Chartered Engineers (CEng), while in some areas of Canada, such as Alberta, Ontario,[14] and Quebec, software engineers can hold the Professional Engineer (P.Eng) designation and/or the Information Systems Professional (I.S.P.) designation; however, there is no legal requirement to have these qualifications.
The IEEE Computer Society and the ACM, the two main professional organizations of software engineering, publish guides to the profession of software engineering. The IEEE's Guide to the Software Engineering Body of Knowledge - 2004 Version, or SWEBOK, defines the field and describes the knowledge the IEEE expects a practicing software engineer to have. The IEEE also promulgates a "Software Engineering Code of Ethics".[15]
Employment
In 2004, the U.S. Bureau of Labor Statistics counted 760,840 software engineers holding jobs in the U.S.; in the same time period there were some 1.4 million practitioners employed in the U.S. in all other engineering disciplines combined.[16] Due to its relative newness as a field of study, formal education in software engineering is often taught as part of a computer science curriculum, and as a result most software engineers hold computer science degrees.[17]
Most software engineers work as employees or contractors. Software engineers work with businesses, government agencies (civilian or military), and non-profit organizations. Some software engineers work for themselves as freelancers. Some organizations have specialists to perform each of the tasks in the software development process. Other organizations require software engineers to do many or all of them. In large projects, people may specialize in only one role. In small projects, people may fill several or all roles at the same time. Specializations include: in industry (analysts, architects, developers, testers, technical support, managers) and in academia (educators, researchers).
There is considerable debate over the future employment prospects for software engineers and other IT professionals. For example, an online futures market called the "ITJOBS Future of IT Jobs in America"[18] attempts to answer whether there will be more IT jobs, including software engineers, in 2012 than there were in 2002.
Certification
Professional certification of software engineers is a contentious issue, with some professional organizations supporting it,[19] and others claiming that it is inappropriate given the current level of maturity in the profession.[20] Some see it as a tool to improve professional practice; "The only purpose of licensing software engineers is to protect the public".[21]
The ACM had a professional certification program in the early 1980s,[citation needed] which was discontinued due to lack of interest. The ACM examined the possibility of professional certification of software engineers in the late 1990s, but eventually decided that such certification was inappropriate for the professional industrial practice of software engineering.[20] As of 2006, the IEEE had certified over 575 software professionals.[19] In the U.K. the British Computer Society has developed a legally recognized professional certification called Chartered IT Professional (CITP), available to fully qualified Members (MBCS). In Canada the Canadian Information Processing Society has developed a legally recognized professional certification called Information Systems Professional (ISP)[22]. The Software Engineering Institute offers certification on specific topics such as security, process improvement and software architecture[23].
Most certification programs in the IT industry are oriented toward specific technologies, and are managed by the vendors of these technologies.[24] These certification programs are tailored to the institutions that would employ people who use these technologies.
In some countries a software engineering degree is an actual engineering degree (Bachelor of Science or Bachelor of Engineering). For example, in Israel a software engineer has the right to be entered in the engineering registry, and it would be a felony if a person describes himself as an engineer without it (the engineering law states that a person presenting himself as an engineer without the proper license / registration could be sentenced to up to 6 months in jail).
Impact of globalization
Many students in the developed world have avoided degrees related to software engineering because of the fear of offshore outsourcing (importing software products or services from other countries) and of being displaced by foreign visa workers.[25] Although statistics do not currently show a threat to software engineering itself, a related career, computer programming, does appear to have been affected.[26][27] Often one is expected to start out as a computer programmer before being promoted to software engineer. Thus, the career path to software engineering may be rough, especially during recessions.
Some career counselors suggest a student also focus on "people skills" and business skills rather than purely technical skills because such "soft skills" are allegedly more difficult to offshore.[28] It is the quasi-management aspects of software engineering that appear to be what has kept it from being impacted by globalization.[29]
Education
A knowledge of programming is the main pre-requisite to becoming a software engineer, but it is not sufficient. Many software engineers have degrees in Computer Science due to the lack of software engineering programs in higher education. However, this has started to change with the introduction of new software engineering degrees, especially in post-graduate education. A standard international curriculum for undergraduate software engineering degrees was defined by the CCSE.
Steve McConnell opines that because most universities teach computer science rather than software engineering, there is a shortage of true software engineers.[30] In 2004 the IEEE Computer Society produced the SWEBOK, which has become an ISO standard describing the body of knowledge covered by a software engineer[citation needed].
The European Commission within the Erasmus Mundus Programme offers a European master degree called European Master on Software Engineering for students from Europe and also outside Europe[31]. This is a joint program (double degree) involving four universities in Europe.
Sub-disciplines
Software engineering can be divided into ten subdisciplines. They are:[6]
• Software requirements: The elicitation, analysis, specification, and validation of requirements for software.
• Software design: The design of software is usually done with Computer-Aided Software Engineering (CASE) tools and uses standards for the format, such as the Unified Modeling Language (UML).
• Software development: The construction of software through the use of programming languages.
• Software testing
• Software maintenance: Software systems often have problems and need enhancements for a long time after they are first completed. This subfield deals with those problems.
• Software configuration management: Since software systems are very complex, their configuration (such as versioning and source control) has to be managed in a standardized and structured method.
• Software engineering management: The management of software systems borrows heavily from project management, but there are nuances encountered in software not seen in other management disciplines.
• Software development process: The process of building software is hotly debated among practitioners with the main paradigms being agile or waterfall.
• Software engineering tools, see Computer Aided Software Engineering
• Software quality

Operating System

Operating system
From Wikipedia, the free encyclopedia

In computing, an operating system (OS) is software (programs and data) that provides an interface between the hardware and other software. The OS is responsible for management and coordination of processes and allocation and sharing of hardware resources such as RAM and disk space, and acts as a host for computing applications running on the OS. An operating system may also provide orderly access to the hardware by competing software routines. This relieves the application programmers from having to manage these details.
Operating systems offer a number of services to application programs. Applications access these services through application programming interfaces (APIs) or system calls. By invoking these interfaces, the application can request a service from the operating system, pass parameters, and receive the results of the operation. On large systems such as Unix-like systems, the user interface is always implemented as software that runs outside the operating system. In some other systems like Windows, the Window manager can be part of the operating system itself.
While servers generally run Unix or some Unix-like operating system, embedded system markets are split amongst several operating systems,[1][2] although the Microsoft Windows line of operating systems has almost 90% of the client PC market.
History
Main article: History of operating systems
Mainframe
Through the 1950s, many major features were pioneered in the field of operating systems, including input/output interrupt, buffering, multitasking, spooling, and runtime libraries. These features were included or not included in application software at the option of application programmers, rather than in a separate operating system used by all applications. In 1959 the SHARE Operating System was released as an integrated utility for the IBM 704 and IBM 709 mainframes. In 1964, IBM produced the System/360 family of mainframe computers, available in widely differing capacities and price points, for which a single operating system OS/360 was provided, which eliminated costly, incompatible, ad-hoc programs for every individual model. This concept of a single OS spanning an entire product line was crucial for the success of System/360 and, in fact, IBM's current mainframe operating systems are distant descendants of this original system; applications written for the OS/360 can still be run on modern machines. In the mid-'70s, MVS, the descendant of OS/360, offered the first[citation needed] implementation of using RAM as a transparent cache for data.
OS/360 also pioneered a number of concepts that, in some cases, are still not seen outside of the mainframe arena. For instance, in OS/360, when a program is started, the operating system keeps track of all of the system resources that are used including storage, locks, data files, and so on. When the process is terminated for any reason, all of these resources are re-claimed by the operating system. An alternative CP-67 system started a whole line of operating systems focused on the concept of virtual machines.
Control Data Corporation developed the SCOPE operating system in the 1960s, for batch processing. In cooperation with the University of Minnesota, the KRONOS and later the NOS operating systems were developed during the 1970s, which supported simultaneous batch and timesharing use. Like many commercial timesharing systems, its interface was an extension of the Dartmouth BASIC operating systems, one of the pioneering efforts in timesharing and programming languages. In the late 1970s, Control Data and the University of Illinois developed the PLATO operating system, which used plasma panel displays and long-distance time sharing networks. Plato was remarkably innovative for its time, featuring real-time chat, and multi-user graphical games. Burroughs Corporation introduced the B5000 in 1961 with the MCP, (Master Control Program) operating system. The B5000 was a stack machine designed to exclusively support high-level languages with no machine language or assembler, and indeed the MCP was the first OS to be written exclusively in a high-level language – ESPOL, a dialect of ALGOL. MCP also introduced many other ground-breaking innovations, such as being the first commercial implementation of virtual memory. During development of the AS400, IBM made an approach to Burroughs to licence MCP to run on the AS400 hardware. This proposal was declined by Burroughs management to protect its existing hardware production. MCP is still in use today in the Unisys ClearPath/MCP line of computers.
UNIVAC, the first commercial computer manufacturer, produced a series of EXEC operating systems. Like all early main-frame systems, this was a batch-oriented system that managed magnetic drums, disks, card readers and line printers. In the 1970s, UNIVAC produced the Real-Time Basic (RTB) system to support large-scale time sharing, also patterned after the Dartmouth BASIC system.
General Electric and MIT developed General Electric Comprehensive Operating Supervisor (GECOS), which introduced the concept of ringed security privilege levels. After acquisition by Honeywell it was renamed to General Comprehensive Operating System (GCOS).
Digital Equipment Corporation developed many operating systems for its various computer lines, including TOPS-10 and TOPS-20 time sharing systems for the 36-bit PDP-10 class systems. Prior to the widespread use of UNIX, TOPS-10 was a particularly popular system in universities, and in the early ARPANET community.
In the late 1960s through the late 1970s, several hardware capabilities evolved that allowed similar or ported software to run on more than one system. Early systems had utilized microprogramming to implement features on their systems in order to permit different underlying architecture to appear to be the same as others in a series. In fact most 360's after the 360/40 (except the 360/165 and 360/168) were microprogrammed implementations. But soon other means of achieving application compatibility were proven to be more significant.
The enormous investment in software for these systems made since 1960s caused most of the original computer manufacturers to continue to develop compatible operating systems along with the hardware. The notable supported mainframe operating systems include:
• Burroughs MCP – B5000, 1961 to Unisys Clearpath/MCP, present.
• IBM OS/360 – IBM System/360, 1966 to IBM z/OS, present.
• IBM CP-67 – IBM System/360, 1967 to IBM z/VM, present.
• UNIVAC EXEC 8 – UNIVAC 1108, 1967, to OS 2200 Unisys Clearpath Dorado, present.
Microcomputers
The first microcomputers did not have the capacity or need for the elaborate operating systems that had been developed for mainframes and minis; minimalistic operating systems were developed, often loaded from ROM and known as Monitors. One notable early disk-based operating system was CP/M, which was supported on many early microcomputers and was closely imitated in MS-DOS, which became wildly popular as the operating system chosen for the IBM PC (IBM's version of it was called IBM DOS or PC DOS), its successors making Microsoft one of the world's most profitable companies. In the 1980s Apple Computer Inc. (now Apple Inc.) abandoned its popular Apple II series of microcomputers to introduce the Apple Macintosh computer with an innovative Graphical User Interface (GUI) to the Mac OS.
The introduction of the Intel 80386 CPU chip with 32-bit architecture and paging capabilities, provided personal computers with the ability to run multitasking operating systems like those of earlier minicomputers and mainframes. Microsoft responded to this progress by hiring Dave Cutler, who had developed the VMS operating system for Digital Equipment Corporation. He would lead the development of the Windows NT operating system, which continues to serve as the basis for Microsoft's operating systems line. Steve Jobs, a co-founder of Apple Inc., started NeXT Computer Inc., which developed the Unix-like NEXTSTEP operating system. NEXTSTEP would later be acquired by Apple Inc. and used, along with code from FreeBSD as the core of Mac OS X.
The GNU project was started by activist and programmer Richard Stallman with the goal of a complete free software replacement to the proprietary UNIX operating system. While the project was highly successful in duplicating the functionality of various parts of UNIX, development of the GNU Hurd kernel proved to be unproductive. In 1991 Finnish computer science student Linus Torvalds, with cooperation from volunteers over the Internet, released the first version of the Linux kernel. It was soon merged with the GNU userland and system software to form a complete operating system. Since then, the combination of the two major components has usually been referred to as simply "Linux" by the software industry, a naming convention which Stallman and the Free Software Foundation remain opposed to, preferring the name "GNU/Linux" instead. The Berkeley Software Distribution, known as BSD, is the UNIX derivative distributed by the University of California, Berkeley, starting in the 1970s. Freely distributed and ported to many minicomputers, it eventually also gained a following for use on PCs, mainly as FreeBSD, NetBSD and OpenBSD.
Features
Program execution
Main article: Process (computing)
The operating system acts as an interface between an application and the hardware. The user interacts with the hardware from "the other side". The operating system is a set of services which simplifies development of applications. Executing a program involves the creation of a process by the operating system. The kernel creates a process by assigning memory and other resources, establishing a priority for the process (in multi-tasking systems), loading program code into memory, and executing the program. The program then interacts with the user and/or other devices and performs its intended function.
Interrupts
Main article: interrupt
Interrupts are central to operating systems, as they provide an efficient way for the operating system to interact with and react to its environment. The alternative, having the operating system "watch" the various sources of input for events (polling) that require action, can be found in older systems with very small stacks (50 or 60 bytes) but is fairly unusual in modern systems with fairly large stacks. Interrupt-based programming is directly supported by most modern CPUs. Interrupts provide a computer with a way of automatically saving local register contexts, and running specific code in response to events. Even very basic computers support hardware interrupts, and allow the programmer to specify code which may be run when that event takes place.
When an interrupt is received, the computer's hardware automatically suspends whatever program is currently running, saves its status, and runs computer code previously associated with the interrupt; this is analogous to placing a bookmark in a book in response to a phone call. In modern operating systems, interrupts are handled by the operating system's kernel. Interrupts may come from either the computer's hardware or from the running program.
When a hardware device triggers an interrupt, the operating system's kernel decides how to deal with this event, generally by running some processing code. The amount of code being run depends on the priority of the interrupt (for example: a person usually responds to a smoke detector alarm before answering the phone). The processing of hardware interrupts is a task that is usually delegated to software called device driver, which may be either part of the operating system's kernel, part of another program, or both. Device drivers may then relay information to a running program by various means.
A program may also trigger an interrupt to the operating system. If a program wishes to access hardware for example, it may interrupt the operating system's kernel, which causes control to be passed back to the kernel. The kernel will then process the request. If a program wishes additional resources (or wishes to shed resources) such as memory, it will trigger an interrupt to get the kernel's attention.
Protected mode and supervisor mode
Main article: Protected mode
Main article: Supervisor mode
Modern CPUs support something called dual mode operation. CPUs with this capability use two modes: protected mode and supervisor mode, which allow certain CPU functions to be controlled and affected only by the operating system kernel. Here, protected mode does not refer specifically to the 80286 (Intel's x86 16-bit microprocessor) CPU feature, although its protected mode is very similar to it. CPUs might have other modes similar to 80286 protected mode as well, such as the virtual 8086 mode of the 80386 (Intel's x86 32-bit microprocessor or i386).
However, the term is used here more generally in operating system theory to refer to all modes which limit the capabilities of programs running in that mode, providing things like virtual memory addressing and limiting access to hardware in a manner determined by a program running in supervisor mode. Similar modes have existed in supercomputers, minicomputers, and mainframes as they are essential to fully supporting UNIX-like multi-user operating systems.
When a computer first starts up, it is automatically running in supervisor mode. The first programs to run on the computer, namely the BIOS, the bootloader, and the operating system, have unlimited access to hardware; this is required because, by definition, a protected environment can only be initialized from outside of one. However, when the operating system passes control to another program, it can place the CPU into protected mode.
In protected mode, programs may have access to a more limited set of the CPU's instructions. A user program may leave protected mode only by triggering an interrupt, causing control to be passed back to the kernel. In this way the operating system can maintain exclusive control over things like access to hardware and memory.
The term "protected mode resource" generally refers to one or more CPU registers which contain information that the running program is not allowed to alter. Attempts to alter these resources generally cause a switch to supervisor mode, where the operating system can deal with the illegal operation the program was attempting (for example, by killing the program).
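As a rough illustration (assuming Linux on x86-64 and GCC's inline assembly), the following sketch attempts a privileged instruction from user mode; the CPU traps back into the kernel, which by default terminates the program with SIGSEGV:

#include <stdio.h>

int main(void)
{
    printf("about to execute a privileged instruction...\n");
    __asm__ volatile ("hlt");   /* not allowed outside supervisor mode */
    printf("this line is never reached\n");
    return 0;
}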
Memory management
Main article: memory management
Among other things, a multiprogramming operating system kernel must be responsible for managing all system memory which is currently in use by programs. This ensures that a program does not interfere with memory already used by another program. Since programs time share, each program must have independent access to memory.
Cooperative memory management, used by many early operating systems, assumes that all programs make voluntary use of the kernel's memory manager and do not exceed their allocated memory. This system of memory management is almost never seen anymore, since programs often contain bugs which can cause them to exceed their allocated memory. If a program fails, it may cause memory used by one or more other programs to be affected or overwritten. Malicious programs or viruses may purposefully alter another program's memory, or may affect the operation of the operating system itself. With cooperative memory management, it takes only one misbehaving program to crash the system.
Memory protection enables the kernel to limit a process' access to the computer's memory. Various methods of memory protection exist, including memory segmentation and paging. All methods require some level of hardware support (such as the 80286 MMU) which doesn't exist in all computers.
In both segmentation and paging, certain protected mode registers specify to the CPU what memory address it should allow a running program to access. Attempts to access other addresses trigger an interrupt which causes the CPU to re-enter supervisor mode, placing the kernel in charge. This is called a segmentation violation, or Seg-V for short; since it is difficult to assign a meaningful result to such an operation, and because it is usually a sign of a misbehaving program, the kernel generally terminates the offending program and reports the error.
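The following sketch (assuming a POSIX system such as Linux and a typical 4 KB page size) shows this protection in action: the program asks the kernel to mark a page read-only, then violates that protection and is notified of the Seg-V via SIGSEGV:

#include <signal.h>
#include <sys/mman.h>
#include <unistd.h>

static void on_segv(int sig)
{
    (void)sig;
    static const char msg[] = "caught SIGSEGV: illegal memory access\n";
    write(STDERR_FILENO, msg, sizeof msg - 1);   /* async-signal-safe */
    _exit(1);
}

int main(void)
{
    signal(SIGSEGV, on_segv);

    /* Ask the kernel for one page of memory, then have it marked read-only. */
    char *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED)
        return 1;
    mprotect(page, 4096, PROT_READ);

    page[0] = 'x';   /* violates the page's protection: a Seg-V */
    return 0;
}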
Windows 3.1 through Windows Me had some level of memory protection, but programs could easily circumvent it. Under Windows 9x all MS-DOS applications ran in supervisor mode, giving them almost unlimited control over the computer. A general protection fault would be produced, indicating that a segmentation violation had occurred; however, the system would often crash anyway.
In most GNU/Linux systems, part of the hard disk is reserved for virtual memory when the operating system is installed. This part is known as swap space. Windows systems use a swap file instead of a partition.
Virtual memory
Main article: Virtual memory
The use of virtual memory addressing (such as paging or segmentation) means that the kernel can choose what memory each program may use at any given time, allowing the operating system to use the same memory locations for multiple tasks.
If a program tries to access memory that isn't in its current range of accessible memory, but nonetheless has been allocated to it, the kernel will be interrupted in the same way as it would if the program were to exceed its allocated memory. (See section on memory management.) Under UNIX this kind of interrupt is referred to as a page fault.
When the kernel detects a page fault it will generally adjust the virtual memory range of the program which triggered it, granting it access to the memory requested. This gives the kernel discretionary power over where a particular application's memory is stored, or even whether or not it has actually been allocated yet.
In modern operating systems, memory which is accessed less frequently can be temporarily stored on disk or other media to make that space available for use by other programs. This is called swapping, as an area of memory can be used by multiple programs, and what that memory area contains can be swapped or exchanged on demand.
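A small sketch of demand paging (POSIX/Linux assumed, 4 KB pages) can make this concrete: anonymous memory obtained from mmap() is only backed by physical pages when it is first touched, and each first touch shows up as a minor page fault in the counters reported by getrusage():

#include <stdio.h>
#include <sys/mman.h>
#include <sys/resource.h>

static long minor_faults(void)
{
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    return ru.ru_minflt;
}

int main(void)
{
    size_t len = 64 * 4096;                      /* 64 pages of 4 KB */
    char *mem = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED)
        return 1;

    long before = minor_faults();
    for (size_t i = 0; i < len; i += 4096)
        mem[i] = 1;                              /* first touch: a page fault */
    long after = minor_faults();

    printf("page faults while touching memory: %ld\n", after - before);
    return 0;
}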
Further information: Page fault
Multitasking
Main article: Computer multitasking
Main article: Process management (computing)
Multitasking refers to the running of multiple independent computer programs on the same computer; giving the appearance that it is performing the tasks at the same time. Since most computers can do at most one or two things at one time, this is generally done via time-sharing, which means that each program uses a share of the computer's time to execute.
An operating system kernel contains a piece of software called a scheduler which determines how much time each program will spend executing, and in which order execution control should be passed to programs. Control is passed to a process by the kernel, which allows the program access to the CPU and memory. Later, control is returned to the kernel through some mechanism, so that another program may be allowed to use the CPU. This passing of control between the kernel and applications is called a context switch.
An early model which governed the allocation of time to programs was called cooperative multitasking. In this model, when control is passed to a program by the kernel, it may execute for as long as it wants before explicitly returning control to the kernel. This means that a malicious or malfunctioning program may not only prevent any other programs from using the CPU, but it can hang the entire system if it enters an infinite loop.
The philosophy governing preemptive multitasking is that of ensuring that all programs are given regular time on the CPU. This implies that all programs must be limited in how much time they are allowed to spend on the CPU without being interrupted. To accomplish this, modern operating system kernels make use of a timed interrupt. A protected mode timer is set by the kernel which triggers a return to supervisor mode after the specified time has elapsed. (See above sections on Interrupts and Dual Mode Operation.)
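The following user-space sketch (POSIX assumed) is only an analogy, not kernel code, but it shows the same mechanism: a periodic timer signal interrupts whatever the program was doing, much as the kernel's timer interrupt takes the CPU back from a running process every quantum:

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>

static volatile sig_atomic_t ticks;

static void on_tick(int sig)
{
    (void)sig;
    ticks++;                        /* a real scheduler would pick the next task here */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_tick;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGALRM, &sa, NULL);

    /* Fire every 10 ms, like a scheduling quantum. */
    struct itimerval tv = { { 0, 10000 }, { 0, 10000 } };
    setitimer(ITIMER_REAL, &tv, NULL);

    while (ticks < 100)             /* "work" that keeps getting interrupted */
        ;
    printf("interrupted %d times\n", (int)ticks);
    return 0;
}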
On many single-user operating systems cooperative multitasking is perfectly adequate, as home computers generally run a small number of well-tested programs. Windows NT was the first version of Microsoft Windows which enforced preemptive multitasking, but it did not reach the home user market until Windows XP (since Windows NT was targeted at professionals).
Further information: Context switch
Further information: Preemptive multitasking
Further information: Cooperative multitasking
Kernel preemption
Modern operating systems extend the concepts of application preemption to device drivers and kernel code, so that the operating system has preemptive control over internal run-times as well.
Oracle/Sun Solaris has had most kernel thread processing preemptive since Solaris 8,[3] released in February 2000. In November 2001, concerns arose because of long latencies associated with kernel run-times in Linux kernel 2.4, sometimes on the order of 100 ms or more in systems with monolithic kernels. These latencies often produce noticeable slowness in desktop systems, and can prevent operating systems from performing time-sensitive operations such as audio recording and some communications.[4] The preemptible kernel model introduced in GNU/Linux version 2.6 in December 2003 allows all device drivers and some other parts of kernel code to take advantage of preemptive multi-tasking. In Windows Vista, released in January 2007, the introduction of the Windows Display Driver Model (WDDM) accomplishes this for display drivers.
Under versions of Windows prior to Windows Vista and versions of Linux prior to 2.6, all driver execution was cooperative, meaning that if a driver entered an infinite loop it would freeze the system.
Disk access and file systems
Main article: Virtual file system
Access to data stored on disks is a central feature of all operating systems. Computers store data on disks using files, which are structured in specific ways in order to allow for faster access, higher reliability, and to make better use out of the drive's available space. The specific way in which files are stored on a disk is called a file system, and enables files to have names and attributes. It also allows them to be stored in a hierarchy of directories or folders arranged in a directory tree.
Early operating systems generally supported a single type of disk drive and only one kind of file system. Early file systems were limited in their capacity, speed, and in the kinds of file names and directory structures they could use. These limitations often reflected limitations in the operating systems they were designed for, making it very difficult for an operating system to support more than one file system.
While many simpler operating systems support a limited range of options for accessing storage systems, operating systems like UNIX and GNU/Linux support a technology known as a virtual file system, or VFS. An operating system like UNIX allows a wide array of storage devices, regardless of their design or file systems, to be accessed through a common application programming interface (API). This makes it unnecessary for programs to have any knowledge about the device they are accessing. A VFS allows the operating system to provide programs with access to an unlimited number of devices with an infinite variety of file systems installed on them, through the use of specific device drivers and file system drivers.
A connected storage device such as a hard drive is accessed through a device driver. The device driver understands the specific language of the drive and is able to translate that language into a standard language used by the operating system to access all disk drives. On UNIX, this is the language of block devices.
When the kernel has an appropriate device driver in place, it can then access the contents of the disk drive in raw format, which may contain one or more file systems. A file system driver is used to translate the commands used to access each specific file system into a standard set of commands that the operating system can use to talk to all file systems. Programs can then deal with these file systems on the basis of filenames, and directories/folders, contained within a hierarchical structure. They can create, delete, open, and close files, as well as gather various information about them, including access permissions, size, free space, and creation and modification dates.
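For example, under POSIX the same stat() call reports a file's size, permissions, and timestamps regardless of which file system or device driver actually stores the file; a minimal sketch:

#include <stdio.h>
#include <sys/stat.h>
#include <time.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <path>\n", argv[0]);
        return 1;
    }

    struct stat st;
    if (stat(argv[1], &st) != 0) {   /* the VFS routes this to the right driver */
        perror("stat");
        return 1;
    }

    printf("size:        %lld bytes\n", (long long)st.st_size);
    printf("permissions: %o\n", st.st_mode & 0777);
    printf("modified:    %s", ctime(&st.st_mtime));
    return 0;
}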
Various differences between file systems make supporting all file systems difficult. Allowed characters in file names, case sensitivity, and the presence of various kinds of file attributes makes the implementation of a single interface for every file system a daunting task. Operating systems tend to recommend using (and so support natively) file systems specifically designed for them; for example, NTFS in Windows and ext3 and ReiserFS in GNU/Linux. However, in practice, third party drives are usually available to give support for the most widely used file systems in most general-purpose operating systems (for example, NTFS is available in GNU/Linux through NTFS-3g, and ext2/3 and ReiserFS are available in Windows through FS-driver and rfstool).
Device drivers
Main article: Device driver
A device driver is a specific type of computer software developed to allow interaction with hardware devices. Typically this constitutes an interface for communicating with the device, through the specific computer bus or communications subsystem that the hardware is connected to, providing commands to and/or receiving data from the device, and, on the other end, the requisite interfaces to the operating system and software applications. It is a specialized, hardware-dependent, operating-system-specific program that enables another program (typically the operating system, an applications software package, or a program running under the operating system kernel) to interact transparently with a hardware device, and it usually provides the interrupt handling necessary for asynchronous, time-dependent hardware interfacing.
The key design goal of device drivers is abstraction. Every model of hardware (even within the same class of device) is different. Manufacturers also release newer models that provide more reliable or better performance, and these newer models are often controlled differently. Computers and their operating systems cannot be expected to know how to control every device, both now and in the future. To solve this problem, operating systems essentially dictate how every type of device should be controlled. The function of the device driver is then to translate these operating-system-mandated function calls into device-specific calls. In theory, a new device which is controlled in a new manner should function correctly if a suitable driver is available. This new driver will ensure that the device appears to operate as usual from the operating system's point of view.
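The following user-space sketch (plain C, not an actual kernel module) illustrates the idea: the "operating system" dictates a generic interface as a table of function pointers, and a hypothetical ramdisk driver fills it in with device-specific code, so callers never need to know the device's details:

#include <stdio.h>
#include <stddef.h>

/* Interface dictated by the "operating system": every block-device driver
 * must provide these operations. */
struct block_driver {
    const char *name;
    int (*read_block)(size_t block, void *buf);
    int (*write_block)(size_t block, const void *buf);
};

/* A hypothetical driver translates the generic calls into device-specific ones. */
static int ramdisk_read(size_t block, void *buf)
{
    (void)buf;
    printf("ramdisk: reading block %zu\n", block);
    return 0;
}

static int ramdisk_write(size_t block, const void *buf)
{
    (void)buf;
    printf("ramdisk: writing block %zu\n", block);
    return 0;
}

static const struct block_driver ramdisk = { "ramdisk", ramdisk_read, ramdisk_write };

int main(void)
{
    char buf[512] = {0};
    /* Generic code only ever works through the interface. */
    ramdisk.write_block(7, buf);
    ramdisk.read_block(7, buf);
    return 0;
}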
Networking
Main article: Computer network
Currently most operating systems support a variety of networking protocols, hardware, and applications for using them. This means that computers running dissimilar operating systems can participate in a common network for sharing resources such as computing, files, printers, and scanners using either wired or wireless connections. Networks can essentially allow a computer's operating system to access the resources of a remote computer to support the same functions as it could if those resources were connected directly to the local computer. This includes everything from simple communication, to using networked file systems or even sharing another computer's graphics or sound hardware. Some network services allow the resources of a computer to be accessed transparently, such as SSH which allows networked users direct access to a computer's command line interface.
Client/server networking involves a program on one computer, the client, which connects via a network to another computer, called a server. Servers offer (or host) various services to other network computers and users. These services are usually provided through ports, or numbered access points beyond the server's network address. Each port number is usually associated with a maximum of one running program, which is responsible for handling requests to that port; such a server program is often called a daemon. A daemon, being a user program, can in turn access the local hardware resources of that computer by passing requests to the operating system kernel.
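A minimal client sketch (POSIX sockets assumed; the host and port are only examples) shows this pattern: the program looks up a server, connects to a numbered port, and exchanges data with whatever daemon is listening there:

#include <stdio.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

int main(void)
{
    struct addrinfo hints = {0}, *res;
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;

    /* Look up the server and the numbered port its daemon listens on. */
    if (getaddrinfo("example.com", "80", &hints, &res) != 0)
        return 1;

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0)
        return 1;

    const char req[] = "HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n";
    write(fd, req, sizeof req - 1);

    char buf[256];
    ssize_t n = read(fd, buf, sizeof buf - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("%s\n", buf);           /* first part of the daemon's reply */
    }

    close(fd);
    freeaddrinfo(res);
    return 0;
}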
Many operating systems support one or more vendor-specific or open networking protocols as well, for example, SNA on IBM systems, DECnet on systems from Digital Equipment Corporation, and Microsoft-specific protocols (SMB) on Windows. Specific protocols for specific tasks may also be supported such as NFS for file access. Protocols like ESound, or esd can be easily extended over the network to provide sound from local applications, on a remote system's sound hardware.
Security
Main article: Computer security
Whether a computer is secure depends on a number of technologies working properly. A modern operating system provides access to a number of resources, which are available to software running on the system, and to external devices like networks via the kernel.
The operating system must be capable of distinguishing between requests which should be allowed to be processed and others which should not be processed. While some systems may simply distinguish between "privileged" and "non-privileged", systems commonly have a form of requester identity, such as a user name. To establish identity there may be a process of authentication. Often a username must be supplied, and each username may have a password. Other methods of authentication, such as magnetic cards or biometric data, might be used instead. In some cases, especially connections from the network, resources may be accessed with no authentication at all (such as reading files over a network share). Also covered by the concept of requester identity is authorization; the particular services and resources accessible by the requester once logged into a system are tied to either the requester's user account or to the variously configured groups of users to which the requester belongs.
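As a simplified illustration (POSIX assumed; group membership and other details are ignored, and the path is only an example), a program can combine the requester's identity, as known to the kernel, with a resource's ownership and permission bits to decide whether access should be allowed:

#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/etc/shadow";      /* example resource */
    struct stat st;
    if (stat(path, &st) != 0) {
        perror("stat");
        return 1;
    }

    uid_t me = getuid();                    /* requester identity, known to the kernel */
    int allowed = (me == 0) ||
                  (me == st.st_uid && (st.st_mode & S_IRUSR)) ||
                  (st.st_mode & S_IROTH);

    printf("user %u may%s read %s\n",
           (unsigned)me, allowed ? "" : " not", path);
    return 0;
}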
In addition to the allow/disallow model of security, a system with a high level of security will also offer auditing options. These would allow tracking of requests for access to resources (such as "who has been reading this file?"). Internal security, or security from an already running program, is only possible if all potentially harmful requests must be carried out through interrupts to the operating system kernel. If programs can directly access hardware and resources, they cannot be secured.
External security involves a request from outside the computer, such as a login at a connected console or some kind of network connection. External requests are often passed through device drivers to the operating system's kernel, where they can be passed onto applications, or carried out directly. Security of operating systems has long been a concern because of highly sensitive data held on computers, both of a commercial and military nature. The United States Government Department of Defense (DoD) created the Trusted Computer System Evaluation Criteria (TCSEC) which is a standard that sets basic requirements for assessing the effectiveness of security. This became of vital importance to operating system makers, because the TCSEC was used to evaluate, classify and select computer systems being considered for the processing, storage and retrieval of sensitive or classified information.
Network services include offerings such as file sharing, print services, email, web sites, and file transfer protocols (FTP), most of which can have compromised security. At the front line of security are hardware devices known as firewalls or intrusion detection/prevention systems. At the operating system level, there are a number of software firewalls available, as well as intrusion detection/prevention systems. Most modern operating systems include a software firewall, which is enabled by default. A software firewall can be configured to allow or deny network traffic to or from a service or application running on the operating system. Therefore, one can install and run an insecure service, such as Telnet or FTP, without being threatened by a security breach, because the firewall denies all traffic trying to connect to the service on that port.
An alternative strategy, and the only sandbox strategy available in systems that do not meet the Popek and Goldberg virtualization requirements, is for the operating system not to run user programs as native code, but instead to either emulate a processor or provide a host for a p-code based system such as Java.
Internal security is especially relevant for multi-user systems; it allows each user of the system to have private files that the other users cannot tamper with or read. Internal security is also vital if auditing is to be of any use, since a program can potentially bypass the operating system, inclusive of bypassing auditing.
File system support in modern operating systems
Support for file systems is highly varied among modern operating systems, although there are several common file systems for which almost all operating systems include support and drivers. Operating systems vary in their file system support and in the disk formats on which they may be installed.
GNU/Linux
Many GNU/Linux distributions support some or all of ext2, ext3, ext4, ReiserFS, Reiser4, JFS, XFS, GFS, GFS2, OCFS, OCFS2, and NILFS. The ext file systems, namely ext2, ext3, and ext4, are based on the original GNU/Linux file system. Others have been developed by companies to meet their specific needs, by hobbyists, or adapted from UNIX, Microsoft Windows, and other operating systems. GNU/Linux has full support for XFS and JFS, along with FAT (the MS-DOS file system) and HFS, the primary file system of the earlier Mac OS.
In recent years support for Microsoft Windows NT's NTFS file system has appeared in GNU/Linux, and is now comparable to the support available for other native UNIX file systems. ISO 9660 and Universal Disk Format (UDF), the standard file systems used on CDs, DVDs, and Blu-ray discs, are also supported. It is possible to install GNU/Linux on the majority of these file systems. Unlike other operating systems, GNU/Linux and UNIX allow any file system to be used regardless of the media on which it is stored, whether a hard drive, an optical disc (CD, DVD, etc.), a USB key, or even a file located on another file system.
Mac OS X
Mac OS X supports HFS+ with journaling as its primary file system. It is derived from the Hierarchical File System of the earlier Mac OS. Mac OS X has facilities to read and write FAT, UDF, and other file systems, but cannot be installed to them. Due to its UNIX heritage Mac OS X now supports virtually all the file systems supported by the VFS.
Microsoft Windows
Microsoft Windows currently supports NTFS and FAT file systems (including FAT16 and FAT32), along with network file systems shared from other computers, and the ISO 9660 and UDF file systems used for CDs, DVDs, and other optical discs such as Blu-ray. Under Windows each file system is usually limited in application to certain media; for example, CDs must use ISO 9660 or UDF, and, as of Windows Vista, NTFS is the only file system on which the operating system can be installed. Windows Embedded CE 6.0, Windows Vista Service Pack 1, and Windows Server 2008 support exFAT, a file system (available only on recent versions of Windows) more suitable for flash drives.
Solaris
The Solaris Operating System uses UFS as its primary file system. Prior to 1998, Solaris UFS did not have logging/journaling capabilities, but over time the OS has gained this and other new data management capabilities.
Additional features include Veritas (Journaling) VxFS, QFS from Sun Microsystems, enhancements to UFS including multiterabyte support and UFS volume management included as part of the OS, and ZFS (free software, poolable, 128-bit, compressible, and error-correcting).
Kernel extensions were added to Solaris to allow for bootable Veritas VxFS operation. Logging or journaling was added to UFS in Solaris 7. Releases of Solaris 10, Solaris Express, OpenSolaris, and other open source variants of Solaris later supported bootable ZFS.
Logical volume management allows for spanning a file system across multiple devices for the purpose of adding redundancy, capacity, and/or throughput. Solaris includes Solaris Volume Manager (formerly known as Solstice DiskSuite). Solaris is one of many operating systems supported by Veritas Volume Manager. Modern Solaris-based operating systems obviate the need for volume management by leveraging virtual storage pools in ZFS.
Special-purpose file systems
FAT file systems are commonly found on floppy disks, flash memory cards, digital cameras, and many other portable devices because of their relative simplicity. Performance of FAT compares poorly to most other file systems as it uses overly simplistic data structures, making file operations time-consuming, and makes poor use of disk space in situations where many small files are present. ISO 9660 and Universal Disk Format are two common formats that target Compact Discs and DVDs. Mount Rainier is a newer extension to UDF supported by GNU/Linux 2.6 series and Windows Vista that facilitates rewriting to DVDs in the same fashion as has been possible with floppy disks.
Journalized file systems
File systems may provide journaling, which provides safe recovery in the event of a system crash. A journaled file system writes some information twice: first to the journal, which is a log of file system operations, then to its proper place in the ordinary file system. Journaling is handled by the file system driver, and keeps track of each operation taking place that changes the contents of the disk. In the event of a crash, the system can recover to a consistent state by replaying a portion of the journal. Many UNIX file systems provide journaling, including ReiserFS, JFS, and ext3.
In contrast, non-journaled file systems typically need to be examined in their entirety by a utility such as fsck or chkdsk for any inconsistencies after an unclean shutdown. Soft updates is an alternative to journaling that avoids the redundant writes by carefully ordering the update operations. Log-structured file systems and ZFS also differ from traditional journaled file systems in that they avoid inconsistencies by always writing new copies of the data, eschewing in-place updates.
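The core write-ahead idea behind journaling can be sketched in a few lines of ordinary POSIX I/O (this is only a toy illustration, not a file system driver): record the intended change in a journal and force it to disk before applying the change, so that a crash in between can be detected and replayed:

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

static void append_synced(const char *path, const char *text)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0)
        return;
    write(fd, text, strlen(text));
    fsync(fd);                      /* force the bytes onto the disk now */
    close(fd);
}

int main(void)
{
    /* 1. Record the intent in the journal first... */
    append_synced("journal.log", "BEGIN: append 'hello' to data.txt\n");
    /* 2. ...then perform the actual update... */
    append_synced("data.txt", "hello\n");
    /* 3. ...and finally mark the journal entry as complete. */
    append_synced("journal.log", "COMMIT\n");
    return 0;
}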
Graphical user interfaces
Most modern computer systems support graphical user interfaces (GUIs), and often include them. In some computer systems, such as the original implementations of Microsoft Windows and the Mac OS, the GUI is integrated into the kernel.
While technically a graphical user interface is not an operating system service, incorporating support for one into the operating system kernel can allow the GUI to be more responsive by reducing the number of context switches required for the GUI to perform its output functions. Other operating systems are modular, separating the graphics subsystem from the kernel and the rest of the operating system. In the 1980s UNIX, VMS, and many others were built this way, and GNU/Linux and Mac OS X are also built this way. Modern releases of Microsoft Windows such as Windows Vista implement a graphics subsystem that is mostly in user space; however, in versions between Windows NT 4.0 and Windows Server 2003, graphics drawing routines existed mostly in kernel space. Windows 9x had very little distinction between the interface and the kernel.
Many computer operating systems allow the user to install or create any user interface they desire. The X Window System in conjunction with GNOME or KDE is a commonly found setup on most Unix and Unix-like (BSD, GNU/Linux, Solaris) systems. A number of Windows shell replacements have been released for Microsoft Windows, which offer alternatives to the included Windows shell, but the shell itself cannot be separated from Windows.
Numerous Unix-based GUIs have existed over time, most derived from X11. Competition among the various vendors of Unix (HP, IBM, Sun) led to much fragmentation, and an effort to standardize in the 1990s on COSE and CDE largely failed; these were eventually eclipsed by the widespread adoption of GNOME and KDE. Prior to free-software-based toolkits and desktop environments, Motif was the prevalent toolkit/desktop combination (and was the basis upon which CDE was developed).
Graphical user interfaces evolve over time. For example, Windows has modified its user interface almost every time a new major version of Windows is released, and the Mac OS GUI changed dramatically with the introduction of Mac OS X in 1999.[5]
Examples of operating systems
GNU/Linux and Unix-like operating systems
Main articles: Linux and Unix
[Image: Ubuntu desktop]
Ken Thompson wrote B, mainly based on BCPL, which he used to write Unix, based on his experience in the MULTICS project. B was replaced by C, and Unix developed into a large, complex family of inter-related operating systems which have been influential in every modern operating system (see History). The Unix-like family is a diverse group of operating systems, with several major sub-categories including System V, BSD, and GNU/Linux. The name "UNIX" is a trademark of The Open Group which licenses it for use with any operating system that has been shown to conform to their definitions. "Unix-like" is commonly used to refer to the large set of operating systems which resemble the original Unix.
Unix-like systems run on a wide variety of machine architectures. They are used heavily for servers in business, as well as workstations in academic and engineering environments. Free Unix variants, such as GNU/Linux and BSD, are popular in these areas.
Some Unix variants like HP's HP-UX and IBM's AIX are designed to run only on that vendor's hardware. Others, such as Solaris, can run on multiple types of hardware, including x86 servers and PCs. Apple's Mac OS X, a hybrid kernel-based BSD variant derived from NeXTSTEP, Mach, and FreeBSD, has replaced Apple's earlier (non-Unix) Mac OS.
Unix interoperability was sought by establishing the POSIX standard. The POSIX standard can be applied to any operating system, although it was originally created for various Unix variants.
Mac OS X
Mac OS X is a line of partially proprietary, graphical operating systems developed, marketed, and sold by Apple Inc., the latest of which is pre-loaded on all currently shipping Macintosh computers. Mac OS X is the successor to the original Mac OS, which had been Apple's primary operating system since 1984. Unlike its predecessor, Mac OS X is a UNIX operating system built on technology that had been developed at NeXT through the second half of the 1980s and up until Apple purchased the company in early 1997.
The operating system was first released in 1999 as Mac OS X Server 1.0, with a desktop-oriented version (Mac OS X v10.0) following in March 2001. Since then, six more distinct "client" and "server" editions of Mac OS X have been released, the most recent being Mac OS X v10.6, which was first made available on August 28, 2009. Releases of Mac OS X are named after big cats; the current version of Mac OS X is nicknamed "Snow Leopard".
The server edition, Mac OS X Server, is architecturally identical to its desktop counterpart but usually runs on Apple's line of Macintosh server hardware. Mac OS X Server includes work group management and administration software tools that provide simplified access to key network services, including a mail transfer agent, a Samba server, an LDAP server, a domain name server, and others.
Microsoft Windows
Microsoft Windows is a family of proprietary operating systems that originated as an add-on to the older MS-DOS operating system for the IBM PC. Modern versions are based on the newer Windows NT kernel that was originally intended for OS/2. Windows runs on x86, x86-64 and Itanium processors. Earlier versions also ran on the Alpha, MIPS, Fairchild (later Intergraph) Clipper, and PowerPC architectures (some work was done to port it to the SPARC architecture).
As of 2009, Microsoft Windows still holds a large amount of the worldwide desktop market share. Windows is also used on servers, supporting applications such as web servers and database servers. In recent years, Microsoft has spent significant marketing and research & development money to demonstrate that Windows is capable of running any enterprise application, which has resulted in consistent price/performance records (see the TPC) and significant acceptance in the enterprise market.
Currently, the most widely used version of the Microsoft Windows family is Windows XP, released on October 25, 2001.
In November 2006, after more than five years of development work, Microsoft released Windows Vista, a major new operating system version of Microsoft Windows family which contains a large number of new features and architectural changes. Chief amongst these are a new user interface and visual style called Windows Aero, a number of new security features such as User Account Control, and a few new multimedia applications such as Windows DVD Maker. A server variant based on the same kernel, Windows Server 2008, was released in early 2008.
On October 22, 2009, Microsoft released Windows 7, the successor to Windows Vista, coming three years after its release. While Vista was about introducing new features, Windows 7 aims to streamline these and provide for a faster overall working environment. Windows Server 2008 R2, the server variant, was released at the same time.
Google Chrome OS
On July 7, 2009, Google announced that it would be releasing an operating system by the second half of 2010. Google Chrome OS is designed to work exclusively with web applications, and will be an open source OS.
[Image: What Google Chrome OS is expected to look like]
Plan 9
Ken Thompson, Dennis Ritchie and Douglas McIlroy at Bell Labs designed and developed the C programming language to build the operating system Unix. Programmers at Bell Labs went on to develop Plan 9 and Inferno, which were engineered for modern distributed environments. Plan 9 was designed from the start to be a networked operating system, and had graphics built-in, unlike Unix, which added these features to the design later. Plan 9 has yet to become as popular as Unix derivatives, but it has an expanding community of developers. It is currently released under the Lucent Public License. Inferno was sold to Vita Nuova Holdings and has been released under a GPL/MIT license.
Real-time operating systems
Main article: real-time operating system
A real-time operating system (RTOS) is a multitasking operating system intended for applications with fixed deadlines (real-time computing). Such applications include some small embedded systems, automobile engine controllers, industrial robots, spacecraft, industrial control, and some large-scale computing systems.
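A typical pattern in such systems is a periodic task that must wake at fixed deadlines. The following sketch (assuming POSIX clock_nanosleep() with TIMER_ABSTIME, as provided by real-time-capable kernels) wakes at absolute 10 ms intervals rather than "some time after" the previous iteration:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (int cycle = 0; cycle < 5; cycle++) {
        /* ... read sensors, compute, drive actuators ... */
        printf("cycle %d\n", cycle);

        /* The next deadline is exactly 10 ms after the previous one. */
        next.tv_nsec += 10 * 1000 * 1000;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec += 1;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
    return 0;
}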
An early example of a large-scale real-time operating system was Transaction Processing Facility developed by American Airlines and IBM for the Sabre Airline Reservations System.
Embedded systems that have fixed deadlines use a real-time operating system such as VxWorks, eCos, QNX, MontaVista Linux and RTLinux. Windows CE is a real-time operating system that shares similar APIs with desktop Windows but shares none of desktop Windows' codebase.
Some embedded systems use operating systems such as Symbian OS, Palm OS, BSD, and GNU/Linux, although such operating systems do not support real-time computing.
Hobby development
Operating system development is one of the more involved and technical options for the computing hobbyist. A hobby operating system is classified as one with little or no support from maintenance developers.[6] Development usually begins with an existing operating system. The hobbyist is his or her own developer, or interacts with a relatively small and unstructured group of individuals who work on the same code base. Examples of hobby operating systems include Syllable and ReactOS; Minix is a classical example.
Commodore
Commodore International designed a series of 8-bit platforms that were all, to one degree or another, separately intelligent and yet interconnectable. For instance, one computer always powered up as a host, and the others powered up in a generally cooperative state, according to a complex coordination of signals (the TALK/LISTEN protocol), so they could work separately or in tandem, depending on whatever tasks were at hand. Although the TALK/LISTEN protocol logically supported up to 30 devices daisy-chained together on the serial bus, signal attenuation required some kind of device in the middle for voltage maintenance through a buffer, amplifier, and propagator. For the state of the art in the late 1980s, the machine was at a roadblock. The TALK/LISTEN protocol was quite similar to SCSI bus management, but there was no arbitration phase, and only one device powered up as host, which could then command one or more of the other devices to enter into a TALKing or LISTENing state, until such time that some other computer in the daisy chain was willing to be the host. In some cases, one or more computers could drop off the daisy chain for a period of time until they voluntarily came back, which was called "reentrance", but there was still no arbitration phase like that enjoyed by SCSI-compliant computers. One of the limitations was the small number of physical devices (close to 32, depending on the way the signal was amplified prior to propagation) that could be connected, preventing it from being useful in a multi-user environment.
Other
Older operating systems which are still used in niche markets include OS/2 from IBM and Microsoft; Mac OS, the non-Unix precursor to Apple's Mac OS X; BeOS; and XTS-300. Some, most notably AmigaOS 4 and RISC OS, continue to be developed as minority platforms for enthusiast communities and specialist applications. OpenVMS, formerly from DEC, is still under active development by Hewlett-Packard.
There were a number of operating systems for 8-bit computers, including Apple's DOS (Disk Operating System) 3.2 and 3.3 for the Apple II, ProDOS, UCSD, and CP/M, available for various 8- and 16-bit environments, and FutureOS for the Amstrad CPC6128 and 6128Plus.
Research and development of new operating systems continues. GNU Hurd is designed to be backwards compatible with Unix, but with enhanced functionality and a microkernel architecture. Singularity is a project at Microsoft Research to develop an operating system with better memory protection based on the .Net managed code model. Systems development follows the same model used by other software development, which involves maintainers, version control "trees", forks, "patches", and specifications. After the AT&T-Berkeley lawsuit, new unencumbered systems were based on 4.4BSD, which forked into the FreeBSD and NetBSD efforts to replace the missing code after the Unix wars. Recent forks include DragonFly BSD and Darwin from BSD Unix.
Diversity of operating systems and portability
Application software is generally written for use on a specific operating system, and sometimes even for specific hardware. When porting the application to run on another OS, the functionality required by that application may be implemented differently by that OS (the names of functions, meaning of arguments, etc.) requiring the application to be adapted, changed, or otherwise maintained.
This cost in supporting operating systems diversity can be avoided by instead writing applications against software platforms like Java, Qt or for web browsers. These abstractions have already borne the cost of adaptation to specific operating systems and their system libraries.
Another approach is for operating system vendors to adopt standards. For example, POSIX and OS abstraction layers provide commonalities that reduce porting costs.