Monday, November 14, 2011

What is server consolidation?

Server consolidation is an approach to the efficient usage of computer server resources in order to reduce the total number of servers or server locations that an organization requires. The practice developed in response to the problem of server sprawl, a situation in which multiple, under-utilized servers take up more space and consume more resources than can be justified by their workload.
According to Tony Iams, Senior Analyst at D.H. Brown Associates Inc. in Port Chester, NY, servers in many companies typically run at 15-20% of their capacity, which may not be a sustainable ratio in the current economic environment. Businesses are increasingly turning to server consolidation as one means of cutting unnecessary costs and maximizing return on investment (ROI) in the data center. Of 518 respondents in a Gartner Group research study, 6% had conducted a server consolidation project, 61% were currently conducting one, and 28% were planning to do so in the immediate future.


Although consolidation can substantially increase the efficient use of server resources, it may also result in complex configurations of data, applications, and servers that can be confusing for the average user to contend with. To alleviate this problem, server virtualization may be used to mask the details of server resources from users while optimizing resource sharing. Another approach to server consolidation is the use of blade servers to maximize the efficient use of space.

Friday, August 5, 2011

What is a virtual machine (VM)?

A virtual machine (VM) is an environment, usually a program or operating system, that does not physically exist but is created within another environment. In this context, the VM is called a "guest" while the environment it runs within is called the "host." Virtual machines are often created to execute an instruction set different from that of the host environment. One host environment can often run multiple VMs at once. Because VMs are separated from the physical resources they use, the host environment is often able to dynamically assign those resources among them.
The phrase "virtual machine" is commonly used to describe Java runtime environment, the Java Virtual Machine (JVM), in which Java-specific commands are interpreted. The JVM is a virtual machine in that it executes code compiled specifically for it – known as bytecode – and abstracts use of resources for this bytecode. The Java programming language does not rely on platform-specific instruction sets, such as APIs specific to any one operating system, to display output or access resources such as files. Instead, the JVM creates virtualized resources which the bytecode accesses. These actions are then passed on to the machine's actual resources.
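As a minimal illustration of that abstraction (the file name and messages here are just examples), the short Java program below touches a file and the console only through JVM-provided APIs. Compiled once with javac, the resulting bytecode can run on any host with a conforming JVM, which maps each request onto the host's real disk and display.

```java
import java.io.FileWriter;
import java.io.IOException;

public class JvmDemo {
    public static void main(String[] args) throws IOException {
        // The bytecode only asks the JVM for "a file"; the JVM decides how that
        // request maps onto the host operating system's real disk.
        try (FileWriter out = new FileWriter("note.txt")) {
            out.write("written through the JVM's virtualized resources\n");
        }

        // Console output likewise goes through a JVM abstraction (System.out),
        // not a platform-specific display API.
        System.out.println("Wrote note.txt via the JVM's file abstraction.");
    }
}
```

Compile with javac JvmDemo.java and run with java JvmDemo; the same class file works unchanged on any operating system that provides a JVM.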
A user interacting with a virtualized server can view the server as a physical machine, in the sense that the user sees access to machine resources such as hard disks, RAM, processors and Ethernet connections. In fact, all of these machine resources are virtual. For instance, instead of accessing a real hard disk, the user is accessing a construct of the host environment. This construct then accesses the real disk to record the data.
"A running program is often referred to as a virtual machine - a machine that doesn't exist as a matter of actual physical reality. The virtual machine idea is itself one of the most elegant in the history of technology and is a crucial step in the evolution of ideas about software. To come up with it, scientists and technologists had to recognize that a computer running a program isn't merely a washer doing laundry. A washer is a washer whatever clothes you put inside, but when you put a new program in a computer, it becomes a new machine. . . The virtual machine: A way of understanding software that frees us to think of software design as machine design." -From David Gelernter's "Truth, Beauty, and the Virtual Machine," Discover Magazine, September 1997, p. 72.

What is a private cloud (internal cloud or corporate cloud)?

Private cloud (also called internal cloud or corporate cloud) is a marketing term for a proprietary computing architecture that provides hosted services to a limited number of people behind a firewall.
Advances in virtualization and distributed computing have allowed corporate network and datacenter administrators to effectively become service providers that meet the needs of their "customers" within the corporation.
Marketing material that uses the term "private cloud" is designed to appeal to organizations that need or want more control over their data than they can get by using a third-party hosted service such as Amazon's Elastic Compute Cloud (EC2) or Simple Storage Service (S3).

What is service-oriented architecture (SOA)?

A service-oriented architecture (SOA) is the underlying structure supporting communications between services. SOA defines how two computing entities, such as programs, interact in such a way as to enable one entity to perform a unit of work on behalf of another entity. Service interactions are defined using a description language. Each interaction is self-contained and loosely coupled, so that each interaction is independent of any other interaction.
Simple Object Access Protocol (SOAP)-based Web services are becoming the most common implementation of SOA. However, there are non-Web services implementations of SOA that provide similar benefits. The protocol independence of SOA means that different consumers can communicate with the service in different ways. Ideally, there should be a management layer between the providers and consumers to ensure complete flexibility regarding implementation protocols.
Whether you realize it or not, you've probably relied upon SOA, perhaps when you made a purchase online. Let's use Lands' End as an example. You look at their catalog and choose a number of items. You specify your order through one service, which communicates with an inventory service to find out if the items you've requested are available in the sizes and colors that you want. Your order and shipping details are submitted to another service which calculates your total, tells you when your order should arrive and furnishes a tracking number that, through another service, will allow you to keep track of your order's status and location en route to your door. The entire process, from the initial order to its delivery, is managed by communications between the Web services -- programs talking to other programs, all made possible by the underlying framework that SOA provides.
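The Java interfaces below are a rough sketch of that idea; the names are invented for illustration and have nothing to do with any retailer's actual systems. The point is only that each service exposes a small, self-contained contract, and the coordinating code depends on those contracts rather than on any particular implementation or protocol.

```java
import java.util.List;

// Each interface is a self-contained service contract. Callers depend only on
// the contract, not on how, where, or over which protocol it is implemented.
interface InventoryService {
    boolean isAvailable(String sku, String size, String color);
}

interface OrderService {
    // Returns an order confirmation (here just an id) after totaling the order
    // and scheduling shipment.
    String placeOrder(List<String> skus, String shippingAddress);
}

interface TrackingService {
    String statusOf(String trackingNumber);
}

// A thin coordinator wires the services together. Because each interaction is
// independent and loosely coupled, any service could be swapped for a SOAP,
// REST or message-queue implementation without changing this code.
class Checkout {
    private final InventoryService inventory;
    private final OrderService orders;

    Checkout(InventoryService inventory, OrderService orders) {
        this.inventory = inventory;
        this.orders = orders;
    }

    String buy(String sku, String size, String color, String address) {
        if (!inventory.isAvailable(sku, size, color)) {
            return "out of stock";
        }
        return orders.placeOrder(List.of(sku), address);
    }
}

public class SoaSketch {
    public static void main(String[] args) {
        // Stand-in implementations; in a real SOA these would be remote services.
        InventoryService inventory = (sku, size, color) -> true;
        OrderService orders = (skus, address) -> "ORDER-0001";
        TrackingService tracking = trackingNumber -> "in transit";

        Checkout checkout = new Checkout(inventory, orders);
        System.out.println(checkout.buy("SKU-123", "M", "blue", "123 Main St."));
        System.out.println(tracking.statusOf("ORDER-0001"));
    }
}
```

Because the coordinator only sees the interfaces, the management layer mentioned above can route each call over whatever protocol the provider happens to use.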

Wednesday, July 27, 2011

What is virtualization?

Virtualization is the creation of a virtual (rather than actual) version of something, such as an operating system, a server, a storage device or network resources.
You probably know a little about virtualization if you have ever divided your hard drive into different partitions. A partition is the logical division of a hard disk drive to create, in effect, two separate hard drives.
Operating system virtualization is the use of software to allow a piece of hardware to run multiple operating system images at the same time. The technology got its start on mainframes decades ago, allowing administrators to avoid wasting expensive processing power.
In 2005, virtualization software was adopted faster than anyone imagined, including the experts. There are three areas of IT where virtualization is making inroads: network virtualization, storage virtualization and server virtualization.
  • Network virtualization is a method of combining the available resources in a network by splitting up the available bandwidth into channels, each of which is independent from the others, and each of which can be assigned (or reassigned) to a particular server or device in real time. The idea is that virtualization disguises the true complexity of the network by separating it into manageable parts, much like your partitioned hard drive makes it easier to manage your files.
  • Storage virtualization is the pooling of physical storage from multiple network storage devices into what appears to be a single storage device that is managed from a central console. Storage virtualization is commonly used in storage area networks (SANs).
  • Server virtualization is the masking of server resources (including the number and identity of individual physical servers, processors, and operating systems) from server users. The intention is to spare the user from having to understand and manage complicated details of server resources while increasing resource sharing and utilization and maintaining the capacity to expand later.
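As a rough sketch of the server-virtualization idea in the last bullet (the class names are invented for illustration and do not correspond to any real hypervisor's API), the host layer below hands each guest what looks like a private disk while keeping the real mapping to itself:

```java
import java.util.HashMap;
import java.util.Map;

// A toy model of the masking idea: guests see simple "virtual disks",
// while the host layer decides where the bytes really live.
class VirtualDisk {
    private final StringBuilder contents = new StringBuilder();

    void write(String data) { contents.append(data); }
    String read()           { return contents.toString(); }
}

class ToyHypervisor {
    // Each guest name maps to a virtual disk that, in a real system, would be
    // backed by a file or volume somewhere on the host's physical storage.
    private final Map<String, VirtualDisk> guests = new HashMap<>();

    VirtualDisk provision(String guestName) {
        return guests.computeIfAbsent(guestName, name -> new VirtualDisk());
    }
}

public class VirtualizationDemo {
    public static void main(String[] args) {
        ToyHypervisor host = new ToyHypervisor();
        VirtualDisk guestDisk = host.provision("web-server-1");
        guestDisk.write("guest data");
        // The guest never learns which physical disk (or file) holds its data.
        System.out.println(guestDisk.read());
    }
}
```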
Virtualization can be viewed as part of an overall trend in enterprise IT that includes autonomic computing, a scenario in which the IT environment will be able to manage itself based on perceived activity, and utility computing, in which computer processing power is seen as a utility that clients can pay for only as needed. The usual goal of virtualization is to centralize administrative tasks while improving scalability and workloads.

What is the World Wide Web (WWW)?

A technical definition of the World Wide Web is: all the resources and users on the Internet that are using the Hypertext Transfer Protocol (HTTP).
A broader definition comes from the organization that Web inventor Tim Berners-Lee helped found, the World Wide Web Consortium (W3C):
"The World Wide Web is the universe of network-accessible information, an embodiment of human knowledge."

What is Web 2.0 (or Web 2)?

Web 2.0 (or Web 2) is the popular term for advanced Internet technology and applications including blogs, wikis, RSS and social bookmarking. The two major components of Web 2.0 are the technological advances enabled by Ajax and other new applications such as RSS and Eclipse, and the user empowerment that they support.

Tim O'Reilly is generally credited with inventing the term, following a conference dealing with next-generation Web concepts and issues held by O'Reilly Media and MediaLive International in 2004. O'Reilly Media has subsequently been energetic about trying to trademark "Web 2.0" and holds an annual conference of the same name. There is, however, some dispute about whether O'Reilly is responsible for the original coinage. Joe Firmage, for instance, used "Web 2.0" to describe using the World Wide Web as a platform in 2003.

One of the most significant differences between Web 2.0 and the traditional World Wide Web (retroactively referred to as Web 1.0) is greater collaboration among Internet users, content providers, and enterprises.

Originally, data was posted on Web sites, and users simply viewed or downloaded the content. Increasingly, users have more input into the nature and scope of Web content and in some cases exert real-time control over it. For example, multiple-vendor online book outlets such as BookFinder4U make it possible for users to upload book reviews as well as find rare and out-of-print books at a minimum price, and dynamic encyclopedias such as Wikipedia allow users to create and edit the content of a worldwide information database in multiple languages. Internet forums have become more extensive and led to the proliferation of blogging. The dissemination of news evolved into RSS.

There is no clear-cut demarcation between Web 2.0 and Web 1.0 technologies, hardware and applications. The distinction is, to a large extent, subjective. Here are a few characteristics often noted as descriptive of Web 2.0:

  • blogging
  • Ajax and other new technologies
  • Google Base and other free Web services
  • RSS-generated syndication
  • social bookmarking
  • mash-ups
  • wikis and other collaborative applications
  • dynamic as opposed to static site content
  • interactive encyclopedias and dictionaries
  • ease of data creation, modification or deletion by individual users
  • advanced gaming.

Critics of Web 2.0 maintain that it makes it too easy for the average person to affect online content and that, as a result, the credibility, ethics and even legality of Web content could suffer. Defenders of Web 2.0 point out that these problems have existed ever since the infancy of the medium and that the alternative -- widespread censorship based on ill-defined elitism -- would be far worse. The final judgment concerning any Web content, say the defenders, should be made by end users alone. Web 2.0 reflects evolution in that direction.

Some industry pundits are already claiming that Web 2.0 is merely a transitional phase between the early days of the World Wide Web's existence and a more established phase they're calling Web 3.0.