Distributed computing

Distributed computing is a method of computer processing in which different parts of a program run simultaneously on two or more computers that communicate with each other over a network. Distributed computing is a type of parallel computing, but the latter term is most commonly used to refer to processing in which different parts of a program run simultaneously on two or more processors that are part of the same computer. While both types of processing require that a program be segmented (divided into sections that can run simultaneously), distributed computing also requires that the division of the program take into account the different environments in which the different sections will run. For example, two computers are likely to have different file systems and different hardware components.


An example of distributed computing is BOINC, the Berkeley Open Infrastructure for Network Computing, a framework in which large problems are divided into many small problems that are distributed to many computers; the small results are later reassembled into a larger solution. BOINC is developed by a team at the University of California, Berkeley led by David Anderson, the project director of SETI@home, and is intended to be useful to fields beyond SETI.
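
BOINC itself is a large C++ infrastructure, but the divide-process-reassemble pattern it embodies can be sketched in a few lines. The following Python sketch is purely illustrative: the work-unit function and chunk size are invented for the example, and local worker processes stand in for volunteers' machines on the network.

    # A minimal sketch of the divide/process/reassemble pattern described above.
    # In a real BOINC-style system each work unit would be shipped to a
    # volunteer's computer over the network; here local processes stand in.
    from multiprocessing import Pool

    def count_primes_in_range(bounds):
        """One hypothetical 'work unit': count primes in [lo, hi)."""
        lo, hi = bounds
        return sum(
            all(n % d for d in range(2, int(n ** 0.5) + 1))
            for n in range(max(lo, 2), hi)
        )

    if __name__ == "__main__":
        # Divide the large problem into small, independent work units.
        chunk = 10_000
        work_units = [(i, i + chunk) for i in range(0, 100_000, chunk)]
        with Pool() as pool:
            partial_results = pool.map(count_primes_in_range, work_units)
        # Reassemble the small results into the larger solution.
        print("primes below 100,000:", sum(partial_results))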


Distributed computing is a natural result of using networks to enable computers to communicate efficiently. But distributed computing is distinct from computer networking. The latter refers to two or more computers interacting with each other, but not, typically, sharing the processing of a single program. The World Wide Web is an example of a network, but not an example of distributed computing.


There are numerous technologies and standards used to construct distributed computations, including some that are specially designed and optimized for that purpose, such as Remote Procedure Calls (RPC), Remote Method Invocation (RMI), and .NET Remoting. A remote procedure call is a protocol that allows a program running on one computer to cause a subroutine on another computer to be executed without the programmer explicitly coding the details of this interaction; Java RMI is a Java application programming interface for performing remote procedure calls.
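
To make the RPC idea concrete, here is a minimal sketch using Python's standard-library xmlrpc module (chosen only for brevity; RMI and .NET Remoting play the analogous role in their ecosystems). The add function and port number are invented for the example; the point is that the client calls proxy.add as if it were local and the RPC layer handles the network interaction.

    # A minimal RPC sketch: the client invokes a remote subroutine without
    # coding any of the network details itself, as described above.
    import threading
    import xmlrpc.client
    from xmlrpc.server import SimpleXMLRPCServer

    def add(x, y):
        return x + y

    server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
    server.register_function(add, "add")
    threading.Thread(target=server.serve_forever, daemon=True).start()

    proxy = xmlrpc.client.ServerProxy("http://localhost:8000/")
    print(proxy.add(2, 3))  # executed on the server; prints 5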

A group of distributed computers working together can outperform one large mainframe.[1]


Organization

Organizing the interaction between the computers is of prime importance. In order to use the widest possible range and types of computers, the protocol or communication channel should not contain or use any information that may not be understood by certain machines. Special care must also be taken that messages are delivered correctly and that invalid messages, which could otherwise bring down the system and perhaps the rest of the network, are rejected.
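
As a small illustration of the message-validation concern, the sketch below (the envelope format and field names are invented, not any standard protocol) has the receiver verify a SHA-256 digest and reject anything that fails the check instead of acting on a corrupt message:

    # Each message carries a digest of its body; the receiver rejects any
    # message that does not verify, rather than letting corruption propagate.
    import hashlib
    import json

    def encode_message(payload: dict) -> bytes:
        body = json.dumps(payload, sort_keys=True)
        digest = hashlib.sha256(body.encode("utf-8")).hexdigest()
        return json.dumps({"digest": digest, "body": body}).encode("utf-8")

    def decode_message(raw: bytes) -> dict:
        envelope = json.loads(raw)
        body = envelope["body"].encode("utf-8")
        if hashlib.sha256(body).hexdigest() != envelope["digest"]:
            raise ValueError("invalid message rejected")
        return json.loads(envelope["body"])

    raw = encode_message({"op": "store", "key": "x", "value": 42})
    print(decode_message(raw))  # verifies and round-trips cleanly
    try:
        decode_message(raw.replace(b"42", b"43"))  # simulate corruption in transit
    except ValueError as err:
        print(err)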


Another important factor is the ability to send software to another computer in a portable way so that it may execute and interact with the existing network. This may not always be possible or practical when using differing hardware and resources, in which case other methods must be used such as cross-compiling or manually porting this software.


Goals and advantages

There are many different types of distributed computing systems and many challenges to overcome in successfully designing one. The main goal of a distributed computing system is to connect users and resources in a transparent, open, and scalable way. Ideally this arrangement is drastically more fault tolerant and more powerful than many combinations of stand-alone computer systems.


Openness

Openness is the property of distributed systems such that each subsystem is continually open to interaction with other systems (see references). Web Services protocols are standards which enable distributed systems to be extended and scaled. In general, an open system that scales has an advantage over a perfectly closed and self-contained system.


Consequently, open distributed systems are required to meet the following challenges:

Monotonicity
Once something is published in an open system, it cannot be taken back.
Pluralism
Different subsystems of an open distributed system include heterogeneous, overlapping and possibly conflicting information. There is no central arbiter of truth in open distributed systems.
Unbounded nondeterminism
Asynchronously, different subsystems can come up and go down and communication links can come in and go out between subsystems of an open distributed system. Therefore the time that it will take to complete an operation cannot be bounded in advance (see unbounded nondeterminism).

In computer science, unbounded nondeterminism (sometimes called unbounded indeterminacy) is a property of concurrency by which the amount of delay in servicing a request can become unbounded as a result of arbitration of contention for shared resources while still guaranteeing that the request will eventually be serviced.
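
A toy illustration (all names and probabilities invented): the request below is guaranteed to be serviced eventually, but because the subsystem may be down on arbitrarily many successive attempts, no bound on the total delay can be given in advance.

    # Eventual service with no advance bound on delay: the essence of
    # unbounded nondeterminism as defined above.
    import random
    import time

    def subsystem_available() -> bool:
        return random.random() < 0.3  # the link/subsystem is only up sometimes

    def service_request(payload: str) -> str:
        attempts = 0
        while not subsystem_available():
            attempts += 1
            time.sleep(0.01)  # each retry is short, but the total is unbounded
        return f"serviced {payload!r} after {attempts} failed attempts"

    print(service_request("op-1"))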

Drawbacks and disadvantages

See also: Fallacies of Distributed Computing

The Fallacies of Distributed Computing are a set of common but flawed assumptions made by programmers when first developing distributed applications.

Technical issues

If not planned properly, a distributed system can decrease the overall reliability of computations if the unavailability of a node can cause disruption of the other nodes. Leslie Lamport famously quipped that: "A distributed system is one in which the failure of a computer you didn't even know existed can render your own computer unusable."[2]


Troubleshooting and diagnosing problems in a distributed system can also become more difficult, because the analysis may require connecting to remote nodes or inspecting communication between nodes.


Many types of computation are not well suited to distributed environments, typically owing to the amount of network communication or synchronization that would be required between nodes. If bandwidth, latency, or communication requirements are too significant, then the benefits of distributed computing may be negated and the performance may be worse than in a non-distributed environment.
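
A back-of-the-envelope sketch of this trade-off (all numbers hypothetical): shipping a task to a remote node only pays off when the compute time saved exceeds the cost of moving the data.

    # Compare moving a task to a remote node against running it locally.
    def remote_is_faster(data_mb, bandwidth_mb_s, latency_s, local_s, remote_s):
        transfer = latency_s + data_mb / bandwidth_mb_s  # time to move the input
        return transfer + remote_s < local_s

    # 100 MB of input over a 10 MB/s link to save 2 s of compute: not worth it.
    print(remote_is_faster(100, 10, 0.05, local_s=3.0, remote_s=1.0))  # False
    # 1 MB of input for the same saving: distribution wins.
    print(remote_is_faster(1, 10, 0.05, local_s=3.0, remote_s=1.0))    # True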


Project-related problems

Distributed computing projects may generate data that is proprietary to private industry, even though the process of generating that data involves the resources of volunteers. This may result in controversy as private industry profits from the data which is generated with the aid of volunteers. In addition, some distributed computing projects, such as biology projects that aim to develop thousands or millions of "candidate molecules" for solving various medical problems, may create vast amounts of raw data. This raw data may be useless by itself without refinement of the raw data or testing of candidate results in real-world experiments. Such refinement and experimentation may be so expensive and time-consuming that it may literally take decades to sift through the data. Until the data is refined, no benefits can be acquired from the computing work.


Other projects suffer from lack of planning on behalf of their well-meaning originators. These poorly planned projects may not generate results that are palpable, or may not generate data that ultimately result in finished, innovative scientific papers. Sensing that a project may not be generating useful data, the project managers may decide to abruptly terminate the project without definitive results, resulting in wastage of the electricity and computing resources used in the project. Volunteers may feel disappointed and abused by such outcomes. There is an obvious opportunity cost of devoting time and energy to a project that ultimately is useless, when that computing power could have been devoted to a better planned distributed computing project generating useful, concrete results.


Another problem with distributed computing projects is that they may devote resources to problems that may not ultimately be soluble, or that are best deferred until desktop computing power becomes fast enough to make pursuing them practical. Some distributed computing projects may also attempt to use computers to find solutions by number-crunching mathematical or physical models. With such projects there is the risk that the model may not be designed well enough to efficiently generate concrete solutions. The effectiveness of a distributed computing project is therefore determined largely by the sophistication of the project creators.


Architecture

Various hardware and software architectures are used for distributed computing. At a lower level, it is necessary to interconnect multiple CPUs with some sort of network, regardless of whether that network is printed onto a circuit board or made up of loosely-coupled devices and cables. At a higher level, it is necessary to interconnect processes running on those CPUs with some sort of communication system.


Distributed programming typically falls into one of several basic architectures or categories: Client-server, 3-tier architecture, N-tier architecture, Distributed objects, loose coupling, or tight coupling.

  • Client-server — Smart client code contacts the server for data, then formats and displays it to the user. Input at the client is committed back to the server when it represents a permanent change. (A minimal client-server sketch follows this list.)
  • 3-tier architecture — Three tier systems move the client intelligence to a middle tier so that stateless clients can be used. This simplifies application deployment. Most web applications are 3-Tier.
  • N-tier architecture — N-Tier refers typically to web applications which further forward their requests to other enterprise services. This type of application is the one most responsible for the success of application servers.
  • Tightly coupled (clustered) — refers typically to a set of highly integrated machines that run the same process in parallel, subdividing the task into parts that are computed individually by each machine and then reassembled to produce the final result.
  • Peer-to-peer — an architecture where there is no special machine or machines that provide a service or manage the network resources. Instead all responsibilities are uniformly divided among all machines, known as peers. Peers can serve both as clients and servers.
  • Space based — refers to an infrastructure that creates the illusion (virtualization) of one single address-space. Data are transparently replicated according to application needs. Decoupling in time, space and reference is achieved.
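
As referenced in the client-server entry above, here is a minimal sketch of that architecture in Python. The one-request-per-connection protocol and the port number are invented for illustration; a local thread stands in for a separate server machine.

    # Minimal client-server: the server does the work, the smart client
    # contacts it for data and formats the reply for the user.
    import socket
    import threading

    srv = socket.create_server(("localhost", 9000))

    def serve():
        while True:
            conn, _ = srv.accept()
            with conn:
                request = conn.recv(1024).decode("utf-8")
                conn.sendall(request.upper().encode("utf-8"))  # server-side work

    threading.Thread(target=serve, daemon=True).start()

    # Smart client: contacts the server, then formats and displays the result.
    with socket.create_connection(("localhost", 9000)) as sock:
        sock.sendall(b"hello from the client")
        print("server replied:", sock.recv(1024).decode("utf-8"))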

Another basic aspect of distributed computing architecture is the method of communicating and coordinating work among concurrent processes. Through various message passing protocols, processes may communicate directly with one another, typically in a master/slave relationship. Alternatively, a "database-centric" architecture can enable distributed computing to be done without any form of direct inter-process communication, by utilizing a shared database.[3]
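
A sketch of the master/slave message-passing style just described, with in-process queues standing in for a network protocol (the task format and worker count are invented): the master hands out tasks, workers send results back, and no state is shared directly.

    # Master/worker coordination by message passing only.
    from multiprocessing import Process, Queue

    def worker(tasks: Queue, results: Queue):
        while True:
            n = tasks.get()
            if n is None:            # sentinel: no more work
                break
            results.put((n, n * n))  # message the result back to the master

    if __name__ == "__main__":
        tasks, results = Queue(), Queue()
        workers = [Process(target=worker, args=(tasks, results)) for _ in range(4)]
        for w in workers:
            w.start()
        for n in range(10):          # master distributes the work
            tasks.put(n)
        for _ in workers:
            tasks.put(None)          # one sentinel per worker
        print(sorted(results.get() for _ in range(10)))
        for w in workers:
            w.join()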


Concurrency

Distributed computing implements a kind of concurrency. It is so closely interrelated with concurrent programming that the two are sometimes not taught as distinct subjects [4].


Multiprocessor systems

A multiprocessor system is simply a computer that has more than one CPU on its motherboard. If the operating system is built to take advantage of this, it can run different processes (or different threads belonging to the same process) on different CPUs.
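
A small sketch of what the operating system makes possible here (the worker function is invented): ask how many CPUs are present and start one worker process per CPU, so the workers can run truly in parallel.

    # One process per CPU; the OS schedules them on different processors.
    import os
    from multiprocessing import Process

    def worker(cpu_index):
        print(f"worker {cpu_index} running in OS process {os.getpid()}")

    if __name__ == "__main__":
        procs = [Process(target=worker, args=(i,)) for i in range(os.cpu_count())]
        for p in procs:
            p.start()
        for p in procs:
            p.join()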


Multicore systems

Intel CPUs from the late Pentium 4 era (Northwood and Prescott cores) employed a technology called Hyper-Threading that allowed more than one thread (usually two) to run on the same CPU. The more recent Sun UltraSPARC T1, AMD Athlon 64 X2, AMD Athlon FX, AMD Opteron, Intel Pentium D, Intel Core, Intel Core 2 and Intel Xeon processors feature multiple processor cores to also increase the number of concurrent threads they can run.


Multicomputer systems

A multicomputer may be considered to be either a loosely coupled NUMA (non-uniform memory access) computer or a tightly coupled cluster. Multicomputers are commonly used when strong compute power is required in an environment with restricted physical space or electrical power.


Common suppliers include Mercury Computer Systems, CSPI, and SKY Computers.


Common uses include 3D medical imaging devices and mobile radar.


Computing taxonomies

The types of distributed systems are based on Flynn's taxonomy of systems: single instruction, single data (SISD); single instruction, multiple data (SIMD); multiple instruction, single data (MISD); and multiple instruction, multiple data (MIMD). Other taxonomies and architectures are described under computer architecture.


Computer clusters

Main article: Cluster computing

A cluster consists of multiple stand-alone machines acting in parallel across a local high speed network. Distributed computing differs from cluster computing in that computers in a distributed computing environment are typically not exclusively running "group" tasks, whereas clustered computers are usually much more tightly coupled. Distributed computing also often consists of machines which are widely separated geographically.


Grid computing

Main article: Grid computing

A grid uses the resources of many separate computers, loosely connected by a network (usually the Internet), to solve large-scale computation problems. Public grids may use idle time on many thousands of computers throughout the world. Such arrangements permit handling of data that would otherwise require the power of expensive supercomputers or would have been impossible to analyze.


Languages

Nearly any programming language that has access to the full hardware of the system could handle distributed programming given enough time and code. Remote procedure calls distribute operating system commands over a network connection. Systems like CORBA, Microsoft DCOM, Java RMI and others try to map object-oriented design to the network. Loosely coupled systems communicate through intermediate documents that are typically human readable (e.g. XML, HTML, SGML, X.500, and EDI).
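
A sketch of the loosely coupled, document-based style mentioned above (the order document format is invented for illustration): instead of calling each other directly, two systems exchange a human-readable XML document that either side can produce or parse independently.

    # Document-based loose coupling: producer and consumer share only the
    # document format, not code or a live connection.
    import xml.etree.ElementTree as ET

    def produce_document() -> str:
        order = ET.Element("order", id="1042")
        ET.SubElement(order, "item", sku="ABC-1", quantity="3")
        ET.SubElement(order, "item", sku="XYZ-9", quantity="1")
        return ET.tostring(order, encoding="unicode")

    def consume_document(doc: str) -> list:
        root = ET.fromstring(doc)
        return [(item.get("sku"), int(item.get("quantity")))
                for item in root.findall("item")]

    doc = produce_document()      # one system writes the document...
    print(consume_document(doc))  # ...another parses it independently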


Languages specifically tailored for distributed programming are:

  • Ada [5]
  • Alef
  • E
  • Erlang
  • Limbo
  • Oz
  • ZPL

Examples

Projects

A variety of distributed computing projects have grown up in recent years. Many are run on a volunteer basis, and involve users donating their unused computational power to work on interesting computational problems. Examples of such projects include the Stanford University Chemistry Department Folding@home project, which is focused on simulations of protein folding to find disease cures; World Community Grid, an effort to create the world's largest public computing grid to tackle scientific research projects that benefit humanity, run and funded by IBM; SETI@home, which is focused on analyzing radio-telescope data to find evidence of intelligent signals from space, hosted by the Space Sciences Laboratory at the University of California, Berkeley; and distributed.net, which is focused on breaking various cryptographic ciphers.[6]


Distributed computing projects also often involve competition with other distributed systems. This competition may be for prestige, or it may be a matter of enticing users to donate processing power to a specific project. For example, stat races are a measure of the work a distributed computing project has been able to compute over the past day or week. This has been found to be so important in practice that virtually all distributed computing projects offer online statistical analyses of their performances, updated at least daily if not in real-time.


See also

  • Fallacies of Distributed Computing
  • Parallel computing
  • Network agility
  • Application server
  • Distributed Computing Environment
  • High-throughput computing
  • List of distributed computing projects
  • List of important publications in computer science

References

  1. ^ Fish Picture from the Distributed Computing Review
  2. ^ Leslie Lamport. Subject: distribution (Email message sent to a DEC SRC bulletin board at 12:23:29 PDT on 28 May 87). Retrieved on 2007-04-28.
  3. ^ A database-centric virtual chemistry system, J Chem Inf Model. 2006 May-Jun;46(3):1034-9
  4. ^ CS236370 Concurrent and Distributed Programming 2002
  5. ^ Ada Reference Manual, ISO/IEC 8652:2005(E) Ed. 3, Annex E Distributed Systems
  6. ^ David P. Anderson (2005-05-23). "A Million Years of Computing". Retrieved on 2006-08-11.


Further reading

  • Attiya, Hagit and Welch, Jennifer (2004). Distributed Computing: Fundamentals, Simulations, and Advanced Topics. Wiley-Interscience.  ISBN 0471453242.
  • Lynch, Nancy A (1997). Distributed Algorithms. Morgan Kaufmann.  ISBN 1558603484.
  • Tel, Gerard (1994). Introduction to Distributed Algorithms. Cambridge University Press. 
  • Davies, Antony (June 2004). "Computational Intermediation and the Evolution of Computation as a Commodity". Applied Economics. 
  • Kornfeld, William (January 1981). "The Scientific Community Metaphor". MIT AI (Memo 641). 
  • Hewitt, Carl (August 1983). "Analyzing the Roles of Descriptions and Actions in Open Systems". Proceedings of the National Conference on Artificial Intelligence. 
  • Hewitt, Carl (April 1985). "The Challenge of Open Systems". Byte Magazine. 
  • Hewitt, Carl (1999-10-23–1999-10-27). "Towards Open Information Systems Semantics". Proceedings of 10th International Workshop on Distributed Artificial Intelligence. 
  • Hewitt, Carl (January 1991). "Open Information Systems Semantics". Journal of Artificial Intelligence. 
  • Nadiminti, Dias de Assunção, Buyya (September 2006). "Distributed Systems and Recent Innovations: Challenges and Benefits". InfoNet Magazine, Volume 16, Issue 3, Melbourne, Australia. 
