Simple Project List Software Map

Distributed Computing
171 projects found
Last Update: 2014-06-03 08:35

JPPF

JPPF makes it easy to parallelize computationally intensive tasks and execute them on a Grid.
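
Illustrative sketch: in JPPF, work is expressed as serializable tasks that are added to a job and submitted through a client connected to a JPPF driver. The minimal Java example below follows the JPPF 4.x/5.x client API (e.g. submitJob); names changed across versions, so treat the details as assumptions rather than definitive usage.

    import java.util.List;
    import org.jppf.client.JPPFClient;
    import org.jppf.client.JPPFJob;
    import org.jppf.node.protocol.AbstractTask;
    import org.jppf.node.protocol.Task;

    public class SquareDemo {
      // A task that executes on a grid node and stores its result.
      public static class Square extends AbstractTask<Long> {
        private final long n;
        public Square(long n) { this.n = n; }
        @Override public void run() { setResult(n * n); }
      }

      public static void main(String[] args) throws Exception {
        // Connects to a JPPF driver using the client configuration.
        try (JPPFClient client = new JPPFClient()) {
          JPPFJob job = new JPPFJob();
          for (long i = 1; i <= 10; i++) job.add(new Square(i));
          // Blocking submit; each task comes back with its result set.
          List<Task<?>> results = client.submitJob(job);
          for (Task<?> t : results) System.out.println(t.getResult());
        }
      }
    }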

Last Update: 2016-08-09 20:26

Xming X Server for Windows

Xming is an outstanding X Window server for Microsoft Windows XP/Vista/7/8 (+ Server 2003/2008/2012). It is fully featured, small and fast, and simple to install; it runs standalone on Microsoft Windows and, since no per-machine installation is needed, can be used anywhere.

Last Update: 2015-06-14 02:35

hadoop for windows

Unofficial prebuilt binary packages of Apache Hadoop for Windows, Apache Hive for Windows, Apache Spark for Windows, Apache Drill for Windows, and Azkaban for Windows.

Development Status: 2 - Pre-Alpha
Intended Audience: Science/Research
Programming Language: Java
Register Date: 2015-02-22 06:32
Last Update: 2021-09-20 22:01

Talend Open Studio for Data Integration

Talend is innovative and powerful open source data integration software that can be used for data integration, from operational systems to ETL, and for migration in organizations of any size.

Last Update: 2013-07-29 22:58

Makeflow

Makeflow is a workflow engine for executing large complex applications on clusters, clouds, and grids. It can be used to drive several different distributed computing systems, including Condor, SGE, and the included Work Queue system. It does not require a distributed filesystem, so you can use it to harness whatever collection of machines you have available. It is typically used for scaling up data-intensive scientific applications to hundreds or thousands of cores.
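
As an illustration of the workflow format: a Makeflow file consists of Make-style rules, each naming a task's output files, its input files, and the command that produces the former from the latter. The file names below are invented for this sketch; the rule syntax follows the CCTools documentation.

    # Each rule: outputs : inputs, then an indented command.
    # input.dat and simulate.py are hypothetical files for this example.
    result.dat: input.dat simulate.py
    	python simulate.py input.dat > result.dat

    summary.txt: result.dat
    	sort result.dat > summary.txt

The same file can then be run unchanged on different backends, e.g. makeflow run.mf locally or makeflow -T condor run.mf on Condor (the -T option selects the batch system; exact flag spellings may vary by version).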

Last Update: 2012-11-06 23:43

Shared Scientific Toolbox in Java

The Shared Scientific Toolbox is a library that facilitates development of efficient, modular, and robust scientific/distributed computing applications in Java. It features multidimensional arrays with extensive linear algebra and FFT support, an asynchronous, scalable networking layer, and advanced class loading, message passing, and statistics packages.

Last Update: 2023-06-22 11:19

CloudI

CloudI is an open-source private cloud computing framework for efficient, secure, and internal data processing. CloudI provides scaling for previously unscalable source code with efficient fault-tolerant execution of ATS, C/C++, Erlang/Elixir, Go, Haskell, Java, JavaScript/node.js, OCaml, Perl, PHP, Python, Ruby, or Rust services.

The bare essentials for efficient fault-tolerant processing on a cloud!

Last Update: 2010-05-05 09:09

XtreemOS

The overall objective of the XtreemOS project is the design, implementation, evaluation, and distribution of a grid operating system (called XtreemOS) with native support for virtual organizations (VO). XtreemOS is capable of running on a wide range of underlying platforms, from clusters to mobile devices. It is based on Mandriva Linux, with support for other distributions to come later.

Last Update: 2014-06-07 03:10

magic.jar

magic.jar is a command line tool that allows you to execute the mobile, sandboxed Lua snippets available on TinyBrain.de on any machine. It can do text operations and display GUIs.

Last Update: 2013-07-10 21:21

Hados

Hados stores files in a cluster of servers. Its goal is to handle high availability by storing copies of the same file on several nodes. It provides RESTful APIs to easily store, check, or retrieve files. Using the cluster APIs, you can retrieve files from whichever node hosts them. To avoid any single point of failure, it is possible to apply a request to any node of the cluster; there is no master node.
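
Purely as a hypothetical sketch of what that REST interaction could look like (the port and endpoint paths below are invented, not Hados's documented API), a file is stored via one node and then retrieved through any other node, which locates a stored copy:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.file.Path;

    public class HadosSketch {
      public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();

        // Store a file via one node (hypothetical endpoint and port).
        HttpRequest put = HttpRequest.newBuilder()
            .uri(URI.create("http://node1:9090/cluster/files/report.pdf"))
            .PUT(HttpRequest.BodyPublishers.ofFile(Path.of("report.pdf")))
            .build();
        http.send(put, HttpResponse.BodyHandlers.discarding());

        // Ask a different node for the same file; with no master node,
        // any node can serve the request by locating a hosted copy.
        HttpRequest get = HttpRequest.newBuilder()
            .uri(URI.create("http://node2:9090/cluster/files/report.pdf"))
            .build();
        http.send(get, HttpResponse.BodyHandlers.ofFile(Path.of("copy.pdf")));
      }
    }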

Last Update: 2010-06-17 07:58

DAC

DAC (Dynamic Agent Computations) is a novel software framework designed for implementing multi-agent systems that describe parallel computations. The whole system is easy to configure and extend, but also very efficient and scalable. Moreover, the technology that is used (JMS, Cajo, JMX) ensures high reliability of the framework, which can be used in a production environment.

Last Update: 2019-12-14 16:04

Diskless Remote Boot in Linux (DRBL)

DRBL provides a diskless or systemless environment. It uses distributed hardware resources and makes it possible for clients to fully access local hardware. It also includes Clonezilla, a partition and disk cloning utility similar to Ghost.

Last Update: 2010-12-14 19:35

StarCluster

StarCluster is a utility for creating traditional computing clusters used in research labs or for general distributed computing applications on Amazon's Elastic Compute Cloud (EC2). It uses a simple configuration file provided by the user to request cloud resources from Amazon and to automatically configure them with a queuing system, an NFS-shared /home directory, passwordless SSH, OpenMPI, and ~140GB of scratch disk space. It consists of a Python library and a simple command line interface to the library. For end users, the command line interface provides simple, intuitive options for getting started with distributed computing on EC2 (e.g. starting/stopping clusters, managing AMIs, etc.). For developers, the library wraps the EC2 API to provide a simplified interface for launching/terminating nodes, executing commands on the nodes, copying files to/from the nodes, etc.
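
As a sketch of that workflow (section and option names follow the StarCluster documentation as commonly shown; all values are placeholders): the user declares credentials, a keypair, and a cluster template in the configuration file, then drives everything from the command line.

    [aws info]
    aws_access_key_id = <your access key>
    aws_secret_access_key = <your secret key>

    [key mykey]
    key_location = ~/.ssh/mykey.rsa

    [cluster smallcluster]
    keyname = mykey
    cluster_size = 4
    node_image_id = <a StarCluster AMI id>
    node_instance_type = m1.small

A cluster is then started, used, and torn down with commands along the lines of starcluster start -c smallcluster mycluster, starcluster sshmaster mycluster, and starcluster terminate mycluster.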

Last Update: 2013-07-29 22:54

Parrot and Chirp

Parrot and Chirp are user-level tools that make it easy to rapidly deploy wide area filesystems. Parrot is the client component: it transparently attaches to unmodified applications, and redirects their system calls to various remote servers. A variety of controls can be applied to modify the namespace and resources available to the application. Chirp is the server component: it allows an ordinary user to easily export and share storage across the wide area with a single command. A rich access control system allows users to mix and match multiple authentication types. Parrot and Chirp are most useful in the context of large scale distributed systems such as clusters, clouds, and grids where one may have limited permissions to install software.
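
A brief sketch of typical usage (command names per the CCTools documentation; options may differ between versions):

    # Storage side: export a directory with a single command.
    chirp_server -r /data/shared

    # Client side: run an unmodified program under Parrot; Chirp
    # servers appear under the /chirp/<hostname> namespace.
    parrot_run cat /chirp/server.example.org/dataset.txt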

Last Update: 2011-03-22 04:39

Dapper Dataflow Engine

Dapper, or "Distributed and Parallel Program Execution Runtime", is a tool for taming the complexities of developing for large-scale cloud and grid computing, enabling the user to create distributed computations from the essentials: the code that will execute, along with a dataflow graph description. It supports rich execution semantics, carefree deployment, a robust control protocol, modification of the dataflow graph at runtime, and an intuitive user interface.