Version: DRAFT


At the current time, Gestalt is in beta. For access to the beta repositories, please contact us. The private repositories will be made public near the end of June; our target date is June 29th, 2016.

Gestalt Framework

This document provides an overview of Galactic Fog’s Gestalt Framework, including dependencies, deployment patterns, and the current adapter road map.


The Gestalt Framework comprises three major components:

- Infrastructure Layer
- Gestalt Integration Services
- Meta Layer

Infrastructure Layer

The Gestalt Framework’s infrastructure layer hosts the Gestalt integration services as well as client applications. The infrastructure layer is designed to make everything that runs inside it scalable and resilient.

The infrastructure layer achieves resiliency by using Apache Mesos to ensure that applications have the required number of application services running at all times. An application service can be a simple process or a Docker container. If an application service dies, it is restarted automatically. If the server that the application service is running on dies, the service is restarted on a different server. A distributed file system ensures that when the service is restarted on another node, it has access to all the state information required to resume functioning.

Scalability is achieved by allowing easy scale-up and load balancing of all application services. If additional instances of an application service are required, the scale parameter is increased and additional nodes are started on the compute cluster. As those nodes come up, they are automatically put behind a load balancer.
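The restart-on-failure and scale-up behavior described above can be pictured as a reconciliation loop: compare the desired instance count with what is actually alive, and start replacements to close the gap. The sketch below is a hypothetical simplification for illustration; Mesos and its schedulers implement this far more robustly.

```python
def reconcile(desired_count, running_instances):
    """Return (instances_to_start, instances_to_keep) so the cluster
    converges on the desired number of application services.

    `running_instances` lists the instance IDs currently alive; dead
    instances (e.g. from a failed node) have already dropped out."""
    alive = list(running_instances)
    if len(alive) >= desired_count:
        # Scale down: keep only the first `desired_count` instances.
        return 0, alive[:desired_count]
    # Scale up, or replace instances lost to a failure.
    return desired_count - len(alive), alive

# If a node dies and takes 2 of 5 instances with it, the scheduler
# starts 2 replacements on the remaining healthy nodes.
to_start, keep = reconcile(5, ["i-1", "i-2", "i-3"])
```

The same loop covers both cases: raising the scale parameter widens the gap (more starts), while a node failure shrinks the alive list (replacements).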

Note: Gestalt will not make unscalable technologies magically scalable.

Gestalt Infrastructure Node Requirements

The infrastructure layer can be installed on RHEL/CentOS 6.X and 7.X. The 7.X series is recommended for both performance and long-term support reasons.

The minimum recommended instance sizing is:

Operating System    RHEL/CentOS 6.X or 7.X (7.X recommended for performance and long-term support)
CPU                 4 Cores
Primary Partition   10 GB Storage
Data Device         20 GB Storage

Cloud Size: For Amazon, an m4.xlarge with an additional data device meets these requirements.

Note: These are the recommended minimums. More CPU/memory allows for higher density and efficiency.

Infrastructure Node Components

The following software is installed on the infrastructure nodes:

Infrastructure Deployment Patterns

While all infrastructure nodes are identical in terms of software installation, specific topologies are still required to achieve specific behavior. High availability is achieved with a cluster of at least 3 nodes in the same data center. The infrastructure layer supports cross-data-center deployment, but requires a separate cluster in each data center; a single cluster cannot span multiple data centers, as doing so can incur unforeseen performance penalties. For systems where some nodes will have direct internet access, we recommend at least 5 nodes, with two of the nodes in the DMZ to isolate the management infrastructure.

Deployment Patterns


Appliance VM

The appliance VM is a single node with no HA. It does not use Mesos, ZooKeeper, etc. This VM is useful when you wish to test applications with the Gestalt Framework. It consists of a VirtualBox VM with all the Gestalt micro-services loaded into it as Docker containers. You can load your own application in and start it using standard Docker semantics. The VM consumes about 6-8 GB of RAM before client applications are loaded onto it, so resources can be tight on a standard laptop with only 16 GB of RAM.

HA Cluster

The most common use case is the HA cluster, which consists of a minimum of 3 nodes.

If the application is going to be internet-facing, the recommendation is to use at least 5 nodes, with two of the nodes sitting in the DMZ to isolate the management interfaces. This is best practice rather than a hard requirement: it is possible to lock down and secure the management interfaces, it just takes additional work.

Capacity Requirements: The infrastructure and Gestalt services consume about 25 GB of RAM and 4 cores in a minimum configuration of 3 nodes, leaving about 20 GB of RAM and 8 cores free for client processes/containers. Additional instances added to the cluster lose only about 20% to overhead.
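These capacity numbers line up with three m4.xlarge-class nodes (4 vCPUs and 16 GB of RAM each, per the sizing section above). A quick arithmetic sanity check, assuming that node class:

```python
nodes = 3
cores_per_node, ram_per_node_gb = 4, 16      # m4.xlarge-class sizing (assumed)
overhead_cores, overhead_ram_gb = 4, 25      # infrastructure + Gestalt services

free_cores = nodes * cores_per_node - overhead_cores     # 12 - 4 = 8 cores
free_ram_gb = nodes * ram_per_node_gb - overhead_ram_gb  # 48 - 25 = 23 GB, "about 20" free
```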

Distributed File System

Each node runs a server and a client process for the distributed file system. The distributed file system is used so that when an application container dies, it can be restarted on any infrastructure node without losing its state. The server and the client can each run independently of the other: the server creates and adds storage to the pool, while the client mounts a FUSE-based file system that connects to the server instances. A minimum of 3 nodes is required for the DFS to tolerate failure.

The primary requirement for running the server is that a separate data device must exist on any server contributing storage. That device must be at least 15 GB in size. The device can be local storage, cloud storage (e.g., Amazon EBS), SAN, or iSCSI. It is possible to use ephemeral storage on EC2, but this is not recommended, as complete failure of an availability zone is possible.
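The three-node minimum follows from standard majority-quorum math, assuming the DFS uses a majority-based membership scheme (an assumption; the document does not name the design): a cluster of n members can lose at most floor((n-1)/2) of them and still reach agreement.

```python
def tolerated_failures(nodes: int) -> int:
    """Failures a majority-quorum cluster of `nodes` members can survive."""
    return (nodes - 1) // 2

# 1 or 2 nodes tolerate no failures; 3 is the smallest cluster
# that survives losing a node, which matches the 3-node minimum.
```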

HA Cluster Networking Configuration

The table below lists the ports that must be open between all nodes to facilitate communication. Unless otherwise stated, these ports should only be open to other nodes of the cluster or to the management network.

Protocol                     Port(s)
HAProxy                      80, 443 (these ports can be internet-facing)
Distributed File System      7860, 7861, 7862, 7863, 9095
Zookeeper                    2181
SSH                          22
Container Ports              3000-4000 (room for 1000 containers to each expose one port for their primary service)
Launcher / Mesos / Marathon  8080, 5050, 5051
Kafka                        9092
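The container port range above works out to one host port per container: reserving 3000-3999 yields exactly 1000 ports for up to 1000 containers. A minimal allocator sketch (hypothetical helper, for illustration only, not part of Gestalt):

```python
# 3000 inclusive to 4000 exclusive: 1000 ports, one per container.
PORT_RANGE = range(3000, 4000)

def allocate_port(in_use: set) -> int:
    """Hand out the lowest free host port in the container range."""
    for port in PORT_RANGE:
        if port not in in_use:
            in_use.add(port)
            return port
    raise RuntimeError("container port range exhausted")

used = set()
first = allocate_port(used)   # first container gets 3000
second = allocate_port(used)  # next container gets 3001
```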

DR Cluster

A common requirement is for a DR cluster to support failing applications over to another data center in the event of a failure.

The Gestalt deployment model for this requirement consists of two HA clusters, each in a different data center.

The only networking requirements between them are that the Launcher needs access to both Marathon services, the Gestalt Meta DB service (port 5432) must be reachable between both clusters, and the distributed file system must be able to communicate between the data centers.

Gestalt Integration Services


The Gestalt Integration Services are designed to make integration with common services simple and high quality. This is done by providing a well-designed micro-service for each function. Each service is:

- Controllable by policy

Services List

Component Licensing Status Notes Private Repo Public Repo
Gestalt-meta Apache 2 Released The meta service
Gestalt-meta-repository Apache 2 Released Meta Service DB mapping component
Gestalt-Security Apache 2 Released Security As A Service Implementation
Gestalt-Security-SDK Apache 2 Released SDK For integrating Applications with Gestalt Security
Gestalt-Security-AD Commercial TBD Allows attaching to AD/LDAP directories
Gestalt-Security-Play Apache 2 Released Maps security to Play apps.
Gestalt-Security-Federate Commercial TBD Allows federation of security and meta instances.
Gestalt-Security-Encrypt Commercial TBD Adapter for plugging in KMS and HSM security solutions for encryption
Gestalt-Task Apache 2 Released Makes services asynchronous
Gestalt-Task-IO Apache 2 Released Data mapping layer.
Gestalt-io Apache 2 Released Generic Data mapping layer.
Gestalt-Config Apache 2 Released Configuration Management Library
Gestalt-Config-IO Apache 2 Released Data mapping layer for config.
Gestalt-Config-SDK Apache 2 Released
Gestalt-Config-Chef Commercial TBD Commercial config management integrations for chef
Gestalt-Config-Puppet Commercial TBD Commercial config management integrations for puppet
Gestalt-Config-Ansible Commercial TBD Commercial config management integrations for ansible
Gestalt-Config-Zookeeper Commercial TBD Commercial config management integrations for zookeeper
Gestalt-Notifier Apache 2 Released Notification Service (listens/transmits on multiple protocols)
Gestalt-Notifier-IO Apache 2 Released Data Mapping Layer for Notifier
Gestalt-Notifier-*Adapters* Commercial TBD Commercial Adapters for Notifier    
Gestalt-Launcher Apache 2 Released Provisioning Service For Gestalt
Gestalt-Launcher-Marathon Apache 2 Released Marathon Adapter
Gestalt-Launcher-Aurora Apache 2 TBD Aurora Adapter
Gestalt-Launcher-Adapters Commercial TBD Commercial Adapters for Launcher.    
Gestalt-DNS Apache 2 Released Abstraction Service for Managing DNS
Gestalt-DNS-Route53 Apache 2 Released DNS Adapters for Amazon Route 53.
Gestalt-DNS-*Adapters* Commercial TBD Commercial Adapters for DNS
Gestalt-SSL Apache 2 Released Service for provisioning SSL certificates
Gestalt-SSL-SSLMate Apache 2 Released Adapter for SSLMate
Gestalt-Lambda Apache 2 Active Development Framework for calling lambda services.
Gestalt-Lambda-io Apache 2 Active Development Backend Data Adapters
Gestalt-Vertx Commercial Active Development Adapter for using vertx as a lambda service.
Gestalt-Vertx-io Commercial Active Development Data mapping layer.
Gestalt-Event-transformer Commercial Active Development Service for transforming events across multiple channels into a lambda call.
Gestalt-Billing Apache 2 Released
Gestalt-Transactions Apache 2 TBD Library for Transactions
Gestalt-Transactions-Stripe Apache 2 Released
Gestalt-Streaming-io Apache 2 Released Library for parsing messaging events.
Gestalt-ChangeManagement Commercial Released Service for restricting changes in gestalt and mapping them to some change management product.
Gestalt-ChangeManagement-ServiceNow Commercial TBD Adapters for ServiceNow
Gestalt-ChangeManagement-Remedy Commercial TBD Adapters for Remedy
Gestalt-Policy Commercial Active Development Service for governing configuration, events, and service runtimes.
Gestalt-Loadbalancing Apache 2 TBD Abstraction Services for Load Balancing
Gestalt-LoadBalancing-Adapters Commercial TBD Commercial Adapters for the Load Balancing Service    
Gestalt-Firewalling Apache 2 TBD Abstraction Service for Firewalling
Gestalt-Networking Apache 2 TBD Abstraction Service for Networking/SDNs, etc.
Gestalt-Networking-Adapters Commercial TBD Commercial Adapters for Networking    
Gestalt-BootStrap Apache 2 TBD Bootstrap wizard to deploy Gestalt across cloud/virtual/bare-metal environments.
Gestalt-Appliance Apache 2 TBD An appliance running all GF services that can be deployed on a laptop with VirtualBox.
Gestalt-CLI Apache 2 Active Development Command Line Interfaces and SDK for Gestalt.
Gestalt-DataBus Apache 2   Abstraction services for throwing events to existing Enterprise Service Buses.
Gestalt-DataBus-Adapter Commercial   Commercial Adapters for ESB.    
Gestalt-UI Apache 2 TBD UI For Gestalt
Gestalt-UIE Commercial TBD Enterprise UI for Gestalt
Gestalt-Storage Apache 2   Abstraction Layer for working with volumes and data across vendors.
Gestalt-Storage-*Adapters* Commercial   Commercial Adapters

Service Architecture and Descriptions

The diagram above shows the integration services breakdown. The services on the bottom, such as auth/task/event, are process services: services that all other Gestalt services consume in order to function.

The process services can be thought of as encoded best practices. In today’s world, services must support being asynchronous, support variable auth-n/auth-z strategies, integrate with change management, config management, and so on. Gestalt accomplishes this using these process services, and user applications can consume them as well.

Meta Service – The meta service is the “brain” of the Gestalt Framework and is not itself a micro-service. It has a section all to itself; refer to the “Meta Layer” section of this document.

Security Service - This service acts as the authentication and authorization service for all aspects of the gestalt framework and can be used by user applications as well.



Task Service – The task service is responsible for tracking the status of any task. Its primary use is to make all other services asynchronous.
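The asynchronous pattern the task service enables can be sketched as: a call returns a task ID immediately, and the caller polls the task’s status until it completes. The class below is a hypothetical in-memory stand-in for illustration, not the Gestalt-Task API.

```python
import uuid

class TaskTracker:
    """Minimal in-memory stand-in for a task-status service."""

    def __init__(self):
        self._tasks = {}

    def submit(self) -> str:
        """Register a new task and return its ID right away."""
        task_id = str(uuid.uuid4())
        self._tasks[task_id] = "pending"
        return task_id

    def complete(self, task_id: str) -> None:
        """Mark a task finished once the real work is done elsewhere."""
        self._tasks[task_id] = "complete"

    def status(self, task_id: str) -> str:
        return self._tasks[task_id]

tracker = TaskTracker()
tid = tracker.submit()   # caller gets an ID back immediately
# ... work happens asynchronously elsewhere ...
tracker.complete(tid)
```

Because the caller only ever holds an ID and polls for status, any synchronous service front-ended this way becomes asynchronous to its clients.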



Event Service – The event service is used as a message bus: services and applications can throw and listen for events. This service is used heavily by Gestalt’s change management, policy, and lambda services. The only adapter currently supported is Kafka; the service provides a REST interface and standardized message semantics, and includes a security filter for Kafka.
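The “standardized message semantics” can be pictured as a common envelope wrapped around every event before it reaches the underlying transport. The field names below are assumptions for illustration only, not the actual Gestalt event schema.

```python
import json
import time

def make_event(topic: str, payload: dict) -> str:
    """Wrap a payload in a uniform JSON envelope before it hits the bus.

    `topic`, `timestamp`, and `payload` are hypothetical field names
    chosen for this sketch."""
    envelope = {
        "topic": topic,
        "timestamp": time.time(),
        "payload": payload,
    }
    return json.dumps(envelope)

msg = make_event("app.deployed", {"app": "billing", "version": "1.2"})
```

With a fixed envelope, listeners can route and filter on the metadata without knowing each producer’s payload format.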


Meta Layer


The meta layer acts as the central point for configuration and management of all Gestalt services, the integration layer, and applications managed by the Gestalt Framework.

The name “Meta” is derived from the service being used to create meta-models of a variety of domains and then apply policy and configuration to them.


The Meta layer acts as both a graph and a hierarchical data structure at the same time. Every resource in meta can contain links to other resources.

The high-level meta objects are:

- Organizations
- Workspaces
- Environments

Typically, resources in meta are modeled in a hierarchy under organizations. This is pragmatic, as it mirrors how most companies and organizations actually function. Organizations can contain sub-organizations nested as deeply as necessary.

Configuration and policy can be specified at the root organization and are inherited by all sub-organizations. There is a type of container called a “workspace”, which must be owned by an org. Workspaces are essentially containers for applications: many times several applications must be combined to form a working platform, which is why we use the term workspace instead of application. The “application” namespace is reserved for the description of an application and its design-time information.

The workspace container holds “environments”, which are a subtype of workspace used to specify constraints. Environments contain application blueprints as well as running applications and their infrastructure.
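The inheritance rule described above, where configuration set on an organization flows down to workspaces and environments unless overridden, can be sketched as a walk up the hierarchy. This is a hypothetical simplification of Meta’s behavior, and the names used are illustrative only.

```python
class Resource:
    """A node in the Org -> Workspace -> Environment hierarchy.

    Hypothetical sketch of the inheritance rule, not Meta's API."""

    def __init__(self, name, parent=None, config=None):
        self.name = name
        self.parent = parent
        self.config = config or {}

    def resolve(self, key):
        """Walk up the hierarchy until a value for `key` is found."""
        node = self
        while node is not None:
            if key in node.config:
                return node.config[key]
            node = node.parent
        raise KeyError(key)

root = Resource("acme", config={"policy.max_cpu": 4})
workspace = Resource("payments", parent=root)
env = Resource("prod", parent=workspace, config={"policy.max_cpu": 2})

# The environment overrides the org-level setting;
# the workspace, having no setting of its own, inherits it.
```

The nearest definition on the path to the root wins, which is what lets a root-organization policy apply everywhere until a lower level deliberately overrides it.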

The following diagram shows an example of a 3-node application cluster. Note that configuration can be specified at any level and is inherited downward. Services are exposed by clusters and nodes.

Environments also contain other metadata or links to external Gestalt services. For example, an application using the Gestalt security or transaction services would have links to them at the environment level. This gives the end user the ability to browse their applications and see all the users/transactions/infrastructure/services those applications consume or expose in a particular environment.

Obtaining Gestalt

The Gestalt Framework is spread across three locations:

NOTE: Our public repos WILL NOT be published until June 29, 2016.

Public Github Repo -

Private Github Repo -

Artifactory contains the compiled binaries and Docker images. The public repo contains the source for the Apache 2-licensed components. The private GitHub repo contains the source for the commercial components and is also the repository used for all work on the framework.