
Distributed systems and applications are very popular these days because many applications no longer fit on a single computer. There can be various reasons, such as the application generating a humongous amount of data or having tons of users worldwide, and these factors keep increasing day by day. At times, the hardware fails to meet an application's growing demands for CPU, memory, and disk space. That is where datacenters and the cloud come to the rescue.


Mesos is an open-source project that helps manage clusters by providing efficient resource allocation and isolation across distributed systems or applications.

A distributed system is typically a model where all the components are distributed over networked systems, interact with each other by passing messages, and work toward achieving a common goal. To enhance this architecture, a coordinator is introduced that stays in sync with a number of workers and coordinates their execution.

The coordinators do the following tasks:

  • They’re responsible for running multiple processes and tasks simultaneously.

  • They make the system fault-tolerant by handling process failures.

  • They optimize the overall performance of the system.

Mesos orchestrates your computer cluster and shares resources efficiently across applications and frameworks. In Mesos terms, the distributed application is a framework, the coordinators are called schedulers, and the workers are executors that run tasks.

Where does Mesos come into the picture?

Mesos provides a layer of abstraction away from individual machines and toward resources. In the diagram shown below, the scheduler is in sync with ‘n’ executors. Mesos introduces a level of indirection: rather than the scheduler communicating with each of the executors, it just communicates with Mesos, and Mesos manages all the machines under the cover.


From a programmer’s perspective, every distributed application shares common functionality: task distribution, task monitoring, task cleanup, failure detection, task startup, task killing, etc. Mesos exposes all of this functionality as an API that programmers can leverage in their distributed applications. Mesos also makes it easy to run multiple distributed systems on a single cluster by sharing resources dynamically and efficiently.

From a SysAdmin’s perspective, Mesos sits between IaaS and PaaS. Unlike IaaS, Mesos isn’t really about provisioning and managing instances; neither is it like PaaS where we deploy and manage applications and services. Developers build and run frameworks with resource layers abstracted. One can run Mesos on AWS EC2, Azure VM, or OpenStack, and also build a PaaS on top of Mesos to leverage Mesos API.


Mesos comprises a master daemon that manages agent daemons running on each node of the cluster, and Mesos frameworks that run tasks on those agents. The master is responsible for fine-grained resource sharing across frameworks by making them resource offers. Each resource offer consists of a list of parameters of the form:

<agentID, resource1: amount1, resource2: amount2, ...>

A framework running on top of Mesos comprises a scheduler and executors. The scheduler registers itself with the Mesos master, and an executor is a process launched on agent nodes to run the framework’s tasks. The master decides how many resources are offered to each framework, whereas the framework’s scheduler decides which of the offered resources to use.

Once the scheduler accepts offered resources, it sends Mesos a description of the tasks it wants to run on them. Mesos, in turn, launches the tasks on the corresponding agents.

A framework can be thought of as an application written directly against the Mesos APIs; it connects to one of the Mesos masters and can schedule and execute tasks on resources provided by the master. A slave hosts the executors that run the different types of tasks.

The programming model that Mesos uses is very low level: unreliable, one-way message passing. Calls are messages that the scheduler sends to the Mesos master, and events are messages that the Mesos master generates and sends to the scheduler.

The calls that scheduler makes are as follows:

  • Lifecycle Management Calls: REGISTER, REREGISTER, UNREGISTER

  • Resource Allocation: REQUEST, DECLINE, REVIVE


The events that scheduler can receive are:

  • Lifecycle Management: REGISTERED, REREGISTERED

  • Resource Allocation: OFFERS, RESCIND

  • Task Management: UPDATE

Let’s walk through the timeline of these events to understand the flow:


The diagram above shows the event flow between the scheduler and Mesos (master).

  1. Whenever a scheduler wants to run some task on Mesos, it sends a REGISTER request to the Mesos master. The Mesos master interprets the request and, after authentication, sends a REGISTERED response to the scheduler.

  2. After registration, the scheduler can make a resource REQUEST to the Mesos master if it has any specific needs. If there are no specific requests from the scheduler, it will simply start receiving OFFERS from the Mesos master.

  3. Once resources are allocated, the scheduler can decide what tasks to run, so it sends a LAUNCH request to the master. The master then propagates that information to the slave nodes.

  4. The Mesos master sends an UPDATE about each task to the scheduler. This update conveys whether the task is RUNNING, FAILED, or LOST.

  5. OFFER and UPDATE are continuous events: the Mesos master keeps sending resource OFFERS to the scheduler and UPDATES for the tasks that ran.


REREGISTER calls are triggered in failover situations. There is a possibility that the scheduler’s communication with the Mesos master fails. In such a situation, another framework scheduler can take over, reconnect to the Mesos master with the same framework ID, get authenticated, and resume the responsibilities of the previous scheduler. A role in Mesos refers to a service, group, or framework that consumes resources from the pool. By default, the Mesos master uses the weighted Dominant Resource Fairness (wDRF) algorithm to allocate the requested resources. Every role has a default weight of 1; weights can be adjusted to customize the resource allocation process.
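As a sketch only, weighting could look like the following master invocation; the --weights flag and the role names are assumptions to illustrate the idea, not taken from this guide, so check your Mesos version's master flags:

```
mesos-master --zk=zk://localhost:2181/mesos --quorum=1 --weights='analytics=2,batch=1'
```

Here the hypothetical analytics role would receive twice the dominant-resource share of the batch role when both are competing for the pool.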


The scheduler starts getting resource offers from the Mesos master once it registers with it. If the scheduler has specific resource requirements, it can REQUEST them from the Mesos master. Resources are allocated through offers, allowing the framework to pick and choose which resources to run tasks on. Scalar, ranges, and sets are the three types of resources a Mesos agent can offer. A single offer only contains resources from a single slave. Also, the resources associated with an offer will not be re-offered to a particular framework until the framework has rejected those resources or the resources have been rescinded.


A LAUNCH event launches tasks once an offer is accepted by the scheduler. Resources are concurrently offered to multiple frameworks at a given time, and the first framework to launch tasks using those resources gets to use them. A framework that attempts to launch tasks using an invalid resource offer will get a TASK_LOST update for those tasks.


A task is a process that we run on a slave. Tasks can be run in one of two ways:

  1. Task with a command
  2. Task with an executor

A task with a command executes some shell command. One can run multiple tasks with different commands on the same slave.

A task with an executor is one where an executor gets launched and the task is passed to it. It is up to the executor to decide how that task is going to run. Multiple tasks can run on the same executor; these tasks can be shell commands, threads, etc.

Containerization comes into the picture at the task level. A container can hold just the tasks, or tasks and executors together. As we pass multiple tasks to an executor, we can dynamically change the resource constraints on that particular container, so containers can be scaled in and out based on the number of tasks they are running.

We shall now look at installing and configuring Mesos on an Ubuntu server. Zookeeper is also required here, as it allows all our hosts to connect to the correct master servers. Zookeeper is a distributed hierarchical key-value store for maintaining configuration information, a naming registry, and synchronization services. Zookeeper nodes store their data in hierarchical namespaces, much like a tree data structure. In Mesos, Zookeeper is used to coordinate the master nodes.

Installing and configuring Mesos

The only thing required to carry out this guide is an Ubuntu server running on the platform of your choice, be it AWS, Azure, or VirtualBox locally. For this lab, I shall be using Ubuntu Server 16.04 on AWS to install and configure Mesos.

For folks who want to use alternative options, follow the instructions given here to create a VM in VirtualBox, using an Ubuntu image instead of the CentOS one given there. To create a VM in Azure, follow the instructions given in this guide. Ports 22, 2181, and 5050 need to be open, whether on Azure, AWS, or any other platform.

Mesos is generally installed along with Marathon, which is a container orchestration tool for Mesos and DC/OS. Marathon is designed to launch long-running applications and can run almost any process that can be launched by a regular shell. It ensures the launched process keeps running even if the slave node it runs on fails.

Provision EC2 instance

1. Log in to AWS Console and navigate to EC2.

2. Click Launch Instance and select the Ubuntu Server 16.04 LTS AMI (free tier eligible) as shown below:


3. For Instance type, select t2.micro and proceed to Configure Instance Details. Select the default VPC and subnet. If you have a custom VPC, you can select it, but make sure Auto-assign Public IP is enabled for this lab. Leave the other settings as default and click Add Storage.

4. We will leave the storage options as default: an 8 GB General Purpose SSD as the root volume, mounted at /dev/sda1.


5. The next step is to add tags to the EC2 instance. Tags are key-value pairs that identify our instance. Ideally, one should add two tags: Name and Environment. You can follow your own convention here.


6. Create a security group as shown below. Ports 22, 2181, and 5050 need to be open. If you’re planning to install Marathon, also open port 8080 for the Marathon UI.


7. Click Review and Launch. Review and verify the details, then click the Launch button. Select a key pair if you have one or create a new one. Download the key pair and click Launch.

8. Once your instance is up and running, SSH into it using its public IP address and the downloaded key pair. If you’re using Windows, generate a .ppk file from the downloaded .pem file using PuTTYgen.


9. From a Linux machine, you can SSH directly using the .pem file and the IP address of the Mesos instance with the following command:

ssh -i /location/of/pem ubuntu@MESOS-IP

NOTE: Make sure the .pem file has read-only permissions; run chmod 400 on it before SSH. If you intend to write to the file, use chmod 600.
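The permission fix can be sketched as follows; the key file here is a throwaway placeholder created just for illustration, so substitute your real .pem path:

```shell
# Create a placeholder file standing in for the downloaded .pem key.
PEM=$(mktemp)
# Owner read-only, as SSH requires for private key files.
chmod 400 "$PEM"
stat -c '%a' "$PEM"   # prints 400
# With the real key: ssh -i "$PEM" ubuntu@MESOS-IP
```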

10. From a Windows machine, generate the .ppk file and use PuTTY to SSH into the Mesos instance. Enter the public IP in Host Name, go to SSH > Auth, select the .ppk file there, and click Open. Log in as ubuntu.


Installing Java 8 and Mesos

1. Once logged in, install Oracle Java JDK 8 (Java 8). Use the following commands to install:

[root@ip-10-102-5-183 ~]# sudo add-apt-repository ppa:webupd8team/java
[root@ip-10-102-5-183 ~]# sudo apt-get update
[root@ip-10-102-5-183 ~]# sudo apt-get install oracle-java8-installer

2. Execute the following command as the root user to add the OpenPGP key for the Mesos packages from Ubuntu’s keyserver. If the first command fails, try the second one.

[root@ip-10-102-5-183 ~]# sudo apt-key adv --keyserver keyserver.ubuntu.com --recv E56151BF
[root@ip-10-102-5-183 ~]# sudo apt-key adv --keyserver keyserver.ubuntu.com --recv DF7D54CBE56151BF

3. The next step is to add the Mesos repository by forming the correct URL for our Ubuntu release:

[root@ip-10-102-5-183 ~]# echo "deb http://repos.mesosphere.com/ubuntu xenial main" >> /etc/apt/sources.list.d/mesosphere.list

Since we are using Ubuntu 16.04, its codename is xenial. If you’re using a different version, check the codename using the following command:

[root@ip-10-102-5-183 ~]# lsb_release -cs
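Step 3 can be made release-independent by composing the repository line from the codename; repos.mesosphere.com is the historical Mesosphere package repository and should be verified against current documentation:

```shell
# Derive the release codename; fall back to xenial for illustration if lsb_release is absent.
CODENAME=$(lsb_release -cs 2>/dev/null || echo xenial)
REPO_LINE="deb http://repos.mesosphere.com/ubuntu ${CODENAME} main"
echo "$REPO_LINE"
# On the real host: echo "$REPO_LINE" | sudo tee /etc/apt/sources.list.d/mesosphere.list
```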

4. Execute apt-get update to update the package indexes.

5. Now we will install Mesos. The Zookeeper binaries bundled with Mesos will also get installed. Execute the following command:

[root@ip-10-102-5-182 ~]# apt-get -y install mesos


Before we start mesos-master and mesos-slave, we need to configure and verify a few things. Run the hostname -f command and note the hostname of the instance.

Configuring Mesos

1. Navigate to the /etc/zookeeper/conf directory and edit the zoo.cfg file as shown below. Uncomment server.1 and replace zookeeper1 with your hostname. Since we are using a single machine, localhost will also work. If you want to configure a clustered environment, have the other Zookeeper nodes in place and uncomment those lines, updating the hostnames.
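For reference, the relevant part of zoo.cfg in a three-node setup would look something like this; the hostnames are examples, and 2888/3888 are Zookeeper's standard peer-communication and leader-election ports:

```
server.1=zookeeper1:2888:3888
server.2=zookeeper2:2888:3888
server.3=zookeeper3:2888:3888
```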


2. Edit the myid file inside the /etc/zookeeper/conf directory, putting the integer 1 as its value. This integer is the server ID of each Zookeeper instance in the cluster; it must be unique within the cluster and ranges from 1 to 255:

[root@ip-10-102-5-183 ~]# echo -n "1" > myid

3. Navigate to the /etc/mesos directory and edit the zk file to point to the Zookeeper instance. As it is a single machine, localhost would work fine, but we will still provide the instance hostname. Do the same on every mesos-master in the cluster (if any). Internal IPs can also be used instead of full hostnames. In the case of multiple Zookeepers, this property lists all of them.
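With multiple Zookeepers, the /etc/mesos/zk file lists all of them in a single URL; the hostnames below are examples:

```
zk://zookeeper1:2181,zookeeper2:2181,zookeeper3:2181/mesos
```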



4. Switch to the /etc/mesos-master directory and edit the quorum file with an integer. A replicated group of servers in the same application is called a quorum, and in replicated mode, all servers in the quorum have the same configuration.

Note: The quorum should be set so that more than 50 percent of the master members must be present to make decisions; this number should be greater than the number of masters divided by two. Since we are running a single node, we set it to 1.
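The rule of thumb above can be checked with a quick calculation; for example, with three masters:

```shell
MASTERS=3
# Quorum is the smallest integer strictly greater than half the masters.
QUORUM=$(( MASTERS / 2 + 1 ))
echo "$QUORUM"   # prints 2
```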

5. Next, we want to configure the hostname and IP address of our mesos-master. Create two files, ip and hostname, inside the /etc/mesos-master directory with the appropriate values.

Note: If Mesos is deployed in a private subnet, use the private IP in both the ip and hostname files. If it is deployed in a public subnet for a POC, use the public IP in hostname and the private IP in ip. The public IP will allow you to access the sandbox and view logs.
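Step 5 can be sketched as follows, writing into a scratch directory for illustration; on the real host the directory is /etc/mesos-master, and the addresses below are example values to be replaced with your instance's:

```shell
# Scratch directory standing in for /etc/mesos-master.
CONF_DIR=$(mktemp -d)
echo "10.102.5.183" > "$CONF_DIR/ip"         # private IP (example value)
echo "54.210.10.10" > "$CONF_DIR/hostname"   # public IP for a public-subnet POC (example value)
cat "$CONF_DIR/ip" "$CONF_DIR/hostname"
```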

Starting Zookeeper, mesos-master, and mesos-slave

1. All the configurations are now in place, so we can start mesos-master and mesos-slave. Use the following commands to start the required services:

[root@ip-10-102-5-183 ~]# service zookeeper start
[root@ip-10-102-5-183 ~]# service mesos-master start
[root@ip-10-102-5-183 ~]# service mesos-slave start

2. Once all the services are started, make sure they’re running without any errors. The output on the shell should look like the following:




3. Once all the services are up and running, open your browser and browse to PUBLIC_IP:5050 to view the Mesos UI.


Until there are frameworks created/running on the agent, nothing will show up in the Mesos UI. We can run a simple shell command using mesos-execute as shown below:

[root@ip-10-102-5-183 ~]# mesos-execute --master=<HOSTNAME>:5050 --name="echo-test" --command="echo 'Hello, World'"

Let’s create a simple HelloWorld.jar that we will give to mesos-master to run on slave.

Creating a simple HelloWorld JAR

1. Create a directory named demo at any accessible location, preferably in the home directory:

[ubuntu@ip-10-102-5-183 ~]# cd
[ubuntu@ip-10-102-5-183 ~]# mkdir demo

2. Create a file named HelloWorld.java inside the demo directory using any editor of your choice (vi, vim, or nano), paste the following contents, then save and exit:

[ubuntu@ip-10-102-5-183 ~]# vim HelloWorld.java
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, World!");
        System.out.println("This will run on mesos-slave");
    }
}

3. Run the following command to compile the Java file, which will create a class file:

[ubuntu@ip-10-102-5-183 ~]# javac HelloWorld.java

4. Create a MANIFEST.MF file with the following contents:

Manifest-Version: 1.0
Main-Class: HelloWorld

5. Run the following command to create an executable jar file from the class file and MANIFEST.MF:

[ubuntu@ip-10-102-5-183 ~]# jar cmf MANIFEST.MF HelloWorld.jar HelloWorld.class

6. Verify that the generated jar runs locally with java -jar HelloWorld.jar.

Now we will use mesos-execute to run this jar on the mesos-slave. Execute the following command:

[root@ip-10-102-5-183 ~]# mesos-execute --master=<HOSTNAME>:5050 --name="running-jar" --command="java -jar /location/of/jar/HelloWorld.jar"

The output of the execution is shown below. Status 0 indicates that the task executed successfully, and status 1 indicates that the task failed.


Now if you go to Mesos UI > Frameworks in the browser, you can see the following details:


Click on any of the IDs from the list to view more details about the task.



So this is how we can install, configure, and run tasks on Mesos. In the next guide, we will leverage Marathon along with Mesos to orchestrate Docker containers.
