Introduction

In the previous guide, we briefly covered Mesos: its architecture, its different components, and its installation and configuration on a single node. In this guide, we will briefly cover Marathon and how we can leverage it along with Mesos to orchestrate containers. For this guide, we will run the Mesos Master and Mesos Slaves on different servers, with Marathon installed on the Mesos Master.

Marathon

Marathon is a container orchestration tool for Mesos and DC/OS. It is designed to launch long-running applications and can run almost any process that can be launched by a regular shell. Marathon will ensure that the launched process keeps running even if the slave node on which the process is running fails. The default port for Marathon is 8080.

Some of the key features of Marathon include high availability, service discovery and load balancing, TCP and HTTP health checks for applications, and the ability to bind persistent storage to an application. It runs as an active/passive cluster with leader election to maximize uptime, and it exposes a REST API that helps with scripting and integration.

Mesosphere

Mesosphere is a popular software solution that expands the cluster management capabilities of Mesos by combining it with additional tools such as Marathon and Chronos. It helps applications scale easily by abstracting away many of the challenges associated with scaling. Features of Mesosphere include application scaling and scheduling, fault tolerance and self-healing, application service discovery, and endpoint elasticity.

As discussed in the previous guide, the Mesos architecture comprises master and slave daemons and frameworks. ZooKeeper coordinates the Mesos master nodes. Marathon serves as a replacement for traditional init systems and can be used to start other Mesos frameworks. Chronos, initially developed by Airbnb, is a replacement for cron; it is a distributed and fault-tolerant scheduler that runs on top of Mesos and can be used for job orchestration.

Prerequisites

This guide is the advanced version of the previous one, so we will need multiple Ubuntu servers running on the platform of your choice, be it AWS, Azure, or VirtualBox locally. For this guide, I will be using Ubuntu Server 16.04 on AWS to install and configure the Mesos Master and Marathon on one server and the Mesos slaves on the others.

For folks who want to use alternate options, follow the instructions given here to create a VM in VirtualBox; use an Ubuntu image to create the VM instead of the CentOS image given there. To create a VM in Azure, follow the instructions provided in this guide. The following ports need to be open on Azure, AWS, or any other platform: 22, 2181, 5050, and 8080 (for the Marathon UI).

As discussed above, we will install Marathon along with Mesos by completing the following:

  1. Provision three EC2 instances, of which one will be the Mesos-master and the other two will be Mesos-slaves. Follow this link to set up Mesos on the three boxes.
  2. Once the instances are up and running, SSH into the Mesos-master instance using .pem (or .ppk) file and install Marathon.
  3. Once done, we will point the Mesos-slaves to the Mesos-master server.
  4. We will then install Docker on Mesos-slaves to be able to create containers using Marathon.
  5. Once everything is done, we will look at creating Docker containers from Mesos UI as well as through command line by providing JSON application configuration.
  6. We will even look at how we can scale the application.
Hands-on with Mesos and Marathon
Configuring Mesos-master

1. Navigate to the /etc/zookeeper/conf directory and edit the zoo.cfg file. Uncomment server.1 and replace zookeeper1 with your hostname. As we are using a single ZooKeeper machine, localhost will also work. If you want to configure a clustered environment, have the other ZooKeeper nodes in place and uncomment those lines as well, updating the hostnames. (A sample snippet is shown after step 2.)

2. Edit the myid file inside the /etc/zookeeper/conf directory and put the integer 1 as its value. This integer is the server ID for each ZooKeeper instance inside the cluster; it must be unique within the ZooKeeper cluster and ranges from 1 to 255.

[root@ip-172-31-0-62 ~]# echo -n "1" > /etc/zookeeper/conf/myid

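For reference, the relevant zoo.cfg lines from step 1 might look like the following. This is a minimal sketch assuming the master's internal hostname (which appears again later in this guide) and ZooKeeper's default peer ports; in a clustered setup, server.2 and server.3 would be uncommented with their respective hostnames:

# excerpt from /etc/zookeeper/conf/zoo.cfg
server.1=ip-172-31-0-62.ap-south-1.compute.internal:2888:3888
#server.2=zookeeper2:2888:3888
#server.3=zookeeper3:2888:3888
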
3. Navigate to the /etc/mesos directory and edit the zk file to point to the ZooKeeper instance. As it is a single machine, localhost will work fine, but we will still provide the instance hostname. Do the same for all Mesos-masters in the cluster (if any). The internal IP can also be used instead of the full hostname. In case of multiple ZooKeepers, this property can be set as:

zk://172.31.0.62:2181,172.31.0.63:2181,172.31.0.64:2181/mesos

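For the single-master setup in this guide, the zk file contains just one entry. For example, using the master's internal hostname that also appears later in this guide:

zk://ip-172-31-0-62.ap-south-1.compute.internal:2181/mesos
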
4. Switch to the /etc/mesos-master directory and edit the quorum file with an integer. A replicated group of servers in the same application is called a quorum, and in replicated mode, all servers in the quorum have the same configuration.


Note: The quorum file should be set such that more than 50 percent of the masters must be present to make decisions. Ideally, this number should be greater than the number of masters divided by two. Since we are running a single master node, we set it to 1.
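
For our single-master setup, the quorum file can be populated like this (a minimal example based on the note above):

[root@ip-172-31-0-62 ~]# echo 1 > /etc/mesos-master/quorum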

5. Next, we want to configure the hostname and IP address for our Mesos-master. Create two files, ip and hostname, inside the /etc/mesos-master directory with the appropriate values. Use the hostname -f command to view the hostname of the instance.
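
As a minimal sketch, assuming the master's internal IP used elsewhere in this guide:

[root@ip-172-31-0-62 ~]# echo "172.31.0.62" > /etc/mesos-master/ip
[root@ip-172-31-0-62 ~]# hostname -f > /etc/mesos-master/hostname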

6. Install Marathon on this box. To install Marathon, download the tar.gz file and extract it into the /usr/share/marathon directory:

[root@ip-172-31-0-62 ~]# curl -O http://downloads.mesosphere.com/marathon/v1.5.1/marathon-1.5.1.tgz
[root@ip-172-31-0-62 ~]# tar xzf marathon-1.5.1.tgz
[root@ip-172-31-0-62 ~]# mv marathon-1.5.1-* /usr/share/marathon

Starting Zookeeper, Mesos-master and Marathon

1. Now, with all the configurations in place, we can start zookeeper, Mesos-master and Marathon. Use the following commands to start the required services.

service zookeeper start
service mesos-master start

2. To start Marathon, navigate to the /usr/share/marathon/bin directory and execute the following command:

[root@ip-172-31-0-62:/usr/share/marathon/bin]# nohup ./marathon --master zk://ip-172-31-0-62.ap-south-1.compute.internal:2181/mesos --zk zk://ip-172-31-0-62.ap-south-1.compute.internal:2181/marathon >> /var/log/marathon.log 2>&1 &

3. Once all the services are started, make sure they are running without any errors. Invoke jps -l to view the Java processes, i.e., ZooKeeper and Marathon.

Use the following command to check if mesos-master is running without errors:
[root@ip-172-31-0-62 ~]# service mesos-master status


4. Once all the services are up and running, open your browser and browse <PUBLIC_IP>:5050 to view Mesos UI.


5. Browse <PUBLIC_IP>:8080 to view Marathon UI.


Configuring Mesos-slave

1. First, we don't need to run ZooKeeper on the Mesos-slaves. We also have to make sure that the Mesos-slave instances run only the Mesos-slave process, and not the Mesos-master or ZooKeeper processes.

2. Stop ZooKeeper using the following command and disable the service so that it does not start automatically:

[root@ip-172-31-5-46 ~]# service zookeeper stop
[root@ip-172-31-5-46 ~]# systemctl disable zookeeper

3. Similarly, stop any currently running Mesos-master process and disable it so that it does not start automatically on the Mesos-slave instances:

[root@ip-172-31-5-46 ~]# service mesos-master stop
[root@ip-172-31-5-46 ~]# systemctl disable mesos-master

4. We will now point the Mesos-slave to the Mesos-master server. For this, navigate to the /etc/mesos directory and edit the zk file to point to the Mesos-master's ZooKeeper, using the same zk://<master-hostname>:2181/mesos entry that was configured on the master.


5. Make this change on both Mesos-slave instances and start the Mesos-slave using the command service mesos-slave start. Make sure it gets registered with the Mesos-master; in the Mesos UI, we can see both slaves registered and in an Activated state.
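
On each slave, that amounts to the following (the status check is an optional extra added here for verification):

[root@ip-172-31-5-46 ~]# service mesos-slave start
[root@ip-172-31-5-46 ~]# service mesos-slave status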

Installing Docker on Mesos-slaves

As we want to run and orchestrate containers on the Mesos-slaves, Docker should be installed on both of them. Follow the steps given below to install Docker:

1. First, we will add the GPG key for the official Docker repository and then add the Docker repository to the APT sources. Use the following commands:

[root@ip-172-31-5-46 ~]# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
[root@ip-172-31-5-46 ~]# sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

2. Next, we need to run apt-get update so that the package database picks up the Docker packages from the newly added repo.
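
On each Mesos-slave, that is simply:

[root@ip-172-31-5-46 ~]# sudo apt-get update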

3. We will be installing Docker from the official Docker repo and not from Ubuntu's default repo. Execute the following command to install Docker:

[root@ip-172-31-5-46 ~]# sudo apt-get install -y docker-ce

4. Check to make sure docker is running by executing the following command:
[root@ip-172-31-5-46 ~]# systemctl status docker

5. By default, running Docker commands requires root privileges. To run Docker commands as a normal user, we have to add that user to the docker group. To do that, execute the following command. You will have to log out and log back in as that user before the new group membership takes effect and Docker commands can be run:
[root@ip-172-31-5-46 ~]# sudo usermod -aG docker ${USER}

Getting started with containers on Marathon

We can now navigate to the Marathon UI to launch and orchestrate containers. We can launch containers from Docker Hub or from a private repository, whichever is configured on the Docker host. We will first launch a simple NGINX container through the Marathon UI, and then we'll get familiar with the basic Marathon JSON configuration for containers/tasks.

To launch a new container using Marathon:

1. Navigate to Marathon UI, i.e. <PUBLIC_IP>:8080

2. Click “Create Application” and give it a unique ID. Set appropriate CPU, memory, disk space, and number of instances for the container.

3. Click on the Docker Container tab and provide details such as the image name, network type, and parameters (key-value pairs).

4. Now we have to configure the ports for the container, i.e., the ports on which the service will listen inside the container. The container will bind to the requested ports, and they will be dynamically mapped to ports on the host machine.


5. Click on the Health Checks tab to configure the health checks for our container (protocol, path, interval, and so on). Click Create Application to deploy the container.


6. Once the container is deployed, its status and running instances are shown on the Marathon Dashboard.


7. Click on the application to get more insights and options to scale, restart, and destroy an application. Click on Configuration to view the JSON configuration of the running application. Click on the PUBLIC_IP:PORT to view the NGINX homepage.
8. The Scale Application option can be used to scale an application up or down by specifying the number of instances in the dialog box.
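
Scaling can also be driven through the Marathon REST API mentioned earlier. As a sketch, using the Marathon address from above and a placeholder <APP_ID> for the application to scale:

curl -X PUT -H "Content-type: application/json" http://<PUBLIC_IP>:8080/v2/apps/<APP_ID> -d '{ "instances": 3 }'
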
JSON configuration

Each application in Marathon represents a long-running service of which there would be multiple instances running across multiple hosts. Each application instance is referred to as a task. The application definition is a JSON configuration that describes everything required to start and maintain the tasks.

Let’s take a simple example that will print a message to stdout with a sleep time of 5 seconds after each iteration.

{
  "id": "linuxacademy-echo",
  "cmd": "while [ true ] ; do echo 'Hello Folks!' ; sleep 5 ; done",
  "cpus": 0.1,
  "mem": 16.0,
  "instances": 1
}

As the name suggests, id is the application/task ID you want to specify. cmd is the command that gets executed by the Mesos executor via /bin/sh -c ${cmd}. The cpus and mem fields specify the CPU and memory requirements for the task, and instances is the number of instances to run for that particular task.

To use a JSON configuration to create Marathon tasks, click Create Application in the Marathon UI, enable JSON mode from the top-right corner of the dialog box, and paste your JSON there.


Custom Docker container using Marathon

For our purposes, we will launch a custom Docker container from Docker Hub with the help of a JSON application configuration, which we will submit to Marathon from the command line. So let's quickly create a JSON configuration for a task running our container. For this guide, I have already pushed the Docker image I'll be using to Docker Hub. Alternatively, you can create your own Docker image and push it to Docker Hub:

1. To login to Docker Hub, use the following command:

[root@ip-172-31-5-46 ~]# docker login --username=<your-dockerhub-username>

2. Tag your image using the image ID and your docker hub username as given below:

[root@ip-172-31-5-46 ~]# docker tag IMAGE_ID armankoradia/demoapp:v1
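
If you built your own image and tagged it as above, push it to Docker Hub before referencing it in the application JSON (using the tag from step 2):

[root@ip-172-31-5-46 ~]# docker push armankoradia/demoapp:v1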

3. We will use this image to create a container. Create a demoapp.json file with the content as follows:

{
  "id": "linuxacademy-demoapp",
  "cmd": null,
  "cpus": 0.5,
  "mem": 64,
  "disk": 0,
  "instances": 1,
  "container": {
    "docker": {
      "image": "armankoradia/demoapp:v1"
    },
    "type": "DOCKER",
    "portMappings": [
      {
        "containerPort": 80,
        "protocol": "tcp",
        "name": "webapp",
        "labels": null
      }
    ]
  },
  "networks": [
    {
      "mode": "container/bridge"
    }
  ],
  "healthChecks": [
    {
      "protocol": "HTTP",
      "path": "/",
      "portIndex": 0,
      "gracePeriodSeconds": 300,
      "intervalSeconds": 60,
      "timeoutSeconds": 20,
      "maxConsecutiveFailures": 3
    }
  ]
}

If the image is not present on the slave, Marathon will pull the image from the Docker Hub or any provided repository.

4. Execute the following curl command to create an application in Marathon with the above JSON configuration:

[root@ip-172-31-5-46 ~]# curl -XPOST -H "Content-type: application/json" http://MESOS_MASTER-MARATHON-IP:8080/v2/apps -d @demoapp.json


5. After invoking this command on the Mesos-slave machine, navigate to the Marathon Dashboard to view the newly deployed app. Use the PublicIP:PORT shown under the application ID to browse the application.
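
The deployment can also be checked from the command line via the same Marathon REST endpoint, which returns the application's current state as JSON:

[root@ip-172-31-5-46 ~]# curl http://MESOS_MASTER-MARATHON-IP:8080/v2/apps/linuxacademy-demoapp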


6. Navigate to the Mesos Master (UI) to view logs and information about running tasks and task history.
7. Click on Sandbox, located in the right-most column beside the task information, to view and download the ERROR and OUTPUT logs of the executed/running task.
Conclusion

Following these steps, we can orchestrate applications running in Docker containers using Mesos and Marathon. This setup makes it easy to manage and scale containers by providing a user-friendly interface. We can even group our applications based on their use cases and deploy them under a group in Marathon. I hope this gives you a good kick start to managing large-scale container ecosystems using Mesos and Marathon.
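
As an illustration of the grouping mentioned above, a Marathon group is defined with JSON as well and posted to the /v2/groups endpoint. Here is a minimal, hypothetical sketch that reuses the echo example from earlier; the group and app IDs are placeholders:

{
  "id": "/demo-group",
  "apps": [
    {
      "id": "echo-app",
      "cmd": "while [ true ] ; do echo 'Hello Folks!' ; sleep 5 ; done",
      "cpus": 0.1,
      "mem": 16,
      "instances": 1
    }
  ]
}

This can be submitted with the same curl pattern used earlier, pointed at http://MESOS_MASTER-MARATHON-IP:8080/v2/groups instead of /v2/apps.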

Sources / Resources

Learn more about DevOps and Containers from Linux Academy with new hands-on courses that are added every quarter. And this time in July there are 150+ new courses, challenges and learning activities coming up. Stay tuned. Learn more, grow more!
