From a deployment perspective, the planning of the edge cloud focuses on achieving high availability, partition tolerance, eventual consistency, load balancing, and replication.

1st Architecture: Centralised Control Plane Scenario

Figure: Centralized Control MVP Architecture [1]

In the first stage of deployment, we focus on the "easiest and fastest" deployment mechanism that still satisfies the MVP (minimum viable product) reference architecture. This will be the reference point for trying out and investigating different deployment mechanisms.

Figure: Regional Datacenter, Centralized [1]

In the second stage, we modify the Centralised Control Plane edge cloud by adding Large/Medium Edges between the centralised DC and the Small Edge. A deployment of this type enjoys simplified management of compute resources across regions. From the deployment perspective, we can investigate high availability, load balancing, and replication. These tests are described in detail below.

2nd Architecture: Distributed Control plane Scenario

The Centralised Control Plane scenario comes at the cost of losing the ability to manage instances in edge or far-edge cloudlets during a network partition between the regional data center and the edge. This can cost the cloud its partition tolerance, reduce its availability, and leave the central datacenter highly loaded.

A deployment of this type benefits from greater autonomy in the event of a network partition between the edge site and the main datacenter. A distributed control plane can handle complex tasks such as traffic engineering, enable load balancing, and keep the cloud both partition-tolerant and available.

Figure: Regional Datacenter, Distributed Control [1]


Case tests for each architecture

For each architecture, we test and investigate different options for cloud setup and deployment.

Case 1

== Kubernetes on OpenStack  ==

Experiment with various ways of deploying OpenStack and configuring its services. We experiment with bringing containers and Kubernetes on top of OpenStack, so that users have the freedom to choose the best cloud environment for any given application, or part of an application, while still enjoying scalability, control, and security.
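
A first concrete sketch for this case, assuming an OpenStack cloud that runs the Magnum container-infrastructure service and already has a cluster template registered (the template name, cluster name, and node count below are illustrative): a Kubernetes cluster on top of OpenStack can then be provisioned from Python by driving the standard OpenStack CLI.

    # Sketch: provision Kubernetes on top of OpenStack via Magnum.
    # Assumes the `openstack` CLI with the Magnum (coe) plugin is installed
    # and OS_* credentials are exported; all names are illustrative.
    import subprocess

    def create_k8s_cluster(name, template="k8s-template", nodes=3):
        subprocess.run(
            ["openstack", "coe", "cluster", "create", name,
             "--cluster-template", template,
             "--node-count", str(nodes)],
            check=True,  # raise if the CLI reports a failure
        )

    if __name__ == "__main__":
        create_k8s_cluster("edge-k8s-test")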

Case 2

== OpenStack on Kubernetes ==

In this case we experiment with self-healing infrastructure: we check whether OpenStack becomes more resilient to the failure of core services and individual compute nodes than under the previous deployment mechanism. We also check performance, based on the resource efficiencies that come with a container-based infrastructure.
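
A minimal sketch of the self-healing experiment, assuming a containerised Keystone image (image name and namespace below are hypothetical): declaring the service as a Kubernetes Deployment via the official Python client means failed pods are recreated automatically, which is exactly the resilience property we want to measure.

    # Sketch: run an OpenStack service (Keystone) as a self-healing
    # Kubernetes Deployment. Image and namespace are hypothetical.
    from kubernetes import client, config

    config.load_kube_config()  # use the current kubeconfig context

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="keystone"),
        spec=client.V1DeploymentSpec(
            replicas=3,  # pods that die are recreated on any healthy node
            selector=client.V1LabelSelector(match_labels={"app": "keystone"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "keystone"}),
                spec=client.V1PodSpec(containers=[client.V1Container(
                    name="keystone",
                    image="registry.example/keystone:latest",  # hypothetical
                    ports=[client.V1ContainerPort(container_port=5000)],
                    # a liveness probe lets Kubernetes restart a hung service
                    liveness_probe=client.V1Probe(
                        http_get=client.V1HTTPGetAction(path="/v3", port=5000),
                        initial_delay_seconds=30),
                )]),
            ),
        ),
    )

    client.AppsV1Api().create_namespaced_deployment(
        namespace="openstack", body=deployment)

Killing a pod (or draining a node) and timing how long the API stays unreachable gives a direct resilience measurement to compare against Case 1.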

In this case, we experiment in particular with the Rolling Update mechanism that Kubernetes offers to OpenStack. We then recheck performance under the rolling upgrade mechanism and see whether we can ensure continuous delivery with zero downtime.
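
Continuing the hypothetical Keystone Deployment above, a sketch of that check: patching the pod template triggers Kubernetes' RollingUpdate strategy, and maxUnavailable=0 encodes the zero-downtime requirement we want to verify.

    # Sketch: zero-downtime rolling upgrade of the Deployment above.
    # The new image tag is hypothetical.
    from kubernetes import client, config

    config.load_kube_config()

    patch = {"spec": {
        # never take a replica down before its replacement is ready
        "strategy": {"type": "RollingUpdate",
                     "rollingUpdate": {"maxUnavailable": 0, "maxSurge": 1}},
        "template": {"spec": {"containers": [
            {"name": "keystone",
             "image": "registry.example/keystone:upgraded"}]}},
    }}

    client.AppsV1Api().patch_namespaced_deployment(
        name="keystone", namespace="openstack", body=patch)

Polling the service endpoint during the patch shows whether any requests are dropped while replicas are swapped out.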

During Testing of these cases, we also:

• Investigate load balancers to reduce latency (see the HAProxy sketch after this list):

o on OpenStack, using HAProxy: https://en.wikipedia.org/wiki/HAProxy
o on Kubernetes clusters, using cluster federation
o More specifically, we will work on Istio, canary deployments, Kubernetes autoscaling, service discovery, high-availability setup in Kubernetes, deployment of MySQL on Kubernetes with Helm, and deployment of the Spinnaker continuous delivery platform.

• Investigate providing high availability and replication between multiple database engine instances: cluster federation using Kubernetes, a Galera cluster for the database, a front end such as Keepalived or HAProxy, and a virtual IP using VRRP.
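
As a starting point for the HAProxy item above, a minimal sketch that renders an haproxy.cfg fragment balancing one OpenStack API across the controllers (all addresses, including the virtual IP, are illustrative); in the HA variant, Keepalived would float the VIP between two such HAProxy nodes using VRRP.

    # Sketch: render an haproxy.cfg fragment that load-balances the
    # Keystone API across controllers. Addresses/VIP are illustrative;
    # Keepalived/VRRP would move the VIP between redundant balancers.
    CONTROLLERS = ["192.0.2.11", "192.0.2.12", "192.0.2.13"]

    def haproxy_listen(service, port, servers, vip="192.0.2.10"):
        lines = [f"listen {service}",
                 f"    bind {vip}:{port}",
                 "    balance roundrobin",
                 "    option httpchk"]
        lines += [f"    server {service}-{i} {ip}:{port} check"
                  for i, ip in enumerate(servers)]
        return "\n".join(lines)

    print(haproxy_listen("keystone_api", 5000, CONTROLLERS))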

• Investigate different database engines, such as RDBMSs (availability and consistency) and Dynamo-style stores (availability and partition tolerance).

• Test cloud performance and benchmark using OpenStack Rally and Yardstick (see the Rally task sketch after this list). Tests include:

o Latency and packet delay
o Scaling performance
o How different deployments affect the OS performance
o Performance of basic cloud operations
o Availability, robustness, and security
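
A sketch of how these benchmarks can be driven: generating a Rally task file for the stock NovaServers.boot_and_delete_server scenario and launching it from Python (flavor and image names are illustrative).

    # Sketch: generate and start a Rally task that benchmarks a basic
    # cloud operation (boot then delete a server). Names are illustrative.
    import json
    import subprocess

    task = {
        "NovaServers.boot_and_delete_server": [{
            "args": {"flavor": {"name": "m1.small"},
                     "image": {"name": "cirros"}},
            "runner": {"type": "constant", "times": 20, "concurrency": 4},
        }]
    }

    with open("boot_and_delete.json", "w") as f:
        json.dump(task, f, indent=2)

    # standard Rally CLI entry point; reports latency per iteration
    subprocess.run(["rally", "task", "start", "boot_and_delete.json"],
                   check=True)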

• Investigate the issue that containers offer little flexibility in the choice of operating system: running containers on different operating systems such as Windows.

o What is the performance drop caused by running a minimal Linux core and deploying the containers on top of that?
o Compare this with the Windows Containers introduced in Windows 10 and Windows Server 2016.


If time permits, we also:

• Investigate different container technologies and compare characteristics such as the tradeoff between performance and security (see the runtime sketch after this list):

o Linux containers (LXC): Linux admins can use Linux containers to create a layer of separation between the operating system kernel and the application layer.
o Docker containers: a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.
o Kata containers: lightweight Virtual Machines (VMs) that feel and perform like containers, but provide the workload isolation and security advantages of VMs.
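
One way to run the same workload under different isolation levels on a single Kubernetes cluster is a RuntimeClass. A minimal sketch, assuming the cluster's container runtime already has a Kata handler named "kata" configured (the benchmark image and command are illustrative):

    # Sketch: schedule a benchmark pod under the Kata runtime for an
    # isolation/performance comparison. Assumes a RuntimeClass whose
    # handler is "kata" exists on the cluster.
    from kubernetes import client, config

    config.load_kube_config()

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="bench-kata"),
        spec=client.V1PodSpec(
            runtime_class_name="kata",  # omit for the default runtime
            restart_policy="Never",
            containers=[client.V1Container(
                name="bench",
                image="alpine:3.8",  # example benchmark image
                command=["sh", "-c",
                         "time dd if=/dev/zero of=/tmp/x bs=1M count=256"],
            )],
        ),
    )

    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)

Running the same pod with and without the runtime_class_name line gives a like-for-like comparison between ordinary containers and Kata's VM-backed isolation.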

• Investigate a high-availability resource manager such as Linux Pacemaker. See: https://docs.openstack.org/ha-guide

• Experiment with OpenStack's live migration feature (see the sketch after this list):

o Preconditions
o Constraints
o Capabilities
o Evaluate the necessity of this when using Kubernetes.
o Encrypted live migration (https://docs.openstack.org/security-guide/instance-management/security-services-for-instances.html#trusted-images/)
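
A minimal sketch of driving the live migration experiment with openstacksdk (the cloud entry, server name, and storage assumption are illustrative; block_migration=True applies when there is no shared storage):

    # Sketch: live-migrate a Nova instance and time the migration.
    # Cloud entry and server name are illustrative.
    import time
    import openstack

    conn = openstack.connect(cloud="edge")        # clouds.yaml entry
    server = conn.compute.find_server("test-vm")  # instance under test

    start = time.time()
    conn.compute.live_migrate_server(
        server,
        host=None,             # let the scheduler pick the target host
        block_migration=True,  # no shared storage in this sketch
    )
    server = conn.compute.wait_for_server(server, status="ACTIVE", wait=600)
    print(f"migration took {time.time() - start:.1f}s, "
          f"now on host {server.compute_host}")

Repeating this while a client pings the instance makes the downtime window, and hence the necessity of the feature alongside Kubernetes, directly measurable.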

References

[1] https://wiki.openstack.org/wiki/Edge_Computing_Group/Edge_Reference_Architectures

