Container


Abel, a cloud architect, uses container technology to deploy applications/software including all its dependencies, such as libraries and configuration files, binaries, and other resources that run independently from other processes in the cloud environment. For the containerization of applications, he follows the five-tier container technology architecture. Currently, Abel is verifying and validating image contents, signing images, and sending them to the registries.
Which of the following tiers of the container technology architecture is Abel currently working in?

Option 1 : Tier-1 : Developer machines
Option 2 : Tier-4 : Orchestrators
Option 3 : Tier-3 : Registries
Option 4 : Tier-2 : Testing and accreditation systems

Answer: Option 4 : Tier-2 : Testing and accreditation systems

1. Tier-1 : Developer machines

Containerization, or container-based virtualization, is an operating-system-level virtualization method for deploying and running distributed applications without launching a virtual machine for each application. The most popular container implementation, Docker, uses the resource-isolation features of the Linux kernel, such as cgroups and kernel namespaces, together with a union-capable file system such as OverlayFS, to allow independent “containers” to run within a single Linux instance, avoiding the overhead of starting and maintaining virtual machines.
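
The kernel features mentioned above can be sketched directly on a Linux machine. This is an illustrative sketch only: the `unshare` invocation requires Linux with util-linux and suitable privileges, and the image name and resource limits in the `docker run` example are arbitrary assumptions.

```shell
# Sketch: the kernel primitives Docker builds on, shown directly on Linux.
# Start a shell in new UTS and PID namespaces; the hostname change is
# invisible outside the namespace.
sudo unshare --uts --pid --fork sh -c 'hostname ns-demo; hostname'

# cgroups enforce resource limits; Docker exposes them as flags, for example:
docker run --memory=256m --cpus=0.5 alpine echo "resource-limited container"
```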

Technical advantages in Development

  • Isolated environments. Developing with containers is a completely different experience from traditional development. Usually, when you start a new job, you get a step-by-step tutorial for configuring your machine, or a virtual machine image with a pre-configured environment. With Docker, it is simply a matter of installing Docker and pulling/running containers to bring up your development environment.
  • Homogeneous environments. All environments (Development, Testing, Staging, Pre-Production, and Production) are set up in the same way. The whole environment is kept within the container definition.
  • Continuous integration including infrastructure. Any changes in the container definition should trigger a new build and automated testing. Infrastructure is part of the development pipeline.
  • Microservices. Containers facilitate a microservices architectural pattern, since it is easier to develop discrete, separately deployable components. On the other hand, increasing the number of applications also increases maintenance complexity, network latency, and monitoring overhead; the article “Modules vs microservices” describes this operational complexity well.
  • Only one virtual machine required. A developer is often working on two or three different projects that require different configurations or separate VMs. With Docker, the different containers can all run on the same machine or VM.
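
The idea that “the whole environment is kept within the container definition” can be sketched as a minimal Dockerfile. The base image, file names, and entry point below are illustrative assumptions, not a prescribed layout:

```dockerfile
# Illustrative Dockerfile: the entire dev environment lives in the image definition.
FROM python:3.12-slim

WORKDIR /app

# Dependencies are pinned inside the image, so every environment
# (dev, test, staging, production) is built identically.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

CMD ["python", "app.py"]
```

Building and running this (`docker build -t myapp . && docker run myapp`) gives every developer, and every stage of the pipeline, the same environment.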
2. Tier-4 : Orchestrators

Container orchestration software allows developers to deploy multiple containers within applications. These tools help IT administrators automate the process of running instances, provisioning hosts, and linking containers. They assist in optimizing orchestration procedures and extending the lifecycle of applications composed of multiple containers, and they can also facilitate deployment, identify failed container instances, and manage application configurations. Companies use them to increase the scalability and functionality of applications by adding containers and connecting information about repositories and networks. They can also improve container security by setting requirements for accessing containers and keeping components separated from one another. Container management platforms may include orchestration features, but many container orchestration solutions function as a complement to the management platform.

To qualify for inclusion in the Container Orchestration category, a product must:

  • Allow administrators to provision hosts
  • Schedule and automate container deployment
  • Run instances of multiple containers
  • Alert users of failed containers
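
The capabilities listed above can be seen in a Kubernetes Deployment, one common orchestrator format. The names, replica count, and port below are illustrative assumptions:

```yaml
# Illustrative Kubernetes Deployment: the orchestrator schedules and runs
# multiple container instances and replaces failed ones automatically.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # run three instances of the container
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: registry.example.com/web:1.0   # pulled from a registry (Tier-3)
        ports:
        - containerPort: 8080
        livenessProbe:                        # failed containers are detected and restarted
          httpGet:
            path: /healthz
            port: 8080
```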
3. Tier-3 : Registries

A container registry is a repository, or collection of repositories, used to store container images for Kubernetes, DevOps, and container-based application development.

Container images

A container image is a copy of a container: the files and components within it that make up an application. It can then be replicated to scale out quickly, or moved to other systems as needed. Once created, a container image forms a kind of template that can be used to create new apps, or to expand and scale an existing app.

When working with container images, you need somewhere to save and access them as they are created and that’s where a container registry comes in. The registry essentially acts as a place to store container images and share them out via a process of uploading to (pushing) and downloading from (pulling). Once the image is on another system, the original application contained within it can be run on that system as well.
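
The push/pull flow described above can be sketched with the Docker CLI. The registry host and image names are illustrative assumptions:

```shell
# Sketch of the registry push/pull flow; names are assumptions.
# Tag the local image with the registry's address
docker tag myapp:1.0 registry.example.com/team/myapp:1.0

# Upload (push) the image to the registry
docker push registry.example.com/team/myapp:1.0

# On another system, download (pull) and run the same image
docker pull registry.example.com/team/myapp:1.0
docker run registry.example.com/team/myapp:1.0
```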

In addition to container images, registries also store application programming interface (API) paths and access control parameters.

Public registries are great for individuals or small teams that want to get up and running with a registry as quickly as possible. They offer basic capabilities and are easy to use.

New and smaller organizations can take advantage of standard and open source images to start and can grow from there. As they grow, however, security issues such as patching, privacy, and access control can arise.

Private registries provide a way to incorporate security and privacy into enterprise container image storage, hosted either remotely or on premises. A company can create and deploy its own container registry, or choose a commercially supported private registry service.

What to look for in a private container registry

A major advantage of a private container registry is the ability to control who has access to what, scan for vulnerabilities and patch as needed, and require authentication of images as well as users.

Some important things to look for when choosing a private container registry service for your enterprise:

  • Support for multiple authentication systems
  • Role-based access control (RBAC) management
  • Vulnerability scanning capabilities
  • Ability to record usage in auditable logs so that activity can be traced to a single user
  • Optimized for automation

Role-based access control allows the assignment of abilities within the registry based on the user’s role. For instance, a developer would need access to upload to, as well as download from, the registry, while a team member or tester would only need access to download.

For organizations with a user management system such as Active Directory (AD) or LDAP, that system can be linked to the container registry directly and used for RBAC.

A private registry keeps images with vulnerabilities, or those from an unauthorized user, out of a company’s system. Regular scans can be performed to find security issues, which can then be patched as needed.

A private registry also allows authentication measures to be put in place to verify the container images stored on it. With such measures, an image must be digitally “signed” by the person uploading it before it is accepted into the registry. This allows the activity to be tracked, and prevents the upload if the user is not authorized. Images can also be tagged at various stages so they can be reverted to if needed.
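
One common way such image signing is implemented is with a tool like Sigstore’s cosign; the key file names are its defaults, and the image reference is an illustrative assumption:

```shell
# Sketch of an image-signing flow using cosign (one common signing tool).
cosign generate-key-pair                                   # creates cosign.key / cosign.pub

# Sign the image; the signature is stored alongside it in the registry
cosign sign --key cosign.key registry.example.com/team/myapp:1.0

# Consumers verify the signature before running the image
cosign verify --key cosign.pub registry.example.com/team/myapp:1.0
```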

4. Tier-2 : Testing and accreditation systems

Accreditation is the official management decision given by a senior agency official to authorize operation of an information system and to explicitly accept the risk to agency operations (including mission, functions, image, or reputation), agency assets, or individuals, based on the implementation of an agreed-upon set of security controls.

It is also defined as a formal declaration by a designated accrediting authority (DAA) or principal accrediting authority (PAA) that an information system is approved to operate at an acceptable level of risk, based on the implementation of an approved set of technical, managerial, and procedural safeguards. See authorization to operate (ATO). Rationale: the Risk Management Framework uses a new term, authorization, to refer to this concept.

The accreditation boundary identifies the information resources covered by an accreditation decision, as distinguished from separately accredited information resources that are interconnected or with which information is exchanged via messaging. It is synonymous with the security perimeter.

For the purposes of identifying the Protection Level for confidentiality of a system to be accredited, the system has a conceptual boundary that extends to all intended users of the system, both directly and indirectly connected, who receive output from the system. See authorization boundary. Rationale: the Risk Management Framework refers to accreditation as authorization; extrapolating, the accreditation boundary would then be referred to as the authorization boundary.

Learn CEH & think like a hacker


This Blog Article is posted by

Infosavvy, 2nd Floor, Sai Niketan, Chandavalkar Road Opp. Gora Gandhi Hotel, Above Jumbo King, beside Speakwell Institute, Borivali West, Mumbai, Maharashtra 400092

Contact us – www.info-savvy.com

https://g.co/kgs/ttqPpZ
