Welcome back to another post in our Kubernetes tutorial series. In today's article, we will focus on configuring resource requests and resource limits.
Each Pod requires a certain amount of memory and CPU to function, but under unusual circumstances a Pod can misbehave and consume the entire cluster's resources, causing application and node failures. This behaviour can be controlled with the two configuration settings below.
Requests: The minimum amount of resources required for the Pod to function. Kubernetes looks for a node with enough available capacity to schedule the Pod based on its requested resources.
Limits: The maximum amount of resources the Pod is allowed to consume.
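To make these two settings concrete, here is a minimal, illustrative resources block as it would appear under a container spec (the values here are placeholders, not from the original post):

resources:
  requests:        # minimum guaranteed; used by the scheduler to pick a node
    memory: "64Mi"
    cpu: "250m"
  limits:          # hard cap the container is not allowed to exceed
    memory: "128Mi"
    cpu: "500m"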
In this article, we will accomplish the following:
Create a Pod without Resource Configuration
Perform CPU Stress on Container
Observe Resource Utilisation of Pod and Nodes.
Enable Resource Configuration in Pod
Observe Resource Utilisation of Pod and Nodes again.
Pre-requisite: Minikube Cluster
Docker Image: progrium/stress
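If a cluster is not already running, a local Minikube cluster can be started with:

minikube start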
Let's Begin.
Step 1: Install Metric Server
The metrics server is required to measure resource utilisation in the cluster. Execute the command below to create the metrics-server pods in the cluster.
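The exact command is not reproduced here; on Minikube, a common approach (an assumption, not necessarily what the original post used) is to enable the bundled add-on or apply the upstream metrics-server manifest:

# Option 1: enable the Minikube metrics-server add-on
minikube addons enable metrics-server

# Option 2: apply the upstream manifest from the metrics-server project
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml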
Step 2: Create a Pod Without Resource Configuration and Perform CPU Stress
Edit deployment.yaml and add the following lines under the container section, as shown in the example below.
args:
- --cpu
- "2"
Your deployment.yaml container configuration should look like the example below.
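The original manifest was shown as an image, so here is a minimal sketch of what it might look like, assuming a Pod named stresstest (the name referenced later in this post) running the progrium/stress image with two CPU workers:

apiVersion: v1
kind: Pod
metadata:
  name: stresstest
spec:
  containers:
  - name: stresstest
    image: progrium/stress
    # stress --cpu 2 spins up two workers that burn CPU continuously
    args:
    - --cpu
    - "2"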
Perfect! Now let's create the Pod in the cluster:
kubectl create -f deployment.yaml
Step 3: Observe Resource Utilisation of Nodes and Pod
After a few seconds, once the Pod is in the Running state, check resource utilisation on the nodes:
kubectl top nodes
Similarly, you can also check the resource utilisation of each running Pod:
kubectl top pod <PodName>
kubectl top pods
You will notice that CPU utilisation has gone up for the stresstest Pod, which might cause an application or node failure.
In a real production scenario, this could lead to application downtime or business loss. So it is always good practice to use resource configuration when creating any container.
Step 4: Configure Resource Request/Limit
Let's go ahead, delete the existing Pod, and configure it again with the resource parameters.
kubectl delete -f deployment.yaml
vim deployment.yaml
Edit the deployment.yaml file again and add the configuration shown below.
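The original snippet was shown as an image; based on the description that follows (0.5 CPU requested, 1 core limit), the resources section under the container would look roughly like this:

resources:
  requests:
    cpu: "500m"   # 0.5 CPU guaranteed and used for scheduling
  limits:
    cpu: "1"      # hard cap of 1 core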
With the above configuration, the Pod requests a minimum of 0.5 CPU and can consume up to 1 core.
Save the deployment.yaml file with the above configuration and let's re-create the container in the cluster:
kubectl create -f deployment.yaml
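To confirm that the request and limit were applied, you can describe the Pod and check the container's Requests and Limits fields (the Pod name stresstest is assumed from earlier in this post):

kubectl describe pod stresstest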
Step 5: Observe Resource Utilisation of Nodes and Pod
Cool, now let's go ahead and check both node and Pod resource utilisation.
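These are the same commands as in Step 3 (the Pod name stresstest is assumed):

kubectl top nodes
kubectl top pod stresstest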
Resource utilisation has come down significantly, from 1898 to 1102, because we configured a limit of 1 core, which is the maximum the Pod is allowed to consume.
Conclusion: Resource utilisation plays an important role in managing Pods and in the stability of the overall cluster. As a best practice, developers must configure resource requests and limits when working in a real environment.