Ultimate Guide to Cognizant Interview Questions for DevOps/Cloud Engineers (3+ Years Experience) - 2025
Preparing for a Cognizant interview as a DevOps or Cloud Engineer with 3+ years of experience requires a deep understanding of tools, processes, and problem-solving strategies. This guide provides extremely detailed explanations and examples for common interview questions, incorporating feedback from technical reviews to ensure completeness and accuracy.
1. Can you tell me about your recent project? What tools have you used? What is the application? How have you built the infrastructure?
Explanation
This question evaluates your hands-on experience with real-world projects. Highlight your role, tools, and infrastructure design.
Example Answer
Project Overview:
I led the development of a microservices-based healthcare analytics platform designed to process patient data in real time. The application aimed to reduce diagnostic delays by 40% using predictive analytics.
Tools Used:
- Version Control: Git (GitHub)
- CI/CD: Jenkins, Argo CD (GitOps)
- Containerization: Docker
- Orchestration: Kubernetes (EKS)
- Infrastructure as Code (IaC): Terraform
- Cloud Provider: AWS (EC2, S3, RDS, Lambda, VPC, IAM, CloudWatch)
- Monitoring: Prometheus, Grafana, ELK Stack
Application Architecture:
- Frontend: React.js served via S3 and CloudFront.
- Backend: Python-based microservices (Flask) for data processing.
- Database: PostgreSQL (RDS) with read replicas for scalability.
- Event-Driven Workflows: AWS Lambda for serverless data transformation.
Infrastructure Build:
- IaC with Terraform:
  - Provisioned a VPC with public/private subnets, NAT gateways, and security groups.
  - Deployed EKS clusters with managed node groups.
  - Example Terraform snippet for an S3 bucket:
resource "aws_s3_bucket" "data_lake" {
  bucket = "healthcare-data-lake"
  acl    = "private"
  versioning {
    enabled = true
  }
}
- Kubernetes Setup:
  - Used Helm to deploy Prometheus for monitoring and Cert-Manager for TLS certificates (see the Helm sketch after this list).
- CI/CD Pipeline:
  - Jenkins pipelines built Docker images, ran unit tests, and deployed to EKS.
  - Argo CD synchronized Kubernetes manifests from a Git repository (GitOps).
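For reference, a minimal sketch of how Prometheus and Cert-Manager could be installed with Helm — the release names and namespaces here are illustrative, not taken from the project:
# Add the upstream chart repositories
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add jetstack https://charts.jetstack.io
helm repo update
# Install monitoring and certificate management (illustrative release/namespace names)
helm install monitoring prometheus-community/kube-prometheus-stack --namespace monitoring --create-namespace
helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --set installCRDs=true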
Why This Works:
- Demonstrates end-to-end ownership of infrastructure.
- Highlights AWS services (VPC, IAM) for security and scalability.
2. Which version control tool have you worked on?
Explanation
Git is the industry standard. Showcase advanced features like branching strategies.
Example Answer
I’ve used Git extensively with GitHub and GitLab. Key practices:
- Branching Strategy: GitFlow with main, develop, feature/*, and hotfix/* branches.
- Collaboration: Pull requests with mandatory code reviews.
- Advanced Commands:
# Interactive rebase to clean up commit history
git rebase -i HEAD~3
# Reflog to recover lost commits
git reflog
Merge vs. Rebase:
- Merge: Preserves history but can clutter it.
git checkout feature-branch
git merge main
- Rebase: Linear history but rewrites commits.
git checkout feature-branch
git rebase main
3. Have you worked on Jenkins?
Explanation
Highlight pipeline creation, plugins, and integrations.
Example Answer
Yes, I’ve built declarative pipelines for CI/CD. Example Jenkinsfile:
pipeline {
agent any
stages {
stage('Build') {
steps {
sh 'docker build -t my-app:${GIT_COMMIT} .'
}
}
stage('Test') {
steps {
sh 'docker run my-app:${GIT_COMMIT} npm test'
}
}
stage('Deploy') {
when {
branch 'main'
}
steps {
sh 'kubectl apply -f k8s/deployment.yaml'
}
}
}
post {
failure {
slackSend channel: '#alerts', message: 'Build failed!'
}
}
}
Plugins Used:
- Blue Ocean: For visualizing pipelines.
- Credentials Binding: To securely manage secrets.
4. You accidentally pushed changes to the main branch. How do you revert them?
Explanation
Demonstrate Git expertise and collaboration awareness.
Example Answer
- Revert the Commit:
git revert <commit-hash>   # Creates a new undo commit
git push origin main
- If Not Yet Pulled by Others:
git reset --hard HEAD~1        # Erases the last commit
git push --force origin main   # Use with caution!
Best Practice: Protect main with branch protection rules in GitHub/GitLab.
5. How will you sync changes from the main branch into a feature branch?
Explanation
Show collaboration and conflict resolution skills.
Example Answer
- Rebase for Clean History:
git checkout feature-branch
git fetch origin
git rebase origin/main
- Resolve Conflicts:
  - Use git status to identify conflicted files.
  - Edit the files, then git add them and run git rebase --continue.
- Force Push:
git push --force origin feature-branch
6. What is the significance of providers in Terraform?
Explanation
Providers enable Terraform to interact with cloud APIs.
Example Answer
- AWS Provider: Manages EC2, S3, etc.
provider "aws" { region = "us-east-1" profile = "prod" }
- Kubernetes Provider: Deploys pods, services.
provider "kubernetes" { config_path = "~/.kube/config" }
7. How do you manage the state file in Terraform?
Explanation
State files track resource metadata. Secure them!
Example Answer
- Remote State with Locking (the S3 bucket and DynamoDB table must exist first; see the AWS CLI sketch at the end of this answer):
terraform {
  backend "s3" {
    bucket         = "tf-state-prod"
    key            = "network/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "tf-locks"
  }
}
- Alternatives:
- Terraform Cloud: For collaboration and state versioning.
- Atlantis: Automated Terraform workflows via pull requests.
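The backend above assumes the S3 bucket and DynamoDB lock table already exist. A minimal one-time setup sketch with the AWS CLI, reusing the names from the configuration (adjust region and names for your account):
# Create the state bucket (no LocationConstraint is needed for us-east-1) and enable versioning
aws s3api create-bucket --bucket tf-state-prod --region us-east-1
aws s3api put-bucket-versioning --bucket tf-state-prod --versioning-configuration Status=Enabled
# Create the lock table; Terraform expects a string partition key named LockID
aws dynamodb create-table \
  --table-name tf-locks \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST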
8. If an infrastructure deployment fails, how will you roll back to the previous infrastructure?
Explanation
Highlight backup strategies and IaC rollbacks.
Example Answer
- Revert Terraform Code:
git checkout main -- infrastructure/   # Restore previous code
terraform apply
- State Rollback:
terraform state pull > current.tfstate
terraform state push backup.tfstate
- Backup Tools: Use Velero for Kubernetes resource backups.
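If the failed change also affected Kubernetes workloads, a Velero backup taken before the deployment can be restored. A minimal sketch — the backup and namespace names are illustrative:
# Before a risky change, snapshot the affected namespace
velero backup create pre-deploy-backup --include-namespaces prod
# If the deployment fails, restore the namespace from that backup
velero restore create --from-backup pre-deploy-backup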
9. How do you set up different environments in Terraform? How do you achieve this without using Terraform workspaces?
Explanation
Workspaces can complicate state management. Use directories instead.
Example Answer
Folder Structure:
terraform/
├── dev/
│ ├── main.tf
│ └── variables.tf
└── prod/
├── main.tf
└── variables.tf
Variable Files:
- dev.tfvars: instance_type = "t2.micro"
- prod.tfvars: instance_type = "m5.large"
Apply Command (run from the environment's directory):
cd prod && terraform apply -var-file=prod.tfvars
10. Which AWS services have you worked on?
Explanation
List services and provide use cases.
Example Answer
- EC2: Hosted auto-scaled web servers.
- S3: Stored Terraform state and application logs.
- RDS: Managed PostgreSQL with read replicas.
- Lambda: Processed real-time data from Kinesis.
- VPC: Designed multi-AZ networks with private subnets.
- IAM: Enforced least privilege via roles/policies.
- CloudWatch: Monitored Lambda invocations and RDS CPU.
11. Have you built any pipeline from scratch?
Explanation
Show end-to-end pipeline design.
Example Answer
Pipeline for a Serverless App:
- Source: GitHub triggers on a push to main.
- Build: AWS CodeBuild runs sam build.
- Test: CodeBuild executes npm test.
- Deploy: SAM CLI deploys to AWS with sam deploy --guided.
- Notify: Slack alerts on failure.
Tools: AWS CodePipeline, CodeBuild, SAM.
12. How did you measure the efficiency of your build time and the reduction in support tickets? How did you calculate figures like 30%?
Explanation
Quantify improvements using DevOps metrics.
Example Answer
- Build Time: Reduced from 10 minutes to 6 minutes (a 40% reduction).
- Deployment Frequency: Increased from weekly to daily.
- MTTR: Reduced from 2h to 30m via better monitoring.
- Support Tickets: 50% reduction after adding automated tests.
Formula:
\[
\text{MTTR} = \frac{\text{Total Downtime}}{\text{Number of Incidents}}
\]
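As an illustration with round numbers (not the project's actual figures), 10 hours of total downtime spread over 20 incidents gives:
\[
\text{MTTR} = \frac{10\ \text{hours}}{20\ \text{incidents}} = 0.5\ \text{hours} = 30\ \text{minutes}
\]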
13. Have you led any team?
Explanation
Highlight Agile practices and metrics.
Example Answer
- Agile Metrics:
- Velocity: Tracked story points per sprint (avg: 35).
- Sprint Burndown: Monitored daily progress in Jira.
- Practices:
- Daily standups to resolve blockers.
- Retrospectives to improve processes.
14. Can you explain the pod lifecycle? What are the different phases?
Explanation
Phases reflect pod status.
Example Answer
- Pending: Waiting for node assignment.
- Running: At least one container is active.
- Succeeded: All containers exited successfully.
- Failed: All containers have terminated, and at least one exited with an error.
- Unknown: Node communication issues.
Debugging:
kubectl describe pod <pod-name> # Check events
kubectl logs <pod-name> # View container logs
15. Can you explain the Kubernetes architecture?
Explanation
Master (Control Plane) and Worker Nodes.
Example Answer
Control Plane Components:
- API Server: REST interface for cluster operations.
- etcd: Key-value store for cluster state.
- Scheduler: Assigns pods to nodes based on resource availability.
- Controller Manager: Manages controllers for replication, endpoints, etc.
Worker Node Components:
- Kubelet: Ensures containers are running in pods.
- Kube Proxy: Manages network rules for pod communication.
- Container Runtime: Runs the containers (e.g., Docker, containerd).
Diagram:
A visual representation of the architecture can help clarify the relationships between components.
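A quick way to relate these components to a live cluster — note that on managed services such as EKS the control plane pods are not visible, so only node-level components appear:
kubectl cluster-info                 # API server and core service endpoints
kubectl get nodes -o wide            # worker nodes, kubelet version, container runtime
kubectl get pods -n kube-system      # kube-proxy, CoreDNS, and (on self-managed clusters) control plane pods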
16. Understanding Kubernetes Deployments vs. StatefulSets
Explanation
Differentiate between stateless and stateful applications.
Example Answer
| Feature | Deployment | StatefulSet |
|---|---|---|
| Use Case | Stateless applications | Stateful applications |
| Pod Identity | Pods are interchangeable | Pods have unique identities |
| Storage | Volumes can be shared | Persistent Volumes are unique |
| Scaling | Easy scaling | Ordered scaling |
Example Use Cases:
- Deployment: Web servers that can be scaled up/down easily.
- StatefulSet: Databases that require stable network identities and persistent storage.
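To see the ordered-scaling difference in practice, a small sketch — the postgres StatefulSet name and app=postgres label are hypothetical:
# StatefulSet pods are created and removed one at a time with stable names (postgres-0, postgres-1, ...)
kubectl scale statefulset postgres --replicas=3
kubectl get pods -l app=postgres -w   # watch the pods appear in order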
17. Managing Secrets in Kubernetes
Explanation
Securely manage sensitive information.
Example Answer
- Kubernetes Secrets:
kubectl create secret generic db-credentials --from-literal=username=admin --from-literal=password=secret
- Using Secrets in Pods:
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myapp:latest
    env:
    - name: DB_USERNAME
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: username
Best Practices:
- Use tools like Sealed Secrets or SOPS for encryption.
- Integrate with HashiCorp Vault for dynamic secrets management.
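A minimal sketch of the Sealed Secrets workflow, assuming the controller is installed in the cluster and the file names are illustrative:
# Encrypt a Secret manifest so it is safe to commit to Git
kubeseal --format yaml < db-credentials.yaml > db-credentials-sealed.yaml
# The controller decrypts it back into a regular Secret inside the cluster
kubectl apply -f db-credentials-sealed.yaml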
18. ConfigMaps in Kubernetes
Explanation
Manage non-sensitive configuration data.
Example Answer
- Creating a ConfigMap:
kubectl create configmap app-config --from-literal=APP_ENV=production
- Using ConfigMap in Pods:
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myapp:latest
    env:
    - name: APP_ENV
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: APP_ENV
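ConfigMaps can also be built from whole files and inspected afterwards. A small sketch — app-files and app.properties are hypothetical names:
kubectl create configmap app-files --from-file=app.properties   # one key per file
kubectl get configmap app-files -o yaml                         # verify the stored data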
19. Backup Strategies for Kubernetes
Explanation
Ensure data safety and recovery.
Example Answer
- Using Velero for Backups:
velero install --provider aws --bucket my-bucket --secret-file ./credentials-velero --use-restic
- Backing Up etcd:
ETCDCTL_API=3 etcdctl snapshot save snapshot.db
Best Practices:
- Schedule regular backups and test recovery procedures.
- Store backups in multiple locations for redundancy.
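A sketch of scheduling recurring Velero backups — the schedule name, cron expression, and namespace are illustrative:
# Nightly backup of the prod namespace at 02:00
velero schedule create nightly-backup --schedule="0 2 * * *" --include-namespaces prod
velero backup get   # confirm the schedule is producing backups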
20. Networking Policies in Kubernetes
Explanation
Control traffic flow between pods.
Example Answer
- Creating a Network Policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-db-access
spec:
  podSelector:
    matchLabels:
      role: db
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
Use Cases:
- Restrict access to sensitive services (e.g., databases).
- Implement micro-segmentation for enhanced security.
21. Service Mesh Overview
Explanation
Manage microservices communication.
Example Answer
- Istio Features:
  - Traffic management (routing, load balancing).
  - Security (mTLS, authorization).
  - Observability (tracing, metrics).
- Alternatives:
  - Linkerd: Lightweight and easy to use.
  - Consul: Service discovery and configuration management.
Example Configuration for Istio:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - myapp.example.com
  http:
  - route:
    - destination:
        host: myapp
        port:
          number: 80
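For completeness, a minimal sketch of getting Istio into a cluster before applying the VirtualService above — the demo profile, default namespace, and file name are illustrative choices:
istioctl install --set profile=demo -y                      # install the Istio control plane
kubectl label namespace default istio-injection=enabled     # auto-inject sidecars into new pods
kubectl apply -f virtualservice.yaml                        # apply the routing rule shown above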
22. Scaling Applications in Kubernetes
Explanation
Ensure applications can handle varying loads.
Example Answer
- Horizontal Pod Autoscaler (HPA): Automatically scales the number of pods based on CPU utilization.
kubectl autoscale deployment myapp --cpu-percent=50 --min=1 --max=10
- Cluster Autoscaler: Automatically adjusts the size of the Kubernetes cluster based on resource requests.
  - Configuration: Set up with cloud-provider integration (e.g., AWS, GCP).
Best Practices:
- Monitor resource usage and adjust limits/requests accordingly.
- Use metrics server for HPA to function effectively.
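The HPA depends on the metrics server. A sketch of installing it from the upstream release manifest and verifying that the autoscaler receives metrics:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
kubectl top pods         # confirms resource metrics are being collected
kubectl get hpa myapp    # shows current vs. target CPU and replica counts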
23. Logging and Monitoring in Kubernetes
Explanation
Track application performance and health.
Example Answer
- ELK Stack for Logging:
- Elasticsearch: Store logs.
- Logstash: Process logs.
- Kibana: Visualize logs.
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-config
data:
  logstash.conf: |
    input { kubernetes { ... } }
    output { elasticsearch { hosts => ["http://elasticsearch:9200"] } }
- Prometheus for Monitoring:
- Scrapes metrics from applications and Kubernetes components.
- Grafana: Visualizes metrics with dashboards.
Best Practices:
- Set up alerts for critical metrics (e.g., CPU, memory).
- Use annotations in Kubernetes manifests for Prometheus scraping.
24. GitOps with Argo CD
Explanation
Manage Kubernetes deployments using Git as the source of truth.
Example Answer
- Setting Up Argo CD:
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
- Deploying an Application (example CLI commands below):
  - Connect Argo CD to a Git repository containing Kubernetes manifests.
  - Sync the application to deploy changes automatically.
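A minimal sketch using the argocd CLI — the repository URL, path, and application name are placeholders:
# Register an application that tracks manifests in Git
argocd app create myapp \
  --repo https://github.com/example/k8s-manifests.git \
  --path apps/myapp \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace default
# Deploy (sync) the current Git state to the cluster
argocd app sync myapp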
Best Practices:
- Use Sealed Secrets for managing sensitive data in Git.
- Implement automated sync policies for continuous delivery.
Final Thoughts
Preparing for a Cognizant interview as a DevOps or Cloud Engineer requires a solid understanding of various tools and practices. This guide has provided detailed explanations and examples for common interview questions, ensuring you are well-equipped to demonstrate your expertise. Remember to stay updated with the latest industry trends and best practices, as the field of DevOps is constantly evolving. Good luck with your interview preparation!