Job Summary
Business Development Group, HCBU, HCLTech
www.hcltech.com
Digital Foundation / Full-Time

We are HCLTech, one of the fastest-growing large tech companies in the world and home to 220,000+ people across 60 countries, supercharging progress through industry-leading capabilities centered around Digital, Engineering and Cloud. The driving force behind that work, our people, are diverse, creative, and passionate, raising the bar for excellence on a regular basis. We, in turn, work hard to bring out the best in them as we strive to help them find their spark and become the best version of themselves. If all this sounds like an environment you’ll thrive in, then you’re in the right place. Join us on our journey in advancing the technological world through innovation and creativity.

AI Infrastructure Engineer - L3

The Role
The AI Infrastructure Engineer (L3) provides advanced engineering and architectural expertise for high‑performance AI and ML infrastructure. This role focuses on building, optimizing, and scaling GPU/accelerator environments and distributed systems for large‑scale training and inference workloads.

Competency Focus: High‑performance computing (HPC), distributed systems, Kubernetes, GPU orchestration, cloud optimization
Keywords: NVIDIA GPU Infrastructure, Kubernetes, GPU Cluster Administrator, Infrastructure SME, RCA

Responsibilities:
- Deploy, configure, and manage GPU and AI accelerator platforms (NVIDIA A100/H100/L40, AMD Instinct, TPU).
- Troubleshoot GPU hardware and software issues, including failures, thermal throttling, PCIe/NVLink topology, and driver conflicts.
- Install, upgrade, and maintain GPU software stacks, including drivers, CUDA, cuDNN, TensorRT, and firmware.
- Perform capacity planning and resource optimization for AI training, fine‑tuning, and inference workloads.
- Optimize Linux systems (Ubuntu, RHEL, Rocky) for AI/HPC workloads through NUMA, kernel, and clock tuning.
Key Responsibilities
2. To architect, design, and develop (through the team) solutions for product/project and sustenance delivery
3. To ensure knowledge upgradation and work with new technologies so that the solution is current and meets quality standards and client requirements
4. To review architecture and design deliverables and provide support as an SME
5. To recommend client value-creation initiatives and implement industry best practices
6. To train and develop the team so as to ensure an adequate supply of trained manpower in the said technology and to mitigate delivery risks
Skill Requirements
- Manage distributed and high‑performance storage systems, including BeeGFS, Lustre, Ceph, and high‑throughput NFS.
- Operate high‑bandwidth, low‑latency networks, including InfiniBand, RoCE, RDMA, and NVLink.
- Administer Kubernetes GPU clusters, leveraging NVIDIA GPU Operator, device plugins, MIG, and node feature discovery.
- Support AI and HPC orchestration platforms, including Kubeflow, Ray, MLflow, and Slurm/PBS.
- Configure and manage GPU scheduling and sharing strategies, such as node pools, quotas, job queues, and fair‑share policies.
- Optimize distributed training workflows using NCCL, PyTorch Distributed, Horovod, and DeepSpeed.
- Operate and tune LLM and inference runtimes, including vLLM, Triton Inference Server, and TensorRT‑LLM.
- Monitor and tune GPU utilization, memory allocation, and container-level performance.
- Automate cluster provisioning and operations using Terraform, Helm, Kustomize, and GitOps (ArgoCD/Flux).
- Build automation for GPU diagnostics, node onboarding, and model deployment workflows.
- Implement observability and telemetry using Prometheus, Grafana, NVIDIA DCGM, and OpenTelemetry.
- Lead deep‑dive root cause analysis for GPU, network, storage, and orchestration issues.
- Provide L3 support and work with L2/L1 teams for escalations.
- Drive production readiness, patching, hotfix rollout, and reliability improvements across AI infrastructure.
- Troubleshoot and escalate complex platform failures.
- Perform deep debugging of NCCL hangs and GPU fabric issues; coordinate with OEMs and support vendors on critical issues.
- Review RCAs, architecture documents, and change plans.
- Act as a technical advisor to leadership and customers.
Qualifications & Experience
Bachelor’s degree in Computer Science or Engineering