Job Responsibility

The primary responsibility of the Confluent Platform Administrator will be to build, test and maintain the Kafka cluster and its ecosystem, which is deployed to run data-streaming use cases for various business units.

Experience

Overall 5+ years of experience, of which 2+ years in Confluent Platform administration.

Mandatory Job Requirements

· Manage single- and multi-node Kafka clusters deployed on VMs, Docker and the Kubernetes container platform; experience with Confluent Platform running on-prem.
· Perform Kafka cluster builds, including design, infrastructure planning, high availability and disaster recovery.
· Implement wire encryption using SSL, authentication using SASL/LDAP and authorization using Kafka ACLs across ZooKeeper, brokers/clients, Connect clusters/connectors, Schema Registry, the REST API, producers/consumers and ksqlDB.
· Perform high-level, day-to-day administration and support functions.
· Upgrade the Kafka cluster landscape comprising Development, Test, Staging and Production/DR systems.
· Create key performance metrics measuring the utilization, performance and overall health of the cluster.
· Plan capacity and implement new/upgraded hardware and software releases as well as storage infrastructure.
· Research and recommend innovative ways to maintain the environment and, where possible, automate key administration tasks.
· Work with various infrastructure, administration and development teams across business units.
· Document and share design, build, upgrade and standard operating procedures; conduct knowledge-transfer sessions and workshops for other members of the team.
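Authorization of the kind described above (Kafka ACLs on a SASL/SSL-secured cluster) is typically managed with the stock Kafka CLI. A minimal sketch; the broker address, client properties file, principal, topic and group names are illustrative placeholders:

```shell
# Grant a hypothetical consumer principal read access to one topic and its
# consumer group, over the TLS listener (client-ssl.properties would hold
# the truststore and SASL credentials -- both names are placeholders).
kafka-acls --bootstrap-server broker1.example.com:9093 \
  --command-config client-ssl.properties \
  --add --allow-principal User:app-consumer \
  --operation Read \
  --topic orders \
  --group orders-consumer-group
```

The same tool with `--list` verifies the resulting ACLs, which is useful when documenting standard operating procedures as required above.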
· Provide technical expertise and guidance to new and junior members of the team.
· Create topics; set up Apache Kafka MirrorMaker 2 and Confluent Replicator to replicate topics; create Connect clusters and schemas for topics using Confluent Schema Registry.
· Configure various open-source and licensed Kafka source/sink connectors, such as Kafka Connect for SAP HANA, the Debezium Oracle and MySQL connectors, the Confluent JDBC source/sink, the Confluent ADLS Gen2 sink connector and the Confluent Oracle CDC source connector.
· Develop and maintain Unix scripts to perform day-to-day Kafka administration and security-related functions using the Confluent REST Proxy server.
· Set up monitoring tools such as Prometheus and Grafana to scrape metrics from Kafka cluster components (brokers, ZooKeeper, Connect, REST Proxy, MirrorMaker, Schema Registry …) and other endpoints such as web servers, databases, logs etc., and configure alerts for the Kafka cluster and supporting infrastructure to measure availability and performance SLAs.
· Experience with Confluent ksqlDB to query and process Kafka streams.
· Knowledge of the Kafka producer and consumer APIs, Kafka Streams processing and Confluent ksqlDB.
· Availability to work in shifts and extended hours and to provide on-call support as required; there will be work over weekends at times depending on project needs.
· Excellent communication and interpersonal skills.

Preferred but Optional Skills

· Linux (SLES or RHEL) system administration (basic or advanced), creating shell scripts.
· Working experience with Docker and Kubernetes clusters (open source, Rancher, Red Hat OCP, VMware Tanzu), including administration of containers (operator-level skills), deployments, updates and integration with products running outside the cluster.
· Working knowledge of container registries such as Harbor, Quay, Nexus etc.
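Topic creation, listed above, is commonly scripted with the kafka-topics CLI that ships with Kafka. A minimal sketch; the broker address, topic name, partition count and retention setting are illustrative, not values from this posting:

```shell
# Create a topic with explicit sizing and a 7-day retention policy
# (604800000 ms = 7 days; all names and numbers are placeholders).
kafka-topics --bootstrap-server broker1.example.com:9092 \
  --create --topic orders \
  --partitions 6 \
  --replication-factor 3 \
  --config retention.ms=604800000

# Confirm the configuration before handing the topic over to producers
kafka-topics --bootstrap-server broker1.example.com:9092 \
  --describe --topic orders
```

Wrapping such commands in Unix scripts, as the role requires, keeps topic sizing and retention consistent across the Development, Test, Staging and Production/DR landscapes.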
· Exposure to container/artifact scanners such as Trivy, Clair …
· Security-related configuration for the software listed above or other tools: SSL for wire encryption, integration with AD for authentication and RBAC for authorization.
· Implemented and supported enterprise products such as well-known ERP products, data warehouses, middleware etc.
· Database administration skills in Oracle, MSSQL, SAP HANA, DB2, Aerospike, Postgres …
· Exposure to SaaS-based observability platforms such as New Relic.
· Deployment of container images and pods via CI/CD pipelines using Jenkins or comparable tools.
· Experience building Kafka deployment pipelines using Terraform, Ansible, CloudFormation templates, shell scripts etc.
· Worked in a public cloud environment such as Azure, AWS or GCP, preferably Azure.

Note: The resource needs to be ready for a face-to-face interview at an IBM location based on account request, and for Day 1 reporting post onboarding.
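A CI/CD step of the kind described above (rolling a new Kafka Connect image out to a Kubernetes cluster from Jenkins or a comparable tool) might be sketched as follows; the deployment, container, image registry and namespace names are all illustrative placeholders:

```shell
# Roll the Connect workers to a newly built image and wait for the
# rollout to complete before the pipeline proceeds (all names are
# hypothetical, not taken from this posting).
kubectl set image deployment/kafka-connect \
  connect=registry.example.com/kafka-connect:1.2.3 \
  --namespace kafka

kubectl rollout status deployment/kafka-connect --namespace kafka
```

`kubectl rollout status` exits non-zero if the rollout fails, so a Jenkins stage built this way fails fast instead of leaving a half-updated Connect cluster.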
Job Types: Full-time, Regular / Permanent
Salary: ₹800,000.00 – ₹1,000,000.00 per year
Speak with the employer: +91 8591871322