The Data Engineer is responsible for aggregating data from various systems into a central data lake and warehouse to be used for statistical analysis, reporting, and dashboards. The role also builds data pipelines to transport and exchange data between multiple systems using Kafka.
This position utilizes modern cloud data warehouse technology, works with both structured and unstructured data, and uses Confluent Kafka to exchange data between systems, data lakes, and applications. This is not a traditional data warehouse position and does not involve traditional schema-based relational database design.
The successful candidate will understand cloud data warehouse technology and Kafka, as well as how to organize and store data for use with advanced analytics.
KEY RESPONSIBILITIES
Develop, maintain, support, and enhance the business intelligence data backend, including data warehouses and data lakes.
Perform needed assessments.
Implement data transformations and data structures for the data warehouse and data lake/repository.
Process data using Spark (PySpark).
Manage cloud and/or on-premises solutions for data transfer and storage.
Establish data structures for all enterprise data stored in business intelligence systems.
Collaborate and work with data analysts in various functions to ensure that data meets their reporting and analysis needs.
Establish interfaces between the data warehouse and reporting tools, such as PowerBI.
Assist data analysts with connecting reporting and analytics software to data warehouses, lakes, and other data sources.
Develop, create, maintain, and enhance data pipelines using Kafka across a variety of systems.
Manage access and permissions to data.
REQUIRED SKILLS
Bachelor’s degree plus at least 5-7 years of experience, with a minimum of 2+ years working with data lakes and managing data warehouse and/or business intelligence systems. An advanced degree or certifications in a related field are a plus.
Knowledge, Skills & Abilities:
Demonstrated experience with setting up data structures (tables and views) for use with modern analytics software.
Expertise with Amazon Web Services, Kafka, Spark programming, SQL-based database systems, and/or other enterprise data warehouse solutions.
Experience working with programming languages used in ETL and/or ELT environments, such as SQL and Python.
Experience working with DevOps tools such as Jenkins, Ansible, Terraform, and GitHub.
Experience with or exposure to Java is a plus.