Description
Ring, part of Amazon, focuses on smart home security products and is primarily known for its video doorbells and security cameras. The Ring system allows users to monitor their homes in real time through video feeds, receive motion alerts, and communicate with visitors via two-way audio. Ring's ecosystem integrates with various smart home devices, enhancing overall security and convenience. It also includes a community aspect, where users can share and access local security information through the Neighbors app.
The Ring Data Science and Engineering team owns products and services for Ring's growing analytics and operational needs. The portfolio of services managed by the team allows centralized data collection, aggregation, and the building of standardized data models for analytics use cases.
Our mission is to accelerate innovation and promote data-driven decision making across every aspect of Ring's business. To accomplish this mission, we build products and services that streamline data collection; deliver a set of standard, unambiguous metrics and analytics tools; identify opportunities to deploy machine learning and deliver actionable insights; and provide tools that enable privacy by design for all of Ring's products and services.
Key job responsibilities
• Apply a deep understanding of data, analytical techniques, and how to connect insights to the business, along with practical experience insisting on the highest operational standards for ETL and big data pipelines.
• Assist the Ring Data Science and Engineering team with management of our existing environment, which consists of Redshift and SQL-based pipelines. The activities around these systems are well defined via standard operating procedures (SOPs) and typically involve approving data access requests and subscribing to or adding new data to the environment.
• Manage data pipelines (creating new pipelines or updating existing ones).
• Perform maintenance tasks on clusters.
• Assist the team with the management of our next-generation AWS infrastructure. Tasks include infrastructure monitoring via CloudWatch alarms, infrastructure maintenance through code changes or enhancements, and troubleshooting and root cause analysis of infrastructure issues that arise; in some cases, this role may also be asked to submit code changes to address those issues.
About the team
Ring Data Science and Engineering is organized into three sub-organizations: 1) Data Operations, 2) Data Warehouse, and 3) Data Science and Analytics. Originally established to drive Ring’s data strategy, governance, architecture, analytics platforms, and business insights, we are now expanding our focus to support Blink, Key, and Sidewalk.
Data Operations is responsible for large-scale data collection services (e.g., API services), near-real-time telemetry streaming (e.g., Kinesis/Kafka), and operational analytics platforms (e.g., Splunk). This organization is divided into four teams: EventStream, LogStream, Quick Action Service (QAS), and Database Engineering.
Data Warehouse handles foundational data engineering pipelines (e.g., Airflow jobs using EMR), analytics platforms (e.g., Athena, Redshift, Tableau), and privacy compliance automation services (e.g., API services). This organization is also divided into four teams: Platforms, Business Vertical Data Pipelines, Data Quality, and Data Privacy. These two-pizza teams primarily consist of Data Engineers, Software Development Engineers, and System Development Engineers.
Data Science and Analytics is responsible for Ring’s foundational AI/ML models, core business metrics, shared data models, product analytics dashboards, and analyst/scientist support. This organization is split into two teams: Business Intelligence and Data Science, with team members primarily comprising Business Intelligence Engineers and Data Scientists.
Basic Qualifications
Experience with data modeling, warehousing and building ETL pipelines
Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
Experience with one or more scripting languages (e.g., Python, KornShell)
2+ years of data engineering experience
Experience with SQL
Experience in Unix
Experience troubleshooting data and infrastructure issues
Preferred Qualifications
Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
Experience with an ETL tool such as Informatica, ODI, SSIS, BODI, or DataStage
Knowledge of distributed systems as they pertain to data storage and computing
Experience in building or administering reporting/analytics platforms