Smaato is a Verve Group company. Its Digital Advertising Technology Platform gives publishers the controls to deliver seamless, tailored, and engaging experiences for their audiences and advertisers. Advanced targeting capabilities and competitive intelligence help publishers optimize their monetization strategy to create data-driven experiences and reach their full revenue potential. Founded in 2005, Smaato is headquartered in San Francisco, California, with additional offices in Hamburg, New York, Shanghai, and Singapore.
Tasks
The Data Engineering team at Smaato presents exciting challenges in technologies such as big data, distributed computing, and storage. We build reliable, petabyte-scale, high-performance distributed systems using open-source technologies such as Spark, Hadoop, Kafka, and Druid, working closely with the Apache community and adopting its latest developments. As part of the team, you will work on the application where all threads come together (streaming, processing, and storage): the ad exchange. Our ultra-efficient exchange is capable of processing more than 50 billion ad requests daily. Consequently, every line of code you write matters, as it is likely to be executed several billion times a day. We are one of the biggest AWS users, with a presence in four different regions. If you want your code to make an impact, this is the place to be.
As an engineer in a petabyte-scale streaming ad tech environment, you will be part of an engineering team that focuses on Kafka, Spark, Flink, and Druid to build a containerized, highly scalable big data platform on our hybrid cloud platform, supporting data warehousing, machine learning, and deep learning applications.
Our Data Engineers work on our large-scale analytical databases and the surrounding ingestion pipelines. The job involves constant feature development, performance improvements, and platform stability assurance. The mission of our analytics team is "data-driven decisions at your fingertips". You own and provide the system on which all business decisions are based, so precision and high-quality results are essential in this role.
Qualifications
5-9 years of experience with big data platforms, with a deep understanding of Apache Kafka and/or Apache Druid
If interested, please send your resume to learn more.