Smaato is a Verve Group company. Its Digital Advertising Technology Platform gives publishers the controls to deliver seamless, tailored, and engaging experiences for their audiences and advertisers. Advanced targeting capabilities and competitive intelligence help publishers optimize their monetization strategy to create data-driven experiences and reach their full revenue potential. Founded in 2005, Smaato is headquartered in San Francisco, California, with additional offices in Hamburg, New York, Shanghai, and Singapore.
Tasks
The Data Engineering team at Smaato presents exciting challenges in technologies such as big data, distributed computing, and storage. We build reliable, petabyte-scale, high-performance distributed systems using open-source technologies such as Spark, Hadoop, Kafka, and Druid, and work closely with the Apache community, adopting its latest developments. As part of the team, you will work on the application where all threads come together: streaming, processing, and storage in the ad exchange. Our ultra-efficient exchange is capable of processing more than 50 billion ad requests daily. Consequently, every line of code you write matters, as it is likely to be executed several billion times a day. We are one of the biggest AWS users, with a presence in four different regions. If you want your code to make an impact, this is the place to be.
As an engineer in a petabyte-scale streaming ad tech environment, you will be part of an engineering team that focuses on Kafka, Spark, Flink, and Druid to build a containerized, highly scalable big-data platform on our hybrid cloud platform, supporting data warehousing, machine learning, and deep learning applications.
Our Data Engineers work on our large-scale analytical databases and the surrounding ingestion pipelines. The job involves continuous feature development, performance improvements, and platform stability assurance. The mission of our analytics team is “data-driven decisions at your fingertips”: you own and provide the system on which all business decisions are based. Precision and high-quality results are essential in this role.
Qualifications
5-9 years of experience with big-data platforms, with a deep understanding of Apache Kafka and/or Apache Druid
If interested, please send your resume to learn more.