Smaato is a Verve Group company. Its Digital Advertising Technology Platform gives publishers the controls to deliver seamless, tailored, and engaging experiences for their audiences and advertisers. Advanced targeting capabilities and competitive intelligence help publishers optimize their monetization strategy to create data-driven experiences and reach their full revenue potential. Founded in 2005, Smaato is headquartered in San Francisco, California, with additional offices in Hamburg, New York, Shanghai, and Singapore.
Tasks
The Data Engineering team at Smaato tackles exciting challenges in big data, distributed computing, and storage. We build reliable, petabyte-scale, high-performance distributed systems using open-source technologies such as Spark, Hadoop, Kafka, and Druid, and we work closely with the Apache community, adopting its latest developments. As part of the team, you will work on the application where all threads (streaming, processing, and storage) come together: the ad exchange. Our ultra-efficient exchange processes more than 50 billion ad requests daily, so every line of code you write matters; it is likely to be executed several billion times a day. We are one of the biggest AWS users, with a presence in four regions. If you want your code to make an impact, this is the place to be.
As an Engineer in a petabyte-scale streaming ad tech environment, you will be part of an Engineering team that uses Kafka, Spark, Flink, and Druid to build a containerized, highly scalable big-data platform on our hybrid cloud platform, supporting data warehousing, machine learning, and deep learning applications.
Our Data Engineers work on our large-scale analytical databases and the surrounding ingestion pipelines. The job involves constant feature development, performance improvements, and platform stability assurance. The mission of our analytics team is “data-driven decisions at your fingertips”: you own and provide the system that all business decisions are based on. Precision and high-quality results are essential in this role.
Qualifications
5-9 years of experience with big-data platforms, with a deep understanding of Apache Kafka and/or Apache Druid
If interested, please send your resume to learn more.