Attention talented individual! This is not just another job posting.
As Switzerland’s leading digital hub, we provide our media and platforms with the enabling technology solutions that drive their businesses. We stand for interdisciplinary collaboration, innovation, and dynamic development.
We are on the move – and want to keep moving. We are farsighted. We are proactive. We are courageous. We are TX.
This is an opportunity to challenge yourself and make your mark on the world. You’ll contribute your skills and expertise to a company, and a product, that makes a real difference in the lives of millions of people every single day.
Doodle revolutionizes the way the world schedules meetings. Our suite of scheduling tools enables individuals and enterprises around the world to own their time, be more productive, and grow their businesses. We’re the industry leader in scheduling technology with 30 million monthly users, and we’re just getting started.
Now’s your chance to join 80 ambitious engineers, designers, product managers, marketers, and salespeople on a mission to make great meetings happen. Be a part of a global team headquartered in Zurich, with offices in Atlanta, New York City, Belgrade, and Berlin.
Who We’re Looking For:
You’ll be joining our Data team, based in our Berlin and Zurich offices. You and your team will lay the foundation for the data infrastructure that helps Engineers and Product Managers develop personalized Doodle experiences and further accelerate growth. We build a state-of-the-art data pipeline using Kafka, real-time stream processors, Spark, and various databases (Redshift, Athena, and MongoDB).
Ideally, You Have:
- BSc or MSc in Computer Science or a related field
- Real passion for collaboration and strong interpersonal skills
- Experience crafting and implementing high-performance microservices serving millions of requests a day, and stream-processing large data sets (Kafka experience is a huge plus)
- Strong grasp of database structure and design, query languages (e.g. SQL), large data sets, distributed systems, and fundamental mathematical and statistical concepts
- Experience building data pipelines, processing services, and models in Python (Airflow, scikit-learn, or pandas know-how is a huge plus)
- Experience in one or more programming languages, such as Java, C++, or Python
- Knowledge of, or interest in, Linux, Docker, and container orchestration frameworks such as Kubernetes
What the Role Involves:
- Develop and maintain tracking infrastructure, data pipeline, data lake, and data warehouse
- Create a self-service data ecosystem for the entire company
- Ensure the correctness, security, and accessibility of data across the company
- Exemplify best practices for data management
- Advise teams on data modelling, management, and architecture strategies
What’s awesome about working for us?
- Being part of establishing TX Services, a Swiss-based subsidiary in Belgrade
- Top talent deserves a competitive salary
- Develop yourself and work on interesting projects in an open-minded and diverse culture
- Every Doodler gets a yearly budget for education, conferences, and learning resources
- Party with your colleagues at international retreats
- Travel to some of our lovely offices worldwide
- Premium Doodle memberships, also for your friends and family
So, Get in Touch!
At Doodle, we’re committed to providing an environment of mutual trust and respect, where equal employment opportunities (EEO) are available to all applicants and teammates without regard to age, race, color, disability, religion, gender, or sexual orientation. Diversity and inclusion are of utmost importance to us. We’re committed to building a team that represents a variety of backgrounds, perspectives, and skills. The more inclusive we are, the better our work and our products will be. We want to hear from you, so please don’t hesitate to apply!