- Be part of a global team building a data platform to support gaming tenants.
- Build and deploy scalable and reliable data processing pipelines to move and aggregate large amounts of data.
- Develop internal products, frameworks, and infrastructure.
- Tune and optimize data application architecture to achieve real-time capabilities.
- Promote coding best practices, code-complete standards, and design patterns.
- Champion the overall strategy for data governance, security, and quality to ensure requirements are met.
- Passion for data engineering, quality, automation, and efficiency; a self-starter who loves challenges, works independently, and is tech-savvy.
- Proficient in Java/Scala, with 3+ years of experience writing Spark-based batch and streaming data applications (Streaming/Structured Streaming/Spark SQL).
- Strong understanding of the Hadoop ecosystem (especially Kafka, HDFS, ZooKeeper, YARN, ORC, Parquet, Hive) and related technologies, with 2+ years of experience.
- Strong working knowledge of SQL or SQL-like data management languages, with 3+ years of experience.
- Strong Linux knowledge with Bash scripting experience.
- English proficiency (both spoken and written).
- Must be a quick and capable learner.
- Advanced knowledge of parallel processing algorithms and techniques.
- Knowledge of advanced big data topics and distributed computing.
- Knowledge of Kubernetes (K8s), Docker, Delta Lake, Data Mesh, Aerospike, Airflow, Redis, and Vertica.
- Familiarity with data warehousing concepts and systems.