Responsibilities:
- Be part of the global team building a data platform to support gaming tenants.
- Build and deploy scalable and reliable data processing pipelines to move and aggregate large amounts of data.
- Develop internal products, frameworks, and infrastructure.
- Be involved in tuning and optimizing data applications.
- Practice Agile and Scrum methodologies to achieve team velocity and quality.
Requirements:
- Passion for data engineering, quality, automation, and efficiency; a self-starter who loves challenges, works independently, and is tech-savvy;
- Deep Java knowledge, with 2+ years of experience writing REST and/or microservice-based distributed solutions;
- Experience in Java/Scala, with 2+ years writing Spark-based batch or streaming data applications (Streaming/Structured Streaming/Spark SQL);
- Understanding of the Hadoop ecosystem (e.g. Kafka, HDFS, ZooKeeper, YARN, ORC, Parquet, Hive) and related technologies, with 1+ years of experience;
- Working knowledge of SQL or SQL-like data management languages;
- Linux knowledge with bash scripting experience;
- English proficiency (both spoken and written);
- Must be a quick and capable learner.
Advantages:
- Advanced knowledge of parallel processing algorithms and techniques.
- Knowledge of advanced Big Data topics and distributed computing.
- Experience with Spring Boot 2.0 and Spring Cloud.
- Knowledge of Kubernetes, Docker, Delta Lake, Data Mesh, Aerospike, Airflow, Vertica.
- Familiarity with data warehousing concepts and systems.