Responsibilities
1. Build offline and real-time data systems for various online advertising businesses; design, implement, and maintain models for the data subject domains.
2. Develop and iterate on data service interfaces and product requirements; perform code reviews, bug fixes, and day-to-day service operations and maintenance.
3. Design a sound data architecture that handles massive data processing and query needs, adapts to business changes, and meets diverse requirements.
Qualifications
1. Bachelor's degree or above in a computer-related major, with a solid grasp of computer fundamentals such as computer organization, computer architecture, computer networks, and operating systems.
2. Solid understanding of Hadoop-ecosystem technologies such as MapReduce, Spark, Hive, and Flink, with hands-on experience.
3. Proficient in SQL: able to write complex analytical queries, read and tune execution plans; proficient in data processing with programming languages such as Java, Python, and Shell.
4. More than 2 years of experience in the data warehouse field; familiar with data warehouse model design and ETL development, with experience processing massive data.
5. Mastery of layered data architecture and dimensional modeling methods; understanding of data metric system construction and data analysis methods.
6. Good communicator, proactive, with a strong sense of responsibility and good teamwork skills.

Extra points:
1. Contributor to open source communities such as GitHub.
2. Background in data mining and statistics.
3. Ability and experience in designing large-scale distributed services.