Job Description:
Assist in designing, building and optimizing the data platform.
Implement data integration and pipelines to support data projects across different business units.
Ensure tools, processes, and services are well-designed, scalable, and reliable.
Participate in architecture, coding, testing, project planning, deployment, operation and documentation.
Translate data requirements into working solutions, automate workflows, and streamline development processes.
Communicate with stakeholders to gather requirements and discuss technical solutions.
Job Requirements:
2+ years of experience in data engineering or a related field
Proficiency in Python and SQL is required; knowledge of other programming languages is a plus
Understanding of modern data architecture and concepts
Experience in building data lakes, data warehouses, ETL pipelines, and data models
Hands-on experience with cloud data stacks (e.g. S3, Redshift, EMR, Kinesis, DynamoDB, BigQuery, Dataflow)
Knowledge of big data frameworks (e.g. Spark) is a plus
Knowledge of DevOps practices (e.g. Git, CI/CD, IaC) is a plus
Exposure to Agile/Scrum methodologies and SDLC practices
Ability to learn on the fly and take ownership
Strong communication skills and readiness to embrace ambiguity
Passion for problem solving and creating value from data
Effective verbal and written communication in English and Chinese
Fresh graduates will also be considered.
*This position is part of the opportunities available at our GBA Event. Attend the event to connect with hiring managers and explore your next career move.
