● Develop and optimize data warehouses and data lakes using best practices
● Develop and maintain data warehouse and data lake metadata, data catalog, and user
documentation for internal business customers.


Requirements
● BS or MS degree in Computer Science or a related technical field
● 4+ years of Python or Java development experience
● 4+ years of SQL experience (NoSQL experience is a plus)
● 4+ years of experience with schema design and dimensional data modeling
● Ability to manage and communicate data warehouse plans to internal clients
● Experience designing, building, and maintaining data processing systems, including
Data Lakes
● 2+ years of experience with cloud platforms from vendors such as AWS, Azure, or
Google, with hands-on exposure to technologies such as S3, Redshift, and AWS Batch

Key Responsibilities
● Develops and maintains scalable data pipelines and builds out new API integrations to
support growing data volume and complexity.
● Collaborates with analytics and business teams to improve data models that feed
business intelligence tools, increasing data accessibility and fostering data-driven
decision making across the organization.
● Implements processes and systems to monitor data quality, ensuring production data is
always accurate and available for key stakeholders and business processes that
depend on it.
● Writes unit/integration tests, contributes to engineering wiki, and documents work.
● Performs the data analysis required to troubleshoot and help resolve data-related
issues.
● Works closely with a team of frontend and backend engineers and product managers.
● Designs data integrations and data quality framework.
● Evaluates open source and vendor tools for data lineage.
● Works closely with all business units and engineering teams to develop strategies for
long term data platform architecture.


Salary: up to $3,000

Remote, full-time: 8 hours/day

