AWS Data Engineer
This role is for one of Weekday's clients
Salary range: Rs 15,00,000 - Rs 40,00,000 (i.e., INR 15-40 LPA)
Min Experience: 6 years
Location: Hyderabad, Telangana, Andhra Pradesh
Job Type: Full-time
We are looking for an experienced AWS Data Engineer to design, build, and optimize scalable data solutions on AWS. This role focuses on developing robust data pipelines, enabling efficient data processing, and supporting analytics and reporting needs. You will work closely with cross-functional teams to ensure data is reliable, accessible, and aligned with business requirements.
The position requires strong expertise in AWS data services, data modeling, and ETL pipeline development, along with hands-on experience in managing large-scale data environments. You will play a key role in ensuring smooth data operations from ingestion to consumption while maintaining performance and reliability.
Key Responsibilities
- Design, develop, and maintain scalable ETL pipelines using AWS services such as Glue, Lambda, Kinesis, Step Functions, and EMR
- Create and manage AWS Glue crawlers and jobs for automated data ingestion and cataloging across structured and unstructured data sources
- Build and optimize data workflows using Apache Airflow and PySpark
- Design and manage data warehouse solutions using Amazon Redshift, including performance tuning and query optimization
- Develop and maintain data models, data design frameworks, and source-to-target mappings (STTM)
- Enable seamless data consumption for analytics, reporting, and machine learning tools such as QuickSight and SageMaker, as well as other BI platforms
- Work with S3, RDS, and other AWS storage services to manage large-scale data efficiently
- Ensure data quality through automated testing, code coverage, and validation processes
- Support UAT, deployment, and go-live activities for data solutions
- Monitor and optimize cluster performance and resource utilization
- Implement security and governance practices using IAM and CloudTrail
- Collaborate with teams using version control systems like Git or SVN for code management
What Makes You a Great Fit
- Strong experience with AWS data services including Glue, Lambda, Kinesis, S3, EMR, Redshift, RDS, and Step Functions
- Proficiency in Python, SQL, and PySpark for data processing and transformation
- Hands-on experience with Apache Airflow for workflow orchestration
- Solid understanding of data warehousing concepts, data modeling, and data architecture
- Experience in designing scalable and high-performance data pipelines
- Familiarity with IAM, CloudTrail, and cloud security best practices
- Strong analytical and problem-solving skills with attention to detail
- Experience with version control tools such as Git or SVN
- Ability to manage end-to-end data lifecycle including development, testing, deployment, and support
- Strong communication and collaboration skills in a fast-paced environment