

Hey guys, I have an opening to share with you.
Imagine new horizons...
Dassault Systèmes, the 3DEXPERIENCE Company, is a catalyst for human progress. We provide businesses and people with collaborative virtual environments to imagine sustainable innovations. By creating ‘virtual experience twins’ of the real world with our 3DEXPERIENCE platform and applications, our customers push the boundaries of innovation, learning and production.
Dassault Systèmes SE (French pronunciation: [daso sistem]), abbreviated 3DS, is a French software corporation and one of the Fortune 50 largest software companies. It develops software for 3D product design, simulation, manufacturing and more. A Dassault Group subsidiary spun off from Dassault Aviation in 1981, it is headquartered in Vélizy-Villacoublay, France, and has around 20,000 employees in 140 countries.
Within Asia Pacific, Dassault Systèmes operates across 10 countries, namely India, Japan, China, Malaysia, Singapore, Thailand, Indonesia, Korea, Taiwan and Australia, and accounts for more than 30% of 3DS users globally. The 3DS IT team in Asia Pacific is responsible for enabling 3DS businesses across APAC with innovative IT solutions and world-class infrastructure. It also manages production infrastructure spread across three large data centers in Singapore, Japan and Pune. Give power to your passion! Find the job you love at 3DS!
If we challenge the status quo, we can imagine new horizons to improve the world! #Weare3ds, #3ds.
In the News: Forbes ranks us #48 on its list of the World’s Most Innovative Companies.
What will your job be?
As a Data Engineer, you will be responsible for:
- Data Pipeline Development: Design, develop, and maintain robust ETL pipelines for batch and real-time data ingestion, processing, and transformation using Spark, Kafka, and Python (a brief illustrative sketch follows this list).
- Data Architecture: Build and optimize scalable data architectures, including data lakes, data marts, and data warehouses, to support business intelligence, reporting, and machine learning.
- Data Governance: Ensure data reliability, integrity, and governance by enabling accurate, consistent, and trustworthy data for decision-making.
- Collaboration: Work closely with data analysts, data scientists, and business stakeholders to gather requirements, identify inefficiencies, and deliver scalable and impactful data solutions.
- Optimization: Develop efficient workflows to handle large-scale datasets, improving performance and minimizing downtime.
- Documentation: Create detailed documentation for data processes, pipelines, and architecture to support seamless collaboration and knowledge sharing.
- Innovation: Contribute to a thriving data engineering culture by introducing new tools, frameworks, and best practices to improve data processes across the organization.
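For illustration only, here is a minimal sketch of the kind of batch ETL step described above. It is not a 3DS pipeline: the orders dataset, the S3-style paths, and every column name are hypothetical placeholders.

```python
# Minimal PySpark batch ETL sketch. Dataset, paths, and column names
# are hypothetical placeholders, not actual 3DS systems or data.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-daily-etl").getOrCreate()

# Extract: read raw CSV files landed by an upstream system (assumed layout).
raw = spark.read.option("header", True).csv("s3a://raw-zone/orders/2024-01-01/")

# Transform: basic cleansing and typing typical of an ETL step.
cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount") > 0)
)

# Load: write partitioned Parquet into the curated zone of a data lake.
(cleaned.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3a://curated-zone/orders/"))

spark.stop()
```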
Your key success factors:
- Educational Background: Bachelor's degree in Computer Science, Engineering, or a related field.
- Professional Experience: 2–3 years of experience in data engineering, with expertise in designing and managing complex ETL pipelines.
Technical Skills:
- Proficiency in Python, PySpark, and Spark SQL for distributed and real-time data processing.
- Deep understanding of real-time streaming systems using Kafka (see the illustrative sketch after this list).
- Experience with data lake and data warehousing technologies (Hadoop, HDFS, Hive, Iceberg, Apache Spark).
- Strong knowledge of relational and non-relational databases (SQL, NoSQL).
- Experience in cloud and on-premises environments for building and managing data pipelines.
- Analytical and Problem-Solving Skills: Ability to translate complex business requirements into scalable and efficient technical solutions.
- Collaboration and Communication: Strong communication skills and the ability to work with cross-functional teams, including analysts, scientists, and stakeholders.
- Location: Willingness to work from Pune (on-site).
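As a purely illustrative sketch of the Kafka streaming skills listed above: the broker address, topic name, schema, and storage paths below are hypothetical, and the spark-sql-kafka connector is assumed to be on the Spark classpath.

```python
# Minimal Spark Structured Streaming sketch reading from Kafka.
# Broker, topic, schema, and paths are placeholders for illustration only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("events-stream-ingest").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("value", DoubleType()),
])

# Source: subscribe to a Kafka topic (assumed broker and topic).
stream = (spark.readStream
               .format("kafka")
               .option("kafka.bootstrap.servers", "broker:9092")
               .option("subscribe", "events")
               .load())

# Kafka delivers binary key/value pairs; parse the JSON payload.
parsed = (stream.selectExpr("CAST(value AS STRING) AS json")
                .select(F.from_json("json", event_schema).alias("e"))
                .select("e.*"))

# Sink: append micro-batches as Parquet, tracking progress via a checkpoint.
query = (parsed.writeStream
               .format("parquet")
               .option("path", "s3a://raw-zone/events/")
               .option("checkpointLocation", "s3a://checkpoints/events/")
               .outputMode("append")
               .start())

query.awaitTermination()
```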
Preferred Qualifications:
- Experience with ETL tools like SAP BODS or similar platforms.
- Knowledge of reporting tools like SAP BO for designing dashboards and reports.
- Hands-on experience building end-to-end data frameworks and working with data lakes.
