Data Architect

Location
Contract Type
Permanent
Salary
₹ 2,500,000
Published
Reference
29-16-12403
Academic title
B.Tech/B.E.
Job description

The key objectives of the Data Architect role include:

Data Modeling: Designs and implements data models that align with the requirements of the Analytics team. This involves understanding the data needs of the team, identifying relevant data sources, and creating logical and physical data models that facilitate efficient data storage and retrieval.

Data Integration and ETL: Responsible for designing and implementing data integration processes, including Extract, Transform, Load (ETL) workflows. Ensures that data from various sources is collected, cleansed, transformed, and loaded into the appropriate data repositories for analysis.

Database Management: Oversees the management and administration of databases used by the Analytics team. This includes optimizing database performance, ensuring data security and access controls, and monitoring data quality and consistency.

Data Governance: Establishes and enforces data governance policies and procedures within the Analytics team. Defines data standards, data lineage, and data documentation practices to ensure data consistency, reliability, and compliance with regulatory requirements.

Collaboration and Communication: Collaborates closely with other members of the Analytics team, as well as stakeholders from different functional areas, such as supply chain, operations, and IT. Communicates effectively to understand data requirements, address technical challenges, and present data insights in a clear and understandable manner.

Connecting external APIs is also a crucial part of the Data Architect role within the Analytics team. This involves integrating and leveraging data from external sources, such as third-party vendors, suppliers, or industry-specific APIs, to enrich the data ecosystem of the SCM Analytics team. The Data Architect is responsible for identifying relevant external APIs that can provide valuable data for analysis and decision-making.

Requirements
  • Ability to create data warehousing solutions by translating business requirements
  • Familiarity with ETL processes & tools.
  • Familiarity with scripting languages such as Python and/or PySpark
  • Advanced knowledge of SQL for complex data transformation and aggregation.
  • Understanding of data formats – JSON, CSV, XML
  • Proficiency working with data warehouses and NoSQL databases.
  • Knowledge of cloud-based platforms – Azure Cloud.
  • Real-time data processing – Apache Kafka, Flink.
  • Proficiency in performance optimization techniques.
  • Ensuring the data infrastructure is designed with security best practices in mind
Benefits

Excellent perks

Other notes
For more related job opportunities, visit https://in.grafton.com/en/job-search