Overview
Reliance Industries Limited is an Indian multinational conglomerate headquartered in Mumbai. Its businesses include energy, petrochemicals, natural gas, retail, telecommunications, mass media, and textiles.
Jio is an Indian telecommunications company and a subsidiary of Jio Platforms, headquartered in Navi Mumbai, Maharashtra. It operates a national LTE network with coverage across all 22 telecom circles. Jio offers 4G and 4G+ services all over India and 5G service in many cities. Its 6G service is in the works.
Reliance Jio Walk-In Interview on 29th June 2024
Location: Mumbai, Hyderabad, Gurgaon, Bangalore
Experience: 3 – 10 years
Walk-In Drive for Data Engineer on 29th June at Bangalore
Job Location: Bangalore, Mumbai, Hyderabad & Gurgaon (candidates willing to relocate from Bangalore to the other locations can attend this drive)
Job Description:
This role is for a Data Engineer with solid development experience who will focus on building robust data pipelines and enhancing existing ETL processes.
You will be an integral part of the development team, examining requirements and designing optimal solutions.
The role suits a self-motivated individual with knowledge of the Hadoop ecosystem and its various tools and services.
The candidate will perform hands-on activities including design, documentation, development, and testing of new functionality.
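The ETL pipeline work described above can be sketched in plain Python. This is a minimal, hypothetical illustration of the extract/transform/load pattern; the function names and data are made up for the example and are not from any Jio system.

```python
# Minimal ETL sketch: extract raw records, transform them, load into a sink.
# All names and data here are illustrative, not from any real system.

def extract():
    """Simulate pulling raw records from a source (e.g., a log file or API)."""
    return [
        {"user": "a", "bytes": "1024"},
        {"user": "b", "bytes": "2048"},
        {"user": "a", "bytes": "512"},
    ]

def transform(records):
    """Clean and aggregate: cast string counts to int, sum bytes per user."""
    totals = {}
    for rec in records:
        totals[rec["user"]] = totals.get(rec["user"], 0) + int(rec["bytes"])
    return totals

def load(totals, sink):
    """Write aggregated results into the target store (here, just a dict)."""
    sink.update(totals)
    return sink

warehouse = {}
load(transform(extract()), warehouse)
print(warehouse)  # {'a': 1536, 'b': 2048}
```

In a production pipeline each stage would of course talk to real sources and sinks (Kafka topics, Hive tables, object storage), but the stage boundaries stay the same.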
Job Responsibilities:
Build and extend reusable platform components and frameworks within the Big Data ecosystem, with a highly modular and extensible design.
Iteratively improve existing solutions and the technology stack by prototyping with the latest bleeding-edge technologies.
Work efficiently both as an individual contributor (IC) and as a team player.
Mentor, guide, and lead other developers and team members in building robust, highly resilient data processing pipelines.
Encourage correct coding practices through rigorous code reviews and a test-driven development (TDD) approach.
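The TDD approach mentioned in the responsibilities can be illustrated with a minimal test-first sketch using the standard `unittest` module. The `dedupe` function is hypothetical, chosen only to show the test-before-implementation cycle.

```python
import unittest

# TDD step 1: write the test first -- it specifies the behaviour
# before any implementation exists.
class TestDedupe(unittest.TestCase):
    def test_removes_duplicates_preserving_order(self):
        self.assertEqual(dedupe([3, 1, 3, 2, 1]), [3, 1, 2])

# TDD step 2: write the implementation that makes the test pass.
def dedupe(items):
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

# Run the test programmatically so the pass/fail result is visible.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestDedupe)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The same red/green rhythm applies whether the unit under test is a list helper or a pipeline transformation.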
Desired Skills and Experience
Python, Apache Spark, Hive, Apache Kafka, Hadoop, Scala, MapReduce
Qualifications:
3 to 6 years of relevant work experience with a bachelor's degree, master's degree, or PhD.
Preferred:
Excellent coding skills in Java/Scala/Python, especially OOP constructs and concurrency; experience building highly optimized software systems.
Well versed in shell scripting on Unix/Linux-based systems.
Experience designing, building, tuning, and troubleshooting distributed, scalable data pipelines and data streaming solutions.
In-depth knowledge of, and hands-on experience with, Hadoop-based computing solutions including but not limited to MapReduce, Spark, Hive, etc.
Experience building real-time streaming solutions including but not limited to Kafka, Spark Streaming, Structured Streaming, Apache Flink, Apache Beam, and NiFi.
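As a rough illustration of the MapReduce model named in the skills above, a word count can be sketched in pure Python. No Hadoop cluster is involved; this only mirrors the map, shuffle, and reduce phases that the framework distributes across nodes.

```python
from collections import defaultdict
from itertools import chain

def map_phase(doc):
    # Map: emit a (word, 1) pair for every word in a document.
    return [(word, 1) for word in doc.split()]

def shuffle(pairs):
    # Shuffle: group all emitted values by key, as the framework
    # would between the map and reduce phases.
    groups = defaultdict(list)
    for key, val in pairs:
        groups[key].append(val)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts for each word.
    return {key: sum(vals) for key, vals in groups.items()}

docs = ["big data big pipelines", "data pipelines"]
pairs = chain.from_iterable(map_phase(d) for d in docs)
counts = reduce_phase(shuffle(pairs))
print(counts)  # {'big': 2, 'data': 2, 'pipelines': 2}
```

In Hadoop or Spark the same three phases run in parallel over partitioned data, but the programming model is the one shown here.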
For more details on the walk-in, click here!
Data Engineer (3-8 yrs) - Bangalore Walk-in Interview
Location: Mumbai, Hyderabad, Gurgaon, Bangalore
Experience: 4 to 10 years
Remote/On-site/Hybrid: On-site
Job Responsibilities:
End-to-End Data Pipeline Development: Design, build, optimize, and maintain robust data pipelines across cloud, on-premises, or hybrid environments, ensuring performance, scalability, and seamless data flow.
Reusable Components & Frameworks: Develop reusable data pipeline components and contribute to the team’s data pipeline framework evolution.
Data Architecture & Solutions: Contribute to data architecture design, applying data modelling, storage, and retrieval expertise.
Data Governance & Automation: Champion data integrity, security, and efficiency through metadata management, automation, and data governance best practices.
Collaborative Problem Solving: Partner with stakeholders, data teams, and engineers to define requirements, troubleshoot, optimize, and deliver data-driven insights.
Mentorship & Knowledge Transfer: Guide and mentor junior data engineers, fostering knowledge sharing and professional growth.
Qualification:
Education: Bachelor’s degree or higher in Computer Science, Data Science, Engineering, or a related technical field.
Core Programming: Excellent command of a primary data engineering language (Scala, Python, or Java) with a strong foundation in OOP and functional programming concepts.
Big Data Technologies: Hands-on experience with data processing frameworks (e.g., Hadoop, Spark, Apache Hive, NiFi, Ozone, Kudu), ideally including streaming technologies (Kafka, Spark Streaming, Flink, etc.).
Database Expertise: Excellent querying skills (SQL) and strong understanding of relational databases (e.g., MySQL, PostgreSQL). Experience with NoSQL databases (e.g., MongoDB, Cassandra) is a plus.
End-to-End Pipelines: Demonstrated experience in implementing, optimizing, and maintaining complete data pipelines, integrating varied sources and sinks including streaming real-time data.
Cloud Expertise: Knowledge of cloud technologies such as Azure HDInsight, Synapse, and Event Hubs, and GCP Dataproc, Dataflow, and BigQuery.
CI/CD Expertise: Experience with CI/CD methodologies and tools, including strong Linux and shell scripting skills for automation.
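The SQL querying skills listed above can be shown with a small self-contained example using Python's built-in `sqlite3` module. The subscriber/usage schema and the data are invented purely for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical schema: subscribers and their data-usage events.
cur.executescript("""
    CREATE TABLE subscribers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE usage (sub_id INTEGER, mb INTEGER);
    INSERT INTO subscribers VALUES (1, 'asha'), (2, 'ravi');
    INSERT INTO usage VALUES (1, 100), (1, 250), (2, 80);
""")

# A join with aggregation: total usage per subscriber, highest first.
rows = cur.execute("""
    SELECT s.name, SUM(u.mb) AS total_mb
    FROM subscribers s
    JOIN usage u ON u.sub_id = s.id
    GROUP BY s.id
    ORDER BY total_mb DESC
""").fetchall()

print(rows)  # [('asha', 350), ('ravi', 80)]
conn.close()
```

The same join-and-aggregate pattern carries over to MySQL, PostgreSQL, and Hive with only minor dialect differences.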
Desired:
Problem-Solving & Troubleshooting: Proven ability to analyze and solve complex data problems and troubleshoot data pipeline issues effectively.
Communication & Collaboration: Excellent communication skills, both written and verbal, with the ability to collaborate across teams (data scientists, engineers, stakeholders).
Continuous Learning & Adaptability: A demonstrated passion for staying up-to-date with emerging data technologies and a willingness to adapt to new tools.
Desired:
Hive, Hadoop, Cloudera, HDInsight, Azure, PySpark, CI/CD, On-premise