
Intel Technology is Hiring for Entry Level | Engineer | B.E, B.Tech, BCS, BCA, B.Sc | Big Data | Graduates | 0 – 3 yrs | Bangalore

Intel’s Information Technology Group (IT) designs, deploys and supports the information technology architecture and hardware/software applications for Intel. This includes the LAN, WAN, telephony, data centers, client PCs, backup and restore, and enterprise applications. IT is also responsible for e-Commerce development, data hosting and delivery of Web content and services.

Big Data Engineer

Job Description:

Plans, designs, develops and tests software systems or applications for software enhancements and new products, including cloud-based or internet-related tools. Analyzes requirements, tests and integrates application components, and ensures that system improvements are successfully implemented. Drives unit test automation. Is well versed in the latest development methodologies such as Agile, Scrum, DevOps and test-driven development. Enables solutions that take into account APIs, security, scalability, manageability, usability, and other critical factors that contribute to complete solutions. Usually holds an academic degree in Computer Science, Computer Engineering or Computational Science.

Qualifications
– Navigate the Hadoop ecosystem and know how to leverage and optimize what Hadoop has to offer
– Hadoop development, debugging, and implementation of workflows and common algorithms
– Apache Hadoop and data ETL (extract, transform, load), ingestion, and processing with Hadoop tools
– Knowledge of building a scalable, integrated data lake for an enterprise
– Understand the HDFS architecture, including how HDFS handles file sizes, block sizes, and block abstraction. Understand the default replication values and the storage requirements replication introduces. Determine how HDFS stores, reads, and writes files.
– Analyze the order of operations in a MapReduce job, how data moves from place to place, how partitioners and combiners function, and the sort-and-shuffle process
– Analyze and determine which of Hadoop’s data types for keys and values are appropriate for a job. Understand common key and value types in the MapReduce framework and the interfaces they implement
– Organizing data into tables, performing transformations, performance tuning and simplifying complex queries with Hive and Impala (see the Hive table sketch after this list)
– How to pick the best tool for a given task in Hadoop, achieve interoperability, and manage recurring workflows
– Strong programming skills in Java or Python
– Working knowledge of data ingestion using Spark for various file types such as JSON, XML and CSV, as well as databases (see the Spark ingestion sketch after this list)
– Hands-on development experience extracting data from sources such as SFTP, Amazon S3 and other cloud data sources
– Designing optimal HBase schemas for efficient data storage and ingestion into HBase using the native API
– Knowledge of Kafka, Spark Streaming and stream data load types and techniques (see the Kafka streaming sketch after this list)
– Strong SQL and data analysis skills
– Strong shell scripting (or other scripting language) skills
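
Hive table sketch: a minimal, illustrative example of organizing raw data into a partitioned Hive table and simplifying a recurring aggregate query through Spark's HiveQL-compatible SQL interface. The table and column names (web_logs, event_date and so on) are hypothetical, and the snippet assumes a Spark installation configured with a Hive metastore.

```python
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("hive-table-sketch")
         .enableHiveSupport()          # requires a configured Hive metastore
         .getOrCreate())

# Create a partitioned, columnar table for raw events (hypothetical schema).
spark.sql("""
    CREATE TABLE IF NOT EXISTS web_logs (
        user_id STRING,
        url     STRING,
        bytes   BIGINT
    )
    PARTITIONED BY (event_date STRING)
    STORED AS PARQUET
""")

# Materialize a daily aggregate so downstream queries stay simple.
daily = spark.sql("""
    SELECT event_date, user_id, SUM(bytes) AS total_bytes
    FROM web_logs
    GROUP BY event_date, user_id
""")
daily.write.mode("overwrite").saveAsTable("web_logs_daily")
```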
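
Spark ingestion sketch: a minimal example of reading JSON, CSV, XML and database sources with Spark and landing them in a data lake. All paths, JDBC connection details and table names are hypothetical, and the XML read assumes the external spark-xml package is available.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ingestion-sketch").getOrCreate()

# JSON and CSV readers ship with Spark.
events_json = spark.read.json("s3a://my-bucket/raw/events/*.json")
orders_csv = (spark.read
              .option("header", "true")
              .option("inferSchema", "true")
              .csv("s3a://my-bucket/raw/orders/*.csv"))

# XML needs the external spark-xml data source.
invoices_xml = (spark.read
                .format("com.databricks.spark.xml")
                .option("rowTag", "invoice")
                .load("s3a://my-bucket/raw/invoices/"))

# Databases are read over JDBC.
customers_db = (spark.read
                .format("jdbc")
                .option("url", "jdbc:postgresql://db-host:5432/sales")
                .option("dbtable", "public.customers")
                .option("user", "reader")
                .option("password", "***")
                .load())

# Land the raw events in the data lake as Parquet.
events_json.write.mode("append").parquet("s3a://my-bucket/lake/events/")
```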
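
Kafka streaming sketch: a minimal Spark Structured Streaming job that loads a Kafka topic continuously into the data lake. Broker addresses, the topic name and the output paths are hypothetical, and the job assumes the spark-sql-kafka connector is on the classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

# Read the raw record stream from a Kafka topic.
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
       .option("subscribe", "clickstream")
       .option("startingOffsets", "latest")
       .load())

# Kafka delivers key/value as binary; cast the payload to strings for parsing.
events = raw.select(col("key").cast("string"), col("value").cast("string"))

# Continuously append micro-batches to the data lake, with checkpointing for recovery.
query = (events.writeStream
         .format("parquet")
         .option("path", "s3a://my-bucket/lake/clickstream/")
         .option("checkpointLocation", "s3a://my-bucket/checkpoints/clickstream/")
         .outputMode("append")
         .start())

query.awaitTermination()
```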

https://lnkd.in/f9KEYpN