
How is a write operation done in HDFS?


HDFS follows the write-once-read-many model, so we cannot edit files that are already present in HDFS.
Syntax to write data to HDFS:

    hdfs dfs -put <local/file/path> <HDFS/destination/path>

Example:

    hdfs dfs -put /home/prajjwal/file1.txt /landing_location/
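
Once the put completes, we can quickly confirm that the file landed in HDFS (a minimal check, reusing the paths from the example above):

    hdfs dfs -ls /landing_location/
    hdfs dfs -cat /landing_location/file1.txt | head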

To write a file to HDFS, the client first interacts with the master, i.e. the NameNode. The NameNode responds with the addresses of the DataNodes (slaves) on which the client should write the data. The client then streams the data directly to the first DataNode, and the DataNodes form a write pipeline among themselves.
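The length of this pipeline equals the file's replication factor (3 by default). One way to see this is to override the replication factor for a single write using Hadoop's generic -D option; the paths here simply reuse the earlier example:

    hdfs dfs -D dfs.replication=2 -put /home/prajjwal/file1.txt /landing_location/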

The first DataNode copies the block to the second DataNode, which in turn copies it to the third. Once the replicas of the block are created, acknowledgments flow back along the pipeline to the client.
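After the write finishes, we can inspect the replicas the pipeline produced with fsck (a quick sketch, again assuming the example file above); it lists each block along with the DataNodes holding its replicas:

    hdfs fsck /landing_location/file1.txt -files -blocks -locations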

We can understand this better with the help of the cartoon diagram below.


Thanks All.
