Removing Duplicates from Production Data in Real Time Using SQL


Handling duplicates in production data requires strategies that preserve data integrity without degrading system performance. Here is a structured approach:


---


### **1. Prevention: Use Unique Constraints**  

The best way to deal with duplicates is to prevent them. Ensure your database schema is designed to enforce uniqueness:  

- **Primary Key**: Define a primary key so that every row has a unique identifier.  

- **Unique Constraints**: Apply unique constraints to columns or combinations of columns that should not contain duplicate values.  


**Example:**  

```sql
ALTER TABLE my_table
ADD CONSTRAINT unique_constraint_name UNIQUE (column1, column2);
```
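
Once the constraint is in place, any attempt to insert a second row with the same value pair is rejected by the database. A minimal sketch (the column names and literal values here are placeholders):

```sql
INSERT INTO my_table (column1, column2) VALUES ('a', 1);  -- succeeds
INSERT INTO my_table (column1, column2) VALUES ('a', 1);  -- fails with a unique-violation error
```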


---


### **2. Identifying Duplicates**  

Before removing duplicates, identify them using `GROUP BY` and `HAVING`:  


**Example:**  

```sql
SELECT column1, column2, COUNT(*) AS duplicate_count
FROM my_table
GROUP BY column1, column2
HAVING COUNT(*) > 1;
```


This query returns every combination of `column1` and `column2` that appears more than once, along with how many times it occurs.
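
If you want to inspect the full duplicate rows rather than just the grouped columns, one option is to join the table back to this result. A sketch (note that the equality join will not match rows where `column1` or `column2` is NULL):

```sql
SELECT t.*
FROM my_table t
JOIN (
  SELECT column1, column2
  FROM my_table
  GROUP BY column1, column2
  HAVING COUNT(*) > 1
) dup
  ON t.column1 = dup.column1
 AND t.column2 = dup.column2
ORDER BY t.column1, t.column2, t.id;
```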


---


### **3. Deleting Duplicates**  

Use Common Table Expressions (CTEs) or subqueries to safely delete duplicate rows while keeping one instance of each record.


#### **CTE Approach:**  

```sql
WITH CTE AS (
  SELECT
    id,
    -- number the rows within each duplicate group; row 1 is the keeper
    ROW_NUMBER() OVER (PARTITION BY column1, column2 ORDER BY id) AS row_num
  FROM my_table
)
DELETE FROM my_table
WHERE id IN (
  SELECT id
  FROM CTE
  WHERE row_num > 1
);
```
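
The `ORDER BY` inside the window function decides which row survives. For example, ordering by `id DESC` keeps the most recently inserted row instead of the oldest (assuming `id` increases with insertion order):

```sql
WITH CTE AS (
  SELECT
    id,
    -- descending order makes the newest row in each group row 1
    ROW_NUMBER() OVER (PARTITION BY column1, column2 ORDER BY id DESC) AS row_num
  FROM my_table
)
DELETE FROM my_table
WHERE id IN (SELECT id FROM CTE WHERE row_num > 1);
```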


#### **Subquery Approach:**  

```sql
DELETE FROM my_table
WHERE id NOT IN (
  SELECT MIN(id)  -- keep the lowest id in each duplicate group
  FROM my_table
  GROUP BY column1, column2
);
```
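
Note that some databases, MySQL among them, reject a `DELETE` whose subquery reads from the table being modified. A common workaround is to wrap the subquery in a derived table:

```sql
DELETE FROM my_table
WHERE id NOT IN (
  SELECT keep_id
  FROM (
    -- the derived table materializes the ids to keep,
    -- sidestepping the same-table restriction
    SELECT MIN(id) AS keep_id
    FROM my_table
    GROUP BY column1, column2
  ) AS keepers
);
```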


---


### **Key Considerations:**  

- **Performance:** Test queries on a subset of data before executing them in production to minimize the impact on system performance.  

- **Backups:** Always back up your data before running deletion queries; a minimal sketch follows this list.  

- **Indexing:** Ensure appropriate indexes are in place to optimize query performance.  
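
A minimal sketch of the backup step, assuming a database that supports `CREATE TABLE ... AS` (SQL Server, for example, uses `SELECT ... INTO` instead):

```sql
-- Snapshot the table before deleting anything
CREATE TABLE my_table_backup AS
SELECT * FROM my_table;

-- Run the deletion inside a transaction so it can be reviewed and rolled back
BEGIN;
-- (DELETE statement from section 3 goes here)
-- verify the remaining row count, then:
COMMIT;  -- or ROLLBACK;
```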


By following these steps, you can efficiently manage duplicates in production data without disrupting operations.  
