Top 50+ SQL Questions Asked in Interviews



SQL stands for Structured Query Language, a programming language used to interact with relational database management systems (RDBMS). This includes fetching, updating, inserting, and removing data from tables.

  • Data Definition Language (DDL) – to define and modify the structure of a database.
  • Data Manipulation Language (DML) – to access, manipulate, and modify data in a database.
  • Data Control Language (DCL) – to control user access to the data in the database and give or revoke privileges to a specific user or a group of users.
  • Transaction Control Language (TCL) – to control transactions in a database.
  • Data Query Language (DQL) – to perform queries on the data in a database to retrieve the necessary information from it.
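The categories above can be seen side by side in a small sketch using Python's built-in sqlite3 module. SQLite implements DDL, DML, DQL, and TCL, but not DCL (GRANT/REVOKE), so that category appears only as a comment; the table and column names are made up for illustration.

```python
import sqlite3

# isolation_level=None puts the connection in autocommit mode, so we can
# issue the TCL statements (BEGIN/COMMIT) ourselves.
conn = sqlite3.connect(":memory:", isolation_level=None)
cur = conn.cursor()

# DDL - define the structure of the database
cur.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT)")

# TCL - open a transaction explicitly
cur.execute("BEGIN")
# DML - manipulate the data
cur.execute("INSERT INTO employees (name) VALUES ('Alice')")
cur.execute("INSERT INTO employees (name) VALUES ('Bob')")
# TCL - COMMIT makes the changes permanent (ROLLBACK would undo them)
cur.execute("COMMIT")

# DQL - query the data back out
rows = cur.execute("SELECT name FROM employees ORDER BY id").fetchall()
print(rows)  # [('Alice',), ('Bob',)]

# DCL (not supported by SQLite) would look like:
#   GRANT SELECT ON employees TO some_user;
conn.close()
```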

A database is a structured storage space where the data is kept in many tables and organized so that the necessary information can be easily fetched, manipulated, and summarized.

A Database Management System (DBMS) is a software application that facilitates the creation, maintenance, and utilization of databases. It serves as an intermediary between users and the actual database, enabling efficient storage, retrieval, and manipulation of data.

A Database Management System (DBMS) is a software that allows users to interact with a database. It facilitates the creation, maintenance, and use of databases. On the other hand, a Relational Database Management System (RDBMS) is a type of DBMS that stores data in a structured format using rows and columns, with relationships defined between the data.

Key Differences:

1- Data Structure:

  • DBMS: In a DBMS, data is stored in files, which can be hierarchical, network, or object-oriented.
  • RDBMS: RDBMS stores data in tables with rows and columns. Tables can have relationships defined between them using keys.
2- Data Integrity:

  • DBMS: DBMS does not enforce referential integrity by default.
  • RDBMS: RDBMS enforces referential integrity through foreign key constraints, ensuring data consistency.
3- Query Language:

  • DBMS: DBMS may support its own query language.
  • RDBMS: RDBMS typically uses SQL (Structured Query Language) to interact with the database.
4- Scalability:

  • DBMS: DBMS may not be as scalable as RDBMS due to the lack of normalization and relationships.
  • RDBMS: RDBMS offers better scalability through normalization and efficient data retrieval using SQL queries.
5- Performance:

  • DBMS: Performance may vary based on the data structure and complexity.
  • RDBMS: RDBMS, with its relational model and optimized queries, often provides better performance for complex queries involving multiple tables.
6- Data Consistency:

  • DBMS: Data consistency needs to be managed at the application level.
  • RDBMS: RDBMS ensures data consistency through ACID properties (Atomicity, Consistency, Isolation, Durability).

SQL is a standard language for retrieving and manipulating structured databases. MySQL, by contrast, is a relational database management system, like SQL Server, Oracle, or IBM DB2, that is used to manage SQL databases.
SQL (Structured Query Language):
SQL is a standardized programming language used to manage and manipulate relational databases. It serves as a universal language for interacting with databases, allowing users to perform tasks such as querying data, updating records, and defining database structures. SQL is not a database itself but a language used to communicate with databases.

Key Points about SQL:

  • Standardization: SQL is an ANSI (American National Standards Institute) and ISO (International Organization for Standardization) standardized language, ensuring consistency across database systems.
  • Functionality: SQL provides a set of commands (e.g., SELECT, INSERT, UPDATE, DELETE) to perform various operations on databases.
  • Portability: SQL queries can be executed on different database management systems (DBMS) that support SQL, making it versatile and widely adopted.
MySQL:
MySQL, on the other hand, is an open-source relational database management system (RDBMS) that uses SQL as its querying language. Developed by Oracle Corporation, MySQL is one of the most popular database systems used for web applications, data warehousing, and e-commerce platforms.

Key Points about MySQL:

  • Database Management System: MySQL is a specific implementation of a database management system that utilizes SQL for querying and managing databases.
  • Features: MySQL offers features like transactions, indexing, and stored procedures to enhance database functionality.
  • Scalability: MySQL is known for its scalability, making it suitable for small to large-scale applications.
Key Differences:
  • SQL vs. MySQL: SQL is a language used to interact with databases, while MySQL is a specific RDBMS that implements SQL.
  • Flexibility: SQL is a standard language that can be used with various DBMS, whereas MySQL is a specific database system.
  • Open Source: MySQL is open-source and free to use, while SQL is a language standard without a specific implementation.

A table is an organized collection of data stored in the form of rows and columns. Columns run vertically and rows run horizontally. The columns in a table are called fields, while the rows are referred to as records.

Constraints are used to specify rules concerning the data in a table. They can be applied to single or multiple fields in an SQL table, either when the table is created or afterwards using the ALTER TABLE command. The constraints are:

  • NOT NULL - Restricts NULL value from being inserted into a column.
  • CHECK - Verifies that all values in a field satisfy a condition.
  • DEFAULT - Automatically assigns a default value if no value has been specified for the field.
  • UNIQUE - Ensures unique values to be inserted into the field.
  • INDEX - Indexes a field providing faster retrieval of records.
  • PRIMARY KEY - Uniquely identifies each record in a table.
  • FOREIGN KEY - Ensures referential integrity for a record in another table.
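Most of the constraints listed above can be exercised in a short sketch using Python's sqlite3 module; the table and column names here are illustrative, and each invalid insert is expected to be rejected with an IntegrityError.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""
    CREATE TABLE employees (
        id     INTEGER PRIMARY KEY,      -- PRIMARY KEY: unique, not null
        email  TEXT NOT NULL UNIQUE,     -- NOT NULL + UNIQUE
        salary REAL CHECK (salary > 0),  -- CHECK: must be positive
        status TEXT DEFAULT 'active'     -- DEFAULT: used when omitted
    )
""")

cur.execute("INSERT INTO employees (email, salary) VALUES ('a@x.com', 50000)")
row = cur.execute("SELECT status FROM employees").fetchone()
print(row[0])  # 'active' - filled in by the DEFAULT constraint

violations = []
for stmt in [
    "INSERT INTO employees (email, salary) VALUES ('a@x.com', 60000)",  # UNIQUE
    "INSERT INTO employees (email, salary) VALUES (NULL, 60000)",       # NOT NULL
    "INSERT INTO employees (email, salary) VALUES ('b@x.com', -1)",     # CHECK
]:
    try:
        cur.execute(stmt)
    except sqlite3.IntegrityError as e:
        violations.append(str(e))

print(len(violations))  # 3 - every invalid insert was rejected
conn.close()
```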

A Primary Key in a database is a unique identifier for each record in a table. It ensures that each row in a table is uniquely identified and helps in maintaining data integrity. The Primary Key constraint enforces the uniqueness of the key and also ensures that it cannot have a NULL value. Typically, a Primary Key is created on one or more columns in a table to uniquely identify each record. It serves as a reference point for relationships between tables and is crucial for efficient data retrieval and manipulation operations. In essence, a Primary Key is like a fingerprint for each row, making it easily identifiable and distinct within the database structure.

A UNIQUE constraint is used to ensure that all values in a specific column or a group of columns are unique or distinct. This constraint guarantees that no two rows in a table will have the same value in the specified column(s). It helps maintain data integrity by preventing duplicate entries in critical fields.

When a UNIQUE constraint is applied to a column or a combination of columns, the database system automatically creates a unique index on those columns. This index enforces the uniqueness requirement, making sure that no duplicate values are allowed.

Here is an example of how to create a table with a UNIQUE constraint in SQL:
CREATE TABLE Employees (
    EmployeeID INT PRIMARY KEY,
    EmployeeName VARCHAR(50) UNIQUE,
    DepartmentID INT,
    Salary DECIMAL(10, 2)
);

In this example, the EmployeeName column has a UNIQUE constraint, meaning that each employee's name must be unique in the Employees table. If an attempt is made to insert a new row with a duplicate EmployeeName, the database will raise an error, ensuring data consistency.


Overall, the UNIQUE constraint plays a vital role in maintaining the quality and accuracy of data stored in a database by enforcing uniqueness at the column level.

A Foreign Key in SQL is a field or a combination of fields in a table that uniquely identifies a record in another table. It establishes a relationship between two tables by referencing the primary key of another table. This relationship ensures referential integrity, meaning that values in the Foreign Key column must exist in the referenced table's Primary Key column or be NULL.

When a Foreign Key constraint is defined, it enforces data integrity by preventing actions that would destroy the relationship between tables. For example, you cannot insert a value into a Foreign Key column that does not exist in the referenced table's Primary Key column. Similarly, you cannot delete a record from a table if there are dependent records in another table referencing it through a Foreign Key.

Here is an example of creating a Foreign Key in SQL:
CREATE TABLE Orders (
    OrderID int PRIMARY KEY,
    ProductID int,
    OrderDate date,
    FOREIGN KEY (ProductID) REFERENCES Products(ProductID)
);

In this example, the Orders table has a Foreign Key ProductID that references the ProductID column in the Products table. This ensures that every ProductID in the Orders table must exist in the Products table or be NULL, maintaining the integrity of the relationship between the two tables.
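A runnable sketch of this referential-integrity behavior, using Python's sqlite3 module (foreign-key enforcement must be switched on per connection with a PRAGMA in SQLite; the data is made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite-specific: enable FK checks
conn.execute("CREATE TABLE Products (ProductID INTEGER PRIMARY KEY, Name TEXT)")
conn.execute("""
    CREATE TABLE Orders (
        OrderID   INTEGER PRIMARY KEY,
        ProductID INTEGER,
        FOREIGN KEY (ProductID) REFERENCES Products (ProductID)
    )
""")
conn.execute("INSERT INTO Products VALUES (1, 'Keyboard')")

# Valid: ProductID 1 exists in the parent table.
conn.execute("INSERT INTO Orders VALUES (100, 1)")

# Invalid: ProductID 99 does not exist, so the insert is rejected.
try:
    conn.execute("INSERT INTO Orders VALUES (101, 99)")
    fk_rejected = False
except sqlite3.IntegrityError:
    fk_rejected = True
print(fk_rejected)  # True

# Deleting the parent row while a child still references it also fails.
try:
    conn.execute("DELETE FROM Products WHERE ProductID = 1")
    delete_rejected = False
except sqlite3.IntegrityError:
    delete_rejected = True
print(delete_rejected)  # True
conn.close()
```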

Primary Key

  • A primary key cannot have a NULL value.
  • Each table can have only one primary key.
  • By default (in SQL Server), the primary key is a clustered index, and the data in the table is physically organized in the sequence of the clustered index.
  • A primary key can be referenced by another table as a foreign key.
  • IDs can be generated automatically with an auto-increment field; a primary key supports auto-increment values.

Unique Key

  • A unique constraint may have a NULL value (SQL Server allows one NULL per unique column).
  • Each table can have more than one unique constraint.
  • By default, a unique key creates a unique non-clustered index.
  • A unique constraint can also be referenced by another table's foreign key, though referencing the primary key is more common.
  • A unique constraint does not support auto-increment values.

Foreign Key

  • A foreign key is a field in one table that is the primary key of another table.
  • A foreign key column can accept multiple NULL values.
  • Foreign keys do not automatically create an index, clustered or non-clustered; you can manually create an index on a foreign key.
  • A table can have more than one foreign key.
  • There are real advantages to backing a foreign key with a clustered index, but you get only one clustered index per table. The advantage: when you select a parent row plus all of its child records, you want the child records stored next to each other, which a clustered index on the foreign key accomplishes.

A database index is a data structure that provides a quick lookup of data in a column or columns of a table. It enhances the speed of operations accessing data from a database table at the cost of additional writes and memory to maintain the index data structure.

CREATE INDEX index_name   /* Create Index */
ON table_name (column_1, column_2);
DROP INDEX index_name;   /* Drop Index */

There are different types of indexes that can be created for different purposes:

  • Unique and Non-Unique Index:

Unique indexes are indexes that help maintain data integrity by ensuring that no two rows of data in a table have identical key values. Once a unique index has been defined for a table, uniqueness is enforced whenever keys are added or changed within the index.

CREATE UNIQUE INDEX myIndex
ON students (enroll_no);

Non-unique indexes, on the other hand, are not used to enforce constraints on the tables with which they are associated. Instead, non-unique indexes are used solely to improve query performance by maintaining a sorted order of data values that are used frequently.
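The unique/non-unique distinction can be demonstrated with Python's sqlite3 module; the students table and index names below mirror the example above but are otherwise illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (enroll_no INTEGER, city TEXT)")

# A unique index enforces distinct values in enroll_no.
conn.execute("CREATE UNIQUE INDEX myIndex ON students (enroll_no)")
# A non-unique index on city only speeds up lookups; duplicates are allowed.
conn.execute("CREATE INDEX cityIndex ON students (city)")

conn.execute("INSERT INTO students VALUES (1, 'Pune')")
conn.execute("INSERT INTO students VALUES (2, 'Pune')")  # duplicate city: fine

try:
    conn.execute("INSERT INTO students VALUES (1, 'Delhi')")  # duplicate enroll_no
    unique_enforced = False
except sqlite3.IntegrityError:
    unique_enforced = True

print(unique_enforced)  # True - the unique index rejected the duplicate key
conn.close()
```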

  • Clustered and Non-Clustered Index:

A clustered index is one in which the physical order of the rows in the table corresponds to the order of the entries in the index. This is why only one clustered index can exist in a given table, whereas multiple non-clustered indexes can exist in the table.

The only difference between clustered and non-clustered indexes is that the database manager attempts to keep the data in the database in the same order as the corresponding keys appear in the clustered index.

Clustering indexes can improve the performance of most query operations because they provide a linear-access path to data stored in the database.

Clustered Index:
  • Physical Sorting: A Clustered index determines the physical order of data rows in a table based on the indexed column(s). It rearranges the actual data pages to match the index order.
  • Unique Nature: By default, a Clustered index is unique, meaning that each row in the table is uniquely identified by the Clustered index key.
  • Storage: As the data rows are stored in the order of the Clustered index, only one Clustered index is allowed per table.
  • Performance: Clustered indexes are beneficial for range queries and queries that require sorting or grouping based on the indexed column.
Non-Clustered Index:
  • Logical Sorting: A Non-clustered index does not alter the physical order of data rows. Instead, it creates a separate structure that holds the index key values along with pointers to the actual data rows.
  • Non-Unique: Non-clustered indexes do not enforce uniqueness by default, allowing duplicate values in the indexed column(s).
  • Storage: Multiple Non-clustered indexes can be created on a single table without affecting the physical order of data.
  • Performance: Non-clustered indexes are advantageous for queries that involve joins, sorting, or filtering based on columns other than the indexed one.
Key Differences:
  • Data Order: Clustered indexes dictate the physical order of data, while Non-clustered indexes do not impact the data's physical arrangement.
  • Uniqueness: Clustered indexes are unique, whereas Non-clustered indexes can contain duplicate values.
  • Storage: Clustered indexes alter the data storage, allowing only one per table, whereas Non-clustered indexes create separate structures and permit multiple indexes.
  • Performance: The choice between Clustered and Non-clustered indexes depends on the query patterns and the nature of operations performed on the data.

Data Integrity in databases refers to the accuracy, consistency, and reliability of data stored within a database system. It ensures that the data is valid, correct, and secure throughout its lifecycle. There are several aspects to consider when discussing Data Integrity:

  • Entity Integrity: This aspect ensures that each row or record in a table is uniquely identified by a primary key. It prevents duplicate records and enforces uniqueness.
  • Referential Integrity: This ensures the relationships between tables are maintained correctly. For example, if a record in one table references a record in another table, the foreign key constraint ensures that the referenced record exists.
  • Domain Integrity: It enforces valid data types, formats, and ranges for columns. This prevents incorrect data from being inserted into the database.
  • User-defined Integrity: This involves custom rules and constraints defined by the user to maintain specific data requirements that are not covered by other integrity types.
  • Check Constraints: These constraints define rules that data must follow to maintain integrity. For example, limiting the values that can be inserted into a column.
  • Trigger-based Integrity: Triggers can be used to enforce complex integrity rules or actions when certain events occur in the database.

In SQL, a query is a request for data or information from a database. It allows users to retrieve, manipulate, and manage data stored in a relational database management system (RDBMS). Queries are written using SQL (Structured Query Language) to interact with databases effectively.

Components of a Query:
  • SELECT: Specifies the columns to retrieve data from.
  • FROM: Specifies the table(s) from which to retrieve data.
  • WHERE: Filters data based on specified conditions.
  • GROUP BY: Groups rows sharing a common value.
  • HAVING: Filters groups based on specified conditions.
  • ORDER BY: Sorts the result set based on specified columns.
  • JOIN: Combines rows from two or more tables based on a related column between them.
Example of a Query:
Let's consider a simple example to illustrate a query in SQL. Suppose we have a table named employees with columns employee_id, first_name, last_name, and salary. We want to retrieve the first and last names of employees earning more than $50,000.
SELECT first_name, last_name FROM employees WHERE salary > 50000;

In this example:


SELECT first_name, last_name specifies the columns to retrieve.

FROM employees indicates the table from which to retrieve data.

WHERE salary > 50000 filters the data based on the condition that the salary is greater than $50,000.

By executing this query, we would retrieve the first and last names of employees meeting the specified criteria.


Queries are fundamental to interacting with databases, enabling users to extract valuable insights and information efficiently. Understanding how to construct queries empowers users to manipulate and retrieve data according to their requirements.

In SQL, both the DROP and TRUNCATE statements are used to remove data from a table, but they serve different purposes and have distinct functionalities.

DROP Statement:
  • The DROP statement is used to delete an entire table from the database.
  • When you execute a DROP statement, it removes the table structure along with all the data, indexes, constraints, and triggers associated with that table.
  • It is a DDL (Data Definition Language) command and cannot be rolled back. Once a table is dropped, it is gone permanently.
The syntax for the DROP statement is as follows:
DROP TABLE table_name;

TRUNCATE Statement:

  • The TRUNCATE statement is used to delete all the rows from a table, but it retains the table structure.
  • When you execute a TRUNCATE statement, it removes all the data from the table but keeps the table itself, along with its columns, constraints, indexes, and triggers.
  • It is a DDL (Data Definition Language) command. Once a TRUNCATE is committed, the data is not recoverable; in SQL Server, however, a TRUNCATE executed inside an explicit transaction can still be rolled back before the commit.

The syntax for the TRUNCATE statement is as follows:

TRUNCATE TABLE table_name;

Key Differences:

  • Functionality: DROP deletes the entire table structure, while TRUNCATE only removes the data inside the table.
  • Rollback: Once committed, neither DROP nor TRUNCATE is recoverable, but TRUNCATE leaves the table structure in place for reuse.
  • Performance: TRUNCATE is fast because it deallocates whole data pages rather than logging individual row deletions (as DELETE does).
  • Permissions: In SQL Server, TRUNCATE requires ALTER permission on the table, a higher privilege than the DELETE permission needed for a DELETE statement.


The TRUNCATE command removes all the rows from the table and deallocates the space used by the data.
The DELETE command deletes only the rows that match the condition given in the WHERE clause, or all rows if no condition is specified, but it does not deallocate the space used by the table.
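SQLite has no TRUNCATE statement, so this sqlite3 sketch contrasts a conditional DELETE with an unconditional one; in engines such as SQL Server, TRUNCATE TABLE t would be the faster, non-logged equivalent of the unconditional DELETE FROM t. Table and data are made up for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, val TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(1, 'a'), (2, 'b'), (3, 'c')])

# DELETE with a WHERE clause removes only the matching rows.
conn.execute("DELETE FROM t WHERE id = 1")
remaining = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(remaining)  # 2

# DELETE without a WHERE clause removes every row, but the table survives.
conn.execute("DELETE FROM t")
print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 0

# The empty table can still be queried and reused.
conn.execute("INSERT INTO t VALUES (4, 'd')")
conn.close()
```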

An aggregate function performs operations on a collection of values to return a single scalar value. Aggregate functions are often used with the GROUP BY and HAVING clauses of the SELECT statement. Following are the widely used SQL aggregate functions:

  • AVG() - Calculates the mean of a collection of values.
  • COUNT() - Counts the total number of records in a specific table or view.
  • MIN() - Calculates the minimum of a collection of values.
  • MAX() - Calculates the maximum of a collection of values.
  • SUM() - Calculates the sum of a collection of values.
  • FIRST() - Fetches the first element in a collection of values (supported mainly in MS Access; other systems use FIRST_VALUE() or ORDER BY with TOP/LIMIT).
  • LAST() - Fetches the last element in a collection of values (likewise MS Access-specific).

Note: All aggregate functions described above ignore NULL values except for the COUNT function.
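The NULL-handling note above can be checked directly with Python's sqlite3 module: COUNT(*) counts every row, while COUNT(column), AVG, and MAX silently skip NULLs. The emp table and values are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (name TEXT, salary INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?)",
                 [('Alice', 100), ('Bob', 200), ('Carol', None)])

total_rows, non_null, avg_salary, max_salary = conn.execute(
    "SELECT COUNT(*), COUNT(salary), AVG(salary), MAX(salary) FROM emp"
).fetchone()

print(total_rows)   # 3     - COUNT(*) counts rows, NULL or not
print(non_null)     # 2     - COUNT(salary) ignores the NULL
print(avg_salary)   # 150.0 - (100 + 200) / 2, the NULL row is excluded
print(max_salary)   # 200
conn.close()
```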

A scalar function returns a single value based on the input value. Following are the widely used SQL scalar functions:

  • LEN() - Calculates the total length of the given field (column).
  • UCASE() - Converts a collection of string values to uppercase characters.
  • LCASE() - Converts a collection of string values to lowercase characters.
  • MID() - Extracts substrings from a collection of string values in a table.
  • CONCAT() - Concatenates two or more strings.
  • RAND() - Generates a pseudo-random number, typically between 0 and 1 (an optional seed can be supplied).
  • ROUND() - Calculates the round-off integer value for a numeric field (or decimal point values).
  • NOW() - Returns the current date & time.
  • FORMAT() - Sets the format to display a collection of values.
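Scalar function names vary by engine: SQL Server has LEN() and UPPER(), while MySQL/MS Access use UCASE(), LCASE(), and MID(). This sqlite3 sketch uses SQLite's equivalents (LENGTH, UPPER, LOWER, SUBSTR, ||, ROUND) on made-up values.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
length, upper, lower, sub, joined, rounded = conn.execute(
    "SELECT LENGTH('Hello'), UPPER('Hello'), LOWER('Hello'), "
    "SUBSTR('Hello', 2, 3), 'Hel' || 'lo', ROUND(3.14159, 2)"
).fetchone()

print(length)   # 5       - like LEN()
print(upper)    # 'HELLO' - like UCASE()
print(lower)    # 'hello' - like LCASE()
print(sub)      # 'ell'   - SUBSTR is SQLite's analogue of MID()
print(joined)   # 'Hello' - || concatenates, like CONCAT()
print(rounded)  # 3.14
conn.close()
```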

In SQL, a User-defined function (UDF) is a set of SQL statements that can be reused within SQL queries. These functions are created by users to encapsulate logic that can be applied repeatedly in queries, enhancing code reusability and maintainability.

Types of User-defined Functions:
1- Scalar Functions: These functions return a single value based on the input parameters. For example, a function that calculates the square of a number:
CREATE FUNCTION dbo.SquareFunction (@num INT)
RETURNS INT
AS
BEGIN
    RETURN @num * @num;
END

2- Inline Table-Valued Functions: These functions return the result set of a single SELECT statement as a table and can be used in the FROM clause of a query. For instance, a function that returns the employees in a specific department:

CREATE FUNCTION dbo.GetEmployeesByDepartment (@deptId INT)
RETURNS TABLE
AS
RETURN
(
    SELECT * FROM Employees WHERE DepartmentID = @deptId
);

3- Multi-statement Table-Valued Functions: These functions declare a table variable, populate it with one or more statements, and return it. They are more flexible but generally less efficient than inline table-valued functions:

CREATE FUNCTION dbo.GetEmployeeNamesByDepartment (@deptId INT)
RETURNS @result TABLE (EmployeeID INT, EmployeeName VARCHAR(50))
AS
BEGIN
    INSERT INTO @result
    SELECT EmployeeID, EmployeeName
    FROM Employees
    WHERE DepartmentID = @deptId;
    RETURN;
END

Benefits of User-defined Functions:

  • Code Reusability: Functions can be called from multiple queries, reducing code duplication.
  • Modularity: Logic encapsulation allows for easier maintenance and updates.
  • Performance: Properly designed functions can enhance query performance by reducing repetitive code execution.

Creating an empty table with the same structure can be done by fetching the records of one table into a new table using the INTO operator while adding a WHERE clause that is false for every record. SQL prepares the new table with a duplicate structure to accept the fetched records, but since no records satisfy the WHERE clause, nothing is inserted into the new table.

SELECT * INTO Order_copy
FROM [Order]
WHERE 1 = 2;
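SQLite does not support SELECT ... INTO, so this sqlite3 sketch uses the equivalent CREATE TABLE ... AS SELECT with the same always-false WHERE 1 = 2 predicate; the orders table is made up for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
conn.execute("INSERT INTO orders VALUES (1, 9.99)")

# WHERE 1 = 2 matches no rows, so only the structure is copied.
conn.execute("CREATE TABLE orders_copy AS SELECT * FROM orders WHERE 1 = 2")

copied_rows = conn.execute("SELECT COUNT(*) FROM orders_copy").fetchone()[0]
columns = [row[1] for row in conn.execute("PRAGMA table_info(orders_copy)")]
print(copied_rows)  # 0 - no data came across
print(columns)      # ['id', 'amount'] - but the columns did
conn.close()
```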

A Stored Procedure in SQL is a precompiled collection of SQL statements stored in the database catalog. It is a reusable set of SQL statements that performs a specific task or a group of tasks. Stored Procedures enhance database performance, security, and maintainability by reducing network traffic and improving code reusability.

Key Characteristics of Stored Procedures:
  • Precompiled SQL Code: Stored Procedures are precompiled, which means the SQL statements are parsed, compiled, and optimized before execution, leading to faster query processing.
  • Parameterized: Stored Procedures can accept input parameters, enabling dynamic data processing based on user-defined values.
  • Encapsulation: They encapsulate complex business logic within the database, promoting data integrity and security by controlling access to the underlying tables.
  • Reusability: Stored Procedures can be called multiple times from different parts of an application, reducing code duplication and promoting maintainability.
  • Transaction Management: They support transaction management, allowing multiple SQL statements to be executed as a single unit of work, ensuring data consistency.
Syntax Example:
CREATE PROCEDURE sp_GetEmployeeDetails
    @EmployeeID INT
AS
BEGIN
    SELECT * FROM Employees WHERE EmployeeID = @EmployeeID;
END

Benefits of Stored Procedures:

  • Improved Performance: Reduced network traffic and optimized query execution.
  • Enhanced Security: Controlled access to data and prevention of SQL injection attacks.
  • Code Reusability: Centralized logic that can be reused across applications.
  • Ease of Maintenance: Changes made in a Stored Procedure reflect across all applications using it.
  • Modularity: Encapsulation of logic for better organization and management.

1. Purpose:
=> Stored Procedure:

  • Used for performing a set of SQL statements and can contain programming logic.
  • Can execute multiple SQL queries and perform complex operations.
  • Ideal for tasks requiring conditional processing and looping.
=> Function:

  • Designed to return a single value or a table.
  • Cannot perform DML operations like INSERT, UPDATE, or DELETE.
  • Suitable for computations and returning specific values.
2. Return Type:
=> Stored Procedure:

  • Does not have a return type.
  • Can return multiple result sets using SELECT statements.
=> Function:

  • Must have a return type defined.
  • Returns a single value or a table.
3. Usage in Queries:
=> Stored Procedure:

  • Called using EXECUTE or EXEC command.
  • Can be used to perform operations and return result sets.
=> Function:

  • Invoked in SQL queries like a table or a scalar value.
  • Can be part of a SELECT statement or used in WHERE clauses.
4. Transaction Control:
=> Stored Procedure:

  • Can contain transaction management commands like COMMIT and ROLLBACK.
  • Used for handling complex transaction scenarios.
=> Function:

  • Cannot control transactions.
  • Transactions are managed by the calling process.
5. Error Handling:
=> Stored Procedure:

  • Supports TRY...CATCH blocks for error handling.
  • Can handle exceptions within the procedure.
=> Function:

  • Limited error handling capabilities.
  • Errors propagate to the calling environment.
6. Recursion:
=> Stored Procedure:

  • Supports recursion.
  • Can call itself within the procedure.
=> Function:

  • Limited support for recursion.
  • Recursive functions are less common and have restrictions.

Composite Keys

A composite key is a combination of two or more columns that together uniquely identify a record in a table. Unlike a single-column primary key, a composite key involves multiple columns working together to create a unique identifier for each row.

Characteristics of Composite Keys:

  • Uniqueness: The combination of values in the composite key must be unique across all records in the table.
  • Order: The order of columns in a composite key is crucial, as changing the sequence of columns can result in a different key.
  • Data Integrity: Composite keys help maintain data integrity by ensuring that no duplicate records can exist based on the combined values of the key columns.
  • Performance: While composite keys offer uniqueness, they can impact performance, especially in large tables, as the database needs to index and search based on multiple columns.
Example of Creating a Composite Key in SQL
CREATE TABLE Employees (
    EmployeeID INT,
    DepartmentID INT,
    PRIMARY KEY (EmployeeID, DepartmentID)
);

In this example, the Employees table has a composite key consisting of EmployeeID and DepartmentID. Together, these two columns form a unique identifier for each employee within a department.
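A sqlite3 sketch of the composite key above: the pair (EmployeeID, DepartmentID) must be unique, but each column alone may repeat. The data values are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Employees (
        EmployeeID   INTEGER,
        DepartmentID INTEGER,
        PRIMARY KEY (EmployeeID, DepartmentID)
    )
""")
conn.execute("INSERT INTO Employees VALUES (1, 10)")
conn.execute("INSERT INTO Employees VALUES (1, 20)")  # same employee, new dept: OK
conn.execute("INSERT INTO Employees VALUES (2, 10)")  # same dept, new employee: OK

try:
    conn.execute("INSERT INTO Employees VALUES (1, 10)")  # exact pair repeats
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True

total = conn.execute("SELECT COUNT(*) FROM Employees").fetchone()[0]
print(duplicate_rejected)  # True - only the full pair is constrained
print(total)               # 3
conn.close()
```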


Benefits of Composite Keys:

  • Complex Relationships: Composite keys are useful when dealing with complex relationships between entities that require multiple attributes for identification.
  • Normalization: In some cases, using a composite key can help in normalizing the database structure by reducing redundancy.
  • Maintaining Data Integrity: By enforcing uniqueness across multiple columns, composite keys ensure data integrity and prevent inconsistencies.

ISNULL Function
The ISNULL function is specific to SQL Server and is used to replace NULL values with a specified replacement value. It takes two arguments: the expression to be evaluated for NULL and the replacement value if the expression is NULL. Here is an example of how ISNULL is used:
SELECT ISNULL(column_name, 'N/A') AS new_column_name FROM table_name;

In this example, if column_name is NULL, it will be replaced with 'N/A'.

COALESCE Function

The COALESCE function is an ANSI standard function supported by most relational database management systems. It returns the first non-NULL expression among its arguments. It can take multiple arguments and returns the first non-NULL value encountered. Here is an example of how COALESCE is used:

SELECT COALESCE(column_name1, column_name2, 'N/A') AS new_column_name FROM table_name;

In this example, COALESCE will return column_name1 if it is not NULL. If column_name1 is NULL, it will return column_name2. If both column_name1 and column_name2 are NULL, it will return 'N/A'.


Key Differences

  • Portability: ISNULL is specific to SQL Server, while COALESCE is an ANSI standard function supported by many database systems.
  • Number of Arguments: ISNULL takes only two arguments, while COALESCE can take multiple arguments.
  • Return Value: ISNULL returns the data type of the first argument, while COALESCE returns the data type of the result expression.
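COALESCE is ANSI-standard and works in SQLite too, which makes the behavior easy to verify with Python's sqlite3 module; SQLite's two-argument IFNULL plays the role that ISNULL plays in SQL Server. The values below are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
first, second, fallback = conn.execute(
    "SELECT COALESCE('a', 'b', 'N/A'), "
    "       COALESCE(NULL, 'b', 'N/A'), "
    "       COALESCE(NULL, NULL, 'N/A')"
).fetchone()

print(first)     # 'a'   - the first argument is non-NULL
print(second)    # 'b'   - the leading NULL is skipped
print(fallback)  # 'N/A' - all earlier arguments were NULL

# SQLite's IFNULL is the two-argument form, analogous to SQL Server's ISNULL.
two_arg = conn.execute("SELECT IFNULL(NULL, 'default')").fetchone()[0]
print(two_arg)   # 'default'
conn.close()
```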

In SQL, temporary tables are used to store temporary data that exists only for the duration of a session or a transaction. There are two types of temporary tables: local temporary tables and global temporary tables.

Local Temporary Tables : 
  • Scope: Local temporary tables are only visible to the current session. They are automatically dropped when the session that created them ends.
  • Naming Convention: Local temporary tables are prefixed with a single pound sign (#). For example, #myLocalTempTable.
  • Usage: These tables are useful when you need temporary storage that is specific to a single session and should not interfere with other sessions.


CREATE TABLE #myLocalTempTable (
    ID INT,
    Name VARCHAR(50)
);

Global Temporary Tables : 

  • Scope: Global temporary tables are visible to all sessions. They are dropped when the last session using the table ends.
  • Naming Convention: Global temporary tables are prefixed with a double pound sign (##). For example, ##myGlobalTempTable.
  • Usage: Global temporary tables are handy when you need temporary storage that can be shared across multiple sessions.

CREATE TABLE ##myGlobalTempTable (
    ID INT,
    Name VARCHAR(50)
);

In summary, local temporary tables are session-specific and automatically dropped when the session ends, while global temporary tables are visible across sessions and dropped when the last session using them ends. Choose the appropriate type based on your requirements for temporary data storage in SQL.
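SQLite's CREATE TEMP TABLE behaves like the local temporary tables described above: visible only to the connection ("session") that created it, and dropped when that connection closes. This sketch opens two connections to a shared in-memory database (the database name is made up) to show the difference.

```python
import sqlite3

# Two connections ("sessions") to the same shared in-memory database.
uri = "file:tempdemo?mode=memory&cache=shared"
conn_a = sqlite3.connect(uri, uri=True)
conn_b = sqlite3.connect(uri, uri=True)

conn_a.execute("CREATE TABLE permanent (id INTEGER)")
conn_a.execute("CREATE TEMP TABLE scratch (id INTEGER)")  # session-local
conn_a.commit()

# The permanent table is visible to the other session...
print(conn_b.execute("SELECT COUNT(*) FROM permanent").fetchone()[0])  # 0

# ...but the temporary table is not.
try:
    conn_b.execute("SELECT COUNT(*) FROM scratch")
    temp_visible = True
except sqlite3.OperationalError:
    temp_visible = False
print(temp_visible)  # False

conn_a.close()
conn_b.close()
```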

In SQL Server, a schema is a container that holds database objects such as tables, views, procedures, functions, and more. It acts as a way to organize and manage database objects logically. Think of a schema as a blueprint or a namespace within a database.
Key Points about Schemas:
  • Logical Grouping: Schemas help in logically grouping database objects together. This organization aids in better management and maintenance of the database.
  • Security: Schemas play a crucial role in managing security within a database. By assigning permissions at the schema level, you can control access to multiple objects at once.
  • Default Schema: Each user in SQL Server has a default schema. When a user creates an object without specifying a schema, it is created in the user's default schema.
  • Schema Ownership: Schemas are owned by a database principal, usually a user or role. The owner has control over the objects within that schema.
  • Cross-schema References: Schemas allow for referencing objects across schemas. This enables better organization and separation of concerns within a database.

In SQL Server, the CASE statement is a powerful tool that allows for conditional logic within SQL queries. It is similar to the if-then-else logic found in other programming languages.

The basic syntax of the CASE statement is as follows:
SELECT CASE
           WHEN condition1 THEN result1
           WHEN condition2 THEN result2
           ...
           ELSE default_result
       END
FROM your_table;

Here's a breakdown of the components:

  • WHEN condition THEN result: This part checks if a condition is met and returns a specific result if true.
  • ELSE default_result: If none of the conditions are met, the ELSE clause specifies the default result to return.

Let's illustrate this with an example. Suppose we have a table Employees with columns EmployeeID, FirstName, and LastName. We want to create a new column FullName that concatenates the first and last names.

SELECT EmployeeID,
       FirstName,
       LastName,
       CASE
           WHEN FirstName IS NOT NULL AND LastName IS NOT NULL
               THEN FirstName + ' ' + LastName
           ELSE 'Name Not Provided'
       END AS FullName
FROM Employees;

In this example, the CASE statement checks if both the FirstName and LastName are not NULL. If they are not NULL, it concatenates them with a space in between. Otherwise, it returns 'Name Not Provided'.


The CASE statement can also be used in the WHERE clause, the ORDER BY clause, and even nested within another CASE statement for more complex logic.
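For instance, a sketch of CASE inside an ORDER BY clause, reusing the hypothetical Employees table from above:

```sql
-- Sort employees with missing last names to the bottom of the result set
SELECT EmployeeID, FirstName, LastName
FROM Employees
ORDER BY CASE WHEN LastName IS NULL THEN 1 ELSE 0 END,
         LastName;
```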

Key Differences:
  • Data Model: SQL databases use a tabular structure with predefined schemas, while NoSQL databases offer various data models like document-based, key-value, column-family, and graph.
  • Scalability: NoSQL databases are horizontally scalable, enabling them to handle large datasets and distributed systems more efficiently than SQL databases.
  • Flexibility: NoSQL databases provide flexibility in data storage, allowing for dynamic schema changes and accommodating diverse data types.
  • Complexity: SQL databases are suitable for complex queries and transactions due to their structured nature, while NoSQL databases excel in handling simple queries on large datasets.
  • Consistency: SQL databases prioritize data consistency with ACID properties, whereas NoSQL databases may sacrifice consistency for availability and partition tolerance (CAP theorem).

Database Black Box Testing is a crucial aspect of software testing that focuses on examining the functionality of a database without requiring knowledge of its internal structures. In this testing approach, testers interact with the database through its external interfaces, such as queries, stored procedures, and triggers, without having visibility into the underlying code or database schema.

Key Aspects of Database Black Box Testing:

  • Data Integrity Testing: Verifying that data stored in the database remains accurate and consistent after various operations.
  • Data Validity Testing: Ensuring that the data stored in the database meets predefined constraints and validation rules.
  • Performance Testing: Evaluating the database's performance under different load conditions to identify bottlenecks and optimize queries.
  • Security Testing: Assessing the database's vulnerability to unauthorized access, SQL injection, and other security threats.
  • Concurrency Testing: Testing the database's ability to handle multiple transactions simultaneously without data corruption.

Benefits of Database Black Box Testing:

  • Independence: Testers do not need to understand the internal logic of the database, allowing for unbiased testing.
  • Comprehensive Testing: Covers a wide range of scenarios without being limited by internal implementation details.
  • Error Detection: Helps in identifying issues related to data integrity, performance, security, and scalability.
  • Risk Mitigation: Reduces the risk of deploying faulty database changes that could impact the overall system.
-- Example: Testing a stored procedure for data integrity
CREATE PROCEDURE dbo.UpdateEmployeeSalary
    @EmployeeID INT,
    @NewSalary DECIMAL(10, 2)
AS
BEGIN
    UPDATE Employees
    SET Salary = @NewSalary
    WHERE EmployeeID = @EmployeeID;
END

In this SQL example, a stored procedure UpdateEmployeeSalary is tested without knowing the internal implementation details. Test cases can be designed to validate the correctness of salary updates without accessing the procedure's source code.
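A black-box test case for that procedure might look like the following sketch, exercising only the public interface and observable results (the employee ID and salary values are illustrative):

```sql
-- Arrange: inspect the current salary for the test subject
SELECT Salary FROM Employees WHERE EmployeeID = 42;

-- Act: call the procedure through its external interface
EXEC dbo.UpdateEmployeeSalary @EmployeeID = 42, @NewSalary = 55000.00;

-- Assert: verify the observable outcome without reading the procedure body
SELECT CASE WHEN Salary = 55000.00 THEN 'PASS' ELSE 'FAIL' END AS TestResult
FROM Employees
WHERE EmployeeID = 42;
```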

Entities:
In the realm of databases, an entity refers to a distinct object, person, place, or thing about which data is stored. Entities are represented as tables in a relational database, where each row corresponds to a specific instance of that entity, and each column represents an attribute or characteristic of the entity. For example, in a database for a school, entities could include students, teachers, and classes.

Entities are crucial in database design as they help structure and organize data in a meaningful way. Each entity has a unique identifier known as a primary key, which distinguishes it from other entities in the same table. Primary keys ensure data integrity and enable efficient data retrieval.

Relationships:
Relationships, on the other hand, define how entities are connected or associated with each other within a database. They establish links between tables based on common attributes, enabling data retrieval across multiple entities. Relationships are classified into different types, such as one-to-one, one-to-many, and many-to-many, based on how entities are related to each other.

  • One-to-One Relationship: A single record in one table is related to only one record in another table.
  • One-to-Many Relationship: A single record in one table can be related to multiple records in another table.
  • Many-to-Many Relationship: Multiple records in one table can be related to multiple records in another table.
Establishing relationships between entities is essential for maintaining data consistency and avoiding data redundancy. Foreign keys are used to create relationships between tables by linking the primary key of one table to a column in another table.
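As an illustrative sketch (table and column names are assumptions), the school example above could define a one-to-many relationship between teachers and classes with a foreign key:

```sql
CREATE TABLE Teachers (
    TeacherID INT PRIMARY KEY,
    Name VARCHAR(100)
);

CREATE TABLE Classes (
    ClassID INT PRIMARY KEY,
    Title VARCHAR(100),
    TeacherID INT,
    -- Foreign key linking each class to exactly one teacher
    FOREIGN KEY (TeacherID) REFERENCES Teachers(TeacherID)
);
```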

Normalization in databases is a crucial process that involves organizing data in a structured way to reduce redundancy and dependency. It aims to minimize data anomalies and ensure data integrity by breaking down large tables into smaller, related tables. This process follows a set of rules known as normal forms to eliminate data redundancy and improve database efficiency.

Advantages of Normalization:
  • Data Integrity: By reducing redundancy and dependency, normalization helps maintain data integrity. Updates, inserts, and deletes are less likely to cause anomalies, ensuring data consistency.
  • Efficient Use of Storage: Normalization optimizes storage space by eliminating redundant data. Smaller tables require less storage, leading to efficient use of resources.
  • Improved Query Performance: Normalized databases often perform better in terms of query speed. Smaller tables with fewer columns can be queried more efficiently.
  • Simplified Updates: With normalized data, updates only need to be made in one place, reducing the chances of inconsistencies across the database.
  • Scalability: Normalized databases are more scalable as they can handle growth and changes in data requirements more effectively. Adding new data is easier without impacting existing data.
  • Easier Maintenance: Maintenance tasks such as backups, indexing, and data manipulation become more straightforward in normalized databases due to the organized structure.
  • Reduction of Redundancy: Normalization eliminates redundant data, which not only saves storage space but also reduces the risk of inconsistencies that can arise from duplicate information.
  • Smoother Database Design: Following normalization principles leads to a well-structured database design that is easier to understand and maintain over time.
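As a brief sketch of the process (the schema is illustrative), normalization replaces a redundant single table with smaller, related tables:

```sql
-- Unnormalized: customer details repeated on every order row
-- Orders(OrderID, CustomerName, CustomerPhone, Product)

-- Normalized: customer data stored once and referenced by key
CREATE TABLE Customers (
    CustomerID INT PRIMARY KEY,
    CustomerName VARCHAR(100),
    CustomerPhone VARCHAR(20)
);

CREATE TABLE Orders (
    OrderID INT PRIMARY KEY,
    CustomerID INT REFERENCES Customers(CustomerID),
    Product VARCHAR(100)
);
```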

OLTP stands for Online Transaction Processing. It is a type of system that manages and facilitates high volumes of transaction-oriented applications. In OLTP systems, the emphasis is on quick query processing, maintaining data integrity in multi-access environments, and ensuring concurrent user interactions.

Key Characteristics of OLTP Systems:
  • Transactional Nature: OLTP systems handle a large number of short online transactions, such as inserting, updating, and deleting small amounts of data in real-time.
  • Concurrency Control: OLTP systems support multiple users concurrently accessing and modifying data without compromising data integrity. Techniques like locking mechanisms are used to manage concurrent access.
  • Data Normalization: Data in OLTP systems is typically normalized to reduce redundancy and improve data consistency. This ensures efficient storage and retrieval of data.
  • Indexing: Indexes are crucial in OLTP systems to speed up data retrieval operations, especially in scenarios where frequent querying of specific data subsets is required.
  • ACID Properties: OLTP systems adhere to the ACID (Atomicity, Consistency, Isolation, Durability) properties to ensure reliable and secure transaction processing.

Example of OLTP Query:
-- Example: Retrieve all orders for a specific customer
SELECT OrderID, OrderDate, TotalAmount
FROM Orders
WHERE CustomerID = '123';

In summary, OLTP systems are designed for operational efficiency, ensuring that day-to-day transactions are processed swiftly and accurately. They form the backbone of applications like e-commerce platforms, banking systems, and airline reservations, where real-time data processing is critical for business operations.

In 1986, a team led by Computer Science Professor Michael Stonebraker created PostgreSQL under the name Postgres. It was created to help developers build enterprise-level applications by ensuring data integrity and fault tolerance. PostgreSQL is an enterprise-level, versatile, resilient, open-source, object-relational database management system that supports variable workloads and concurrent users. It has been consistently backed by the international developer community, and its fault-tolerant characteristics have earned it significant appeal among developers.
It is a very reliable database management system, with more than two decades of community work behind its high levels of resiliency, integrity, and accuracy. Many web, mobile, geospatial, and analytics applications use PostgreSQL as their primary data store or data warehouse.

In SQL, the RANK() and DENSE_RANK() functions are used to assign a rank to each row within a result set based on the specified criteria. While both functions are used for ranking, they differ in how they handle ties in the ranking process.

RANK() Function:
The RANK() function assigns a rank to each row in the result set based on the ordering criteria. If there are ties (i.e., rows with the same value for the ordering criteria), the tied rows receive the same rank, and the next rank skips ahead by the number of tied rows, leaving gaps in the sequence (for example, 1, 2, 2, 4).

Here is an example of how the RANK() function works:
SELECT name, score, RANK() OVER (ORDER BY score DESC) AS rank FROM students;

DENSE_RANK() Function:

On the other hand, the DENSE_RANK() function also assigns a unique rank to each distinct row in the result set. However, unlike RANK(), DENSE_RANK() does not leave gaps in the ranking sequence when there are ties. It ensures that the ranks are consecutive without any gaps, even if there are tied rows.


Here is an example of how the DENSE_RANK() function works:

SELECT name, score, DENSE_RANK() OVER (ORDER BY score DESC) AS dense_rank FROM students;

Key Differences:

  • Handling Ties: RANK() leaves gaps in the ranking sequence after tied rows, while DENSE_RANK() assigns consecutive ranks without gaps.
  • Resulting Sequence: RANK() can produce a non-consecutive sequence such as 1, 2, 2, 4, whereas DENSE_RANK() always produces a consecutive sequence such as 1, 2, 2, 3.
  • Use Cases: RANK() is useful when the number of tied rows should be reflected in subsequent ranks, while DENSE_RANK() is preferred when you need a continuous ranking without gaps.
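To make the difference concrete, both functions can be computed side by side on the same hypothetical students table; with scores 95, 90, 90, 85, RANK() would yield 1, 2, 2, 4 while DENSE_RANK() would yield 1, 2, 2, 3:

```sql
SELECT name,
       score,
       RANK()       OVER (ORDER BY score DESC) AS rank_with_gaps,
       DENSE_RANK() OVER (ORDER BY score DESC) AS rank_no_gaps
FROM students;
```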

SQL Injection is a malicious technique used by attackers to manipulate SQL statements in a web application's input fields. It occurs when user input is not properly sanitized or validated before being used in SQL queries. This vulnerability allows attackers to inject malicious SQL code into the application's database queries, potentially gaining unauthorized access to sensitive data, modifying data, or even executing administrative operations on the database.

How SQL Injection Works
When a web application dynamically constructs SQL queries using user input without proper validation, attackers can exploit this vulnerability by inserting SQL commands into the input fields. For example, consider a login form where the SQL query to validate user credentials is constructed as follows:
SELECT * FROM users WHERE username = '{input_username}' AND password = '{input_password}';

If the input fields are not sanitized, an attacker could input ' OR '1'='1 as the username and password, resulting in the following query:

SELECT * FROM users WHERE username = '' OR '1'='1' AND password = '' OR '1'='1';

This modified query always returns true, allowing the attacker to bypass authentication and gain unauthorized access.


Implications of SQL Injection

SQL Injection poses significant risks to the security and integrity of a database-driven application. Some of the potential consequences include:


  • Unauthorized access to sensitive data
  • Data manipulation or deletion
  • Extraction of confidential information
  • Complete compromise of the database server
  • Execution of arbitrary commands on the database

Mitigating SQL Injection

To prevent SQL Injection attacks, developers should implement the following best practices:


  • Use parameterized queries or prepared statements to sanitize user input
  • Avoid dynamic SQL construction based on user input
  • Implement input validation and encoding
  • Limit database user permissions to reduce the impact of successful attacks
  • Regularly update and patch the application and database software
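As a sketch of the first recommendation in SQL Server, sp_executesql binds user input as parameters rather than splicing it into the query string; @input_username and @input_password stand in for application-supplied values:

```sql
-- Parameterized query: input is bound as data, never executed as SQL
DECLARE @sql NVARCHAR(200) =
    N'SELECT * FROM users WHERE username = @username AND password = @password';

EXEC sp_executesql
     @sql,
     N'@username NVARCHAR(50), @password NVARCHAR(50)',
     @username = @input_username,  -- payloads like ' OR '1'='1 remain inert text
     @password = @input_password;
```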

In the realm of databases, the ACID property stands as a fundamental concept that ensures the reliability and consistency of transactions. Let's delve into each component of the ACID acronym to grasp its significance:

1. Atomicity
Atomicity guarantees that a transaction is treated as a single unit of operation. It implies that either all operations within a transaction must succeed, or if any operation fails, the entire transaction is rolled back to its initial state. This all-or-nothing principle ensures data integrity and prevents partial updates that could lead to inconsistencies.

2. Consistency
Consistency ensures that a database remains in a valid state before and after any transaction. It enforces predefined rules and constraints, preserving data integrity. When a transaction is executed, the database transitions from one consistent state to another, maintaining the overall correctness of the data.

3. Isolation
Isolation guarantees that the execution of multiple transactions concurrently does not interfere with each other. Each transaction operates independently, unaware of other transactions running concurrently. Isolation levels such as Read Uncommitted, Read Committed, Repeatable Read, and Serializable control the visibility of data changes made by concurrent transactions.

4. Durability
Durability ensures that once a transaction is committed, its changes are permanently saved, even in the event of a system failure. The changes made by a committed transaction persist and are not lost, providing reliability and data persistence.

By adhering to the ACID properties, databases maintain data integrity, consistency, and reliability, crucial for ensuring the accuracy and trustworthiness of stored information. These properties form the foundation for robust transaction management in database systems.
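The atomicity and durability properties can be sketched with an explicit transaction (the Accounts table is illustrative):

```sql
BEGIN TRANSACTION;

-- Transfer 100 between two accounts as a single unit of work
UPDATE Accounts SET Balance = Balance - 100 WHERE AccountID = 1;
UPDATE Accounts SET Balance = Balance + 100 WHERE AccountID = 2;

-- Either both updates are made permanent...
COMMIT;

-- ...or, on error, neither takes effect:
-- ROLLBACK;
```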

In SQL, NULL values are distinct from zero or blank spaces. Understanding the differences is crucial for database management and querying accuracy.

NULL Values:
  • NULL represents the absence of a value or an unknown value.
  • It is not the same as zero or an empty string.
  • When a field contains a NULL value, it means the value is missing or unknown.
  • Comparing NULL to any other value, including another NULL, using standard comparison operators like '=' or '<>' will result in an unknown or NULL outcome.
Zero:
  • Zero is a numerical value that represents the number zero.
  • It is a specific value and is different from NULL.
  • When a field contains zero, it means the value is explicitly set to zero.
Blank Spaces:
  • Blank spaces or empty strings are different from NULL.
  • An empty string is a string with no characters.
  • It is distinct from NULL, which represents the absence of a value.
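These distinctions can be checked directly in queries; note that NULL requires the IS NULL predicate rather than the '=' operator (the products table is assumed for illustration):

```sql
SELECT * FROM products WHERE discount IS NULL;   -- value is missing/unknown

SELECT * FROM products WHERE discount = 0;       -- value explicitly set to zero

SELECT * FROM products WHERE description = '';   -- value is an empty string

-- Returns no rows: NULL = NULL evaluates to unknown, not true
SELECT * FROM products WHERE discount = NULL;
```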

There are two types of subquery namely, Correlated and Non-Correlated.

Correlated subquery: A subquery that references a column from a table in the outer query. It is not an independent query, because it depends on the outer query for its values and is re-evaluated for each row processed by the outer query.

Non-Correlated subquery: An independent query whose output is computed once and then substituted into the main query.
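Hedged sketches of each kind, using hypothetical employees columns (salary, department_id):

```sql
-- Non-correlated: the inner query runs once, independently of the outer query
SELECT name
FROM employees
WHERE salary > (SELECT AVG(salary) FROM employees);

-- Correlated: the inner query references the outer row (e.department_id)
-- and is conceptually re-evaluated per row
SELECT name
FROM employees e
WHERE salary > (SELECT AVG(salary)
                FROM employees
                WHERE department_id = e.department_id);
```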

To obtain the third-highest salary from the employee_table, we can use the following SQL query:
SELECT DISTINCT Salary FROM employee_table ORDER BY Salary DESC LIMIT 1 OFFSET 2;

In this query:

  • SELECT DISTINCT Salary selects only distinct salary values to avoid duplicates.
  • ORDER BY Salary DESC sorts the salaries in descending order, placing the highest salary at the top.
  • LIMIT 1 OFFSET 2 limits the result to one row starting from the third row, effectively retrieving the third-highest salary.

By utilizing the ORDER BY clause in descending order and combining it with LIMIT and OFFSET, we can efficiently fetch the third-highest salary from the employee_table. (Note that the LIMIT/OFFSET syntax applies to databases such as MySQL and PostgreSQL.)
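On databases without LIMIT/OFFSET, such as SQL Server, an equivalent sketch uses a ranking function instead:

```sql
-- DENSE_RANK() numbers distinct salaries 1, 2, 3, ... from highest to lowest
SELECT Salary
FROM (
    SELECT Salary,
           DENSE_RANK() OVER (ORDER BY Salary DESC) AS rnk
    FROM employee_table
) AS ranked
WHERE rnk = 3;
```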

In the realm of databases, a relationship refers to the connection between tables that is established based on common fields. These relationships are crucial for structuring data efficiently and maintaining data integrity. There are primarily three types of relationships that can exist between tables in a relational database: one-to-one, one-to-many, and many-to-many.

One-to-One Relationship: In a one-to-one relationship, each record in Table A is related to only one record in Table B, and vice versa. This type of relationship is not very common in database design but can be useful in certain scenarios where data needs to be separated for organizational or security reasons.

One-to-Many Relationship: The most common type of relationship, a one-to-many relationship, occurs when a single record in Table A can be related to one or more records in Table B. For example, in a database for a library, one author can have multiple books. This relationship is established by having a foreign key in the "many" side table that references the primary key in the "one" side table.

Many-to-Many Relationship: A many-to-many relationship exists when multiple records in Table A can be related to multiple records in Table B. To implement this type of relationship, a junction table is used. This junction table typically contains the primary keys of both tables as foreign keys, creating a bridge between the two entities.

Establishing and maintaining these relationships is fundamental in relational database management systems (RDBMS) like SQL. By defining relationships between tables, you can ensure data consistency, reduce redundancy, and improve query performance. SQL provides mechanisms like foreign keys and JOIN operations to work with these relationships effectively.
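A junction table for the many-to-many case might be sketched as follows (a students/courses schema is assumed purely for illustration):

```sql
CREATE TABLE Students (
    StudentID INT PRIMARY KEY,
    Name VARCHAR(100)
);

CREATE TABLE Courses (
    CourseID INT PRIMARY KEY,
    Title VARCHAR(100)
);

-- Junction table bridging the two entities; its composite primary key
-- holds the foreign keys of both sides
CREATE TABLE Enrollments (
    StudentID INT REFERENCES Students(StudentID),
    CourseID  INT REFERENCES Courses(CourseID),
    PRIMARY KEY (StudentID, CourseID)
);
```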

The MERGE statement in SQL, also known as the UPSERT operation (a combination of UPDATE and INSERT), serves a crucial role in database management. It allows users to perform multiple operations like INSERT, UPDATE, or DELETE in a single atomic statement based on specified conditions. Here are some key reasons why the MERGE statement is essential:

1- Efficiency: The MERGE statement streamlines the process of handling data synchronization between source and target tables. Instead of executing separate INSERT, UPDATE, and DELETE statements, the MERGE statement combines these operations into a single statement, reducing the number of round trips to the database and enhancing performance.

2- Atomicity: One of the primary advantages of using the MERGE statement is its atomic nature. It ensures that either all parts of the statement are executed successfully, or none at all. This guarantees data integrity and consistency, especially in scenarios where multiple operations need to be performed together.

3- Simplicity: By consolidating multiple operations into a single statement, the MERGE statement simplifies the code and makes it more readable. It eliminates the need for complex procedural logic or conditional statements to achieve the desired outcome, making the code more maintainable and easier to understand.

4- Concurrency Control: In a multi-user environment where simultaneous transactions can occur, the MERGE statement helps in managing concurrency issues effectively. It allows users to update or insert records based on specific conditions, preventing conflicts and ensuring data accuracy.

5- Performance Optimization: When dealing with large datasets, using the MERGE statement can significantly improve performance compared to executing individual INSERT, UPDATE, and DELETE statements. It minimizes the overhead associated with processing each operation separately, leading to faster data manipulation.
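A typical MERGE sketch synchronizing a target table from a source in one atomic statement (table and column names are illustrative):

```sql
MERGE INTO Employees AS target
USING EmployeeUpdates AS source
    ON target.EmployeeID = source.EmployeeID
WHEN MATCHED THEN
    -- Existing employee: update in place
    UPDATE SET target.Salary = source.Salary
WHEN NOT MATCHED THEN
    -- New employee: insert
    INSERT (EmployeeID, Salary)
    VALUES (source.EmployeeID, source.Salary);
```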

In SQL, both the 'HAVING' and 'WHERE' clauses are used to filter data, but they serve different purposes and are applied at different stages of query execution.
WHERE Clause:
  • The 'WHERE' clause is used to filter rows before any grouping or aggregation is performed.
  • It is applied to individual rows in the result set based on a specified condition.
  • The 'WHERE' clause is used with the 'SELECT', 'UPDATE', 'DELETE', and 'INSERT' statements.
SELECT * FROM employees WHERE department = 'Sales';

HAVING Clause:

  • The 'HAVING' clause is used to filter rows after the grouping has been done.
  • It is applied to grouped rows based on a specified condition.
  • The 'HAVING' clause is used with the 'GROUP BY' clause in SELECT statements that contain aggregate functions.

SELECT department, COUNT(*) AS total_employees
FROM employees
GROUP BY department
HAVING COUNT(*) > 5;

Key Differences:

The 'WHERE' clause is used with individual rows, while the 'HAVING' clause is used with groups of rows.

'WHERE' is applied before data is grouped, whereas 'HAVING' is applied after data is grouped.

Aggregate functions like COUNT, SUM, AVG, etc., can be used in the 'HAVING' clause but not in the 'WHERE' clause.

'WHERE' filters rows based on conditions involving individual columns, while 'HAVING' filters groups based on conditions involving aggregate values.

The set operators available in SQL are UNION, UNION ALL, INTERSECT, and MINUS (known as EXCEPT in databases such as SQL Server and PostgreSQL).

A view is a virtual table consisting of a subset of the data contained in one or more tables. Since a view stores only its defining query rather than the data itself, it takes up very little storage space. A view can combine data from multiple tables, depending on the relationships between them.

Views in SQL serve as virtual tables that allow users to encapsulate complex queries into a single, easily accessible entity. They are essentially saved SQL queries that can be treated as tables, enabling users to retrieve specific data without having to rewrite the query each time. Views offer several advantages and use cases in database management:

  • Data Security: Views can restrict access to certain columns or rows of a table, providing an additional layer of security by controlling the data that users can retrieve.
  • Simplifying Complex Queries: Views simplify complex queries by predefining joins, aggregations, or filters, making it easier for users to retrieve the required data without dealing with intricate SQL syntax each time.
  • Data Abstraction: Views abstract the underlying data model, allowing users to interact with a simplified representation of the data, which can be particularly useful in scenarios where the database schema is complex.
  • Performance Optimization: Views can enhance query performance by storing the results of a query and allowing users to access the data more efficiently, especially when dealing with frequently executed queries.
  • Code Reusability: Views promote code reusability by encapsulating commonly used queries, reducing redundancy and promoting a more modular approach to database querying.
  • Logical Data Independence: Views provide logical data independence by separating the way data is stored from the way it is presented, enabling changes to the underlying tables without affecting the applications that use the views.
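A minimal sketch of creating and querying a view (the underlying Employees table and its columns are assumptions):

```sql
-- Encapsulate a filtered query as a named, reusable view
CREATE VIEW SalesEmployees AS
SELECT EmployeeID, FirstName, LastName
FROM Employees
WHERE Department = 'Sales';

-- Query it like an ordinary table
SELECT * FROM SalesEmployees;
```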

Stored Procedures in SQL offer several advantages and disadvantages that can impact database performance, security, and maintenance.
Advantages of Stored Procedures:

1- Improved Performance: Stored Procedures can enhance performance by reducing network traffic as the SQL code is executed on the database server rather than sending multiple queries from the client.
CREATE PROCEDURE sp_GetEmployeeDetails
AS
BEGIN
    SELECT * FROM Employees;
END

2- Enhanced Security: Stored Procedures provide an additional layer of security by controlling access to data tables. Users can execute the procedure without needing direct table access.

3- Code Reusability: Stored Procedures promote code reusability as the same procedure can be called from multiple applications or scripts, reducing redundancy and promoting consistency.

4- Ease of Maintenance: Changes to database schema or logic can be centralized within the Stored Procedure, making maintenance and updates more manageable.


Disadvantages of Stored Procedures:


1- Limited Portability: Stored Procedures are often database-specific, which can limit portability across different database management systems.

2- Debugging Complexity: Debugging Stored Procedures can be more challenging compared to debugging application code, as they run on the database server and may not provide detailed error messages.

3- Version Control: Managing versions of Stored Procedures can be cumbersome, especially in environments with multiple developers working on the same database.

4- Performance Overhead: In some cases, poorly optimized Stored Procedures can lead to performance issues, especially if they involve complex logic or inefficient queries.

SQL operators are symbols used to perform operations on one or more values in SQL statements. These operators include arithmetic operators (+, -, *, /), comparison operators (=, <>, >, <, >=, <=), logical operators (AND, OR, NOT), and more. They help manipulate data, filter results, and perform calculations in SQL queries. Understanding and utilizing SQL operators is essential for writing efficient and effective SQL queries. The main categories are listed below.

  • Arithmetic operators: These operators perform mathematical operations on numerical data. They include addition (+), subtraction (-), multiplication (*), division (/), and remainder/modulus (%).
  • Logical operators: These operators evaluate expressions and return True or False. They include ALL, AND, ANY, BETWEEN, EXISTS, IN, IS NULL, LIKE, NOT, OR, and UNIQUE.
  • Comparison operators: These operators compare two values and check whether they are the same. They include equal to (=), not equal to (!= or <>), less than (<), greater than (>), less than or equal to (<=), greater than or equal to (>=), not less than (!<), and not greater than (!>).
  • Bitwise operators: These operators perform bit manipulations between two expressions of integer type. The integers are first converted to binary, and then operators such as AND (&), OR (|), exclusive OR (^), and NOT (~) are applied.
  • Compound operators: These operators perform an operation on a variable and then assign the result back to that variable. They include add equals (+=), subtract equals (-=), multiply equals (*=), divide equals (/=), and modulo equals (%=).
  • String operators: These operators are primarily used for concatenation and pattern matching of strings. They include + (string concatenation), += (string concatenation assignment), % (wildcard), [] (character(s) to match), [^] (character(s) not to match), and _ (match exactly one character).

In SQL Server, the built-in function GETDATE() returns the current date and time; the ANSI-standard equivalent is CURRENT_TIMESTAMP.

Joins are used to combine rows from two or more tables based on a related column, depending on the relationship between the tables. According to the ANSI standard, the following are the different types of joins used in SQL:

  • INNER JOIN
  • SELF JOIN
  • LEFT OUTER JOIN
  • RIGHT OUTER JOIN
  • FULL OUTER JOIN
  • CROSS JOIN
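An illustrative sketch of the two most common join types, assuming hypothetical Employees and Departments tables linked by DepartmentID:

```sql
-- INNER JOIN: only rows with a match in both tables
SELECT e.Name, d.DepartmentName
FROM Employees e
INNER JOIN Departments d ON e.DepartmentID = d.DepartmentID;

-- LEFT OUTER JOIN: all employees, with NULLs where no department matches
SELECT e.Name, d.DepartmentName
FROM Employees e
LEFT OUTER JOIN Departments d ON e.DepartmentID = d.DepartmentID;
```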

  • TRUNCATE
  • ROLLBACK
  • COMMIT 
  • DELETE

  • All of the mentioned 
  • Finding and diagnosing slow-running queries
  • Capturing the series of Transact-SQL statements that lead to a problem
  • Correlating performance counters to diagnose problems

  • The order of occurrence determines the order of evaluation.
  • NOT will be evaluated first; AND will be evaluated second; OR will be evaluated last.
  • AND will be evaluated first; OR will be evaluated second; NOT will be evaluated last.
  • NOT will be evaluated first; OR will be evaluated second; AND will be evaluated last.

  • Select * from weather where temperature  > (Select * from weather where city ='Paris')
  • Select * from weather where temperature >'Paris' temperature
  • Select * from weather where temperature > (Select city from weather where city='Paris')
  • Select * from weather where temperature > (Select temperature from where city='Paris') 

  • UNIQUE
  • Primary Key
  • None of the mentioned
  • Foreign Key 

  • The SQL Alter Table clause deletes data from a database table.
  • The SQL Alter Table clause is used to delete a database table.
  • The SQL Alter Table clause is used to insert data into a database table.
  • The SQL Alter Table clause modifies a table definition by altering, adding, or deleting table columns and/or constraints.

  • INNER JOIN 
  • LEFT JOIN
  • FULL OUTER JOIN
  • RIGHT JOIN

  • After the execution of the 'Delete' operation, COMMIT and ROLLBACK statements can be performed to retrieve the lost data, while 'Truncate' does not allow it.
  • After the execution of 'Delete' and 'Truncate' operations, no retrieval is possible for the lost data.
  • After the execution of 'Delete' and 'Truncate' operations, retrieval is easily possible for the lost data.
  • After the execution of the 'Truncate' operation, COMMIT and ROLLBACK statements can be performed to retrieve the lost data, while 'Delete' does not allow it.

